Compare commits

...

34 Commits

Author SHA1 Message Date
Torkel Ödegaard
ce9dffa526 Fixed positioning issue 2025-12-05 16:35:32 -03:00
Torkel Ödegaard
807fedd12f SelectBase: Use standard portal container 2025-12-05 16:35:32 -03:00
Tito Lins
f67d63437b print prompt 2025-12-05 16:30:44 -03:00
Tito Lins
d153a075fd generate app token 2025-12-05 16:07:17 -03:00
Tito Lins
b059e95661 move prompt to external repo 2025-12-05 15:56:37 -03:00
Sarah Zinger
422c791042 SQL Expressions: Fix alerts with sql expressions that have a cte (#114852)
Fix for #114377 - fix alerts with sql expressions that have a cte
2025-12-05 15:56:37 -03:00
Bogdan Matei
2f187b71e1 Dashboard: Fix dropping panels in tabs and rows (#114893) 2025-12-05 15:56:37 -03:00
Santiago
fd6183178d Remote Alertmanager: Remove X-Remote-Alertmanager header (#114917)
Remote Alertmanager: Remove X-Remote-Alertmanager header
2025-12-05 15:56:37 -03:00
Ryan McKinley
008df9bab8 Dashboards: update manifest to avoid useoldmanifestkinds (#114910) 2025-12-05 15:56:37 -03:00
Alexander Zobnin
6bbaf5ecd4 Zanzana: Fix duplicated writes in one request (#114900)
* Zanzana: Fix duplicated writes

* add tests
2025-12-05 15:56:37 -03:00
Tung Nguyen
44e735abde OpenTSDB: Remove gf-form from opentsdb Annotation Editor (#112590)
* Chore: Remove gf-form in opentsdb AnnotationEditor

* Fix: small typo

* chore: remove stale eslint suppression rule
2025-12-05 15:56:37 -03:00
Andreas Christou
c0663f894a Azure: Fix dcount aggregation (#114666)
* Add parameter type field

* Use parameterType to filter columns for aggregation funcs

* Support selecting column for dcount aggregation
2025-12-05 15:56:37 -03:00
Ryan McKinley
0179bbd2da Folders: update manifest (avoid useoldmanifestkinds) (#114827) 2025-12-05 15:56:36 -03:00
Sergej-Vlasov
5799f473bc TransformSaveModelToScene: Force v1 for reports (#114767)
force v1 for reports
2025-12-05 15:56:36 -03:00
Andreas Christou
f81f97ed01 MSSQL: Remove the need for azure_auth_enabled (#114775)
Remove the need for azure_auth_enabled for MSSQL
2025-12-05 15:56:36 -03:00
Torkel Ödegaard
9ef1c5bcae Gauge: Only show spotlight in dark themes (#114524)
* Gauge: Only show spotlight in dark themes

* Update
2025-12-05 15:56:36 -03:00
Hugo Häggmark
7cbd9f1129 chore: reduce Loki barrel files (#114888) 2025-12-05 15:56:36 -03:00
Pepe Cano
055c4b13ae docs(alerting): add new Examples of trace-based alerts (#114511)
* docs(alerting): add new Examples of trace-based alerts

* fix vale issues
2025-12-05 15:56:35 -03:00
Hugo Häggmark
79e1eb852c chore: introduce toggle for decoupling plugins from bootdata (#114890)
chore: toggle for decoupling plugins from bootdata
2025-12-05 15:56:35 -03:00
grafana-pr-automation[bot]
e125be7588 I18n: Download translations from Crowdin (#114886)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-05 15:56:35 -03:00
Galen Kistler
043201dcfe TableRT: fix bug preventing users from selecting filter operators (#114860)
* fix: bug preventing users from selecting filter operators
2025-12-05 15:56:35 -03:00
Steve Simpson
5b0c57d348 Alerting: Add basic auth options to historian args (#114880) 2025-12-05 15:56:35 -03:00
Steve Simpson
77526f999b Alerting: Add command line parsing for historian options (#114865) 2025-12-05 15:56:34 -03:00
alerting-team[bot]
78c5f9141f Alerting: Update alerting module to de8c2bbf9eba591078e9d9d7c6cbbe4142ef2d0b (#114877)
[create-pull-request] automated change

Co-authored-by: yuri-tceretian <25988953+yuri-tceretian@users.noreply.github.com>
2025-12-05 15:56:34 -03:00
ismail simsek
e8ce9044e6 Postgresql: Fix variable interpolation logic when the variable has multiple values (#114058)
* fix the variable interpolation

* add jest config to grafana-sql

* fix broken tests

* add variable interpolation tests

* lint

* apply fix only to postgresql datasource
2025-12-05 15:56:34 -03:00
beejeebus
1d2b294f95 Pass the feature flag instead of false to RegisterAPIService
Doh. The feature flag was not actually being used to enable the new DS
config CRUD APIs. This PR fixes that, hashtag facepalm.
2025-12-05 15:56:34 -03:00
Rafael Bortolon Paulovic
81fad0b457 fix(unified): key_path column default (#114859)
fix: key_path column default
2025-12-05 15:56:34 -03:00
Andrew Hackmann
a7ce8c9f36 Elasticsearch: Client refactor (#114745)
* split up client.go

* split up search_request.go

* remove double spaces
2025-12-05 15:56:34 -03:00
Costa Alexoglou
a5599d4f22 feat: add default permissions and DTO support for MT (#114829)
* feat: add default permissions and DTO support for MT

* chore: review comments

* chore: review comments
2025-12-05 15:56:34 -03:00
Collin Fingar
0ebfa25ee6 Library Panels: Update connection generation for V2 (#114504)
* Library Panels: Update connection generation for V2

* add test

---------

Co-authored-by: Haris Rozajac <haris.rozajac12@gmail.com>
2025-12-05 15:56:33 -03:00
Gabriel MABILLE
ecce6b20d7 grafana-iam: Fix missing UID (#114856) 2025-12-05 15:56:33 -03:00
Todd Treece
7fc430faae Plugins API: Add plugins to RBAC mapper (#114843) 2025-12-05 15:56:33 -03:00
Tito Lins
475be966f4 update prompt 2025-12-05 13:03:43 -03:00
Tito Lins
78f231725e test agent-demo 2025-12-05 11:22:54 -03:00
163 changed files with 5408 additions and 1446 deletions

@@ -0,0 +1,77 @@
name: Usability Review Agent
description: Runs AI-powered usability testing using OpenAI Computer Use with Playwright
inputs:
target_url:
description: The URL to test for usability
required: true
openai_api_key:
description: OpenAI API key with Computer Use access
required: true
openai_org:
description: OpenAI organization ID
required: false
grafana_username:
description: Grafana username for authentication
required: false
grafana_password:
description: Grafana password for authentication
required: false
workflow_name:
description: The workflow or feature to test
required: false
default: "the application interface"
prompt_file:
description: Custom prompt file to use (relative to action directory)
required: false
default: "prompt.txt"
output_text_path:
description: Path to save the review output text
required: false
default: "usability-review.txt"
screenshot_path:
description: Path to save the final screenshot
required: false
default: "usability-screenshot.png"
outputs:
review_output:
description: Path to the review output file
value: ${{ inputs.output_text_path }}
screenshot_output:
description: Path to the screenshot file
value: ${{ inputs.screenshot_path }}
runs:
using: "composite"
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install dependencies
shell: bash
working-directory: ${{ github.action_path }}
run: |
python -m pip install -U pip
pip install -r requirements.txt
python -m playwright install --with-deps chromium
echo "System info:"
free -h || echo "free command not available"
df -h | head -5
- name: Run usability review agent
shell: bash
working-directory: ${{ github.action_path }}
env:
OPENAI_API_KEY: ${{ inputs.openai_api_key }}
OPENAI_ORG: ${{ inputs.openai_org }}
TARGET_URL: ${{ inputs.target_url }}
GRAFANA_USERNAME: ${{ inputs.grafana_username }}
GRAFANA_PASSWORD: ${{ inputs.grafana_password }}
OUTPUT_TEXT_PATH: ${{ github.workspace }}/${{ inputs.output_text_path }}
SCREENSHOT_PATH: ${{ github.workspace }}/${{ inputs.screenshot_path }}
PROMPT_FILE: ${{ github.action_path }}/${{ inputs.prompt_file }}
run: |
python run_agent.py
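A workflow consuming this composite action might look like the following sketch. The checkout step, action path, and secret names are assumptions for illustration; only the input names come from the action definition above.

```yaml
jobs:
  usability-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Action directory path is hypothetical
      - uses: ./.github/actions/usability-review
        with:
          target_url: https://my-grafana.example.com
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
          workflow_name: "dashboard creation"
```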

@@ -0,0 +1,11 @@
from . import default
from . import contrib
from .computer import Computer
from .config import computers_config
__all__ = [
"default",
"contrib",
"Computer",
"computers_config",
]

@@ -0,0 +1,29 @@
from typing import Protocol, List, Literal, Dict
class Computer(Protocol):
"""Defines the 'shape' (methods/properties) our loop expects."""
def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]: ...
def get_dimensions(self) -> tuple[int, int]: ...
def screenshot(self) -> str: ...
def click(self, x: int, y: int, button: str = "left") -> None: ...
def double_click(self, x: int, y: int) -> None: ...
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None: ...
def type(self, text: str) -> None: ...
def wait(self, ms: int = 1000) -> None: ...
def move(self, x: int, y: int) -> None: ...
def keypress(self, keys: List[str]) -> None: ...
def drag(self, path: List[Dict[str, int]]) -> None: ...
def get_current_url(self) -> str: ...
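Because `Computer` is a `typing.Protocol`, any class with matching methods satisfies it structurally; no inheritance is required. A minimal self-contained sketch (the `NullComputer` class and the trimmed-down protocol are illustrative, not part of the repo):

```python
from typing import Literal, Protocol, runtime_checkable

@runtime_checkable
class Computer(Protocol):
    """Subset of the interface, enough to show structural typing."""
    def get_environment(self) -> Literal["windows", "mac", "linux", "browser"]: ...
    def screenshot(self) -> str: ...
    def click(self, x: int, y: int, button: str = "left") -> None: ...

class NullComputer:
    """Hypothetical no-op backend; note it never subclasses Computer."""
    def get_environment(self) -> str:
        return "browser"
    def screenshot(self) -> str:
        return ""  # a real backend would return base64 PNG data
    def click(self, x: int, y: int, button: str = "left") -> None:
        pass

# runtime_checkable allows isinstance checks by method presence
assert isinstance(NullComputer(), Computer)
```

`runtime_checkable` only verifies that the methods exist, not their signatures, which is why the agent loop relies on the protocol as documentation rather than enforcement.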

View File

@@ -0,0 +1,10 @@
from .default import *
from .contrib import *
computers_config = {
"local-playwright": LocalPlaywrightBrowser,
"docker": DockerComputer,
"browserbase": BrowserbaseBrowser,
"scrapybara-browser": ScrapybaraBrowser,
"scrapybara-ubuntu": ScrapybaraUbuntu,
}
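The `computers_config` dict is a plain name-to-class registry: a caller looks a backend up by string and instantiates it as a context manager. A self-contained sketch of the pattern with a dummy class standing in for the real backends:

```python
class DummyBrowser:
    """Stand-in for a backend such as LocalPlaywrightBrowser."""
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_val, exc_tb):
        return False
    def get_environment(self) -> str:
        return "browser"

# Mirrors computers_config: CLI-friendly names mapped to classes
registry = {"dummy-browser": DummyBrowser}

def make_computer(name: str):
    try:
        return registry[name]()
    except KeyError:
        raise ValueError(f"Unknown computer {name!r}; choices: {sorted(registry)}")

with make_computer("dummy-browser") as computer:
    assert computer.get_environment() == "browser"
```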

@@ -0,0 +1,4 @@
from .browserbase import BrowserbaseBrowser
from .local_playwright import LocalPlaywrightBrowser
from .docker import DockerComputer
from .scrapybara import ScrapybaraBrowser, ScrapybaraUbuntu

@@ -0,0 +1,200 @@
import os
from typing import Tuple, Dict, List, Union, Optional
from playwright.sync_api import Browser, Page, BrowserContext, Error as PlaywrightError
from ..shared.base_playwright import BasePlaywrightComputer
from browserbase import Browserbase
from dotenv import load_dotenv
import base64
load_dotenv()
class BrowserbaseBrowser(BasePlaywrightComputer):
"""
Browserbase is a headless browser platform that offers a remote browser API. You can use it to control thousands of browsers from anywhere.
You can find more information about Browserbase at https://www.browserbase.com/computer-use or view our OpenAI CUA Quickstart at https://docs.browserbase.com/integrations/openai-cua/introduction.
IMPORTANT: This Browserbase computer requires the use of the `goto` tool defined in playwright_with_custom_functions.py.
Make sure to include this tool in your configuration when using the Browserbase computer.
"""
def get_dimensions(self):
return self.dimensions
def __init__(
self,
width: int = 1024,
height: int = 768,
region: str = "us-west-2",
proxy: bool = False,
virtual_mouse: bool = True,
ad_blocker: bool = False,
):
"""
Initialize the Browserbase instance. Additional configuration options for features such as persistent cookies, ad blockers, file downloads and more can be found in the Browserbase API documentation: https://docs.browserbase.com/reference/api/create-a-session
Args:
width (int): The width of the browser viewport. Default is 1024.
height (int): The height of the browser viewport. Default is 768.
region (str): The region for the Browserbase session. Default is "us-west-2". Pick a region close to you for better performance. https://docs.browserbase.com/guides/multi-region
proxy (bool): Whether to use a proxy for the session. Default is False. Turn on proxies if your browsing is frequently interrupted. https://docs.browserbase.com/features/proxies
virtual_mouse (bool): Whether to enable the virtual mouse cursor. Default is True.
ad_blocker (bool): Whether to enable the built-in ad blocker. Default is False.
"""
super().__init__()
self.bb = Browserbase(api_key=os.getenv("BROWSERBASE_API_KEY"))
self.project_id = os.getenv("BROWSERBASE_PROJECT_ID")
self.session = None
self.dimensions = (width, height)
self.region = region
self.proxy = proxy
self.virtual_mouse = virtual_mouse
self.ad_blocker = ad_blocker
def _get_browser_and_page(self) -> Tuple[Browser, Page]:
"""
Create a Browserbase session and connect to it.
Returns:
Tuple[Browser, Page]: A tuple containing the connected browser and page objects.
"""
# Create a session on Browserbase with specified parameters
width, height = self.dimensions
session_params = {
"project_id": self.project_id,
"browser_settings": {
"viewport": {"width": width, "height": height},
"blockAds": self.ad_blocker,
},
"region": self.region,
"proxies": self.proxy,
}
self.session = self.bb.sessions.create(**session_params)
# Print the live session URL
print(
f"Watch and control this browser live at https://www.browserbase.com/sessions/{self.session.id}"
)
# Connect to the remote session
browser = self._playwright.chromium.connect_over_cdp(
self.session.connect_url, timeout=60000
)
context = browser.contexts[0]
# Add event listeners for page creation and closure
context.on("page", self._handle_new_page)
# Only add the init script if virtual_mouse is True
if self.virtual_mouse:
context.add_init_script(
"""
// Only run in the top frame
if (window.self === window.top) {
function initCursor() {
const CURSOR_ID = '__cursor__';
// Check if cursor element already exists
if (document.getElementById(CURSOR_ID)) return;
const cursor = document.createElement('div');
cursor.id = CURSOR_ID;
Object.assign(cursor.style, {
position: 'fixed',
top: '0px',
left: '0px',
width: '20px',
height: '20px',
backgroundImage: 'url("data:image/svg+xml;utf8,<svg xmlns=\\'http://www.w3.org/2000/svg\\' viewBox=\\'0 0 24 24\\' fill=\\'black\\' stroke=\\'white\\' stroke-width=\\'1\\' stroke-linejoin=\\'round\\' stroke-linecap=\\'round\\'><polygon points=\\'2,2 2,22 8,16 14,22 17,19 11,13 20,13\\'/></svg>")',
backgroundSize: 'cover',
pointerEvents: 'none',
zIndex: '99999',
transform: 'translate(-2px, -2px)',
});
document.body.appendChild(cursor);
document.addEventListener("mousemove", (e) => {
cursor.style.top = e.clientY + "px";
cursor.style.left = e.clientX + "px";
});
}
// Use requestAnimationFrame for early execution
requestAnimationFrame(function checkBody() {
if (document.body) {
initCursor();
} else {
requestAnimationFrame(checkBody);
}
});
}
"""
)
page = context.pages[0]
page.on("close", self._handle_page_close)
page.goto("https://bing.com")
return browser, page
def _handle_new_page(self, page: Page):
"""Handle the creation of a new page."""
print("New page created")
self._page = page
page.on("close", self._handle_page_close)
def _handle_page_close(self, page: Page):
"""Handle the closure of a page."""
print("Page closed")
if self._page == page:
if self._browser.contexts[0].pages:
self._page = self._browser.contexts[0].pages[-1]
else:
print("Warning: All pages have been closed.")
self._page = None
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Clean up resources when exiting the context manager.
Args:
exc_type: The type of the exception that caused the context to be exited.
exc_val: The exception instance that caused the context to be exited.
exc_tb: A traceback object encapsulating the call stack at the point where the exception occurred.
"""
if self._page:
self._page.close()
if self._browser:
self._browser.close()
if self._playwright:
self._playwright.stop()
if self.session:
print(
f"Session completed. View replay at https://browserbase.com/sessions/{self.session.id}"
)
def screenshot(self) -> str:
"""
Capture a screenshot of the current viewport using CDP.
Returns:
str: A base64 encoded string of the screenshot.
"""
try:
# Get CDP session from the page
cdp_session = self._page.context.new_cdp_session(self._page)
# Capture screenshot using CDP
result = cdp_session.send(
"Page.captureScreenshot", {"format": "png", "fromSurface": True}
)
return result["data"]
except PlaywrightError as error:
print(
f"CDP screenshot failed, falling back to standard screenshot: {error}"
)
return super().screenshot()
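The try/except in `screenshot` above is a small pattern worth isolating: attempt the CDP path, and on a Playwright error fall back to the base class's standard screenshot. A stubbed sketch with no real browser involved (`FakeCDPError` stands in for `playwright.sync_api.Error`):

```python
class FakeCDPError(Exception):
    """Stand-in for playwright.sync_api.Error in this sketch."""

class Base:
    def screenshot(self) -> str:
        return "base64-from-playwright"  # the standard Playwright path

class CDPComputer(Base):
    def __init__(self, cdp_works: bool):
        self.cdp_works = cdp_works
    def screenshot(self) -> str:
        try:
            if not self.cdp_works:
                raise FakeCDPError("CDP session unavailable")
            return "base64-from-cdp"  # Page.captureScreenshot result
        except FakeCDPError:
            # Same fallback shape as BrowserbaseBrowser.screenshot
            return super().screenshot()

assert CDPComputer(cdp_works=False).screenshot() == "base64-from-playwright"
```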

@@ -0,0 +1,174 @@
import subprocess
import time
import shlex
class DockerComputer:
def get_environment(self):
return "linux"
def get_dimensions(self):
return (1280, 720) # Default fallback; will be updated in __enter__.
def __init__(
self,
container_name="cua-sample-app",
image="ghcr.io/openai/openai-cua-sample-app:latest",
display=":99",
port_mapping="5900:5900",
):
self.container_name = container_name
self.image = image
self.display = display
self.port_mapping = port_mapping
def __enter__(self):
# Check if the container is running
result = subprocess.run(
["docker", "ps", "-q", "-f", f"name={self.container_name}"],
capture_output=True,
text=True,
)
if not result.stdout.strip():
raise RuntimeError(
f"Container {self.container_name} is not running. Build and run with:\n"
f"docker build -t {self.container_name} .\n"
f"docker run --rm -it --name {self.container_name} "
f"-p {self.port_mapping} -e DISPLAY={self.display} {self.container_name}"
)
# Fetch display geometry
geometry = self._exec(
f"DISPLAY={self.display} xdotool getdisplaygeometry"
).strip()
if geometry:
w, h = geometry.split()
self.dimensions = (int(w), int(h))
# print("Starting Docker container...")
# # Run the container detached, removing it automatically when it stops
# subprocess.check_call(
# [
# "docker",
# "run",
# "-d",
# "--rm",
# "--name",
# self.container_name,
# "-p",
# self.port_mapping,
# self.image,
# ]
# )
# # Give the container a moment to start
# time.sleep(3)
# print("Entering DockerComputer context")
return self
def __exit__(self, exc_type, exc_val, exc_tb):
# print("Stopping Docker container...")
# subprocess.check_call(["docker", "stop", self.container_name])
# print("Exiting DockerComputer context")
pass
def _exec(self, cmd: str) -> str:
"""
Run 'cmd' in the container.
We wrap cmd in double quotes and escape any double quotes inside it,
so spaces or quotes don't break the shell call.
"""
# Escape any existing double quotes in cmd
safe_cmd = cmd.replace('"', '\\"')
# Then wrap the entire cmd in double quotes for `sh -c`
docker_cmd = f'docker exec {self.container_name} sh -c "{safe_cmd}"'
return subprocess.check_output(docker_cmd, shell=True).decode(
"utf-8", errors="ignore"
)
def screenshot(self) -> str:
"""
Takes a screenshot with ImageMagick (import), returning base64-encoded PNG.
Requires 'import'.
"""
# cmd = (
# f"export DISPLAY={self.display} && "
# "import -window root /tmp/screenshot.png && "
# "base64 /tmp/screenshot.png"
# )
cmd = (
f"export DISPLAY={self.display} && "
"import -window root png:- | base64 -w 0"
)
return self._exec(cmd)
def click(self, x: int, y: int, button: str = "left") -> None:
button_map = {"left": 1, "middle": 2, "right": 3}
b = button_map.get(button, 1)
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y} click {b}")
def double_click(self, x: int, y: int) -> None:
self._exec(
f"DISPLAY={self.display} xdotool mousemove {x} {y} click --repeat 2 1"
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
"""
For simple vertical scrolling: xdotool click 4 (scroll up) or 5 (scroll down).
"""
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y}")
clicks = abs(scroll_y)
button = 4 if scroll_y < 0 else 5
for _ in range(clicks):
self._exec(f"DISPLAY={self.display} xdotool click {button}")
def type(self, text: str) -> None:
"""
Type the given text via xdotool, preserving spaces and quotes.
"""
# Escape single quotes in the user text: ' -> '\'\''
safe_text = text.replace("'", "'\\''")
# Then wrap everything in single quotes for xdotool
cmd = f"DISPLAY={self.display} xdotool type -- '{safe_text}'"
self._exec(cmd)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
def move(self, x: int, y: int) -> None:
self._exec(f"DISPLAY={self.display} xdotool mousemove {x} {y}")
def keypress(self, keys: list[str]) -> None:
mapping = {
"ENTER": "Return",
"LEFT": "Left",
"RIGHT": "Right",
"UP": "Up",
"DOWN": "Down",
"ESC": "Escape",
"SPACE": "space",
"BACKSPACE": "BackSpace",
"TAB": "Tab",
}
mapped_keys = [mapping.get(key, key) for key in keys]
combo = "+".join(mapped_keys)
self._exec(f"DISPLAY={self.display} xdotool key {combo}")
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
start_x = path[0]["x"]
start_y = path[0]["y"]
self._exec(
f"DISPLAY={self.display} xdotool mousemove {start_x} {start_y} mousedown 1"
)
for point in path[1:]:
self._exec(
f"DISPLAY={self.display} xdotool mousemove {point['x']} {point['y']}"
)
self._exec(f"DISPLAY={self.display} xdotool mouseup 1")
def get_current_url(self):
return None
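The `_exec` and `type` helpers above rely on two hand-rolled quoting transforms: escaping `"` so the command survives being wrapped in `sh -c "..."`, and escaping `'` so arbitrary text survives xdotool's single-quoted argument. The transforms in isolation, runnable without Docker:

```python
def escape_for_sh_c(cmd: str) -> str:
    # Mirror of _exec: escape double quotes; the caller wraps in "..."
    return cmd.replace('"', '\\"')

def escape_for_single_quotes(text: str) -> str:
    # Mirror of type(): close the quote, emit an escaped ', reopen
    return text.replace("'", "'\\''")

assert escape_for_sh_c('echo "hi"') == 'echo \\"hi\\"'
assert escape_for_single_quotes("it's") == "it'\\''s"
```

Note that the module imports `shlex` without using it; `shlex.quote` from the standard library is the more robust way to do this kind of escaping (it uses a different but equivalent quoting scheme).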

@@ -0,0 +1,165 @@
import os
from playwright.sync_api import Browser, Page
from ..shared.base_playwright import BasePlaywrightComputer
class LocalPlaywrightBrowser(BasePlaywrightComputer):
"""Launches a local Chromium instance using Playwright."""
def __init__(self, headless: bool = False):
super().__init__()
self.headless = headless
def _get_browser_and_page(self) -> tuple[Browser, Page]:
width, height = self.get_dimensions()
launch_args = [
f"--window-size={width},{height}",
"--disable-extensions",
"--disable-file-system",
]
browser = self._playwright.chromium.launch(
chromium_sandbox=False,
headless=self.headless,
args=launch_args,
env={"DISPLAY": ":0"},
)
context = browser.new_context(
user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
locale="en-US",
timezone_id="UTC"
)
# Add event listeners for page creation and closure
context.on("page", self._handle_new_page)
page = context.new_page()
page.set_viewport_size({"width": width, "height": height})
page.on("close", self._handle_page_close)
# Add logging for debugging
page.on("console", lambda msg: print(f"Browser console: {msg.text}"))
page.on("pageerror", lambda err: print(f"Page error: {err}"))
target_url = os.environ.get("TARGET_URL", "https://grafana.com/docs/")
grafana_username = os.environ.get("GRAFANA_USERNAME")
grafana_password = os.environ.get("GRAFANA_PASSWORD")
# If credentials provided, log in first
if grafana_username and grafana_password:
from urllib.parse import urlparse, urljoin
base_url = f"{urlparse(target_url).scheme}://{urlparse(target_url).netloc}"
login_url = urljoin(base_url, "/login")
print(f"Logging in to: {login_url}")
page.goto(login_url, timeout=60000)
print(f"Page loaded, current URL: {page.url}")
print(f"Page title: {page.title()}")
try:
# Wait for login form - try multiple selector strategies
print("Waiting for login form to appear...")
# Try to find by placeholder first (more reliable for older versions)
username_field = page.get_by_placeholder("email or username")
password_field = page.get_by_placeholder("password", exact=True)
username_field.wait_for(state="visible", timeout=60000)
print("Login form detected")
# Take screenshot after form is visible
screenshot_path = os.environ.get("GITHUB_WORKSPACE", ".") + "/login_page.png"
page.screenshot(path=screenshot_path)
print(f"Screenshot saved as {screenshot_path}")
# Fill credentials using placeholder selectors
print(f"Filling username (length: {len(grafana_username)})")
username_field.fill(grafana_username)
print(f"Filling password (length: {len(grafana_password)})")
password_field.fill(grafana_password)
print("Credentials filled successfully")
# Click login button by text
print("Clicking login button...")
page.get_by_role("button", name="Log in").click()
print("Login form submitted")
# Wait for login to complete
print("Waiting for post-login navigation...")
# Try to wait for multiple possible indicators of successful login
# The page might redirect to setup guide, dashboard, or other pages
try:
# Wait for either: navigation away from login OR any logged-in UI element
page.locator('body:not(:has-text("Welcome to Grafana Cloud"))').or_(
page.locator('[aria-label="Profile"]')
).or_(
page.locator('a:has-text("Home")')
).first.wait_for(state="attached", timeout=15000)
print(f"Post-login navigation detected, current URL: {page.url}")
# Verify we actually left the login page
if "/login" in page.url:
raise Exception("Still on login page after navigation")
except Exception as wait_err:
print(f"Login completion wait failed: {wait_err}")
if "/login" in page.url:
raise Exception(f"Login failed - still on login page: {page.url}")
else:
print(f"Continuing anyway - URL shows we're logged in: {page.url}")
print(f"Login successful, current URL: {page.url}")
except Exception as e:
print(f"Login failed: {e}")
print(f"Current URL at error: {page.url}")
print(f"Page title at error: {page.title()}")
# Get page content for debugging
try:
body_text = page.locator('body').text_content()
print(f"Page body text (first 500 chars): {body_text[:500] if body_text else 'No body text'}")
except Exception as content_err:
print(f"Could not get page content: {content_err}")
error_screenshot_path = os.environ.get("GITHUB_WORKSPACE", ".") + "/login_error.png"
page.screenshot(path=error_screenshot_path)
print(f"Error screenshot saved as {error_screenshot_path}")
raise
# Set up console and error logging
page.on("console", lambda msg: print(f"Console [{msg.type}]: {msg.text}"))
page.on("pageerror", lambda err: print(f"Page error: {err}"))
print(f"Navigating to: {target_url}")
page.goto(target_url, wait_until="domcontentloaded", timeout=180000)
print(f"Page loaded, URL: {page.url}")
# Wait a bit for Grafana to initialize
print("Waiting for Grafana to initialize...")
page.wait_for_timeout(10000) # Wait 10 seconds for app to settle
print(f"Final URL after waiting: {page.url}")
return browser, page
def _handle_new_page(self, page: Page):
"""Handle the creation of a new page."""
print("New page created")
self._page = page
page.on("close", self._handle_page_close)
def _handle_page_close(self, page: Page):
"""Handle the closure of a page."""
print("Page closed")
if self._page == page:
if self._browser.contexts[0].pages:
self._page = self._browser.contexts[0].pages[-1]
else:
print("Warning: All pages have been closed.")
self._page = None
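The login step above derives the login URL from whatever `TARGET_URL` points at, so a deep link into a dashboard still authenticates against the instance root. The derivation in isolation:

```python
from urllib.parse import urljoin, urlparse

def derive_login_url(target_url: str) -> str:
    # Keep scheme and host from the target, replace the path with /login
    parts = urlparse(target_url)
    base_url = f"{parts.scheme}://{parts.netloc}"
    return urljoin(base_url, "/login")

# A deep link with path and query still yields the root login page
assert derive_login_url("https://grafana.example.com/d/abc/my-dash?orgId=1") == "https://grafana.example.com/login"
```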

@@ -0,0 +1,220 @@
import os
import time
from dotenv import load_dotenv
from scrapybara import Scrapybara
from playwright.sync_api import sync_playwright, Browser, Page
from utils import BLOCKED_DOMAINS
load_dotenv()
CUA_KEY_TO_SCRAPYBARA_KEY = {
"/": "slash",
"\\": "backslash",
"arrowdown": "Down",
"arrowleft": "Left",
"arrowright": "Right",
"arrowup": "Up",
"backspace": "BackSpace",
"capslock": "Caps_Lock",
"cmd": "Meta_L",
"delete": "Delete",
"end": "End",
"enter": "Return",
"esc": "Escape",
"home": "Home",
"insert": "Insert",
"option": "Alt_L",
"pagedown": "Page_Down",
"pageup": "Page_Up",
"tab": "Tab",
"win": "Meta_L",
}
class ScrapybaraBrowser:
"""
Scrapybara provides virtual desktops and browsers in the cloud. https://scrapybara.com
You can try OpenAI CUA for free at https://computer.new or read our CUA Quickstart at https://computer.new/cua.
"""
def get_environment(self):
return "browser"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self.client = Scrapybara(api_key=os.getenv("SCRAPYBARA_API_KEY"))
self._playwright = None
self._browser: Browser | None = None
self._page: Page | None = None
def __enter__(self):
print("Starting scrapybara browser")
blocked_domains = [
domain.replace("https://", "").replace("www.", "")
for domain in BLOCKED_DOMAINS
]
self.instance = self.client.start_browser(blocked_domains=blocked_domains)
print("Scrapybara browser started ₍ᐢ•(ܫ)•ᐢ₎")
print(
f"You can view and interact with the stream at {self.instance.get_stream_url().stream_url}"
)
self._playwright = sync_playwright().start()
self._browser = self._playwright.chromium.connect_over_cdp(
self.instance.get_cdp_url().cdp_url
)
self._page = self._browser.contexts[0].pages[0]
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print("Stopping scrapybara browser")
self.instance.stop()
print("Scrapybara browser stopped ₍ᐢ-(ェ)-ᐢ₎")
def goto(self, url: str) -> None:
self._page.goto(url)
def get_current_url(self) -> str:
return self.instance.get_current_url().current_url
def screenshot(self) -> str:
return self.instance.screenshot().base_64_image
def click(self, x: int, y: int, button: str = "left") -> None:
button = "middle" if button == "wheel" else button
self.instance.computer(
action="click_mouse",
click_type="click",
button=button,
coordinates=[x, y],
num_clicks=1,
)
def double_click(self, x: int, y: int) -> None:
self.instance.computer(
action="click_mouse",
click_type="click",
button="left",
coordinates=[x, y],
num_clicks=2,
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self.instance.computer(
action="scroll",
coordinates=[x, y],
delta_x=scroll_x // 20,
delta_y=scroll_y // 20,
)
def type(self, text: str) -> None:
self.instance.computer(action="type_text", text=text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
# Scrapybara also has `self.instance.computer(action="wait", duration=ms / 1000)`
def move(self, x: int, y: int) -> None:
self.instance.computer(action="move_mouse", coordinates=[x, y])
def keypress(self, keys: list[str]) -> None:
mapped_keys = [
CUA_KEY_TO_SCRAPYBARA_KEY.get(key.lower(), key.lower()) for key in keys
]
self.instance.computer(action="press_key", keys=mapped_keys)
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
path = [[point["x"], point["y"]] for point in path]
self.instance.computer(action="drag_mouse", path=path)
class ScrapybaraUbuntu:
"""
Scrapybara provides virtual desktops and browsers in the cloud.
You can try OpenAI CUA for free at https://computer.new or read our CUA Quickstart at https://computer.new/cua.
"""
def get_environment(self):
return "linux"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self.client = Scrapybara(api_key=os.getenv("SCRAPYBARA_API_KEY"))
def __enter__(self):
print("Starting Scrapybara Ubuntu instance")
blocked_domains = [
domain.replace("https://", "").replace("www.", "")
for domain in BLOCKED_DOMAINS
]
self.instance = self.client.start_ubuntu(blocked_domains=blocked_domains)
print("Scrapybara Ubuntu instance started ₍ᐢ•(ܫ)•ᐢ₎")
print(
f"You can view and interact with the stream at {self.instance.get_stream_url().stream_url}"
)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print("Stopping Scrapybara Ubuntu instance")
self.instance.stop()
print("Scrapybara Ubuntu instance stopped ₍ᐢ-(ェ)-ᐢ₎")
def screenshot(self) -> str:
return self.instance.screenshot().base_64_image
def click(self, x: int, y: int, button: str = "left") -> None:
button = "middle" if button == "wheel" else button
self.instance.computer(
action="click_mouse",
click_type="click",
button=button,
coordinates=[x, y],
num_clicks=1,
)
def double_click(self, x: int, y: int) -> None:
self.instance.computer(
action="click_mouse",
click_type="click",
button="left",
coordinates=[x, y],
num_clicks=2,
)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self.instance.computer(
action="scroll",
coordinates=[x, y],
delta_x=scroll_x // 20,
delta_y=scroll_y // 20,
)
def type(self, text: str) -> None:
self.instance.computer(action="type_text", text=text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
# Scrapybara also has `self.instance.computer(action="wait", duration=ms / 1000)`
def move(self, x: int, y: int) -> None:
self.instance.computer(action="move_mouse", coordinates=[x, y])
def keypress(self, keys: list[str]) -> None:
mapped_keys = [
CUA_KEY_TO_SCRAPYBARA_KEY.get(key.lower(), key.lower()) for key in keys
]
self.instance.computer(action="press_key", keys=mapped_keys)
def drag(self, path: list[dict[str, int]]) -> None:
if not path:
return
path = [[point["x"], point["y"]] for point in path]
self.instance.computer(action="drag_mouse", path=path)
def get_current_url(self):
return None
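Both Scrapybara classes normalize model-issued key names through `CUA_KEY_TO_SCRAPYBARA_KEY`: lowercase first, then look up, with unmapped keys passed through lowercased. The lookup in isolation (table truncated to a few entries for the sketch):

```python
CUA_KEY_TO_SCRAPYBARA_KEY = {
    "enter": "Return",
    "esc": "Escape",
    "cmd": "Meta_L",
}

def map_keys(keys: list[str]) -> list[str]:
    # Lowercase before lookup; unknown keys fall through unchanged
    return [CUA_KEY_TO_SCRAPYBARA_KEY.get(k.lower(), k.lower()) for k in keys]

assert map_keys(["ENTER", "a"]) == ["Return", "a"]
```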

@@ -0,0 +1,154 @@
import time
import base64
from typing import List, Dict, Literal
from playwright.sync_api import sync_playwright, Browser, Page
from utils import check_blocklisted_url
# Optional: key mapping if your model uses "CUA" style keys
CUA_KEY_TO_PLAYWRIGHT_KEY = {
"/": "Divide",
"\\": "Backslash",
"alt": "Alt",
"arrowdown": "ArrowDown",
"arrowleft": "ArrowLeft",
"arrowright": "ArrowRight",
"arrowup": "ArrowUp",
"backspace": "Backspace",
"capslock": "CapsLock",
"cmd": "Meta",
"ctrl": "Control",
"delete": "Delete",
"end": "End",
"enter": "Enter",
"esc": "Escape",
"home": "Home",
"insert": "Insert",
"option": "Alt",
"pagedown": "PageDown",
"pageup": "PageUp",
"shift": "Shift",
"space": " ",
"super": "Meta",
"tab": "Tab",
"win": "Meta",
}
class BasePlaywrightComputer:
"""
Abstract base for Playwright-based computers:
- Subclasses override `_get_browser_and_page()` to do local or remote connection,
returning (Browser, Page).
- This base class handles context creation (`__enter__`/`__exit__`),
plus standard "Computer" actions like click, scroll, etc.
- We also have extra browser actions: `goto(url)` and `back()`.
"""
def get_environment(self):
return "browser"
def get_dimensions(self):
return (1024, 768)
def __init__(self):
self._playwright = None
self._browser: Browser | None = None
self._page: Page | None = None
def __enter__(self):
# Start Playwright and call the subclass hook for getting browser/page
self._playwright = sync_playwright().start()
self._browser, self._page = self._get_browser_and_page()
# Set up network interception to flag URLs matching domains in BLOCKED_DOMAINS
def handle_route(route, request):
url = request.url
if check_blocklisted_url(url):
print(f"Flagging blocked domain: {url}")
route.abort()
else:
route.continue_()
self._page.route("**/*", handle_route)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if self._browser:
self._browser.close()
if self._playwright:
self._playwright.stop()
def get_current_url(self) -> str:
return self._page.url
# --- Common "Computer" actions ---
def screenshot(self) -> str:
"""Capture only the viewport (not full_page)."""
png_bytes = self._page.screenshot(full_page=False)
return base64.b64encode(png_bytes).decode("utf-8")
def click(self, x: int, y: int, button: str = "left") -> None:
match button:
case "back":
self.back()
case "forward":
self.forward()
case "wheel":
self._page.mouse.wheel(x, y)
case _:
button_mapping = {"left": "left", "right": "right"}
button_type = button_mapping.get(button, "left")
self._page.mouse.click(x, y, button=button_type)
def double_click(self, x: int, y: int) -> None:
self._page.mouse.dblclick(x, y)
def scroll(self, x: int, y: int, scroll_x: int, scroll_y: int) -> None:
self._page.mouse.move(x, y)
self._page.evaluate(f"window.scrollBy({scroll_x}, {scroll_y})")
def type(self, text: str) -> None:
self._page.keyboard.type(text)
def wait(self, ms: int = 1000) -> None:
time.sleep(ms / 1000)
def move(self, x: int, y: int) -> None:
self._page.mouse.move(x, y)
def keypress(self, keys: List[str]) -> None:
mapped_keys = [CUA_KEY_TO_PLAYWRIGHT_KEY.get(key.lower(), key) for key in keys]
for key in mapped_keys:
self._page.keyboard.down(key)
for key in reversed(mapped_keys):
self._page.keyboard.up(key)
def drag(self, path: List[Dict[str, int]]) -> None:
if not path:
return
self._page.mouse.move(path[0]["x"], path[0]["y"])
self._page.mouse.down()
for point in path[1:]:
self._page.mouse.move(point["x"], point["y"])
self._page.mouse.up()
# --- Extra browser-oriented actions ---
    def goto(self, url: str) -> None:
        try:
            self._page.goto(url)
        except Exception as e:
            print(f"Error navigating to {url}: {e}")
    def back(self) -> None:
        self._page.go_back()
    def forward(self) -> None:
        self._page.go_forward()
# --- Subclass hook ---
def _get_browser_and_page(self) -> tuple[Browser, Page]:
"""Subclasses must implement, returning (Browser, Page)."""
raise NotImplementedError


@@ -0,0 +1,24 @@
openai>=1.56.0
annotated-types==0.7.0
anyio==4.8.0
browserbase==1.2.0
certifi==2025.1.31
charset-normalizer==3.4.1
distro==1.9.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
jiter==0.8.2
pillow==11.1.0
playwright==1.50.0
pydantic==2.10.6
pydantic_core==2.27.2
pyee==12.1.1
python-dotenv==1.0.1
requests==2.32.3
scrapybara>=2.3.6
sniffio==1.3.1
typing_extensions==4.12.2
urllib3==2.3.0


@@ -0,0 +1,156 @@
import os
import base64
from computers import Computer
from computers.default import LocalPlaywrightBrowser
from utils import create_response, check_blocklisted_url
def load_prompt():
"""Load prompt from prompt file (defaults to prompt.txt, can be overridden with PROMPT_FILE env var)."""
prompt_file = os.environ.get("PROMPT_FILE")
if not prompt_file:
prompt_file = os.path.join(os.path.dirname(__file__), "prompt.txt")
if not os.path.exists(prompt_file):
raise FileNotFoundError(f"Prompt file not found: {prompt_file}")
with open(prompt_file, "r", encoding="utf-8") as f:
return f.read().strip()
def acknowledge_safety_check_callback(message: str) -> bool:
# Auto-approve in CI/non-interactive environments
print(f"Safety Check Warning: {message} - Auto-approving in CI mode")
return True
def handle_item(item, computer: Computer):
"""Handle each item; may cause a computer action + screenshot."""
if item["type"] == "message": # print messages
print(item["content"][0]["text"])
if item["type"] == "computer_call": # perform computer actions
action = item["action"]
action_type = action["type"]
action_args = {k: v for k, v in action.items() if k != "type"}
print(f"{action_type}({action_args})")
# give our computer environment action to perform
getattr(computer, action_type)(**action_args)
screenshot_base64 = computer.screenshot()
pending_checks = item.get("pending_safety_checks", [])
for check in pending_checks:
if not acknowledge_safety_check_callback(check["message"]):
raise ValueError(f"Safety check failed: {check['message']}")
# return value informs model of the latest screenshot
call_output = {
"type": "computer_call_output",
"call_id": item["call_id"],
"acknowledged_safety_checks": pending_checks,
"output": {
"type": "input_image",
"image_url": f"data:image/png;base64,{screenshot_base64}",
},
}
# additional URL safety checks for browser environments
if computer.get_environment() == "browser":
current_url = computer.get_current_url()
call_output["output"]["current_url"] = current_url
check_blocklisted_url(current_url)
return [call_output]
return []
def main():
"""Run the CUA (Computer Use Assistant) loop, using Local Playwright."""
output_text_path = os.environ.get("OUTPUT_TEXT_PATH", "output.txt")
screenshot_path = os.environ.get("SCREENSHOT_PATH", "output.png")
all_messages = [] # Collect all model messages
last_screenshot_base64 = None
with LocalPlaywrightBrowser(headless=True) as computer:
dimensions = computer.get_dimensions()
tools = [
{
"type": "computer-preview",
"display_width": dimensions[0],
"display_height": dimensions[1],
"environment": computer.get_environment(),
}
]
items = []
# Load the task prompt from prompt.txt
user_input = load_prompt()
items.append({"role": "user", "content": user_input})
while True: # keep looping until we get a final response
response = create_response(
model="computer-use-preview",
input=items,
tools=tools,
truncation="auto",
)
if "output" not in response:
print(response)
raise ValueError("No output from model")
items += response["output"]
for item in response["output"]:
# Collect all message output from the model
if item.get("type") == "message":
content = item.get("content", [])
for content_item in content:
if isinstance(content_item, dict) and "text" in content_item:
text = content_item["text"]
all_messages.append(text)
result = handle_item(item, computer)
items += result
# Capture last screenshot from computer_call outputs
if result and len(result) > 0:
for output_item in result:
if output_item.get("type") == "computer_call_output":
output = output_item.get("output", {})
if output.get("type") == "input_image":
image_url = output.get("image_url", "")
if image_url.startswith("data:image/png;base64,"):
last_screenshot_base64 = image_url.split(",", 1)[1]
if items[-1].get("role") == "assistant":
break
# Take one final screenshot before closing
if not last_screenshot_base64:
try:
last_screenshot_base64 = computer.screenshot() # Returns base64 string directly
            except Exception:
                pass
# Save the last screenshot to file
if last_screenshot_base64:
os.makedirs(os.path.dirname(screenshot_path) or ".", exist_ok=True)
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(last_screenshot_base64))
# Save all model output messages to file
os.makedirs(os.path.dirname(output_text_path) or ".", exist_ok=True)
with open(output_text_path, "w") as f:
if all_messages:
# Join all messages with double newlines for readability
f.write("\n\n".join(all_messages))
else:
# Fallback: save error message if no messages were captured
f.write("No model output messages were captured.")
if __name__ == "__main__":
main()
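For reference, a minimal standalone sketch of the `computer_call` → `computer_call_output` exchange that `handle_item` implements above. The `call_id`, coordinates, and PNG bytes here are hypothetical placeholders; the dict shapes mirror the ones built in the script.

```python
import base64

# A fake model item of type "computer_call" (placeholder values throughout)
fake_png = b"\x89PNG\r\n\x1a\n"  # placeholder bytes, not a real screenshot
screenshot_base64 = base64.b64encode(fake_png).decode("utf-8")

item = {
    "type": "computer_call",
    "call_id": "call_123",  # hypothetical id
    "action": {"type": "click", "x": 100, "y": 200, "button": "left"},
    "pending_safety_checks": [],
}

# Split the action into a method name and kwargs, as the loop does with
# getattr(computer, action_type)(**action_args)
action_args = {k: v for k, v in item["action"].items() if k != "type"}

# Reply shape: a computer_call_output carrying the fresh screenshot
call_output = {
    "type": "computer_call_output",
    "call_id": item["call_id"],
    "acknowledged_safety_checks": item["pending_safety_checks"],
    "output": {
        "type": "input_image",
        "image_url": f"data:image/png;base64,{screenshot_base64}",
    },
}

assert action_args == {"x": 100, "y": 200, "button": "left"}
assert call_output["output"]["image_url"].startswith("data:image/png;base64,")
```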


@@ -0,0 +1,37 @@
import os
import requests
from urllib.parse import urlparse
# Example blocked domains - customize for your use case
BLOCKED_DOMAINS = [
"example-malicious-site.com",
]
def create_response(**kwargs):
url = "https://api.openai.com/v1/responses"
headers = {
"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}",
"Content-Type": "application/json"
}
openai_org = os.getenv("OPENAI_ORG")
if openai_org:
headers["Openai-Organization"] = openai_org
response = requests.post(url, headers=headers, json=kwargs)
if response.status_code != 200:
print(f"Error: {response.status_code} {response.text}")
return response.json()
def check_blocklisted_url(url: str) -> None:
"""Raise ValueError if the given URL (including subdomains) is in the blocklist."""
hostname = urlparse(url).hostname or ""
if any(
hostname == blocked or hostname.endswith(f".{blocked}")
for blocked in BLOCKED_DOMAINS
):
raise ValueError(f"Blocked URL: {url}")
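A quick standalone sketch of how the blocklist check above behaves: exact hostnames and subdomains are blocked, but look-alike domains without a dot boundary are not.

```python
from urllib.parse import urlparse

# Same matching logic as check_blocklisted_url, returning a bool for illustration
BLOCKED_DOMAINS = ["example-malicious-site.com"]

def is_blocked(url: str) -> bool:
    hostname = urlparse(url).hostname or ""
    return any(
        hostname == blocked or hostname.endswith(f".{blocked}")
        for blocked in BLOCKED_DOMAINS
    )

assert is_blocked("https://example-malicious-site.com/login")
assert is_blocked("http://cdn.example-malicious-site.com")
# The suffix check requires a dot boundary, so this is NOT blocked:
assert not is_blocked("https://notexample-malicious-site.com")
```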

.github/scripts/README-metrics.md

@@ -0,0 +1,189 @@
# BabyBot Metrics Export
Export and analyze BabyBot usability review metrics from GitHub.
## Quick Start
```bash
# Export to both CSV and JSON
python .github/scripts/export-babybot-metrics.py
# Export only CSV
python .github/scripts/export-babybot-metrics.py --format csv --output my-metrics
# Export only JSON
python .github/scripts/export-babybot-metrics.py --format json
```
## Output Files
### CSV Format (`babybot-metrics.csv`)
Detailed row-per-comment data, suitable for Excel, Google Sheets, or data analysis tools.
| Column | Description |
|--------|-------------|
| comment_id | GitHub comment ID |
| pr_number | PR number |
| comment_type | `resolvable` or `general` |
| severity | Critical, Major, or Minor |
| confidence | Low, Medium, or High |
| file | File path (for resolvable comments) |
| created_at | Timestamp |
| url | Link to comment |
| total_reactions | Number of reactions (👍, ❤️, 🎉, etc.) |
| has_reply | Boolean - whether comment has replies |
| engagement_score | Calculated engagement score |
### JSON Format (`babybot-metrics.json`)
Aggregated metrics with summaries by PR, severity, and confidence.
```json
{
"total_comments": 42,
"resolvable_comments": 38,
"general_comments": 4,
"by_severity": {
"Critical": 5,
"Major": 20,
"Minor": 17
},
"by_confidence": {
"High": 10,
"Medium": 25,
"Low": 7
},
"engagement": {
"comments_with_reactions": 15,
"comments_with_replies": 8,
"total_reactions": 45,
"avg_reactions_per_comment": 1.07,
"engagement_rate": 35.7
},
"by_pr": {
"114646": {
"count": 3,
"severities": {"Major": 2, "Minor": 1},
"engaged": 2
}
}
}
```
## Dashboard Integration Options
### 1. Google Sheets Dashboard
```bash
# Export CSV and upload to Google Sheets
python .github/scripts/export-babybot-metrics.py --format csv
# Upload babybot-metrics.csv to Google Sheets
# Create pivot tables and charts
```
### 2. Grafana Dashboard
```bash
# Export JSON and serve via HTTP
python .github/scripts/export-babybot-metrics.py --format json
# Use JSON API data source in Grafana
```
### 3. Automated Weekly Reports (GitHub Actions)
See `.github/workflows/babybot-weekly-report.yml` (create this workflow)
### 4. Tableau/Power BI
Import the CSV file directly into your BI tool.
## Metrics Tracked
- **Total comments posted** by BabyBot
- **Resolvable vs general comments** (indicates attachment success rate)
- **Issues by severity** (Critical/Major/Minor breakdown)
- **Issues by confidence** (High/Medium/Low)
- **PRs reviewed** (count and distribution)
- **Comments per PR** (average and trends)
- **Engagement metrics:**
- Comments with reactions (👍, ❤️, 🎉, etc.)
- Comments with replies (developer responses)
- Total reactions count
- Average reactions per comment
- Engagement rate % (how many comments get any response)
## Tracking Comment Engagement (Proxy for "Resolved")
Since GitHub's REST API doesn't expose review-thread "resolved" status, we track **engagement** as a proxy:
### Reactions as Resolution Indicators
Establish a reaction convention with your team:
- 👍 (`+1`) = Acknowledged/Understood
- 🎉 (`hooray`) = Fixed/Resolved
- 👀 (`eyes`) = Looking into it
- ❤️ (`heart`) = Appreciated/Helpful
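This convention can be applied programmatically. A sketch mapping the GitHub REST `reactions` object on a comment to the labels above; the reaction keys (`+1`, `hooray`, `eyes`, `heart`) are real API fields, while the label names are just our convention.

```python
# Map GitHub reaction fields to our team's resolution convention
CONVENTION = {
    "+1": "acknowledged",
    "hooray": "resolved",
    "eyes": "in-progress",
    "heart": "helpful",
}

def classify(reactions: dict) -> list[str]:
    """Return the convention labels present on a comment's reactions object."""
    return [label for key, label in CONVENTION.items() if reactions.get(key, 0) > 0]

sample = {"total_count": 3, "+1": 2, "hooray": 1, "eyes": 0, "heart": 0}
assert classify(sample) == ["acknowledged", "resolved"]
```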
### Query Engagement
```bash
# Get all BabyBot comments with reactions
gh api repos/grafana/grafana/pulls/comments --paginate \
--jq '.[] | select(.body | contains("BabyBot 🍼")) | {id: .id, reactions: .reactions, pr: .pull_request_url}'
# Count comments with specific reactions (e.g., "resolved" markers)
gh api repos/grafana/grafana/pulls/comments --paginate \
--jq '[.[] | select(.body | contains("BabyBot 🍼")) | .reactions.hooray] | add'
```
### Export shows:
- `total_reactions`: All reactions on the comment
- `has_reply`: Whether developers responded with a comment
- `engagement_score`: Weighted score (reactions + reply bonus)
High engagement score = comment was noticed and actioned! 📊
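The scoring itself is simple: total reactions plus a flat bonus of 5 when the comment drew a reply. A sketch mirroring the export script's formula:

```python
def engagement_score(total_reactions: int, has_reply: bool) -> int:
    # Reactions count directly; a reply is weighted as an extra 5 points
    return total_reactions + (5 if has_reply else 0)

assert engagement_score(0, False) == 0   # ignored comment
assert engagement_score(2, False) == 2   # reactions only
assert engagement_score(2, True) == 7    # reactions + reply bonus
```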
## Scheduling Automatic Exports
Add to `.github/workflows/babybot-weekly-report.yml`:
```yaml
name: BabyBot Weekly Metrics
on:
schedule:
- cron: '0 9 * * MON' # Every Monday at 9am
workflow_dispatch: # Manual trigger
jobs:
export-metrics:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Export metrics
run: python .github/scripts/export-babybot-metrics.py
- name: Upload to artifact
uses: actions/upload-artifact@v4
with:
name: babybot-metrics-${{ github.run_number }}
path: babybot-metrics.*
- name: Post to Slack
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
run: |
# Parse JSON and send summary to Slack
SUMMARY=$(cat babybot-metrics.json | jq -r '"Total Comments: \(.total_comments), Critical: \(.by_severity.Critical // 0), Major: \(.by_severity.Major // 0)"')
curl -X POST $SLACK_WEBHOOK_URL \
-H 'Content-type: application/json' \
-d "{\"text\": \"📊 Weekly BabyBot Metrics: $SUMMARY\"}"
```
## Example Queries
```bash
# Count by PR
jq '.by_pr | to_entries | map({pr: .key, count: .value.count})' babybot-metrics.json
# Average comments per PR
jq '[.by_pr[].count] | add / length' babybot-metrics.json
# Critical issues percentage
jq '(.by_severity.Critical / .total_comments * 100)' babybot-metrics.json
```
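The same queries can be run in Python against the aggregated JSON export. A sketch using the structure shown in the sample above (the inline data here is illustrative, not real export output):

```python
import json

# Illustrative subset of babybot-metrics.json
metrics = json.loads("""
{
  "total_comments": 42,
  "by_severity": {"Critical": 5, "Major": 20, "Minor": 17},
  "by_pr": {"114646": {"count": 3}, "114900": {"count": 1}}
}
""")

# Average comments per PR (jq: [.by_pr[].count] | add / length)
counts = [pr["count"] for pr in metrics["by_pr"].values()]
avg_per_pr = sum(counts) / len(counts)

# Critical issues percentage (jq: .by_severity.Critical / .total_comments * 100)
critical_pct = metrics["by_severity"]["Critical"] / metrics["total_comments"] * 100

assert avg_per_pr == 2.0
assert round(critical_pct, 1) == 11.9
```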

.github/scripts/export-babybot-metrics.py

@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Export BabyBot usability review metrics from GitHub
Usage: python export-babybot-metrics.py [--format csv|json] [--output filename]
"""
import subprocess
import json
import csv
import argparse
from datetime import datetime
from collections import defaultdict
def get_prs_with_usability_review_label():
"""Get all PRs with the usability-review label"""
cmd = [
'gh', 'api', 'repos/grafana/grafana/issues',
'--paginate',
'-f', 'state=all',
'-f', 'labels=usability-review',
'--jq', '.[] | .number'
]
result = subprocess.run(cmd, capture_output=True, text=True)
pr_numbers = []
    for line in result.stdout.strip().split('\n'):
        if line:
            pr_numbers.append(line.strip())
    return pr_numbers
def get_review_comments(pr_numbers):
"""Get all BabyBot review comments (resolvable ones) from specific PRs"""
comments = []
for pr_number in pr_numbers:
cmd = [
'gh', 'api', f'repos/grafana/grafana/pulls/{pr_number}/comments',
'--jq', '''
.[] |
select(.body | contains("BabyBot 🍼")) |
{
id: .id,
pr_number: (.pull_request_url | split("/") | .[-1]),
file: .path,
line: .line,
created_at: .created_at,
updated_at: .updated_at,
body: .body,
html_url: .html_url,
reactions: .reactions,
in_reply_to_id: .in_reply_to_id
}
'''
]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.strip().split('\n'):
if line:
try:
comments.append(json.loads(line))
except json.JSONDecodeError:
continue
return comments
def get_general_comments(pr_numbers):
"""Get BabyBot general comments (fallback ones) from specific PRs"""
comments = []
for pr_number in pr_numbers:
cmd = [
'gh', 'api', f'repos/grafana/grafana/issues/{pr_number}/comments',
'--jq', '''
.[] |
select(.body | contains("BabyBot 🍼")) |
{
id: .id,
pr_number: (.html_url | split("/") | .[-3]),
created_at: .created_at,
updated_at: .updated_at,
body: .body,
html_url: .html_url,
reactions: .reactions
}
'''
]
result = subprocess.run(cmd, capture_output=True, text=True)
for line in result.stdout.strip().split('\n'):
if line:
try:
comments.append(json.loads(line))
except json.JSONDecodeError:
continue
return comments
def extract_severity(body):
"""Extract severity from comment body"""
if '‼️ Critical' in body:
return 'Critical'
elif '⚠️ Major' in body:
return 'Major'
elif '🟢 Minor' in body:
return 'Minor'
return 'Unknown'
def extract_confidence(body):
"""Extract confidence level from comment body"""
import re
match = re.search(r'\*\*Confidence:\*\*\s*(Low|Medium|High)', body)
return match.group(1) if match else 'Unknown'
def get_replies_to_comment(pr_number, comment_id):
"""Get all replies to a specific comment"""
cmd = [
'gh', 'api', f'repos/grafana/grafana/pulls/{pr_number}/comments',
'--jq', f'.[] | select(.in_reply_to_id == {comment_id})'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True)
replies = []
for line in result.stdout.strip().split('\n'):
if line:
try:
replies.append(json.loads(line))
except json.JSONDecodeError:
continue
return replies
    except Exception:
        return []
def calculate_engagement_score(comment):
"""Calculate engagement score based on reactions and replies"""
reactions = comment.get('reactions', {})
# Count all reaction types
total_reactions = 0
if isinstance(reactions, dict):
total_reactions = reactions.get('total_count', 0)
# Weight certain reactions more heavily
engagement_reactions = (
reactions.get('+1', 0) + # thumbs up
reactions.get('hooray', 0) + # party
reactions.get('heart', 0) # heart
)
    # in_reply_to_id is set when this comment is itself a reply to another;
    # used here as a lightweight engagement signal
    has_reply = comment.get('in_reply_to_id') is not None
# Simple engagement score: reactions + bonus for replies
score = total_reactions + (5 if has_reply else 0)
return {
'total_reactions': total_reactions,
'engagement_reactions': engagement_reactions,
'has_reply': has_reply,
'score': score
}
def aggregate_metrics(review_comments, general_comments):
"""Aggregate metrics by PR and severity"""
metrics = {
'total_comments': len(review_comments) + len(general_comments),
'resolvable_comments': len(review_comments),
'general_comments': len(general_comments),
'by_pr': defaultdict(lambda: {'count': 0, 'severities': defaultdict(int), 'engaged': 0}),
'by_severity': defaultdict(int),
'by_confidence': defaultdict(int),
'engagement': {
'comments_with_reactions': 0,
'comments_with_replies': 0,
'total_reactions': 0,
'avg_reactions_per_comment': 0,
'engagement_rate': 0
},
'export_date': datetime.now().isoformat()
}
all_comments = review_comments + general_comments
total_reactions = 0
comments_with_reactions = 0
comments_with_replies = 0
for comment in all_comments:
pr_num = comment['pr_number']
severity = extract_severity(comment['body'])
confidence = extract_confidence(comment['body'])
# Calculate engagement
engagement = calculate_engagement_score(comment)
metrics['by_pr'][pr_num]['count'] += 1
metrics['by_pr'][pr_num]['severities'][severity] += 1
metrics['by_severity'][severity] += 1
metrics['by_confidence'][confidence] += 1
# Track engagement
if engagement['total_reactions'] > 0:
comments_with_reactions += 1
total_reactions += engagement['total_reactions']
metrics['by_pr'][pr_num]['engaged'] += 1
if engagement['has_reply']:
comments_with_replies += 1
# Calculate engagement metrics
total = len(all_comments)
metrics['engagement']['comments_with_reactions'] = comments_with_reactions
metrics['engagement']['comments_with_replies'] = comments_with_replies
metrics['engagement']['total_reactions'] = total_reactions
metrics['engagement']['avg_reactions_per_comment'] = round(total_reactions / total, 2) if total > 0 else 0
metrics['engagement']['engagement_rate'] = round((comments_with_reactions / total) * 100, 1) if total > 0 else 0
return metrics
def export_to_csv(metrics, review_comments, general_comments, filename):
"""Export detailed metrics to CSV"""
all_comments = review_comments + general_comments
with open(filename, 'w', newline='') as f:
writer = csv.DictWriter(f, fieldnames=[
'comment_id', 'pr_number', 'comment_type', 'severity',
'confidence', 'file', 'created_at', 'url', 'total_reactions',
'has_reply', 'engagement_score'
])
writer.writeheader()
for comment in all_comments:
engagement = calculate_engagement_score(comment)
writer.writerow({
'comment_id': comment['id'],
'pr_number': comment['pr_number'],
'comment_type': 'resolvable' if 'file' in comment else 'general',
'severity': extract_severity(comment['body']),
'confidence': extract_confidence(comment['body']),
'file': comment.get('file', 'N/A'),
'created_at': comment['created_at'],
'url': comment['html_url'],
'total_reactions': engagement['total_reactions'],
'has_reply': engagement['has_reply'],
'engagement_score': engagement['score']
})
    print(f"✅ Exported detailed metrics to {filename}")
def export_to_json(metrics, filename):
"""Export aggregated metrics to JSON"""
with open(filename, 'w') as f:
json.dump(metrics, f, indent=2)
    print(f"✅ Exported aggregated metrics to {filename}")
def print_summary(metrics):
"""Print summary to console"""
print("\n📊 BabyBot Usability Review Metrics")
print("=" * 50)
print(f"Total Comments: {metrics['total_comments']}")
print(f" - Resolvable (on files): {metrics['resolvable_comments']}")
print(f" - General: {metrics['general_comments']}")
print(f"\nBy Severity:")
for severity, count in metrics['by_severity'].items():
print(f" - {severity}: {count}")
print(f"\nBy Confidence:")
for confidence, count in metrics['by_confidence'].items():
print(f" - {confidence}: {count}")
print(f"\nEngagement:")
eng = metrics['engagement']
print(f" - Comments with reactions: {eng['comments_with_reactions']}")
print(f" - Comments with replies: {eng['comments_with_replies']}")
print(f" - Total reactions: {eng['total_reactions']}")
print(f" - Avg reactions per comment: {eng['avg_reactions_per_comment']}")
print(f" - Engagement rate: {eng['engagement_rate']}%")
print(f"\nPRs Reviewed: {len(metrics['by_pr'])}")
print("=" * 50)
def main():
parser = argparse.ArgumentParser(description='Export BabyBot metrics')
parser.add_argument('--format', choices=['csv', 'json', 'both'], default='both',
help='Export format (default: both)')
parser.add_argument('--output', default='babybot-metrics',
help='Output filename (without extension)')
args = parser.parse_args()
print("🔍 Fetching PRs with 'usability-review' label...")
pr_numbers = get_prs_with_usability_review_label()
print(f"Found {len(pr_numbers)} PRs with usability-review label")
if not pr_numbers:
print("No PRs found with usability-review label. Exiting.")
return
print("\n🔍 Fetching BabyBot comments from those PRs...")
review_comments = get_review_comments(pr_numbers)
general_comments = get_general_comments(pr_numbers)
print(f"Found {len(review_comments)} review comments and {len(general_comments)} general comments")
print("\n📈 Aggregating metrics...")
metrics = aggregate_metrics(review_comments, general_comments)
print_summary(metrics)
if args.format in ['csv', 'both']:
export_to_csv(metrics, review_comments, general_comments, f"{args.output}.csv")
if args.format in ['json', 'both']:
export_to_json(metrics, f"{args.output}.json")
print("\n✨ Done!")
if __name__ == '__main__':
main()


@@ -0,0 +1,127 @@
name: BabyBot Weekly Metrics Report
on:
schedule:
- cron: '0 9 * * MON' # Every Monday at 9am UTC
workflow_dispatch: # Allow manual trigger
permissions:
contents: read
id-token: write
jobs:
export-metrics:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Export BabyBot metrics
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python .github/scripts/export-babybot-metrics.py --format both --output babybot-metrics-$(date +%Y-%m-%d)
- name: Upload metrics artifacts
uses: actions/upload-artifact@v4
with:
name: babybot-metrics-${{ github.run_number }}
path: |
babybot-metrics-*.csv
babybot-metrics-*.json
retention-days: 90
- name: Generate summary
id: summary
run: |
METRICS_FILE=$(ls babybot-metrics-*.json | head -1)
TOTAL=$(jq -r '.total_comments' $METRICS_FILE)
RESOLVABLE=$(jq -r '.resolvable_comments' $METRICS_FILE)
CRITICAL=$(jq -r '.by_severity.Critical // 0' $METRICS_FILE)
MAJOR=$(jq -r '.by_severity.Major // 0' $METRICS_FILE)
MINOR=$(jq -r '.by_severity.Minor // 0' $METRICS_FILE)
PRS=$(jq -r '.by_pr | length' $METRICS_FILE)
echo "total=$TOTAL" >> $GITHUB_OUTPUT
echo "resolvable=$RESOLVABLE" >> $GITHUB_OUTPUT
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "major=$MAJOR" >> $GITHUB_OUTPUT
echo "minor=$MINOR" >> $GITHUB_OUTPUT
echo "prs=$PRS" >> $GITHUB_OUTPUT
- name: Post summary to Slack
        if: ${{ secrets.SLACK_WEBHOOK_URL != '' }}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
TOTAL: ${{ steps.summary.outputs.total }}
RESOLVABLE: ${{ steps.summary.outputs.resolvable }}
CRITICAL: ${{ steps.summary.outputs.critical }}
MAJOR: ${{ steps.summary.outputs.major }}
MINOR: ${{ steps.summary.outputs.minor }}
PRS: ${{ steps.summary.outputs.prs }}
run: |
curl -X POST $SLACK_WEBHOOK_URL \
-H 'Content-type: application/json' \
-d "{
\"channel\": \"#hackathon15-agentic-usability-review\",
\"username\": \"BabyBot Metrics\",
\"icon_emoji\": \":bar_chart:\",
\"blocks\": [
{
\"type\": \"header\",
\"text\": {
\"type\": \"plain_text\",
\"text\": \"📊 BabyBot Weekly Metrics Report\"
}
},
{
\"type\": \"section\",
\"text\": {
\"type\": \"mrkdwn\",
\"text\": \"*Total Comments:* $TOTAL\\n*Resolvable:* $RESOLVABLE\\n*PRs Reviewed:* $PRS\"
}
},
{
\"type\": \"section\",
\"text\": {
\"type\": \"mrkdwn\",
\"text\": \"*By Severity*\\n• ‼️ Critical: $CRITICAL\\n• ⚠️ Major: $MAJOR\\n• 🟢 Minor: $MINOR\"
}
},
{
\"type\": \"actions\",
\"elements\": [
{
\"type\": \"button\",
\"text\": {
\"type\": \"plain_text\",
\"text\": \"Download Full Report\"
},
\"url\": \"https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\"
}
]
}
]
}"
- name: Create summary comment (optional - for visibility)
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "## 📊 BabyBot Metrics Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "**Total Comments:** ${{ steps.summary.outputs.total }}" >> $GITHUB_STEP_SUMMARY
echo "**Resolvable Comments:** ${{ steps.summary.outputs.resolvable }}" >> $GITHUB_STEP_SUMMARY
echo "**PRs Reviewed:** ${{ steps.summary.outputs.prs }}" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### By Severity" >> $GITHUB_STEP_SUMMARY
echo "- ‼️ Critical: ${{ steps.summary.outputs.critical }}" >> $GITHUB_STEP_SUMMARY
echo "- ⚠️ Major: ${{ steps.summary.outputs.major }}" >> $GITHUB_STEP_SUMMARY
echo "- 🟢 Minor: ${{ steps.summary.outputs.minor }}" >> $GITHUB_STEP_SUMMARY

.github/workflows/usability-review.yml

@@ -0,0 +1,613 @@
name: Usability Review Agent
on:
issue_comment:
types: [created]
pull_request:
types: [labeled]
permissions:
contents: read
pull-requests: write
issues: write
id-token: write
env:
OUTPUT_TEXT_PATH: usability-review.txt
SCREENSHOT_PATH: usability-screenshot.png
jobs:
usability-review:
# Run when either: (1) someone comments '/test-me-baby-one-more-time' on a PR, or (2) 'usability-review' label is added
if: ${{ (github.event.issue.pull_request && startsWith(github.event.comment.body, '/test-me-baby-one-more-time')) || github.event.label.name == 'usability-review' }}
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: React to comment
if: github.event_name == 'issue_comment'
uses: actions/github-script@v7
with:
script: |
github.rest.reactions.createForIssueComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: context.payload.comment.id,
content: 'rocket'
});
- name: Post acknowledgment
if: github.event_name == 'issue_comment'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.issue.number }}
run: |
gh pr comment $PR_NUMBER --body "🤖 Starting usability review... This may take a few minutes."
- name: Set PR number
id: pr-number
run: |
if [ "${{ github.event_name }}" = "issue_comment" ]; then
echo "number=${{ github.event.issue.number }}" >> $GITHUB_OUTPUT
else
echo "number=${{ github.event.pull_request.number }}" >> $GITHUB_OUTPUT
fi
- name: Extract target URL from ephemeral instance comment
id: extract-url
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
run: |
# Extract ephemeral instance URL from PR comments
# Looks for the most recent comment containing: https://ephemeral{alphanumeric}.grafana-dev.net
#
# TODO: Future improvement - auto-trigger deployment if no URL found:
# 1. Comment /deploy-to-hg on the PR
# 2. Wait ~20 minutes for build to complete
# 3. Retry extracting URL
# 4. Continue with usability review
# Get all PR comments, newest first
COMMENTS=$(gh pr view $PR_NUMBER --json comments --jq '.comments | reverse | .[].body')
# Find the most recent comment with ephemeral instance URL (alphanumeric subdomain)
TARGET_URL=$(echo "$COMMENTS" | grep -oE 'https://ephemeral[a-zA-Z0-9]+\.grafana-dev\.net' | head -1)
if [ -z "$TARGET_URL" ]; then
echo "::error::No ephemeral instance URL found in PR comments"
echo "::notice::Please deploy an ephemeral instance first by commenting /deploy-to-hg"
exit 1
fi
echo "Found target URL: $TARGET_URL"
echo "TARGET_URL=$TARGET_URL" >> $GITHUB_ENV
- name: "Get vault secrets (dev)"
id: vault-secrets-dev
uses: grafana/shared-workflows/actions/get-vault-secrets@get-vault-secrets/v1.3.0
with:
vault_instance: "dev"
repo_secrets: |
GCOM_HOST=usability-review-agent:GCOM_HOST
GCOM_TOKEN=usability-review-agent:GCOM_TOKEN
OPENAI_API_KEY=usability-review-agent:OPENAI_API_KEY
GRAFANA_USERNAME=usability-review-agent:GRAFANA_USERNAME
GRAFANA_PASSWORD=usability-review-agent:GRAFANA_PASSWORD
SLACK_WEBHOOK_URL=usability-review-agent:SLACK_WEBHOOK_URL
- name: "Get vault secrets (ops) - GitHub App credentials"
id: vault-secrets-ops
uses: grafana/shared-workflows/actions/get-vault-secrets@main
with:
repo_secrets: |
APP_ID=ephemeral-instances-bot:app-id
APP_PEM=ephemeral-instances-bot:app-private-key
- name: "Generate GitHub App token for private repo access"
id: generate_token
uses: tibdex/github-app-token@b62528385c34dbc9f38e5f4225ac829252d1ea92
with:
app_id: ${{ env.APP_ID }}
private_key: ${{ env.APP_PEM }}
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install OpenAI package
run: |
python -m pip install -U pip
pip install openai
- name: Get PR details
id: pr-details
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
run: |
# Get PR title, body, and changed files
PR_DATA=$(gh pr view $PR_NUMBER --json title,body,files)
PR_TITLE=$(echo "$PR_DATA" | jq -r '.title')
PR_BODY=$(echo "$PR_DATA" | jq -r '.body // ""')
# Get list of changed files
CHANGED_FILES=$(echo "$PR_DATA" | jq -r '.files[].path' | head -20)
# Save to files for next step
echo "$PR_TITLE" > pr_title.txt
echo "$PR_BODY" > pr_body.txt
echo "$CHANGED_FILES" > changed_files.txt
echo "PR Title: $PR_TITLE"
echo "Changed files count: $(echo "$CHANGED_FILES" | wc -l)"
- name: Generate PR summary
id: pr-summary
env:
OPENAI_API_KEY: ${{ env.OPENAI_API_KEY }}
run: |
python - <<'PY'
import os
from openai import OpenAI
client = OpenAI()
# Read PR details
with open("pr_title.txt", "r") as f:
pr_title = f.read().strip()
with open("pr_body.txt", "r") as f:
pr_body = f.read().strip()
with open("changed_files.txt", "r") as f:
changed_files = f.read().strip()
# Generate summary of what the PR does
system = "You are a technical analyst summarizing pull request changes."
user = f"""Analyze this PR and provide a 2-3 sentence summary of what feature/change is being implemented:
PR Title: {pr_title}
PR Description: {pr_body if pr_body else "(No description provided)"}
Changed Files:
{changed_files}
Focus on what the user-facing impact is and what functionality is being added or modified."""
resp = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": system},
{"role": "user", "content": user},
],
)
pr_summary = resp.choices[0].message.content.strip()
with open("pr_summary.txt", "w") as f:
f.write(pr_summary)
print(f"PR Summary: {pr_summary}")
PY
- name: Fetch prompt from external repo
env:
GH_TOKEN: ${{ steps.generate_token.outputs.token }}
run: |
# Fetch prompt from private repo using GitHub API
gh api \
-H "Accept: application/vnd.github.raw" \
/repos/grafana/usability-review-agent-github-action/contents/prompts/single/reviewer.md \
> base_prompt.txt
echo "Fetched prompt from external repo"
echo "Prompt size: $(wc -c < base_prompt.txt) bytes"
echo "First 100 chars: $(head -c 100 base_prompt.txt)"
- name: Build dynamic prompt
id: build-prompt
run: |
# Read the PR summary
PR_SUMMARY=$(cat pr_summary.txt)
# Read base prompt template from external repo
BASE_PROMPT=$(cat base_prompt.txt)
# Build dynamic prompt with PR context
cat > dynamic_prompt.txt <<EOF
## Context
You are reviewing a pull request that makes the following changes:
$PR_SUMMARY
Your task is to test the workflow and provide usability feedback.
## Instructions
$BASE_PROMPT
EOF
# Copy to action directory so it can be used
cp dynamic_prompt.txt .github/actions/usability-review-agent/dynamic_prompt.txt
echo "Dynamic prompt created"
- name: Extract instance slug from URL
run: |
INSTANCE_SLUG=$(echo "${{ env.TARGET_URL }}" | sed -E 's|https?://([^.]+)\..*|\1|')
echo "Instance slug: $INSTANCE_SLUG"
echo "INSTANCE_SLUG=$INSTANCE_SLUG" >> $GITHUB_ENV
- name: Enable basic auth on instance
env:
GCOM_HOST: ${{ env.GCOM_HOST }}
GCOM_TOKEN: ${{ env.GCOM_TOKEN }}
INSTANCE_SLUG: ${{ env.INSTANCE_SLUG }}
run: |
# Check current config
CURRENT_CONFIG=$(curl -s "${GCOM_HOST}/api/instances/${INSTANCE_SLUG}/config" \
-H "Authorization: Bearer ${GCOM_TOKEN}")
DISABLE_LOGIN_FORM=$(echo "$CURRENT_CONFIG" | jq -r '.auth.disable_login_form // true')
echo "Current disable_login_form: $DISABLE_LOGIN_FORM"
if [ "$DISABLE_LOGIN_FORM" != "false" ]; then
echo "Enabling basic auth..."
curl -X POST "${GCOM_HOST}/api/instances/${INSTANCE_SLUG}/config" \
-H "Authorization: Bearer ${GCOM_TOKEN}" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'config[auth][disable_login_form]=false'
echo "Waiting for pod restart..."
sleep 300
# Verify instance is ready
for i in {1..10}; do
if curl -sf https://${INSTANCE_SLUG}.grafana-dev.net/api/health; then
echo "Instance ready!"
break
fi
echo "Attempt $i failed, waiting..."
sleep 10
done
else
echo "Basic auth already enabled, skipping..."
fi
- name: Create test user for playwright
# Using system realm token with stack-users:write scope
env:
GCOM_HOST: ${{ env.GCOM_HOST }}
GCOM_TOKEN: ${{ env.GCOM_TOKEN }}
INSTANCE_SLUG: ${{ env.INSTANCE_SLUG }}
GRAFANA_USERNAME: ${{ env.GRAFANA_USERNAME }}
GRAFANA_PASSWORD: ${{ env.GRAFANA_PASSWORD }}
run: |
pip install requests
python3 <<'PYTHON'
import os, requests, json
gcom_host = os.environ['GCOM_HOST']
gcom_token = os.environ['GCOM_TOKEN']
instance = os.environ['INSTANCE_SLUG']
username = os.environ['GRAFANA_USERNAME']
password = os.environ['GRAFANA_PASSWORD']
# Create user via GCOM API (proxies to instance /api/admin/users)
resp = requests.post(
f"{gcom_host}/api/instances/{instance}/api/admin/users",
headers={"Authorization": f"Bearer {gcom_token}"},
json={"name": "Usability Review Agent", "login": username, "password": password}
)
if resp.status_code == 412:
print("Test user already exists (412), skipping creation")
elif resp.status_code >= 400:
print(f"Error creating user: {resp.status_code} {resp.text}")
resp.raise_for_status()
else:
user_id = resp.json()['id']
print(f"Created new test user (id: {user_id})")
# Make user admin via GCOM API (proxies to instance)
perm_resp = requests.put(
f"{gcom_host}/api/instances/{instance}/api/admin/users/{user_id}/permissions",
headers={"Authorization": f"Bearer {gcom_token}"},
json={"isGrafanaAdmin": True}
)
if perm_resp.status_code >= 400:
print(f"Warning: Failed to set admin permissions: {perm_resp.status_code}")
else:
print(f"Set admin permissions for test user")
print(f"Test user setup complete")
PYTHON
- name: Run usability review agent
uses: ./.github/actions/usability-review-agent
with:
target_url: ${{ env.TARGET_URL }}
openai_api_key: ${{ env.OPENAI_API_KEY }}
grafana_username: ${{ env.GRAFANA_USERNAME }}
grafana_password: ${{ env.GRAFANA_PASSWORD }}
workflow_name: "the application interface"
prompt_file: "dynamic_prompt.txt"
output_text_path: ${{ env.OUTPUT_TEXT_PATH }}
screenshot_path: ${{ env.SCREENSHOT_PATH }}
- name: Upload review artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: usability-review-results
path: |
${{ env.OUTPUT_TEXT_PATH }}
${{ env.SCREENSHOT_PATH }}
login_page.png
login_error.png
if-no-files-found: warn
retention-days: 7
- name: Post review results to PR
if: always()
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
RUN_ID: ${{ github.run_id }}
run: |
COMMENT_FILE=$(mktemp)
echo "## 🔍 Usability Review Results" > "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "**Target URL:** ${{ env.TARGET_URL }}" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "---" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
if [ -s "${{ env.OUTPUT_TEXT_PATH }}" ]; then
cat "${{ env.OUTPUT_TEXT_PATH }}" >> "$COMMENT_FILE"
else
echo "⚠️ No review output was generated." >> "$COMMENT_FILE"
fi
echo "" >> "$COMMENT_FILE"
echo "---" >> "$COMMENT_FILE"
echo "" >> "$COMMENT_FILE"
echo "📸 [View screenshot and full artifacts](https://github.com/${{ github.repository }}/actions/runs/$RUN_ID)" >> "$COMMENT_FILE"
gh pr comment $PR_NUMBER --body-file "$COMMENT_FILE"
rm "$COMMENT_FILE"
- name: Post individual suggestions as separate comments
if: always()
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
OUTPUT_TEXT_PATH: ${{ env.OUTPUT_TEXT_PATH }}
run: |
python3 <<'PYTHON'
import os
import subprocess
import re
import json
# Read the AI output
output_path = os.environ['OUTPUT_TEXT_PATH']
if not os.path.exists(output_path) or os.path.getsize(output_path) == 0:
print("No output file found, skipping individual comments")
exit(0)
with open(output_path, 'r') as f:
content = f.read()
# Extract table rows (skip header and separator)
# New format: | Issue | Severity | Impact | Confidence | Suggestion |
table_pattern = r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|'
matches = re.findall(table_pattern, content)
if len(matches) < 3:
print(f"Not enough table rows found ({len(matches)}), skipping individual comments")
exit(0)
# Skip first 2 matches (header + separator)
issue_rows = matches[2:] # Get all issues after header
pr_number = os.environ['PR_NUMBER']
# Get changed files in the PR to attach comments to
pr_files_result = subprocess.run(
['gh', 'pr', 'view', pr_number, '--json', 'files'],
capture_output=True,
text=True,
check=True
)
pr_data = json.loads(pr_files_result.stdout)
changed_files = [f['path'] for f in pr_data.get('files', [])]
# Prefer frontend files for UI issues
frontend_files = [f for f in changed_files if any(ext in f for ext in ['.tsx', '.ts', '.jsx', '.js', '.vue', '.css', '.scss'])]
target_files = frontend_files if frontend_files else changed_files
if not target_files:
print("No changed files found, posting as general comments instead")
target_files = [None] * len(issue_rows)
for idx, (issue, severity, impact, confidence, suggestion) in enumerate(issue_rows):
# Clean up whitespace
issue = issue.strip()
severity = severity.strip()
impact = impact.strip()
confidence = confidence.strip()
suggestion = suggestion.strip()
# Extract issue title from markdown (bold text before <br />)
issue_title_match = re.search(r'\*\*(.+?)\*\*', issue)
issue_title = issue_title_match.group(1) if issue_title_match else issue
# Create individual comment
comment = f"""### {severity} {issue_title}
**Confidence:** {confidence}
**Impact:** {impact}
**Suggestion:** {suggestion}
---
_Posted by BabyBot 🍼 - Usability Review Agent_"""
# Try to post as review comment (resolvable) on a changed file
target_file = target_files[idx % len(target_files)] if target_files[0] is not None else None
if target_file:
# Post as review comment on the file (resolvable!)
try:
# gh pr review has no --file/--line flags, so create a resolvable
# review comment through the pull request comments REST endpoint
head_sha = subprocess.run(
['gh', 'pr', 'view', pr_number, '--json', 'headRefOid', '--jq', '.headRefOid'],
capture_output=True, text=True, check=True
).stdout.strip()
subprocess.run(
['gh', 'api', f"repos/{os.environ['GITHUB_REPOSITORY']}/pulls/{pr_number}/comments",
'-f', f'body={comment}', '-f', f'commit_id={head_sha}',
'-f', f'path={target_file}', '-F', 'line=1', '-f', 'side=RIGHT'],
check=True,
capture_output=True
)
print(f"Posted resolvable comment for: {issue_title} (on {target_file})")
except subprocess.CalledProcessError as e:
# Fallback to general comment if review comment fails
print(f"Failed to post review comment, falling back to general comment: {e}")
subprocess.run(
['gh', 'pr', 'comment', pr_number, '--body', comment],
check=True
)
print(f"Posted general comment for: {issue_title}")
else:
# Post as general comment
subprocess.run(
['gh', 'pr', 'comment', pr_number, '--body', comment],
check=True
)
print(f"Posted general comment for: {issue_title}")
PYTHON
- name: Send Slack notification on success
if: always()
env:
SLACK_WEBHOOK_URL: ${{ env.SLACK_WEBHOOK_URL }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ steps.pr-number.outputs.number }}
OUTPUT_TEXT_PATH: ${{ env.OUTPUT_TEXT_PATH }}
run: |
# Check if Slack webhook is configured
if [ -z "$SLACK_WEBHOOK_URL" ]; then
echo "Slack webhook not configured, skipping notification"
exit 0
fi
# Check if review was successful
if [ ! -s "${{ env.OUTPUT_TEXT_PATH }}" ]; then
echo "No review output found, skipping success notification"
exit 0
fi
# Fetch PR details and export for Python
PR_DATA=$(gh pr view $PR_NUMBER --json title,url,author)
export PR_TITLE=$(echo "$PR_DATA" | jq -r '.title')
export PR_URL=$(echo "$PR_DATA" | jq -r '.url')
export PR_AUTHOR=$(echo "$PR_DATA" | jq -r '.author.login')
python3 <<'PYTHON'
import os
import json
import subprocess
import re
# Read the AI output to extract issues
with open(os.environ['OUTPUT_TEXT_PATH'], 'r') as f:
content = f.read()
# Extract table rows for issues
# New format: | Issue | Severity | Impact | Confidence | Suggestion |
table_pattern = r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|'
matches = re.findall(table_pattern, content)
# Build issues text
issues_text = ""
if len(matches) >= 3:
issue_rows = matches[2:] # Skip header rows
for issue, severity, impact, confidence, suggestion in issue_rows:
issue = issue.strip()
severity = severity.strip()
suggestion = suggestion.strip()
# Extract issue title from markdown
issue_title_match = re.search(r'\*\*(.+?)\*\*', issue)
issue_title = issue_title_match.group(1) if issue_title_match else issue
issues_text += f"• *{severity}* {issue_title}\n → {suggestion}\n"
else:
issues_text = "No specific issues found"
pr_number = os.environ['PR_NUMBER']
pr_title = os.environ['PR_TITLE']
pr_url = os.environ['PR_URL']
pr_author = os.environ['PR_AUTHOR']
# Build Slack message
slack_payload = {
"channel": "#hackathon15-agentic-usability-review",
"username": "Usability Review Agent",
"icon_emoji": ":mag:",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": f"✅ Usability Review Complete: PR #{pr_number}",
"emoji": True
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"*<{pr_url}|{pr_title}>*\nby @{pr_author}"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"*Issues & Suggestions*\n{issues_text}"
}
},
{
"type": "actions",
"elements": [
{
"type": "button",
"text": {
"type": "plain_text",
"text": "View Full Review",
"emoji": True
},
"url": pr_url
}
]
}
]
}
# Send to Slack
webhook_url = os.environ['SLACK_WEBHOOK_URL']
subprocess.run(
['curl', '-X', 'POST', '-H', 'Content-type: application/json',
'--data', json.dumps(slack_payload), webhook_url],
check=True
)
print("Slack notification sent successfully")
PYTHON
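The markdown-table parsing that both Python steps above rely on can be sketched on its own. This is a minimal standalone sketch; the sample table contents are invented for illustration, but the regex and the "skip the first two matches (header + separator)" convention are taken directly from the workflow:

```python
import re

# A review table in the format the workflow's comments describe:
# | Issue | Severity | Impact | Confidence | Suggestion |
sample = """\
| Issue | Severity | Impact | Confidence | Suggestion |
| --- | --- | --- | --- | --- |
| **Slow load** | High | Users wait | 80% | Cache results |
| **Bad label** | Low | Minor confusion | 60% | Rename field |
"""

# Same pattern the workflow uses: five pipe-delimited cells per row
table_pattern = r'\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|([^|]+)\|'
matches = re.findall(table_pattern, sample)

# The first two matches are the header row and the --- separator row
issue_rows = [tuple(cell.strip() for cell in row) for row in matches[2:]]
for issue, severity, impact, confidence, suggestion in issue_rows:
    print(severity, issue, '->', suggestion)
```

Note this treats each physical line with five cells as one row, which is why the workflow checks `len(matches) < 3` before assuming any real issues exist.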


@@ -149,7 +149,7 @@ require (
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/google/wire v0.7.0 // indirect
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 // indirect
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect


@@ -606,8 +606,8 @@ github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2z
github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 h1:I9l/BxoqxTlPUVx05t8OsqbdP/qwqOeD2E5makeeIz0=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/advisor/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}


@@ -4,11 +4,12 @@ go 1.25.5
require (
github.com/go-kit/log v0.2.1
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4
github.com/grafana/grafana-app-sdk v0.48.4
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/prometheus/client_golang v1.23.2
github.com/spf13/pflag v1.0.10
github.com/stretchr/testify v1.11.1
go.opentelemetry.io/otel v1.38.0
go.opentelemetry.io/otel/trace v1.38.0
@@ -113,7 +114,6 @@ require (
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c // indirect
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/uber/jaeger-client-go v2.30.0+incompatible // indirect
github.com/uber/jaeger-lib v2.4.1+incompatible // indirect


@@ -218,8 +218,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 h1:I9l/BxoqxTlPUVx05t8OsqbdP/qwqOeD2E5makeeIz0=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=


@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/alerting/historian/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}


@@ -1,10 +1,20 @@
package config
import (
"net/url"
"time"
"github.com/spf13/pflag"
"github.com/grafana/alerting/notify/historian/lokiclient"
"github.com/grafana/grafana-app-sdk/simple"
)
const (
lokiDefaultMaxQueryLength = 721 * time.Hour // 30d1h, matches the default value in Loki
lokiDefaultMaxQuerySize = 65536 // 64kb
)
type NotificationConfig struct {
Enabled bool
Loki lokiclient.LokiConfig
@@ -14,3 +24,52 @@ type RuntimeConfig struct {
GetAlertStateHistoryHandler simple.AppCustomRouteHandler
Notification NotificationConfig
}
func (n *NotificationConfig) AddFlagsWithPrefix(prefix string, flags *pflag.FlagSet) {
flags.BoolVar(&n.Enabled, prefix+".enabled", false, "Enable notification query endpoints")
addLokiFlags(&n.Loki, prefix+".loki", flags)
}
func (r *RuntimeConfig) AddFlagsWithPrefix(prefix string, flags *pflag.FlagSet) {
r.Notification.AddFlagsWithPrefix(prefix+".notification", flags)
}
func (r *RuntimeConfig) AddFlags(flags *pflag.FlagSet) {
r.AddFlagsWithPrefix("alerting.historian", flags)
}
type urlVar struct {
u **url.URL
}
// String implements flag.Value
func (v urlVar) String() string {
if v.u == nil || *v.u == nil {
return ""
}
return (*v.u).Redacted()
}
// Set implements flag.Value
func (v urlVar) Set(s string) error {
u, err := url.Parse(s)
if err != nil {
return err
}
*v.u = u
return nil
}
// Type implements flag.Value
func (v urlVar) Type() string {
return "url"
}
func addLokiFlags(l *lokiclient.LokiConfig, prefix string, flags *pflag.FlagSet) {
flags.Var(urlVar{&l.ReadPathURL}, prefix+".read-url", "URL to Loki instance for performing queries")
flags.StringVar(&l.BasicAuthUser, prefix+".user", "", "Basic auth Username to authenticate to the Loki instance")
flags.StringVar(&l.BasicAuthPassword, prefix+".password", "", "Basic auth password to authenticate to the Loki instance")
flags.StringVar(&l.TenantID, prefix+".tenant-id", "", "Value to use for X-Scope-OrgID")
flags.DurationVar(&l.MaxQueryLength, prefix+".max-query-length", lokiDefaultMaxQueryLength, "Maximum allowed time range for queries")
flags.IntVar(&l.MaxQuerySize, prefix+".max-query-size", lokiDefaultMaxQuerySize, "Maximum allowed size of a query string passed to Loki")
}


@@ -0,0 +1,93 @@
package config
import (
"net/url"
"testing"
"time"
"github.com/grafana/alerting/notify/historian/lokiclient"
"github.com/spf13/pflag"
"github.com/stretchr/testify/require"
)
func TestRuntimeConfig(t *testing.T) {
lokiURL := mustParseURL("http://localhost:3100")
tests := []struct {
name string
args []string
expected RuntimeConfig
}{
{
name: "default config",
args: []string{},
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: false,
Loki: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},
},
{
name: "with notification enabled",
args: []string{"--alerting.historian.notification.enabled"},
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: true,
Loki: lokiclient.LokiConfig{
ReadPathURL: nil,
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},
},
{
name: "with loki options",
args: []string{
"--alerting.historian.notification.loki.read-url=http://localhost:3100",
"--alerting.historian.notification.loki.user=foo",
"--alerting.historian.notification.loki.password=bar",
"--alerting.historian.notification.loki.tenant-id=baz",
},
expected: RuntimeConfig{
Notification: NotificationConfig{
Enabled: false,
Loki: lokiclient.LokiConfig{
ReadPathURL: lokiURL,
BasicAuthUser: "foo",
BasicAuthPassword: "bar",
TenantID: "baz",
MaxQueryLength: 721 * time.Hour,
MaxQuerySize: 65536,
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := &RuntimeConfig{}
flags := pflag.NewFlagSet("test", pflag.ContinueOnError)
cfg.AddFlags(flags)
err := flags.Parse(tt.args)
require.NoError(t, err)
require.Equal(t, tt.expected, *cfg)
})
}
}
func mustParseURL(s string) *url.URL {
u, err := url.Parse(s)
if err != nil {
panic(err)
}
return u
}


@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/alerting/notifications/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}


@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/annotation/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}


@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/preferences/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}


@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/correlations/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}


@@ -12,8 +12,7 @@ do-generate: install-app-sdk update-app-sdk ## Run Grafana App SDK code generati
--grouping=group \
--defencoding=none \
--genoperatorstate=false \
--noschemasinmanifest \
--useoldmanifestkinds
--noschemasinmanifest
.PHONY: post-generate-cleanup
post-generate-cleanup: ## Clean up the generated code


@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/sdkkinds/dashboard"
language: {
version: "v0.9.0"
version: "v0.11.0"
}


@@ -1,12 +1,5 @@
package kinds
import (
v0 "github.com/grafana/grafana/sdkkinds/dashboard/v0alpha1"
v1 "github.com/grafana/grafana/sdkkinds/dashboard/v1beta1"
v2alpha1 "github.com/grafana/grafana/sdkkinds/dashboard/v2alpha1"
v2beta1 "github.com/grafana/grafana/sdkkinds/dashboard/v2beta1"
)
// Status is the shared status of all dashboard versions.
DashboardStatus: {
// Optional conversion status.
@@ -31,50 +24,3 @@ ConversionStatus: {
// The original value map[string]any
source?: _
}
dashboard: {
kind: "Dashboard"
pluralName: "Dashboards"
current: "v1beta1"
codegen: {
ts: {
enabled: true
config: {
enumsAsUnionTypes: true
}
}
go: {
enabled: true
config: {
allowMarshalEmptyDisjunctions: true
}
}
}
versions: {
"v0alpha1": {
schema: {
spec: v0.DashboardSpec
status: DashboardStatus
}
}
"v1beta1": {
schema: {
spec: v1.DashboardSpec
status: DashboardStatus
}
}
"v2alpha1": {
schema: {
spec: v2alpha1.DashboardSpec
status: DashboardStatus
}
}
"v2beta1": {
schema: {
spec: v2beta1.DashboardSpec
status: DashboardStatus
}
}
}
}


@@ -1,10 +1,82 @@
package kinds
import (
v0 "github.com/grafana/grafana/sdkkinds/dashboard/v0alpha1"
v1 "github.com/grafana/grafana/sdkkinds/dashboard/v1beta1"
v2alpha1 "github.com/grafana/grafana/sdkkinds/dashboard/v2alpha1"
v2beta1 "github.com/grafana/grafana/sdkkinds/dashboard/v2beta1"
)
manifest: {
appName: "dashboard"
groupOverride: "dashboard.grafana.app"
kinds: [
dashboard,
snapshot,
]
appName: "dashboard"
groupOverride: "dashboard.grafana.app"
preferredVersion: "v1beta1"
versions: {
"v0alpha1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
{
kind: "Dashboard"
pluralName: "Dashboards"
schema: {
spec: v0.DashboardSpec
status: DashboardStatus
}
},
snapshotV0alpha1, // Only exists in v0alpha (for now)
]
}
"v1beta1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
{
kind: "Dashboard"
pluralName: "Dashboards"
schema: {
spec: v1.DashboardSpec
status: DashboardStatus
}
}
]
}
"v2alpha1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
{
kind: "Dashboard"
pluralName: "Dashboards"
schema: {
spec: v2alpha1.DashboardSpec
status: DashboardStatus
}
}
]
}
"v2beta1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
{
kind: "Dashboard"
pluralName: "Dashboards"
schema: {
spec: v2beta1.DashboardSpec
status: DashboardStatus
}
}
]
}
}
}


@@ -1,46 +1,30 @@
package kinds
snapshot: {
snapshotV0alpha1: {
kind: "Snapshot"
pluralName: "Snapshots"
scope: "Namespaced"
current: "v0alpha1"
codegen: {
ts: {
enabled: true
}
go: {
enabled: true
}
}
versions: {
"v0alpha1": {
schema: {
spec: {
// Snapshot title
title?: string
// Optionally auto-remove the snapshot at a future date (Unix timestamp in seconds)
expires?: int64 | *0
// When set to true, the snapshot exists in a remote server
external?: bool | *false
// The external URL where the snapshot can be seen
externalUrl?: string
// The URL that created the dashboard originally
originalUrl?: string
// Snapshot creation timestamp
timestamp?: string
schema: {
spec: {
// Snapshot title
title?: string
// Optionally auto-remove the snapshot at a future date (Unix timestamp in seconds)
expires?: int64 | *0
// When set to true, the snapshot exists in a remote server
external?: bool | *false
// The external URL where the snapshot can be seen
externalUrl?: string
// The URL that created the dashboard originally
originalUrl?: string
// Snapshot creation timestamp
timestamp?: string
// The raw dashboard (unstructured for now)
dashboard?: [string]: _
}
}
// The raw dashboard (unstructured for now)
dashboard?: [string]: _
}
}
}
}


@@ -6,6 +6,8 @@ import (
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
const GrafanaDatasourceUID = "grafana"
// SetPluginIDMeta sets the repo name to "plugin" and the path to the plugin ID
func SetPluginIDMeta(obj *unstructured.Unstructured, pluginID string) {
if pluginID == "" {


@@ -12,11 +12,11 @@ import (
"k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
schemaversion "github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/tsdb/grafanads"
)
// getDefaultDatasourceType gets the default datasource type using the datasource provider
@@ -58,7 +58,7 @@ func getDatasourceTypeByUID(ctx context.Context, uid string, provider schemavers
// datasource: { type: "datasource" } with no UID, it should resolve to uid: "grafana".
func resolveGrafanaDatasourceUID(dsType, dsUID string) string {
if dsType == "datasource" && dsUID == "" {
return grafanads.DatasourceUID
return dashboard.GrafanaDatasourceUID
}
return dsUID
}


@@ -7,11 +7,11 @@ import (
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime"
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
"github.com/grafana/grafana/apps/dashboard/pkg/migration"
migrationtestutil "github.com/grafana/grafana/apps/dashboard/pkg/migration/testutil"
"github.com/grafana/grafana/pkg/tsdb/grafanads"
)
// TestV1beta1ToV2alpha1 tests conversion from v1beta1 to v2alpha1 with various datasource scenarios
@@ -77,7 +77,7 @@ func TestV1beta1ToV2alpha1(t *testing.T) {
// Verify datasource UID is resolved to "grafana"
assert.NotNil(t, query.Spec.Datasource.Uid)
assert.Equal(t, grafanads.DatasourceUID, *query.Spec.Datasource.Uid, "type: 'datasource' with no UID should resolve to uid: 'grafana'")
assert.Equal(t, dashboard.GrafanaDatasourceUID, *query.Spec.Datasource.Uid, "type: 'datasource' with no UID should resolve to uid: 'grafana'")
// Verify query kind matches datasource type
assert.Equal(t, "datasource", query.Spec.Query.Kind)


@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/example/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}


@@ -8,5 +8,6 @@ generate: install-app-sdk update-app-sdk ## Run Grafana App SDK code generation
--grouping=group \
--defencoding=none \
--genoperatorstate=false \
--noschemasinmanifest \
--useoldmanifestkinds
--noschemasinmanifest


@@ -10,60 +10,35 @@ require (
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.13.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/getkin/kin-openapi v0.133.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-openapi/jsonpointer v0.22.1 // indirect
github.com/go-openapi/jsonreference v0.21.2 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/go-openapi/swag/jsonname v0.25.1 // indirect
github.com/go-test/deep v1.1.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/grafana/grafana-app-sdk/logging v0.48.3 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 // indirect
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 // indirect
github.com/perimeterx/marshmallow v1.1.5 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.3 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/stretchr/testify v1.11.1 // indirect
github.com/woodsbury/decimal128 v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.33.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/client-go v0.34.2 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.1 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)


@@ -1,7 +1,3 @@
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
@@ -10,8 +6,6 @@ github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bF
github.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/getkin/kin-openapi v0.133.0 h1:pJdmNohVIJ97r4AUFtEXRXwESr8b0bD721u/Tz6k8PQ=
github.com/getkin/kin-openapi v0.133.0/go.mod h1:boAciF6cXk5FhPqe/NQeBTeenbjqU4LhWBf09ILVvWE=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-openapi/jsonpointer v0.22.1 h1:sHYI1He3b9NqJ4wXLoJDKmUmHkWy/L7rtEo92JUxBNk=
@@ -22,8 +16,6 @@ github.com/go-openapi/swag v0.23.1 h1:lpsStH0n2ittzTnbaSloVZLuB5+fvSY/+hnagBjSNZ
github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDqQnDHMef0=
github.com/go-openapi/swag/jsonname v0.25.1 h1:Sgx+qbwa4ej6AomWC6pEfXrA6uP2RkaNjA9BR8a1RJU=
github.com/go-openapi/swag/jsonname v0.25.1/go.mod h1:71Tekow6UOLBD3wS7XhdT98g5J5GR13NOTQ9/6Q11Zo=
github.com/go-test/deep v1.1.1 h1:0r/53hagsehfO4bzD2Pgr/+RgHqhmf+k1Bpse2cTu1U=
github.com/go-test/deep v1.1.1/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
@@ -31,33 +23,20 @@ github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7O
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.4 h1:t9r+Y6E7D832ZxQ2c1n0lp6cvsYKhhrAodVYzE1y0s0=
github.com/grafana/grafana-app-sdk v0.48.4/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250514132646-acbc7b54ed9e h1:BTKk7LHuG1kmAkucwTA7DuMbKpKvJTKrGdBmUNO4dfQ=
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250514132646-acbc7b54ed9e/go.mod h1:IA4SOwun8QyST9c5UNs/fN37XL6boXXDvRYFcFwbipg=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -66,29 +45,9 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 h1:G7ERwszslrBzRxj//JalHPu/3yz+De2J+4aLtSRlHiY=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037/go.mod h1:2bpvgLBZEtENV5scfDFEtB/5+1M4hkQhDQrccEJ/qGw=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 h1:bQx3WeLcUWy+RletIKwUIt4x3t8n2SxavmoclizMb8c=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.3 h1:shd26MlnwTw5jksTDhC7rTQIteBxy+ZZDr3t7F2xN2Q=
github.com/prometheus/common v0.67.3/go.mod h1:gP0fq6YjjNCLssJCQp0yk4M8W6ikLURwkdd/YKtTbyI=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
@@ -99,20 +58,10 @@ github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/woodsbury/decimal128 v1.3.0 h1:8pffMNWIlC0O5vbyHWFZAt5yWvWcrHA+3ovIIjVWss0=
github.com/woodsbury/decimal128 v1.3.0/go.mod h1:C5UTmyTjW3JftjUFzOVhC20BEQa2a4ZKOB5I6Zjb+ds=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
@@ -128,24 +77,16 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@@ -159,18 +100,12 @@ google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.34.2 h1:fsSUNZhV+bnL6Aqrp6O7lMTy6o5x2C4XLjnh//8SLYY=
k8s.io/api v0.34.2/go.mod h1:MMBPaWlED2a8w4RSeanD76f7opUoypY8TFYkSM+3XHw=
k8s.io/apimachinery v0.34.2 h1:zQ12Uk3eMHPxrsbUJgNF8bTauTVR2WgqJsTmwTE/NW4=
k8s.io/apimachinery v0.34.2/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw=
k8s.io/client-go v0.34.2 h1:Co6XiknN+uUZqiddlfAjT68184/37PS4QAzYvQvDR8M=
k8s.io/client-go v0.34.2/go.mod h1:2VYDl1XXJsdcAxw7BenFslRQX28Dxz91U9MWKjX97fE=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/folder/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,26 +1,13 @@
package folder
folder: {
foldersV1beta1: {
kind: "Folder"
pluralName: "Folders"
current: "v1beta1"
versions: {
"v1beta1": {
codegen: {
ts: {
enabled: false // Not sure if it should be enabled or not, currently it is.
}
go: {
enabled: true
}
}
schema: {
spec: {
title: string
description?: string
}
status: {} // nothing
}
schema: {
spec: {
title: string
description?: string
}
}
}
}

View File

@@ -3,7 +3,15 @@ package folder
manifest: {
appName: "folder"
groupOverride: "folder.grafana.app"
kinds: [
folder,
]
}
versions: {
"v1beta1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
foldersV1beta1,
]
}
}
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type FolderClient struct {
@@ -76,24 +75,6 @@ func (c *FolderClient) Patch(ctx context.Context, identifier resource.Identifier
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *FolderClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus FolderStatus, opts resource.UpdateOptions) (*Folder, error) {
return c.client.Update(ctx, &Folder{
TypeMeta: metav1.TypeMeta{
Kind: FolderKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
ResourceVersion: opts.ResourceVersion,
Namespace: identifier.Namespace,
Name: identifier.Name,
},
Status: newStatus,
}, resource.UpdateOptions{
Subresource: "status",
ResourceVersion: opts.ResourceVersion,
})
}
func (c *FolderClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -21,8 +21,6 @@ type Folder struct {
// Spec is the spec of the Folder
Spec FolderSpec `json:"spec" yaml:"spec"`
Status FolderStatus `json:"status" yaml:"status"`
}
func (o *Folder) GetSpec() any {
@@ -39,15 +37,11 @@ func (o *Folder) SetSpec(spec any) error {
}
func (o *Folder) GetSubresources() map[string]any {
return map[string]any{
"status": o.Status,
}
return map[string]any{}
}
func (o *Folder) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
default:
return nil, false
}
@@ -55,13 +49,6 @@ func (o *Folder) GetSubresource(name string) (any, bool) {
func (o *Folder) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(FolderStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type FolderStatus", value)
}
o.Status = cast
return nil
default:
return fmt.Errorf("subresource '%s' does not exist", name)
}
@@ -233,7 +220,6 @@ func (o *Folder) DeepCopyInto(dst *Folder) {
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
o.Spec.DeepCopyInto(&dst.Spec)
o.Status.DeepCopyInto(&dst.Status)
}
// Interface compliance compile-time check
@@ -305,15 +291,3 @@ func (s *FolderSpec) DeepCopy() *FolderSpec {
func (s *FolderSpec) DeepCopyInto(dst *FolderSpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of FolderStatus
func (s *FolderStatus) DeepCopy() *FolderStatus {
cpy := &FolderStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies FolderStatus into another FolderStatus object
func (s *FolderStatus) DeepCopyInto(dst *FolderStatus) {
resource.CopyObjectInto(dst, s)
}

View File

@@ -1,3 +0,0 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v1beta1

View File

@@ -104,18 +104,12 @@ func schema_pkg_apis_folder_v1beta1_Folder(ref common.ReferenceCallback) common.
Ref: ref("github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1.FolderSpec"),
},
},
"status": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1.FolderStatus"),
},
},
},
Required: []string{"metadata", "spec", "status"},
Required: []string{"metadata", "spec"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1.FolderSpec", "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1.FolderStatus", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
"github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1.FolderSpec", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
}
}

View File

@@ -1,116 +0,0 @@
//
// This file is generated by grafana-app-sdk
// DO NOT EDIT
//
package apis
import (
"fmt"
"strings"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/resource"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/validation/spec"
v1beta1 "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
)
var appManifestData = app.ManifestData{
AppName: "folder",
Group: "folder.grafana.app",
PreferredVersion: "v1beta1",
Versions: []app.ManifestVersion{
{
Name: "v1beta1",
Served: true,
Kinds: []app.ManifestVersionKind{
{
Kind: "Folder",
Plural: "Folders",
Scope: "Namespaced",
Conversion: false,
},
},
Routes: app.ManifestVersionRoutes{
Namespaced: map[string]spec3.PathProps{},
Cluster: map[string]spec3.PathProps{},
Schemas: map[string]spec.Schema{},
},
},
},
}
func LocalManifest() app.Manifest {
return app.NewEmbeddedManifest(appManifestData)
}
func RemoteManifest() app.Manifest {
return app.NewAPIServerManifest("folder")
}
var kindVersionToGoType = map[string]resource.Kind{
"Folder/v1beta1": v1beta1.FolderKind(),
}
// ManifestGoTypeAssociator returns the associated resource.Kind instance for a given Kind and Version, if one exists.
// If there is no association for the provided Kind and Version, exists will return false.
func ManifestGoTypeAssociator(kind, version string) (goType resource.Kind, exists bool) {
goType, exists = kindVersionToGoType[fmt.Sprintf("%s/%s", kind, version)]
return goType, exists
}
var customRouteToGoResponseType = map[string]any{}
// ManifestCustomRouteResponsesAssociator returns the associated response go type for a given kind, version, custom route path, and method, if one exists.
// kind may be empty for custom routes which are not kind subroutes. Leading slashes are removed from subroute paths.
// If there is no association for the provided kind, version, custom route path, and method, exists will return false.
// Resource routes (those without a kind) should prefix their route with "<namespace>/" if the route is namespaced (otherwise the route is assumed to be cluster-scope)
func ManifestCustomRouteResponsesAssociator(kind, version, path, verb string) (goType any, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoResponseType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
var customRouteToGoParamsType = map[string]runtime.Object{}
func ManifestCustomRouteQueryAssociator(kind, version, path, verb string) (goType runtime.Object, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoParamsType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
var customRouteToGoRequestBodyType = map[string]any{}
func ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb string) (goType any, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoRequestBodyType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
type GoTypeAssociator struct{}
func NewGoTypeAssociator() *GoTypeAssociator {
return &GoTypeAssociator{}
}
func (g *GoTypeAssociator) KindToGoType(kind, version string) (goType resource.Kind, exists bool) {
return ManifestGoTypeAssociator(kind, version)
}
func (g *GoTypeAssociator) CustomRouteReturnGoType(kind, version, path, verb string) (goType any, exists bool) {
return ManifestCustomRouteResponsesAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteQueryGoType(kind, version, path, verb string) (goType runtime.Object, exists bool) {
return ManifestCustomRouteQueryAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteRequestBodyGoType(kind, version, path, verb string) (goType any, exists bool) {
return ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb)
}

View File

@@ -221,7 +221,7 @@ require (
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 // indirect
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect

View File

@@ -817,8 +817,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 h1:I9l/BxoqxTlPUVx05t8OsqbdP/qwqOeD2E5makeeIz0=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/iam/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/investigations"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/logsdrilldown/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/playlist/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/plugins/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/preferences/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/provisioning"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,4 +1,4 @@
module: "github.com/grafana/grafana/apps/secret/kinds"
language: {
version: "v0.9.0"
version: "v0.11.0"
}

View File

@@ -1,2 +1,4 @@
module: "github.com/grafana/grafana/apps/shorturl/kinds"
language: version: "v0.8.2"
language: {
version: "v0.11.0"
}

View File

@@ -12,7 +12,7 @@ labels:
- oss
menuTitle: Examples of dynamic thresholds
title: Example of dynamic thresholds per dimension
weight: 1103
weight: 1105
refs:
testdata-data-source:
- pattern: /docs/grafana/

View File

@@ -0,0 +1,382 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/best-practices/trace-based-alerts/
description: This guide provides introductory examples and distinct approaches for setting up trace-based alerts in Grafana.
keywords:
- grafana
labels:
products:
- cloud
- enterprise
- oss
title: Examples of trace-based alerts
weight: 1103
refs:
testdata-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/testdata/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/testdata/
---
# Examples of trace-based alerts
Metrics are the foundation of most alerting systems. They are usually the first signal that something is wrong, but they don't always indicate _where_ or _why_ a failure occurs.
Traces fill that gap by showing the complete path a request takes through your system. They map the workflows across services, indicating where the request slows down or fails.
{{< figure src="/media/docs/alerting/screenshot-traces-visualization-11.5.png" max-width="750px" alt="Trace view" >}}
Traces attribute duration and errors directly to specific services and spans, helping you find the affected component and its scope. With this additional context, alerting on tracing data can help you **identify root causes faster**.
You can create trace-based alerts in Grafana Alerting using two main approaches:
- Querying metrics generated from tracing data.
- Using TraceQL, a query language for traces available in Grafana Tempo.
This guide provides introductory examples and distinct approaches for setting up **trace-based alerts** in Grafana. Tracing data is commonly collected using **OpenTelemetry (OTel)** instrumentation. OTel allows you to integrate trace data from a wide range of applications and environments into Grafana.
## Alerting on span metrics
OpenTelemetry provides processors that convert tracing data into Prometheus-style metrics.
The **service graph** and **span metrics** processors are the standard options in Alloy and Tempo to generate Prometheus metrics from traces. They can generate the rate, error, and duration (RED) metrics from sampled spans.
You can then create alert rules that query metrics derived from traces.
{{< figure src="/media/docs/alerting/why-trace-based-metrics.png" max-width="750px" alt="Why metrics if you have traces?" >}}
[Service graph metrics](https://grafana.com/docs/tempo/latest/metrics-from-traces/service_graphs/) focus on inter-service communication and dependency health. They measure the calls between services, helping Grafana to infer the service topology. However, they measure only the interaction between two services; they don't include the internal processing time of the client service.
You can use service graph metrics to detect infrastructure issues such as network degradation or service mesh problems.
For trace-based alerts, we recommend using [span metrics](https://grafana.com/docs/tempo/latest/metrics-from-traces/span-metrics/).
**Span metrics** measure the total processing time of a service request: capturing what happens inside the service, not just the communication between services. They include the time spent on internal processing and waiting on downstream calls, providing an **end-to-end picture of service performance**.
Depending on how you generate them, the following span metrics are produced:
| Span metrics generator | Metric name | Prometheus metric type | Description |
| :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------- | :---------------------------- | :--------------------------- |
| [Alloy](https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.connector.spanmetrics/) and [OTEL span metrics connector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/spanmetricsconnector) | `traces_span_metrics_calls_total` | Counter | Total count of the span |
| | `traces_span_metrics_duration_seconds` | Histogram (native or classic) | Duration of the span |
| [Tempo](https://grafana.com/docs/tempo/latest/metrics-from-traces/span-metrics/span-metrics-metrics-generator/) and [Grafana Cloud Application Observability](https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/setup/metrics-labels/) | `traces_spanmetrics_calls_total` | Counter | Total count of the span |
| | `traces_spanmetrics_latency` | Histogram (native or classic) | Duration of the span |
| | `traces_spanmetrics_size_total` | Counter | Total size of spans ingested |
By default, each metric includes the following labels: `service`, `span_name`, `span_kind`, `status_code`, `status_message`, `job`, and `instance`.
In the metrics generator, you can customize how traces are converted into metrics by configuring histograms, exemplars, metric dimensions, and other options.
The following examples assume that span metrics have already been generated using one of these options or an alternative.
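The same span metrics also support error-ratio alerting alongside the latency examples below. As a sketch (the service name is a placeholder, and the metric name follows the Alloy/OTel naming from the table above):

```promql
sum by (span_name) (
  rate(traces_span_metrics_calls_total{
    service_name="<SERVICE_NAME>",
    status_code="STATUS_CODE_ERROR"
  }[10m])
)
/
sum by (span_name) (
  rate(traces_span_metrics_calls_total{service_name="<SERVICE_NAME>"}[10m])
)
```

Pairing this query with a threshold expression such as `> 0.05` alerts when more than 5% of calls to an operation fail.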
### Detect slow span operations
This example shows how to define an alert rule that detects when operations handled by a service become slow.
Before looking at the query, it's useful to review a few [trace elements](https://grafana.com/docs/tempo/latest/introduction/trace-structure/) that shape how it works:
- A trace represents a single request or transaction as it flows through multiple spans and services. A span refers to a specific operation within a service.
- Each span includes the operation name (`span_name`) and its duration (the metric value), as well as additional fields like [span status](https://opentelemetry.io/docs/concepts/signals/traces/#span-status) (`status_code`) and [span kind](https://opentelemetry.io/docs/concepts/signals/traces/#span-kind) (`span_kind`).
- A server span represents work performed on the receiving side of a request, while a client span represents the outbound call (parent span) waiting for a response (client → server).
To detect slow inbound operations within a specific service, you can define an alert rule that detects when the percentile latency of server spans exceeds a threshold. For example:
_Detect when 95% of requests (excluding errors) do not complete faster than 2 seconds._
#### Using native histograms
The following PromQL query uses the `traces_span_metrics_duration_seconds` native histogram metric to define the alert rule query.
```promql
histogram_quantile(0.95,
sum by (span_name) (
rate(traces_span_metrics_duration_seconds{
service_name="<SERVICE_NAME>",
span_kind="SPAN_KIND_SERVER",
status_code!="STATUS_CODE_ERROR"
}[10m])
)
) > 2
```
Here's the query breakdown:
- `traces_span_metrics_duration_seconds`
It's a native histogram produced from spans using Alloy or the OTel Collector. The metric is filtered by:
- `service_name="<SERVICE_NAME>"` targets a particular service.
- `span_kind="SPAN_KIND_SERVER"` selects spans handling inbound requests.
- `status_code!="STATUS_CODE_ERROR"` excludes spans that ended with errors.
_You should query `traces_spanmetrics_latency` when using other span metric generators._
- `rate(...[10m])`
Converts the histogram into a per-second histogram over the last 10 minutes (the distribution of spans per second during that period).
This makes the time window explicit and ensures latencies can be calculated over the last 10 minutes using `histogram_*` functions.
- `sum by (span_name)( … )`
Merges all series that share the same `span_name`. This creates a [multidimensional alert](https://grafana.com/docs/grafana/latest/alerting/best-practices/multi-dimensional-alerts/) that generates one alert instance per span name (operation).
- `histogram_quantile(0.95, ...)`
Calculates p95 latency from the histogram after applying the rate.
The query runs as an **instant Prometheus query**, returning a single value for the 10-minute window.
- `> 2`
Defines the threshold condition. It returns only series whose p95 latency exceeds 2 seconds.
Alternatively, you can set this threshold as a Grafana Alerting expression in the UI, as shown in the following screenshot.
{{< figure src="/media/docs/alerting/trace-based-alertrule-screenshot.png" max-width="750px" caption="Alert rule querying span metrics and using threshold expression" >}}
#### Using classic histograms
Native histograms have only been stable in Prometheus since v3.8.0, so your span metric generator may instead create classic histograms for latency span metrics, named either `traces_span_metrics_duration_seconds` or `traces_spanmetrics_latency`.
When using classic histograms, the measurement is the same but the metric format changes. A classic histogram has fixed buckets and exposes three metrics:
- `_bucket`: cumulative buckets of the observations.
- `_sum`: total sum of all observed values.
- `_count`: count of observed values.
To calculate percentiles accurately, especially around a particular threshold (for example, `2s`), you have to configure the classic histogram with explicit buckets, such as:
```shell
["100ms", "250ms", "1s", "2s", "5s"]
```
In Alloy, the `otelcol.connector.spanmetrics` component configures the buckets with the [`explicit` block](https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.connector.spanmetrics/#explicit). In Tempo, the metrics-generator configures them with the [`span_metrics.histogram_buckets` setting](https://grafana.com/docs/tempo/latest/configuration/#metrics-generator).
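As a minimal Alloy sketch (component labels and the exporter target are illustrative, not prescriptive):

```alloy
otelcol.connector.spanmetrics "default" {
  histogram {
    // Include a bucket boundary at the alert threshold (2s) so
    // histogram_quantile can resolve percentiles around it.
    explicit {
      buckets = ["100ms", "250ms", "1s", "2s", "5s"]
    }
  }
  output {
    // Illustrative target: any Prometheus-compatible metrics pipeline.
    metrics = [otelcol.exporter.prometheus.default.input]
  }
}
```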
Here's the equivalent PromQL for classic histograms:
```promql
histogram_quantile(0.95,
sum by (span_name, le) (
rate(traces_span_metrics_duration_seconds_bucket{
service_name="<SERVICE_NAME>",
span_kind="SPAN_KIND_SERVER",
status_code!="STATUS_CODE_ERROR"
}[10m])
)
) > 2
```
Key differences compared with the native histograms example:
- You must configure a histogram bucket matching the desired threshold (for example, `2s`).
- You must query the `_bucket` metric, not the base metric.
- You must include `le` in the `sum by (…)` grouping for `histogram_quantile` calculation.
Everything else remains the same.
{{< admonition type="note" >}}
The alert rules in these examples create [multi-dimensional alerts](https://grafana.com/docs/grafana/latest/alerting/best-practices/multi-dimensional-alerts/): one alert instance for each distinct span name.
Dynamic span routes such as `/product/1234` can create separate metric dimensions and alerts for each unique span, which can significantly impact metric costs and performance for large volumes.
To prevent high-cardinality data, normalize dynamic routes like `/product/{id}` using semantic attributes such as [`http.route`](https://opentelemetry.io/docs/specs/semconv/registry/attributes/http/) and [`url.template`](https://opentelemetry.io/docs/specs/semconv/registry/attributes/url/), and limit dimensions to low-cardinality fields such as `service_name`, `status_code`, or `http_method`.
{{< /admonition >}}
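With the Alloy spanmetrics connector, extra dimensions are opt-in; the following sketch (component labels and the exporter target are illustrative) keeps only low-cardinality attributes:

```alloy
otelcol.connector.spanmetrics "default" {
  // Only attributes listed here become metric labels; omit dynamic
  // values such as raw URL paths to keep cardinality low.
  dimension {
    name = "http.method"
  }
  dimension {
    name = "http.route"
  }
  output {
    metrics = [otelcol.exporter.prometheus.default.input]
  }
}
```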
### Detect high error rate
This example defines an alert rule that detects when the error rate for any operation exceeds 20%. You can use error-rate alerts like this to identify increases in request errors, such as 5xx responses or internal failures.
The following query calculates the fraction of failed server spans for each service and operation.
```promql
(
sum by (service, span_name) (
rate(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER",
status_code="STATUS_CODE_ERROR"
}[10m])
)
/
sum by (service, span_name) (
rate(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER"
}[10m])
)
) > 0.2
```
Here's the query breakdown:
- `traces_span_metrics_calls_total`
A counter metric produced from spans that tracks the number of completed span operations.
- `span_kind="SPAN_KIND_SERVER"` selects spans handling inbound requests.
- `status_code="STATUS_CODE_ERROR"` selects only spans that ended in error.
- Omitting the `status_code` filter in the denominator includes all spans, returning the total span count.
_Check whether your metric generator instead creates the `traces_spanmetrics_calls_total` metric, and adjust the metric name._
- `rate(...[10m])`
Converts the cumulative counter into a per-second rate over the last 10 minutes (the number of spans per second during that period).
This makes the time window explicit and handles counter resets.
- `sum by (service, span_name)( … )`
Aggregates per service and operation, creating one alert instance for each `(service, span_name)` combination.
This is a [multidimensional alert](https://grafana.com/docs/grafana/latest/alerting/best-practices/multi-dimensional-alerts/) that applies to all services, helping identify which service and corresponding operation is failing.
- `sum by () (...) / sum by () (...)`
Divides failed spans by total spans to calculate the error rate per operation.
The result is a ratio between `0` and `1`, where `1` means all operations failed.
The query runs as an **instant Prometheus query**, returning a single value for the 10-minute window.
- `> 0.2`
Defines the threshold condition. It returns only series whose error rate is higher than 20% of spans.
Alternatively, you can set this threshold as a Grafana Alerting expression in the UI.
### Enable traffic guardrails
When traffic is very low, even a single slow or failing request can trigger these alerts.
To avoid these types of false positives during low-traffic periods, you can include a **minimum traffic condition** in your alert rule queries. For example:
```promql
sum by (service, span_name)(
increase(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER"
}[10m])
) > 300
```
This query returns only series whose spans handled more than 300 requests in the 10-minute period.
This minimum level of traffic helps prevent false positives, ensuring the alert evaluates a significant number of spans before triggering.
You can combine this traffic condition with the **error-rate** query to ensure alerts fire only when both conditions are met:
```promql
((
sum by (service, span_name) (
rate(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER",
status_code="STATUS_CODE_ERROR"
}[10m])
)
/
sum by (service, span_name) (
rate(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER"
}[10m])
)
) > 0.2)
and
(
sum by (service, span_name)(
increase(traces_span_metrics_calls_total{
span_kind="SPAN_KIND_SERVER"
}[10m])
) > 300 )
```
For a given span, the alert fires when:
- The **error rate exceeds 20%** over the last 10 minutes.
- The span **handled at least 300 requests** over the last 10 minutes.
**Alternatively**, you can split the alert into separate queries and combine them using a math expression as the threshold. In the example below, `$ErrorRateCondition` is the Grafana reference for the error-rate query, and `$TrafficCondition` is the reference for the traffic query.
{{< figure src="/media/docs/alerting/traffic-guardrail-with-separate-queries.png" max-width="500px" alt="Alert rule with threshold based on two queries" >}}
In this case, you must ensure both queries group by the same labels.
The advantage of this approach is that you can observe the results of both independent queries. You can then access the query results through the [`$values` variable](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/templates/reference/#values) and display them in notifications or use them in custom labels.
A potential drawback of splitting queries is that each query runs separately. This increases backend load and can affect query performance, especially in environments with a large number of active alerts.
You can apply this traffic guardrail pattern to any alert rule.
### Consider sampling {#consider-sampling}
[Sampling](https://grafana.com/docs/tempo/latest/set-up-for-tracing/instrument-send/set-up-collector/tail-sampling/) is a technique used to reduce the number of collected spans for cost-saving purposes. There are two main strategies, which can be combined:
- **Head sampling**: The decision to record or drop a span is made when the trace begins. The condition can be configured probabilistically (a percentage of traces) or by filtering out certain operations.
- **Tail sampling**: The decision is made after the trace completes. This allows sampling more interesting operations, such as slow or failing requests.
With **head sampling**, alerting on span metrics should be done with caution, since span metrics will represent only a subset of all traces.
With **tail sampling**, it's important to generate span metrics before a sampling decision is made. [Grafana Cloud Adaptive Traces](https://grafana.com/docs/grafana-cloud/adaptive-telemetry/adaptive-traces/) handles this automatically. With Alloy or the OpenTelemetry Collector, make sure the SpanMetrics connector runs before the filtering or [tail sampling processor](https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.processor.tail_sampling/).
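With Alloy, this ordering can be expressed in the receiver's output list; a hedged sketch (component labels are illustrative):

```alloy
otelcol.receiver.otlp "default" {
  grpc {}
  output {
    // Fan traces out to the spanmetrics connector *before* tail sampling,
    // so span metrics reflect all spans rather than only the sampled ones.
    traces = [
      otelcol.connector.spanmetrics.default.input,
      otelcol.processor.tail_sampling.default.input,
    ]
  }
}
```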
## Using TraceQL (experimental)
**TraceQL** is a query language for searching and filtering traces in **Grafana Tempo**, with a syntax similar to `PromQL` and `LogQL`.
With TraceQL, you can skip converting tracing data into span metrics and query raw trace data directly. It provides more flexible filtering based on the trace structure, attributes, or resource metadata, and can detect issues faster because it does not wait for metric generation.
However, keep in mind that TraceQL is not suitable for all scenarios. For example:
- **Inadequate for long-term analysis**
Trace data has a significantly shorter retention period than metrics. For historical monitoring, it's recommended to convert key tracing data into metrics to preserve important data.
- **Inadequate for alerting after sampling**
TraceQL can only query traces that are actually stored in Tempo. If sampling drops a large portion of traces, TraceQL-based alerts may miss real issues. Refer to [consider sampling](#consider-sampling) for guidance on how to generate span metrics before sampling.
{{< admonition type="caution" >}}
TraceQL alerting is available in Grafana v12.1 or higher, supported as an [experimental feature](https://grafana.com/docs/release-life-cycle/).
Engineering and on-call support is not available. Documentation is either limited or not provided outside of code comments. No SLA is provided.
While TraceQL can be powerful for exploring and detecting issues directly from trace data, **alerting with TraceQL should not be used in production environments yet**. For now, use it only for testing and experimentation.
{{< /admonition >}}
The following example demonstrates how to recreate the previous **alert rule that detected slow span operations** using TraceQL.
Follow these steps to create the alert:
1. Enable TraceQL alerting
To use TraceQL in alerts, you must enable the [**`tempoAlerting`** feature flag in your Grafana configuration](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#feature_toggles).
2. Configure the alert query
In your alert rule, select the **Tempo** data source, then convert the original PromQL query into the equivalent TraceQL query:
```traceql
{status != error && kind = server && .service.name = "<SERVICE_NAME>"}
| quantile_over_time(duration, .95) by (name)
```
For a given service, this query calculates the **p95 latency** for all server spans, excluding errors, and groups them by span name.
3. Configure the time range
Currently, TraceQL alerting supports only range queries.
To define the time window, set the query time range to **the last 10 minutes**.
- From: `now-10m`
- To: `now`
{{< figure src="/media/docs/alerting/traceql-alert-configure-time-range.png" max-width="750px" alt="Time range configuration for TraceQL alert rule" >}}
4. Add a reducer expression.
Range queries return time series data, not a single value. The alert rule must then **reduce** time series data to a single numeric value before comparing it against a threshold.
Add a **Reduce** expression to convert the query results into a single value.
5. Set the threshold condition.
Create a **Threshold** expression to fire when the p95 latency exceeds 2 seconds: **$B > 2**.
{{< figure src="/media/docs/alerting/traceql-alert-configure-threshold.png" max-width="750px" alt="Alert rule configuration showing reducer and threshold expressions for TraceQL query" >}}
This final alert detects when 95% of the server spans for a particular service (excluding errors) take longer than 2 seconds to complete, using raw trace data instead of span metrics.
## Additional resources
To explore related topics and expand the examples in this guide, see the following resources:
- [Trace structure](https://grafana.com/docs/tempo/latest/introduction/trace-structure/): Learn how traces and spans are structured.
- [Grafana Tempo documentation](https://grafana.com/docs/tempo/latest/): Full reference for Grafana's open source tracing backend.
- [Span metrics using the metrics generator in Tempo](https://grafana.com/docs/tempo/latest/metrics-from-traces/span-metrics/span-metrics-metrics-generator/): Generate span metrics directly from traces with Tempo's built-in metrics generator.
- [Span metrics using Grafana Alloy](https://grafana.com/docs/tempo/latest/metrics-from-traces/span-metrics/span-metrics-alloy/): Configure Alloy to export span metrics from OpenTelemetry (OTel) traces.
- [Multi-dimensional alerts](https://grafana.com/docs/grafana/latest/alerting/best-practices/multi-dimensional-alerts/): Learn how to trigger multiple alert instances per alert rule, as in these examples.
- [Grafana SLO documentation](https://grafana.com/docs/grafana-cloud/alerting-and-irm/slo/): Use span metrics to define Service Level Objectives (SLOs) in Grafana.
- [Trace sampling](https://grafana.com/docs/tempo/latest/set-up-for-tracing/instrument-send/set-up-collector/tail-sampling/#sampling): Explore sampling strategies and configuration in Grafana Tempo.
{{< admonition type="note" >}}
OpenTelemetry instrumentations can record metrics independently of spans.
These [OTEL metrics](https://opentelemetry.io/docs/specs/semconv/general/metrics/) are not derived from traces and are not affected by sampling. They can serve as an alternative to span-derived metrics.
{{< /admonition >}}


@@ -153,10 +153,6 @@ If you're using an older version of Microsoft SQL Server like 2008 and 2008R2, y
**Authentication:**
{{< admonition type="note" >}}
In order to use Azure AD Authentication the toggle `auth.azure_auth_enabled` must be set to `true` in the Grafana configuration file.
{{< /admonition >}}
| Authentication Type | Description | Credentials / Fields |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **SQL Server Authentication** | Default method to connect to MSSQL. Use a SQL Server or Windows login in `DOMAIN\User` format. | - **Username**: SQL Server username<br>- **Password**: SQL Server password |


@@ -4080,11 +4080,6 @@
"count": 2
}
},
"public/app/plugins/datasource/loki/types.ts": {
"no-barrel-files/no-barrel-files": {
"count": 3
}
},
"public/app/plugins/datasource/mixed/module.ts": {
"no-barrel-files/no-barrel-files": {
"count": 2
@@ -4110,11 +4105,6 @@
"count": 10
}
},
"public/app/plugins/datasource/opentsdb/components/AnnotationEditor.tsx": {
"no-restricted-syntax": {
"count": 3
}
},
"public/app/plugins/datasource/opentsdb/components/OpenTsdbDetails.tsx": {
"no-restricted-syntax": {
"count": 3
@@ -4687,4 +4677,4 @@
"count": 1
}
}
}
}

go.mod

@@ -87,7 +87,7 @@ require (
github.com/googleapis/gax-go/v2 v2.15.0 // @grafana/grafana-backend-group
github.com/gorilla/mux v1.8.1 // @grafana/grafana-backend-group
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // @grafana/grafana-app-platform-squad
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 // @grafana/alerting-backend
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // @grafana/alerting-backend
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // @grafana/identity-access-team
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // @grafana/identity-access-team
github.com/grafana/dataplane/examples v0.0.1 // @grafana/observability-metrics

go.sum

@@ -1613,8 +1613,8 @@ github.com/gorilla/sessions v1.2.1 h1:DHd3rPN5lE3Ts3D8rKkQ8x/0kqfeNmBAaiSi+o7Fsg
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232 h1:I9l/BxoqxTlPUVx05t8OsqbdP/qwqOeD2E5makeeIz0=
github.com/grafana/alerting v0.0.0-20251202151018-58fa500f3232/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -444,7 +444,6 @@ export type FolderSpec = {
description?: string;
title: string;
};
export type FolderStatus = object;
export type Folder = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion?: string;
@@ -453,7 +452,6 @@ export type Folder = {
metadata: ObjectMeta;
/** Spec is the spec of the Folder */
spec: FolderSpec;
status: FolderStatus;
};
export type ListMeta = {
/** continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. */


@@ -1211,4 +1211,9 @@ export interface FeatureToggles {
* Adds support for Kubernetes alerting historian APIs
*/
kubernetesAlertingHistorian?: boolean;
/**
* Enables plugins decoupling from bootdata
* @default false
*/
useMTPlugins?: boolean;
}


@@ -0,0 +1,5 @@
const sharedConfig = require('../../jest.config.js');
module.exports = {
...sharedConfig,
rootDir: '../../',
};


@@ -0,0 +1,169 @@
import { DataSourceInstanceSettings } from '@grafana/data';
import { DB, SQLOptions, SqlQueryModel } from '../types';
import { makeVariable } from '../utils/testHelpers';
import { SqlDatasource } from './SqlDatasource';
// Minimal test implementation of SqlDatasource
class TestSqlDatasource extends SqlDatasource {
getDB(): DB {
return {} as DB;
}
getQueryModel(): SqlQueryModel {
return {
quoteLiteral: (value: string) => `'${value.replace(/'/g, "''")}'`,
} as SqlQueryModel;
}
}
describe('SqlDatasource - Variable Interpolation', () => {
const instanceSettings = {
jsonData: {
defaultProject: 'testproject',
},
} as unknown as DataSourceInstanceSettings<SQLOptions>;
let ds: TestSqlDatasource;
beforeEach(() => {
ds = new TestSqlDatasource(instanceSettings);
});
describe('Case 1: Multi-value enabled, single value selected', () => {
it('should escape single quotes in string value', () => {
const variable = makeVariable('id1', 'name1', { multi: true });
// When we apply the general fix for all SQL data sources these should be uncommented
// expect(ds.interpolateVariable('value1', variable)).toEqual('value1');
// expect(ds.interpolateVariable("O'Brien", variable)).toEqual("O''Brien");
expect(ds.interpolateVariable('value1', variable)).toEqual(`'value1'`);
expect(ds.interpolateVariable("O'Brien", variable)).toEqual(`'O''Brien'`);
});
it('should handle numeric value', () => {
const variable = makeVariable('id1', 'name1', { multi: true });
expect(ds.interpolateVariable(42 as unknown as string, variable)).toEqual(42);
});
});
describe('Case 2: Multi-value enabled, multiple values selected', () => {
it('should return quoted, comma-separated values', () => {
const variable = makeVariable('id1', 'name1', { multi: true });
expect(ds.interpolateVariable(['value1', 'value2', 'value3'], variable)).toEqual("'value1','value2','value3'");
});
it('should escape single quotes in array values', () => {
const variable = makeVariable('id1', 'name1', { multi: true });
expect(ds.interpolateVariable(["O'Brien", 'Smith', "D'Angelo"], variable)).toEqual(
"'O''Brien','Smith','D''Angelo'"
);
});
it('should handle empty array', () => {
const variable = makeVariable('id1', 'name1', { multi: true });
expect(ds.interpolateVariable([], variable)).toEqual('');
});
});
describe('Case 3: Include all enabled, single value selected', () => {
it('should escape single quotes in string value', () => {
const variable = makeVariable('id1', 'name1', { includeAll: true });
// When we apply the general fix for all SQL data sources these should be uncommented
// expect(ds.interpolateVariable('value1', variable)).toEqual('value1');
// expect(ds.interpolateVariable("O'Brien", variable)).toEqual("O''Brien");
expect(ds.interpolateVariable('value1', variable)).toEqual(`'value1'`);
expect(ds.interpolateVariable("O'Brien", variable)).toEqual(`'O''Brien'`);
});
it('should handle numeric value', () => {
const variable = makeVariable('id1', 'name1', { includeAll: true });
expect(ds.interpolateVariable(123 as unknown as string, variable)).toEqual(123);
});
});
describe('Case 4: Include all enabled, "All" value selected', () => {
it('should handle All option as array', () => {
const variable = makeVariable('id1', 'name1', { includeAll: true });
expect(ds.interpolateVariable(['value1', 'value2', 'value3'], variable)).toEqual("'value1','value2','value3'");
});
it('should handle All option with special characters', () => {
const variable = makeVariable('id1', 'name1', { includeAll: true });
expect(ds.interpolateVariable(["test'1", 'test2', "test'3"], variable)).toEqual("'test''1','test2','test''3'");
});
});
describe('Case 5: No include all, no multi-value, single value selected', () => {
it('should escape single quotes in string value', () => {
const variable = makeVariable('id1', 'name1', { multi: false, includeAll: false });
expect(ds.interpolateVariable('value1', variable)).toEqual('value1');
expect(ds.interpolateVariable("O'Brien", variable)).toEqual("O''Brien");
});
it('should handle numeric value', () => {
const variable = makeVariable('id1', 'name1', { multi: false, includeAll: false });
expect(ds.interpolateVariable(999 as unknown as string, variable)).toEqual(999);
});
it('should handle empty string', () => {
const variable = makeVariable('id1', 'name1', { multi: false, includeAll: false });
expect(ds.interpolateVariable('', variable)).toEqual('');
});
});
describe('Case 6: Both include all and multi-value enabled, single value selected', () => {
it('should escape single quotes in string value', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
// When we apply the general fix for all SQL data sources these should be uncommented
// expect(ds.interpolateVariable('value1', variable)).toEqual('value1');
// expect(ds.interpolateVariable("O'Brien", variable)).toEqual("O''Brien");
expect(ds.interpolateVariable('value1', variable)).toEqual(`'value1'`);
expect(ds.interpolateVariable("O'Brien", variable)).toEqual(`'O''Brien'`);
});
it('should handle numeric value', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(456 as unknown as string, variable)).toEqual(456);
});
});
describe('Case 7: Both include all and multi-value enabled, "All" value selected', () => {
it('should handle All option as array', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(['value1', 'value2', 'value3'], variable)).toEqual("'value1','value2','value3'");
});
it('should handle All option with mixed values', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(['alpha', 'beta', 'gamma'], variable)).toEqual("'alpha','beta','gamma'");
});
it('should handle All option with special characters', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(["it's", "can't", "won't"], variable)).toEqual("'it''s','can''t','won''t'");
});
});
describe('Case 8: Both include all and multi-value enabled, multiple values selected', () => {
it('should return quoted, comma-separated values', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(['value1', 'value2'], variable)).toEqual("'value1','value2'");
});
it('should escape single quotes in array values', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(["O'Brien", "D'Angelo"], variable)).toEqual("'O''Brien','D''Angelo'");
});
it('should handle single item array', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(['value1'], variable)).toEqual("'value1'");
});
it('should handle array with single quote escaping', () => {
const variable = makeVariable('id1', 'name1', { multi: true, includeAll: true });
expect(ds.interpolateVariable(['a', "b'c", 'd'], variable)).toEqual("'a','b''c','d'");
});
});
});


@@ -135,7 +135,7 @@ export function RadialGauge(props: RadialGaugeProps) {
displayProcessor,
});
if (spotlight) {
if (spotlight && theme.isDark) {
defs.push(
<SpotlightGradient
key={spotlightGradientId}


@@ -16,6 +16,7 @@ import { t, Trans } from '@grafana/i18n';
import { useTheme2 } from '../../themes/ThemeContext';
import { Icon } from '../Icon/Icon';
import { getPortalContainer } from '../Portal/Portal';
import { CustomInput } from './CustomInput';
import { DropdownIndicator } from './DropdownIndicator';
@@ -123,7 +124,7 @@ export function SelectBase<T, Rest = {}>({
minMenuHeight,
maxVisibleValues,
menuPlacement = 'auto',
menuPosition,
menuPosition = 'fixed',
menuShouldPortal = true,
noOptionsMessage = t('grafana-ui.select.no-options-label', 'No options found'),
onBlur,
@@ -255,9 +256,9 @@ export function SelectBase<T, Rest = {}>({
maxVisibleValues,
menuIsOpen: isOpen,
menuPlacement: menuPlacement === 'auto' && closeToBottom ? 'top' : menuPlacement,
menuPosition,
menuPosition: menuShouldPortal ? 'fixed' : menuPosition,
menuShouldBlockScroll: true,
menuPortalTarget: menuShouldPortal && typeof document !== 'undefined' ? document.body : undefined,
menuPortalTarget: menuShouldPortal && getPortalContainer(),
menuShouldScrollIntoView: false,
onBlur,
onChange: onChangeWithEmpty,


@@ -23,6 +23,7 @@ interface Props {
setSearchFilter: (value: string) => void;
operator: SelectableValue<string>;
setOperator: (item: SelectableValue<string>) => void;
referenceElement: HTMLElement;
}
const ITEM_HEIGHT = 28;
@@ -81,6 +82,7 @@ export const FilterList = ({
setSearchFilter,
operator,
setOperator,
referenceElement,
}: Props) => {
const regex = useMemo(() => new RegExp(searchFilter, caseSensitive ? undefined : 'i'), [searchFilter, caseSensitive]);
const items = useMemo(
@@ -186,6 +188,7 @@ export const FilterList = ({
{showOperators && (
<Stack direction="row" gap={0}>
<ButtonSelect
root={referenceElement}
variant="canvas"
options={OPERATORS}
onChange={setOperator}


@@ -1,5 +1,5 @@
import { css, cx } from '@emotion/css';
import { useCallback, useMemo, useState } from 'react';
import { useCallback, useMemo, useRef, useState } from 'react';
import * as React from 'react';
import { Field, GrafanaTheme2, SelectableValue } from '@grafana/data';
@@ -42,8 +42,9 @@ export const FilterPopup = ({
const filteredOptions = useMemo(() => getFilteredOptions(options, filterValue), [options, filterValue]);
const [values, setValues] = useState<SelectableValue[]>(filteredOptions);
const [matchCase, setMatchCase] = useState(false);
const ref = useRef<HTMLDivElement>(null);
const onCancel = useCallback((event?: React.MouseEvent) => onClose(), [onClose]);
const onCancel = useCallback(() => onClose(), [onClose]);
const onFilter = useCallback(
(event: React.MouseEvent) => {
@@ -70,7 +71,7 @@ export const FilterPopup = ({
<ClickOutsideWrapper onClick={onCancel} useCapture={true}>
{/* This is just blocking click events from bubbeling and should not have a keyboard interaction. */}
{/* eslint-disable-next-line jsx-a11y/no-static-element-interactions, jsx-a11y/click-events-have-key-events */}
<div className={cx(styles.filterContainer)} onClick={stopPropagation}>
<div ref={ref} className={cx(styles.filterContainer)} onClick={stopPropagation}>
<Stack direction="column" gap={3}>
<Stack direction="column" gap={0.5}>
<Stack justifyContent="space-between" alignItems="center">
@@ -87,17 +88,20 @@ export const FilterPopup = ({
/>
</Stack>
<div className={cx(styles.listDivider)} />
<FilterList
onChange={setValues}
values={values}
options={options}
caseSensitive={matchCase}
showOperators={true}
searchFilter={searchFilter}
setSearchFilter={setSearchFilter}
operator={operator}
setOperator={setOperator}
/>
{ref.current && (
<FilterList
referenceElement={ref.current}
onChange={setValues}
values={values}
options={options}
caseSensitive={matchCase}
showOperators={true}
searchFilter={searchFilter}
setSearchFilter={setSearchFilter}
operator={operator}
setOperator={setOperator}
/>
)}
</Stack>
<Stack gap={3}>
<Stack>


@@ -64,6 +64,7 @@ import (
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/apistore"
"github.com/grafana/grafana/pkg/storage/unified/resource"
resourcepb "github.com/grafana/grafana/pkg/storage/unified/resourcepb"
"github.com/grafana/grafana/pkg/util"
)
@@ -531,11 +532,9 @@ func (b *DashboardsAPIBuilder) UpdateAPIGroupInfo(apiGroupInfo *genericapiserver
RequireDeprecatedInternalID: true,
}
// TODO: merge this into one option
if b.isStandalone {
// TODO: Sets default root permissions
storageOpts.Permissions = b.setDefaultDashboardPermissions
} else {
// Sets default root permissions
storageOpts.Permissions = b.dashboardPermissions.SetDefaultPermissionsAfterCreate
}
@@ -647,6 +646,18 @@ func (b *DashboardsAPIBuilder) storageForVersion(
unified.AfterDelete = b.afterDelete
storage[dashboards.StoragePath()] = unified
storage[dashboards.StoragePath("dto")], err = NewDTOConnector(
unified,
largeObjects,
b.unified,
b.accessClient,
newDTOFunc,
nil, // no publicDashboardService in standalone mode
)
if err != nil {
return err
}
return nil
}
@@ -675,10 +686,8 @@ func (b *DashboardsAPIBuilder) storageForVersion(
storage[dashboards.StoragePath("dto")], err = NewDTOConnector(
storage[dashboards.StoragePath()].(rest.Getter),
largeObjects,
b.legacy.Access,
b.unified,
b.accessControl,
opts.Scheme,
b.accessClient,
newDTOFunc,
b.publicDashboardService,
)
@@ -746,6 +755,88 @@ func (b *DashboardsAPIBuilder) afterDelete(obj runtime.Object, _ *metav1.DeleteO
}
}
var defaultDashboardPermissions = []map[string]any{
{
"kind": "BasicRole",
"name": "Admin",
"verb": "admin",
},
{
"kind": "BasicRole",
"name": "Editor",
"verb": "edit",
},
{
"kind": "BasicRole",
"name": "Viewer",
"verb": "view",
},
}
func (b *DashboardsAPIBuilder) setDefaultDashboardPermissions(ctx context.Context, key *resourcepb.ResourceKey, id authlib.AuthInfo, obj utils.GrafanaMetaAccessor) error {
if b.resourcePermissionsSvc == nil {
return nil
}
if obj.GetFolder() != "" {
return nil
}
log := logging.FromContext(ctx)
log.Debug("setting default dashboard permissions", "uid", obj.GetName(), "namespace", obj.GetNamespace())
client := (*b.resourcePermissionsSvc).Namespace(obj.GetNamespace())
name := fmt.Sprintf("%s-%s-%s", dashv1.DashboardResourceInfo.GroupVersionResource().Group, dashv1.DashboardResourceInfo.GroupVersionResource().Resource, obj.GetName())
if _, err := client.Get(ctx, name, metav1.GetOptions{}); err == nil {
_, err := client.Update(ctx, &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]any{
"name": name,
"namespace": obj.GetNamespace(),
},
"spec": map[string]any{
"resource": map[string]any{
"apiGroup": dashv1.DashboardResourceInfo.GroupVersionResource().Group,
"resource": dashv1.DashboardResourceInfo.GroupVersionResource().Resource,
"name": obj.GetName(),
},
"permissions": defaultDashboardPermissions,
},
},
}, metav1.UpdateOptions{})
if err != nil {
log.Error("failed to update dashboard permissions", "error", err)
return fmt.Errorf("update dashboard permissions: %w", err)
}
return nil
}
_, err := client.Create(ctx, &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]any{
"name": name,
"namespace": obj.GetNamespace(),
},
"spec": map[string]any{
"resource": map[string]any{
"apiGroup": dashv1.DashboardResourceInfo.GroupVersionResource().Group,
"resource": dashv1.DashboardResourceInfo.GroupVersionResource().Resource,
"name": obj.GetName(),
},
"permissions": defaultDashboardPermissions,
},
},
}, metav1.CreateOptions{})
if err != nil {
log.Error("failed to create dashboard permissions", "error", err)
return fmt.Errorf("create dashboard permissions: %w", err)
}
return nil
}
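The permission-setting function above follows a get-then-update-or-create upsert flow. A minimal standalone sketch of that flow, using a hypothetical in-memory client in place of Grafana's resource-permissions client:

```go
package main

import (
	"errors"
	"fmt"
)

// fakeClient is a hypothetical stand-in for the namespaced resource
// permissions client; only the calls used by the upsert flow are mimicked.
type fakeClient struct{ store map[string]string }

var errNotFound = errors.New("not found")

func (c *fakeClient) Get(name string) (string, error) {
	v, ok := c.store[name]
	if !ok {
		return "", errNotFound
	}
	return v, nil
}
func (c *fakeClient) Update(name, spec string) { c.store[name] = spec }
func (c *fakeClient) Create(name, spec string) { c.store[name] = spec }

// upsert mirrors the diff's logic: if Get succeeds, Update the existing
// resource; otherwise Create it.
func upsert(c *fakeClient, name, spec string) string {
	if _, err := c.Get(name); err == nil {
		c.Update(name, spec)
		return "updated"
	}
	c.Create(name, spec)
	return "created"
}

func main() {
	c := &fakeClient{store: map[string]string{}}
	fmt.Println(upsert(c, "dash-1", "v1")) // created
	fmt.Println(upsert(c, "dash-1", "v2")) // updated
}
```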
func (b *DashboardsAPIBuilder) GetOpenAPIDefinitions() common.GetOpenAPIDefinitions {
return func(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
defs := dashv0.GetOpenAPIDefinitions(ref)


@@ -12,17 +12,17 @@ import (
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/slugify"
"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/publicdashboards"
"github.com/grafana/grafana/pkg/storage/unified/apistore"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
"github.com/grafana/grafana/pkg/util"
)
type dtoBuilder = func(dashboard runtime.Object, access *dashboard.DashboardAccess) (runtime.Object, error)
@@ -30,11 +30,9 @@ type dtoBuilder = func(dashboard runtime.Object, access *dashboard.DashboardAcce
// The DTO returns everything the UI needs in a single request
type DTOConnector struct {
getter rest.Getter
legacy legacy.DashboardAccessor
unified resource.ResourceClient
largeObjects apistore.LargeObjectSupport
accessControl accesscontrol.AccessControl
scheme *runtime.Scheme
accessClient authlib.AccessClient
builder dtoBuilder
publicDashboardService publicdashboards.Service
}
@@ -42,21 +40,17 @@ type DTOConnector struct {
func NewDTOConnector(
getter rest.Getter,
largeObjects apistore.LargeObjectSupport,
legacyAccess legacy.DashboardAccessor,
resourceClient resource.ResourceClient,
accessControl accesscontrol.AccessControl,
scheme *runtime.Scheme,
accessClient authlib.AccessClient,
builder dtoBuilder,
publicDashboardService publicdashboards.Service,
) (rest.Storage, error) {
return &DTOConnector{
getter: getter,
legacy: legacyAccess,
accessControl: accessControl,
accessClient: accessClient,
unified: resourceClient,
largeObjects: largeObjects,
builder: builder,
scheme: scheme,
publicDashboardService: publicDashboardService,
}, nil
}
@@ -132,35 +126,87 @@ func (r *DTOConnector) Connect(ctx context.Context, name string, opts runtime.Ob
return
}
dashScope := dashboards.ScopeDashboardsProvider.GetResourceScopeUID(name)
evaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsRead, dashScope)
canView, err := r.accessControl.Evaluate(ctx, user, evaluator)
if err != nil || !canView {
logger := logging.FromContext(ctx).With("logger", "dto-connector")
access := &dashboard.DashboardAccess{}
folder := obj.GetFolder()
ns := obj.GetNamespace()
authInfo, ok := authlib.AuthInfoFrom(ctx)
if !ok {
responder.Error(fmt.Errorf("no identity found for request"))
return
}
gvr := dashv1.DashboardResourceInfo.GroupVersionResource()
// Check read permission using authlib.AccessClient
readRes, err := r.accessClient.Check(ctx, authInfo, authlib.CheckRequest{
Verb: utils.VerbGet,
Group: gvr.Group,
Resource: gvr.Resource,
Namespace: ns,
Name: name,
}, folder)
if err != nil {
logger.Warn("Failed to check read permission", "err", err)
responder.Error(fmt.Errorf("not allowed to view"))
return
}
if !readRes.Allowed {
responder.Error(fmt.Errorf("not allowed to view"))
return
}
access := &dashboard.DashboardAccess{}
writeEvaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsWrite, dashScope)
access.CanSave, _ = r.accessControl.Evaluate(ctx, user, writeEvaluator)
access.CanEdit = access.CanSave
adminEvaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsPermissionsWrite, dashScope)
access.CanAdmin, _ = r.accessControl.Evaluate(ctx, user, adminEvaluator)
deleteEvaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsDelete, dashScope)
access.CanDelete, _ = r.accessControl.Evaluate(ctx, user, deleteEvaluator)
// Check write permission
writeRes, err := r.accessClient.Check(ctx, authInfo, authlib.CheckRequest{
Verb: utils.VerbUpdate,
Group: gvr.Group,
Resource: gvr.Resource,
Namespace: ns,
Name: name,
}, folder)
// Keeping the same logic as with accessControl.Evaluate.
// On errors we default to deny.
if err != nil {
logger.Warn("Failed to check write permission", "err", err)
}
access.CanSave = writeRes.Allowed
access.CanEdit = writeRes.Allowed
// Check delete permission
deleteRes, err := r.accessClient.Check(ctx, authInfo, authlib.CheckRequest{
Verb: utils.VerbDelete,
Group: gvr.Group,
Resource: gvr.Resource,
Namespace: ns,
Name: name,
}, folder)
if err != nil {
logger.Warn("Failed to check delete permission", "err", err)
}
access.CanDelete = deleteRes.Allowed
// For admin permission, use write as a proxy for now
access.CanAdmin = writeRes.Allowed
access.CanStar = user.IsIdentityType(authlib.TypeUser)
access.AnnotationsPermissions = &dashboard.AnnotationPermission{}
r.getAnnotationPermissionsByScope(ctx, user, &access.AnnotationsPermissions.Dashboard, dashScope)
r.getAnnotationPermissionsByScope(ctx, user, &access.AnnotationsPermissions.Organization, accesscontrol.ScopeAnnotationsTypeOrganization)
// Annotation permissions - use write permission as proxy
access.AnnotationsPermissions = &dashboard.AnnotationPermission{
Dashboard: dashboard.AnnotationActions{CanAdd: writeRes.Allowed, CanEdit: writeRes.Allowed, CanDelete: writeRes.Allowed},
Organization: dashboard.AnnotationActions{CanAdd: writeRes.Allowed, CanEdit: writeRes.Allowed, CanDelete: writeRes.Allowed},
}
title := obj.FindTitle("")
access.Slug = slugify.Slugify(title)
access.Url = dashboards.GetDashboardFolderURL(false, name, access.Slug)
pubDash, err := r.publicDashboardService.FindByDashboardUid(ctx, user.GetOrgID(), name)
if err == nil && pubDash != nil {
access.IsPublic = true
// Only check public dashboards if service is available
if !util.IsInterfaceNil(r.publicDashboardService) {
pubDash, err := r.publicDashboardService.FindByDashboardUid(ctx, user.GetOrgID(), name)
if err == nil && pubDash != nil {
access.IsPublic = true
}
}
dash, err := r.builder(rawobj, access)
@@ -171,26 +217,3 @@ func (r *DTOConnector) Connect(ctx context.Context, name string, opts runtime.Ob
responder.Object(http.StatusOK, dash)
}), nil
}
func (r *DTOConnector) getAnnotationPermissionsByScope(ctx context.Context, user identity.Requester, actions *dashboard.AnnotationActions, scope string) {
var err error
logger := logging.FromContext(ctx).With("logger", "dto-connector")
evaluate := accesscontrol.EvalPermission(accesscontrol.ActionAnnotationsCreate, scope)
actions.CanAdd, err = r.accessControl.Evaluate(ctx, user, evaluate)
if err != nil {
logger.Warn("Failed to evaluate permission", "err", err, "action", accesscontrol.ActionAnnotationsCreate, "scope", scope)
}
evaluate = accesscontrol.EvalPermission(accesscontrol.ActionAnnotationsDelete, scope)
actions.CanDelete, err = r.accessControl.Evaluate(ctx, user, evaluate)
if err != nil {
logger.Warn("Failed to evaluate permission", "err", err, "action", accesscontrol.ActionAnnotationsDelete, "scope", scope)
}
evaluate = accesscontrol.EvalPermission(accesscontrol.ActionAnnotationsWrite, scope)
actions.CanEdit, err = r.accessControl.Evaluate(ctx, user, evaluate)
if err != nil {
logger.Warn("Failed to evaluate permission", "err", err, "action", accesscontrol.ActionAnnotationsWrite, "scope", scope)
}
}


@@ -85,7 +85,8 @@ func RegisterAPIService(
accessControl,
//nolint:staticcheck // not yet migrated to OpenFeature
features.IsEnabledGlobally(featuremgmt.FlagDatasourceQueryTypes),
false,
//nolint:staticcheck // not yet migrated to OpenFeature
features.IsEnabledGlobally(featuremgmt.FlagQueryServiceWithConnections),
)
if err != nil {
return nil, err


@@ -17,6 +17,7 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/iam/common"
idStore "github.com/grafana/grafana/pkg/registry/apis/iam/legacy"
"github.com/grafana/grafana/pkg/services/accesscontrol"
gapiutil "github.com/grafana/grafana/pkg/services/apiserver/utils"
)
var (
@@ -120,6 +121,7 @@ func newV0ResourcePermission(grn *groupResourceName, specs []v0alpha1.ResourcePe
},
}
r.SetUpdateTimestamp(updated.UTC())
r.UID = gapiutil.CalculateClusterWideUID(&r)
return r
}


@@ -297,6 +297,10 @@ func NewMapperRegistry() MapperRegistry {
skipScopeOnVerb: nil,
},
},
"plugins.grafana.app": {
"plugins": newResourceTranslation("plugins.plugins", "uid", false, nil),
"pluginsmeta": newResourceTranslation("plugins.pluginsmeta", "uid", false, nil),
},
})
return mapper


@@ -6,8 +6,10 @@ import (
"fmt"
"time"
authzextv1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
"go.opentelemetry.io/otel/codes"
authzextv1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
)
type OperationGroup string
@@ -119,3 +121,65 @@ func groupByOperation(operations []*authzextv1.MutateOperation) (map[OperationGr
return grouped, nil
}
func deduplicateTupleKeys(writeTuples []*openfgav1.TupleKey, deleteTuples []*openfgav1.TupleKeyWithoutCondition) ([]*openfgav1.TupleKey, []*openfgav1.TupleKeyWithoutCondition) {
deduplicatedWriteTuples := make([]*openfgav1.TupleKey, 0)
deduplicatedDeleteTuples := make([]*openfgav1.TupleKeyWithoutCondition, 0)
writeTupleMap := make(map[string]bool)
for _, writeTuple := range writeTuples {
id := getTupleKeyID(writeTuple)
if !writeTupleMap[id] {
writeTupleMap[id] = true
deduplicatedWriteTuples = append(deduplicatedWriteTuples, writeTuple)
}
}
// Prioritize writes over deletes. Deletes carry no condition, so we cannot tell whether a write tuple differs from a delete tuple with the same key.
for _, deleteTuple := range deleteTuples {
id := getTupleKeyID(deleteTuple)
if !writeTupleMap[id] {
writeTupleMap[id] = true
deduplicatedDeleteTuples = append(deduplicatedDeleteTuples, deleteTuple)
}
}
return deduplicatedWriteTuples, deduplicatedDeleteTuples
}
func (s *Server) writeTuples(ctx context.Context, store *storeInfo, writeTuples []*openfgav1.TupleKey, deleteTuples []*openfgav1.TupleKeyWithoutCondition) error {
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
writeTuples, deleteTuples = deduplicateTupleKeys(writeTuples, deleteTuples)
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
return err
}
type TupleKey interface {
GetUser() string
GetRelation() string
GetObject() string
}
func getTupleKeyID(t TupleKey) string {
return fmt.Sprintf("%s:%s:%s", t.GetUser(), t.GetRelation(), t.GetObject())
}
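The deduplication priority is easiest to see in isolation: keep the first write per key, then drop any delete whose key was already written. A simplified sketch with plain structs in place of the OpenFGA proto types:

```go
package main

import "fmt"

// tuple mirrors the user/relation/object triple behind getTupleKeyID.
type tuple struct{ user, relation, object string }

func id(t tuple) string { return fmt.Sprintf("%s:%s:%s", t.user, t.relation, t.object) }

// dedupe keeps the first write per key and drops any delete whose key was
// already seen as a write (writes win over deletes).
func dedupe(writes, deletes []tuple) ([]tuple, []tuple) {
	seen := map[string]bool{}
	w := make([]tuple, 0, len(writes))
	for _, t := range writes {
		if !seen[id(t)] {
			seen[id(t)] = true
			w = append(w, t)
		}
	}
	d := make([]tuple, 0, len(deletes))
	for _, t := range deletes {
		if !seen[id(t)] {
			seen[id(t)] = true
			d = append(d, t)
		}
	}
	return w, d
}

func main() {
	writes := []tuple{{"user:1", "get", "object:1"}, {"user:1", "get", "object:1"}}
	deletes := []tuple{{"user:1", "get", "object:1"}, {"user:2", "get", "object:2"}}
	w, d := dedupe(writes, deletes)
	fmt.Println(len(w), len(d)) // 1 1
}
```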


@@ -52,24 +52,7 @@ func (s *Server) mutateFolders(ctx context.Context, store *storeInfo, operations
return nil
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write folder tuples", "error", err)
return err


@@ -50,24 +50,7 @@ func (s *Server) mutateOrgRoles(ctx context.Context, store *storeInfo, operation
return nil
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write user org role tuples", "error", err)
return err


@@ -47,24 +47,7 @@ func (s *Server) mutateResourcePermissions(ctx context.Context, store *storeInfo
}
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write resource permission tuples", "error", err)
return err


@@ -44,24 +44,7 @@ func (s *Server) mutateRoleBindings(ctx context.Context, store *storeInfo, opera
}
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write resource role binding tuples", "error", err)
return err


@@ -41,24 +41,7 @@ func (s *Server) mutateRoles(ctx context.Context, store *storeInfo, operations [
}
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write resource role binding tuples", "error", err)
return err


@@ -43,24 +43,7 @@ func (s *Server) mutateTeamBindings(ctx context.Context, store *storeInfo, opera
}
}
writeReq := &openfgav1.WriteRequest{
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
}
if len(writeTuples) > 0 {
writeReq.Writes = &openfgav1.WriteRequestWrites{
TupleKeys: writeTuples,
OnDuplicate: "ignore",
}
}
if len(deleteTuples) > 0 {
writeReq.Deletes = &openfgav1.WriteRequestDeletes{
TupleKeys: deleteTuples,
OnMissing: "ignore",
}
}
_, err := s.openfga.Write(ctx, writeReq)
err := s.writeTuples(ctx, store, writeTuples, deleteTuples)
if err != nil {
s.logger.Error("failed to write resource role binding tuples", "error", err)
return err


@@ -5,6 +5,7 @@ import (
openfgav1 "github.com/openfga/api/proto/openfga/v1"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/structpb"
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
v1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
@@ -133,3 +134,66 @@ func testMutate(t *testing.T, srv *Server) {
require.Len(t, res.Tuples, 0)
})
}
func TestDeduplicateTupleKeys(t *testing.T) {
t.Run("should deduplicate write tuples", func(t *testing.T) {
writeTuples := []*openfgav1.TupleKey{
{User: "user:1", Relation: "get", Object: "object:1"},
{User: "user:1", Relation: "get", Object: "object:2"},
}
deleteTuples := []*openfgav1.TupleKeyWithoutCondition{
{User: "user:1", Relation: "get", Object: "object:1"},
{User: "user:2", Relation: "get", Object: "object:2"},
}
deduplicatedWriteTuples, deduplicatedDeleteTuples := deduplicateTupleKeys(writeTuples, deleteTuples)
require.Len(t, deduplicatedWriteTuples, 2)
require.ElementsMatch(t, deduplicatedWriteTuples, []*openfgav1.TupleKey{
{User: "user:1", Relation: "get", Object: "object:1"},
{User: "user:1", Relation: "get", Object: "object:2"},
})
require.Len(t, deduplicatedDeleteTuples, 1)
require.ElementsMatch(t, deduplicatedDeleteTuples, []*openfgav1.TupleKeyWithoutCondition{
{User: "user:2", Relation: "get", Object: "object:2"},
})
})
t.Run("should deduplicate write tuples with conditions", func(t *testing.T) {
writeTuples := []*openfgav1.TupleKey{
{User: "user:1", Relation: "get", Object: "object:1", Condition: &openfgav1.RelationshipCondition{Name: "condition:1", Context: &structpb.Struct{Fields: map[string]*structpb.Value{
"field:1": structpb.NewStringValue("value:1"),
}}}},
{User: "user:1", Relation: "get", Object: "object:2"},
}
deleteTuples := []*openfgav1.TupleKeyWithoutCondition{
{User: "user:1", Relation: "get", Object: "object:1"},
}
deduplicatedWriteTuples, deduplicatedDeleteTuples := deduplicateTupleKeys(writeTuples, deleteTuples)
require.Len(t, deduplicatedWriteTuples, 2)
require.ElementsMatch(t, deduplicatedWriteTuples, []*openfgav1.TupleKey{
{User: "user:1", Relation: "get", Object: "object:1", Condition: &openfgav1.RelationshipCondition{Name: "condition:1", Context: &structpb.Struct{Fields: map[string]*structpb.Value{
"field:1": structpb.NewStringValue("value:1"),
}}}},
{User: "user:1", Relation: "get", Object: "object:2"},
})
require.Len(t, deduplicatedDeleteTuples, 0)
})
t.Run("should do nothing for no duplicates", func(t *testing.T) {
writeTuples := []*openfgav1.TupleKey{
{User: "user:1", Relation: "get", Object: "object:1"},
}
deleteTuples := []*openfgav1.TupleKeyWithoutCondition{
{User: "user:2", Relation: "get", Object: "object:2"},
}
deduplicatedWriteTuples, deduplicatedDeleteTuples := deduplicateTupleKeys(writeTuples, deleteTuples)
require.Len(t, deduplicatedWriteTuples, 1)
require.ElementsMatch(t, deduplicatedWriteTuples, writeTuples)
require.Len(t, deduplicatedDeleteTuples, 1)
require.ElementsMatch(t, deduplicatedDeleteTuples, deleteTuples)
})
}


@@ -2000,6 +2000,14 @@ var (
Owner: grafanaAlertingSquad,
RequiresRestart: true,
},
{
Name: "useMTPlugins",
Description: "Enables plugins decoupling from bootdata",
Stage: FeatureStageExperimental,
Owner: grafanaPluginsPlatformSquad,
FrontendOnly: true,
Expression: "false",
},
}
)


@@ -271,3 +271,4 @@ ttlPluginInstanceManager,experimental,@grafana/plugins-platform-backend,false,fa
lokiQueryLimitsContext,experimental,@grafana/observability-logs,false,false,true
rudderstackUpgrade,experimental,@grafana/grafana-frontend-platform,false,false,true
kubernetesAlertingHistorian,experimental,@grafana/alerting-squad,false,true,false
useMTPlugins,experimental,@grafana/plugins-platform-backend,false,false,true


@@ -3558,6 +3558,20 @@
"frontend": true
}
},
{
"metadata": {
"name": "useMTPlugins",
"resourceVersion": "1764913709691",
"creationTimestamp": "2025-12-05T05:48:29Z"
},
"spec": {
"description": "Enables plugins decoupling from bootdata",
"stage": "experimental",
"codeowner": "@grafana/plugins-platform-backend",
"frontend": true,
"expression": "false"
}
},
{
"metadata": {
"name": "useMultipleScopeNodesEndpoint",
@@ -3674,4 +3688,4 @@
}
}
]
}
}


@@ -64,9 +64,19 @@ var _ Service = (*LibraryPanelService)(nil)
// ConnectLibraryPanelsForDashboard loops through all panels in dashboard JSON and connects any library panels to the dashboard.
func (lps *LibraryPanelService) ConnectLibraryPanelsForDashboard(c context.Context, signedInUser identity.Requester, dash *dashboards.Dashboard) error {
panels := dash.Data.Get("panels").MustArray()
var panels []any
isV2 := dash.Data.Get("elements").Interface() != nil
if isV2 {
elementsMap := dash.Data.Get("elements").MustMap()
panels = make([]any, 0, len(elementsMap))
for _, element := range elementsMap {
panels = append(panels, element)
}
} else {
panels = dash.Data.Get("panels").MustArray()
}
libraryPanels := make(map[string]string)
err := connectLibraryPanelsRecursively(c, panels, libraryPanels)
err := connectLibraryPanelsRecursively(c, panels, libraryPanels, isV2)
if err != nil {
return err
}
@@ -83,10 +93,13 @@ func isLibraryPanelOrRow(panel *simplejson.Json, panelType string) bool {
return panel.Interface() != nil || panelType == "row"
}
func connectLibraryPanelsRecursively(c context.Context, panels []any, libraryPanels map[string]string) error {
func connectLibraryPanelsRecursively(c context.Context, panels []any, libraryPanels map[string]string, isV2 bool) error {
for _, panel := range panels {
panelAsJSON := simplejson.NewFromAny(panel)
libraryPanel := panelAsJSON.Get("libraryPanel")
if isV2 {
libraryPanel = panelAsJSON.Get("spec").Get("libraryPanel")
}
panelType := panelAsJSON.Get("type").MustString()
if !isLibraryPanelOrRow(libraryPanel, panelType) {
continue
@@ -95,7 +108,7 @@ func connectLibraryPanelsRecursively(c context.Context, panels []any, libraryPan
// we have a row
if panelType == "row" {
rowPanels := panelAsJSON.Get("panels").MustArray()
err := connectLibraryPanelsRecursively(c, rowPanels, libraryPanels)
err := connectLibraryPanelsRecursively(c, rowPanels, libraryPanels, isV2)
if err != nil {
return err
}


@@ -87,6 +87,31 @@ func TestIntegrationConnectLibraryPanelsForDashboard(t *testing.T) {
require.Equal(t, sc.initialResult.Result.UID, elements[sc.initialResult.Result.UID].UID)
})
scenarioWithLibraryPanel(t, "When an admin tries to store a V2 dashboard with a library panel, it should connect the two",
func(t *testing.T, sc scenarioContext) {
dashJSON := map[string]any{
"elements": []any{
map[string]any{
"kind": "Panel",
"spec": map[string]any{
"datasource": "${DS_GDEV-TESTDATA}",
"libraryPanel": map[string]any{
"uid": sc.initialResult.Result.UID,
},
},
},
},
}
dash := dashboards.Dashboard{
Title: "Testing ConnectLibraryPanelsForDashboard for V2 dashboard",
Data: simplejson.NewFromAny(dashJSON),
}
dashInDB := createDashboard(t, sc, &dash)
err := sc.service.ConnectLibraryPanelsForDashboard(sc.ctx, sc.user, dashInDB)
require.NoError(t, err)
})
scenarioWithLibraryPanel(t, "When an admin tries to store a dashboard with library panels inside and outside of rows, it should connect all",
func(t *testing.T, sc scenarioContext) {
cmd := model.CreateLibraryElementCommand{


@@ -153,7 +153,6 @@ func TestGetRemoteState(t *testing.T) {
getOkHandler := func(state string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
res := map[string]any{
"status": "success",
@@ -268,7 +267,6 @@ func TestIntegrationApplyConfig(t *testing.T) {
errorHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
w.Header().Add("content-type", "application/json")
w.WriteHeader(http.StatusInternalServerError)
require.NoError(t, json.NewEncoder(w).Encode(map[string]string{"status": "error"}))
@@ -278,7 +276,6 @@ func TestIntegrationApplyConfig(t *testing.T) {
var configSyncs, stateSyncs int
okHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
res := map[string]any{"status": "success"}
if r.Method == http.MethodPost {
@@ -432,7 +429,6 @@ func TestCompareAndSendConfiguration(t *testing.T) {
var got string
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
w.Header().Add("content-type", "application/json")
b, err := io.ReadAll(r.Body)
@@ -639,7 +635,6 @@ func Test_TestReceiversDecryptsSecureSettings(t *testing.T) {
var got apimodels.TestReceiversConfigBodyParams
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
w.Header().Add("Content-Type", "application/json")
require.NoError(t, json.NewDecoder(r.Body).Decode(&got))
require.NoError(t, r.Body.Close())
@@ -746,7 +741,6 @@ func TestApplyConfigWithExtraConfigs(t *testing.T) {
var configSent client.UserGrafanaConfig
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
if r.Method == http.MethodPost && strings.Contains(r.URL.Path, "/config") {
require.NoError(t, json.NewDecoder(r.Body).Decode(&configSent))
@@ -828,7 +822,6 @@ func TestCompareAndSendConfigurationWithExtraConfigs(t *testing.T) {
var configSent client.UserGrafanaConfig
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
require.Equal(t, tenantID, r.Header.Get(client.MimirTenantHeader))
require.Equal(t, "true", r.Header.Get(client.RemoteAlertmanagerHeader))
if r.Method == http.MethodPost && strings.Contains(r.URL.Path, "/config") {
require.NoError(t, json.NewDecoder(r.Body).Decode(&configSent))


@@ -5,8 +5,7 @@ import (
)
const (
MimirTenantHeader = "X-Scope-OrgID"
RemoteAlertmanagerHeader = "X-Remote-Alertmanager"
MimirTenantHeader = "X-Scope-OrgID"
)
type MimirAuthRoundTripper struct {
@@ -19,7 +18,6 @@ type MimirAuthRoundTripper struct {
// It adds an `X-Scope-OrgID` header with the TenantID if only provided with a tenantID or sets HTTP Basic Authentication if both
// a tenantID and a password are provided.
func (r *MimirAuthRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
req.Header.Set(RemoteAlertmanagerHeader, "true")
if r.TenantID != "" && r.Password == "" {
req.Header.Set(MimirTenantHeader, r.TenantID)
}


@@ -186,7 +186,7 @@ func initResourceTables(mg *migrator.Migrator) string {
}))
mg.AddMigration("Add key_path column to resource_history", migrator.NewAddColumnMigration(resource_history_table, &migrator.Column{
Name: "key_path", Type: migrator.DB_NVarchar, Length: 2048, Nullable: false, Default: "", IsLatin: true,
Name: "key_path", Type: migrator.DB_NVarchar, Length: 2048, Nullable: false, Default: "''", IsLatin: true,
}))
resource_events_table := migrator.Table{


@@ -14,7 +14,6 @@ import (
"time"
"github.com/google/uuid"
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -23,6 +22,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
folders "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/pkg/api/dtos"
"github.com/grafana/grafana/pkg/apimachinery/utils"
@@ -357,9 +357,7 @@ func doFolderTests(t *testing.T, helper *apis.K8sTestHelper) *apis.K8sTestHelper
"spec": {
"title": "Test",
"description": ""
},
"status": {}
}`
}}`
// Get should return the same result
found, err := client.Resource.Get(context.Background(), uid, metav1.GetOptions{})


@@ -1103,8 +1103,7 @@
"type": "object",
"required": [
"metadata",
"spec",
"status"
"spec"
],
"properties": {
"apiVersion": {
@@ -1131,14 +1130,6 @@
"$ref": "#/components/schemas/com.github.grafana.grafana.apps.folder.pkg.apis.folder.v1beta1.FolderSpec"
}
]
},
"status": {
"default": {},
"allOf": [
{
"$ref": "#/components/schemas/com.github.grafana.grafana.apps.folder.pkg.apis.folder.v1beta1.FolderStatus"
}
]
}
},
"x-kubernetes-group-version-kind": [
@@ -1329,10 +1320,6 @@
}
}
},
"com.github.grafana.grafana.apps.folder.pkg.apis.folder.v1beta1.FolderStatus": {
"description": "Empty stub",
"type": "object"
},
"com.github.grafana.grafana.apps.folder.pkg.apis.folder.v1beta1.ResourceStats": {
"type": "object",
"required": [


@@ -0,0 +1,224 @@
package es
const (
// DefaultGeoHashPrecision is the default precision for geohash grid aggregations
DefaultGeoHashPrecision = 3
// termsOrderTerm is used internally for ordering terms
termsOrderTerm = "_term"
)
// AggBuilder represents an aggregation builder
type AggBuilder interface {
Histogram(key, field string, fn func(a *HistogramAgg, b AggBuilder)) AggBuilder
DateHistogram(key, field string, fn func(a *DateHistogramAgg, b AggBuilder)) AggBuilder
Terms(key, field string, fn func(a *TermsAggregation, b AggBuilder)) AggBuilder
Nested(key, path string, fn func(a *NestedAggregation, b AggBuilder)) AggBuilder
Filters(key string, fn func(a *FiltersAggregation, b AggBuilder)) AggBuilder
GeoHashGrid(key, field string, fn func(a *GeoHashGridAggregation, b AggBuilder)) AggBuilder
Metric(key, metricType, field string, fn func(a *MetricAggregation)) AggBuilder
Pipeline(key, pipelineType string, bucketPath any, fn func(a *PipelineAggregation)) AggBuilder
Build() (AggArray, error)
}
type aggBuilderImpl struct {
AggBuilder
aggDefs []*aggDef
}
func newAggBuilder() *aggBuilderImpl {
return &aggBuilderImpl{
aggDefs: make([]*aggDef, 0),
}
}
func (b *aggBuilderImpl) Build() (AggArray, error) {
aggs := make(AggArray, 0)
for _, aggDef := range b.aggDefs {
agg := &Agg{
Key: aggDef.key,
Aggregation: aggDef.aggregation,
}
for _, cb := range aggDef.builders {
childAggs, err := cb.Build()
if err != nil {
return nil, err
}
agg.Aggregation.Aggs = append(agg.Aggregation.Aggs, childAggs...)
}
aggs = append(aggs, agg)
}
return aggs, nil
}
func (b *aggBuilderImpl) Histogram(key, field string, fn func(a *HistogramAgg, b AggBuilder)) AggBuilder {
innerAgg := &HistogramAgg{
Field: field,
}
aggDef := newAggDef(key, &aggContainer{
Type: "histogram",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) DateHistogram(key, field string, fn func(a *DateHistogramAgg, b AggBuilder)) AggBuilder {
innerAgg := &DateHistogramAgg{
Field: field,
}
aggDef := newAggDef(key, &aggContainer{
Type: "date_histogram",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) Terms(key, field string, fn func(a *TermsAggregation, b AggBuilder)) AggBuilder {
innerAgg := &TermsAggregation{
Field: field,
Order: make(map[string]any),
}
aggDef := newAggDef(key, &aggContainer{
Type: "terms",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
if len(innerAgg.Order) > 0 {
// Elasticsearch deprecated the "_term" order key in favor of "_key",
// so normalize it before the aggregation is serialized
if orderBy, exists := innerAgg.Order[termsOrderTerm]; exists {
innerAgg.Order["_key"] = orderBy
delete(innerAgg.Order, termsOrderTerm)
}
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) Nested(key, path string, fn func(a *NestedAggregation, b AggBuilder)) AggBuilder {
innerAgg := &NestedAggregation{
Path: path,
}
aggDef := newAggDef(key, &aggContainer{
Type: "nested",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) Filters(key string, fn func(a *FiltersAggregation, b AggBuilder)) AggBuilder {
innerAgg := &FiltersAggregation{
Filters: make(map[string]any),
}
aggDef := newAggDef(key, &aggContainer{
Type: "filters",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) GeoHashGrid(key, field string, fn func(a *GeoHashGridAggregation, b AggBuilder)) AggBuilder {
innerAgg := &GeoHashGridAggregation{
Field: field,
Precision: DefaultGeoHashPrecision,
}
aggDef := newAggDef(key, &aggContainer{
Type: "geohash_grid",
Aggregation: innerAgg,
})
if fn != nil {
builder := newAggBuilder()
aggDef.builders = append(aggDef.builders, builder)
fn(innerAgg, builder)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) Metric(key, metricType, field string, fn func(a *MetricAggregation)) AggBuilder {
innerAgg := &MetricAggregation{
Type: metricType,
Field: field,
Settings: make(map[string]any),
}
aggDef := newAggDef(key, &aggContainer{
Type: metricType,
Aggregation: innerAgg,
})
if fn != nil {
fn(innerAgg)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}
func (b *aggBuilderImpl) Pipeline(key, pipelineType string, bucketPath any, fn func(a *PipelineAggregation)) AggBuilder {
innerAgg := &PipelineAggregation{
BucketPath: bucketPath,
Settings: make(map[string]any),
}
aggDef := newAggDef(key, &aggContainer{
Type: pipelineType,
Aggregation: innerAgg,
})
if fn != nil {
fn(innerAgg)
}
b.aggDefs = append(b.aggDefs, aggDef)
return b
}

Some files were not shown because too many files have changed in this diff.