📝 docs: update "ENABLED_COMFYUI" and remove "ENABLED_BFL" and "ENABLED_VERCELAIGATEWAY" in docs (#9858)

* add ENABLED_COMFYUI, ENABLED_AWS_BEDROCK, ENABLED_OPENAI and remove ENABLED_BFL and ENABLED_VERCELAIGATEWAY

* 🐛 fix: update AiHubMix links and API key documentation

* revert change AiHubMix url
Author: bbbugg (committed by GitHub)
Date: 2025-10-24 01:15:56 -07:00
Commit: 26533b4938 (parent: 15dd7ecea4)
8 changed files with 78 additions and 41 deletions


@@ -156,7 +156,7 @@ ENV \
# Anthropic
ANTHROPIC_API_KEY="" ANTHROPIC_MODEL_LIST="" ANTHROPIC_PROXY_URL="" \
# Amazon Bedrock
-AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
+ENABLED_AWS_BEDROCK="" AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
# Azure OpenAI
AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
# Baichuan
@@ -166,7 +166,7 @@ ENV \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# ComfyUI
-COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
+ENABLED_COMFYUI="" COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
COMFYUI_API_KEY="" COMFYUI_USERNAME="" COMFYUI_PASSWORD="" COMFYUI_CUSTOM_HEADERS="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
@@ -209,7 +209,7 @@ ENV \
# Ollama
ENABLED_OLLAMA="" OLLAMA_MODEL_LIST="" OLLAMA_PROXY_URL="" \
# OpenAI
-OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
+ENABLED_OPENAI="" OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
# OpenRouter
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity


@@ -209,7 +209,7 @@ ENV \
# Anthropic
ANTHROPIC_API_KEY="" ANTHROPIC_MODEL_LIST="" ANTHROPIC_PROXY_URL="" \
# Amazon Bedrock
-AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
+ENABLED_AWS_BEDROCK="" AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
# Azure OpenAI
AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
# Baichuan
@@ -219,7 +219,7 @@ ENV \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# ComfyUI
-COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
+ENABLED_COMFYUI="" COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
COMFYUI_API_KEY="" COMFYUI_USERNAME="" COMFYUI_PASSWORD="" COMFYUI_CUSTOM_HEADERS="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
@@ -262,7 +262,7 @@ ENV \
# Ollama
ENABLED_OLLAMA="" OLLAMA_MODEL_LIST="" OLLAMA_PROXY_URL="" \
# OpenAI
-OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
+ENABLED_OPENAI="" OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
# OpenRouter
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity


@@ -158,7 +158,7 @@ ENV \
# Anthropic
ANTHROPIC_API_KEY="" ANTHROPIC_MODEL_LIST="" ANTHROPIC_PROXY_URL="" \
# Amazon Bedrock
-AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
+ENABLED_AWS_BEDROCK="" AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
# Azure OpenAI
AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
# Baichuan
@@ -168,7 +168,7 @@ ENV \
# Cohere
COHERE_API_KEY="" COHERE_MODEL_LIST="" COHERE_PROXY_URL="" \
# ComfyUI
-COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
+ENABLED_COMFYUI="" COMFYUI_BASE_URL="" COMFYUI_AUTH_TYPE="" \
COMFYUI_API_KEY="" COMFYUI_USERNAME="" COMFYUI_PASSWORD="" COMFYUI_CUSTOM_HEADERS="" \
# DeepSeek
DEEPSEEK_API_KEY="" DEEPSEEK_MODEL_LIST="" \
@@ -211,7 +211,7 @@ ENV \
# Ollama
ENABLED_OLLAMA="" OLLAMA_MODEL_LIST="" OLLAMA_PROXY_URL="" \
# OpenAI
-OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
+ENABLED_OPENAI="" OPENAI_API_KEY="" OPENAI_MODEL_LIST="" OPENAI_PROXY_URL="" \
# OpenRouter
OPENROUTER_API_KEY="" OPENROUTER_MODEL_LIST="" \
# Perplexity


@@ -26,6 +26,17 @@ For example: `+qwen-7b-chat,+glm-6b,-gpt-3.5-turbo,gpt-4-0125-preview=gpt-4-turb
In the above example, it adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays the model name of `gpt-4-0125-preview` as `gpt-4-turbo`. If you want to disable all models first and then enable specific models, you can use `-all,+gpt-3.5-turbo`, which means only enabling `gpt-3.5-turbo`.
+### -all: Hide all models
+- Description: `-all` hides all built-in models first. It's usually combined with `+` to enable only the models you explicitly specify.
+- Example:
+```text
+-all,+gpt-3.5-turbo,+gpt-4-0125-preview=gpt-4-turbo
+```
+This enables only gpt-3.5-turbo and gpt-4-turbo while hiding all other models.
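The add/hide/rename rules above can be sketched as a small parser. This is an illustrative sketch only (the function name and the assumption that later entries override earlier ones are mine, not LobeChat's actual implementation), and it ignores the extension-capability syntax (`<maxToken:vision:...>`) described in the next section:

```python
def apply_model_list(spec: str, builtin: list[str]) -> dict[str, str]:
    """Hypothetical parser for the model-list syntax described above:
    `-all` hides everything, `+model` adds, `-model` hides,
    and `model=display` customizes the display name."""
    shown = {m: m for m in builtin}  # model id -> display name
    for entry in filter(None, (e.strip() for e in spec.split(","))):
        if entry == "-all":
            shown.clear()  # hide all built-in models first
        elif entry.startswith("-"):
            shown.pop(entry[1:], None)  # hide one model if present
        else:
            # bare entries and `+`-prefixed entries both add/rename
            name = entry[1:] if entry.startswith("+") else entry
            model, _, display = name.partition("=")
            shown[model] = display or model
    return shown
```

For example, `apply_model_list("-all,+gpt-3.5-turbo,+gpt-4-0125-preview=gpt-4-turbo", builtin)` leaves only `gpt-3.5-turbo` plus `gpt-4-0125-preview` displayed as `gpt-4-turbo`, matching the example above.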
## Extension Capabilities
Considering the diversity of model capabilities, we started to add extension configuration in version `0.147.8`, with the following rules:


@@ -25,6 +25,17 @@ id->deploymentName=displayName<maxToken:vision:reasoning:search:fc:file:imageOut
The example above adds `qwen-7b-chat` and `glm-6b` to the model list, removes `gpt-3.5-turbo` from the list, and displays the `gpt-4-0125-preview` model as `gpt-4-turbo`. If you want to disable all models first and then enable specific ones, use `-all,+gpt-3.5-turbo`, which enables only `gpt-3.5-turbo`.
+### -all: Hide all models
+- Description: `-all` hides all built-in models first. It's usually combined with `+` to enable only the models you explicitly specify.
+- Example:
+```text
+-all,+gpt-3.5-turbo,+gpt-4-0125-preview=gpt-4-turbo
+```
+This enables only gpt-3.5-turbo and gpt-4-turbo while hiding all other models.
## Extension Capabilities
Considering the diversity of model capabilities, we began adding extension configuration in version `0.147.8`, with the following rules:


@@ -653,6 +653,13 @@ The above example disables all models first, then enables `fal-ai/flux/schnell`
## ComfyUI
+### `ENABLED_COMFYUI`
+- Type: Optional
+- Description: Enables ComfyUI as a model provider by default. Set to `0` to disable the ComfyUI service.
+- Default: `1`
+- Example: `0`
### `COMFYUI_BASE_URL`
- Type: Optional
@@ -705,13 +712,6 @@ The above example disables all models first, then enables `fal-ai/flux/schnell`
## BFL
-### `ENABLED_BFL`
-- Type: Optional
-- Description: Enables BFL as a model provider by default. Set to `0` to disable the BFL service.
-- Default: `1`
-- Example: `0`
### `BFL_API_KEY`
- Type: Required
@@ -748,13 +748,6 @@ NewAPI is a multi-provider model aggregation service that supports automatic mod
## Vercel AI Gateway
-### `ENABLED_VERCELAIGATEWAY`
-- Type: Optional
-- Description: Enables Vercel AI Gateway as a model provider by default. Set to `0` to disable the Vercel AI Gateway service.
-- Default: `1`
-- Example: `0`
### `VERCELAIGATEWAY_API_KEY`
- Type: Required
@@ -785,4 +778,20 @@ NewAPI is a multi-provider model aggregation service that supports automatic mod
- Default: `-`
- Example: `-all,+cerebras-model-1,+cerebras-model-2=cerebras-special`
+## AiHubMix
+### `AIHUBMIX_API_KEY`
+- Type: Required
+- Description: This is the API key you applied for in the AiHubMix service.
+- Default: -
+- Example: `sk-xxxxxx...xxxxxx`
+### `AIHUBMIX_MODEL_LIST`
+- Type: Optional
+- Description: Used to control the AiHubMix model list. Use `+` to add a model, `-` to hide a model, and `model_name=display_name` to customize the display name of a model. Separate multiple entries with commas. The definition syntax follows the same rules as other providers' model lists.
+- Default: `-`
+- Example: `-all,+claude-opus-4-1-20250805,+claude-opus-4-20250514=claude-opus-4`
[model-list]: /docs/self-hosting/advanced/model-list
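All of the `ENABLED_*` switches documented above follow the same convention: unset or `1` means the provider is enabled, and an explicit `0` disables it. A minimal sketch of how such a toggle could be read (the helper name `provider_enabled` is hypothetical, not LobeChat's actual code):

```python
import os

def provider_enabled(name: str) -> bool:
    # Hypothetical helper: ENABLED_<NAME> defaults to "1" (enabled);
    # only an explicit "0" disables the provider, per the docs above.
    return os.environ.get(f"ENABLED_{name}", "1") != "0"
```

For example, with `ENABLED_COMFYUI=0` in the environment, `provider_enabled("COMFYUI")` is false, while a provider whose variable is unset stays enabled.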


@@ -167,6 +167,13 @@ LobeChat provides a rich set of model-provider environment variables at deployment time,
## ComfyUI
+### `ENABLED_COMFYUI`
+- Type: Optional
+- Description: ComfyUI is enabled as a model provider by default; set it to `0` to disable the ComfyUI service
+- Default: `1`
+- Example: `0`
### `COMFYUI_BASE_URL`
- Type: Optional
@@ -703,13 +710,6 @@ LobeChat provides a rich set of model-provider environment variables at deployment time,
## BFL
-### `ENABLED_BFL`
-- Type: Optional
-- Description: BFL is enabled as a model provider by default; set it to `0` to disable the BFL service
-- Default: `1`
-- Example: `0`
### `BFL_API_KEY`
- Type: Required
@@ -751,13 +751,6 @@ LobeChat provides a rich set of model-provider environment variables at deployment time,
## Vercel AI Gateway
-### `ENABLED_VERCELAIGATEWAY`
-- Type: Optional
-- Description: Vercel AI Gateway is enabled as a model provider by default; set it to `0` to disable the Vercel AI Gateway service
-- Default: `1`
-- Example: `0`
### `VERCELAIGATEWAY_API_KEY`
- Type: Required
@@ -788,4 +781,20 @@ LobeChat provides a rich set of model-provider environment variables at deployment time,
- Default: `-`
- Example: `-all,+cerebras-model-1,+cerebras-model-2=cerebras-special`
+## AiHubMix
+### `AIHUBMIX_API_KEY`
+- Type: Required
+- Description: The API key you applied for from the AiHubMix service
+- Default: -
+- Example: `sk-xxxxxx...xxxxxx`
+### `AIHUBMIX_MODEL_LIST`
+- Type: Optional
+- Description: Used to control the AiHubMix model list. Use `+` to add a model, `-` to hide a model, and `model=display_name` to customize a model's display name; separate entries with commas. The model definition syntax is consistent with other providers.
+- Default: `-`
+- Example: `-all,+claude-opus-4-1-20250805,+claude-opus-4-20250514=claude-opus-4`
[model-list]: /zh/docs/self-hosting/advanced/model-list


@@ -31,9 +31,6 @@ AiHubMix is an AI model aggregation platform that, via a unified OpenAI-compatible API,
Add the following environment variables to your `.env` file:
```bash
-# Enable the AiHubMix provider
-ENABLED_AIHUBMIX=1
# AiHubMix API key (required)
AIHUBMIX_API_KEY=your_aihubmix_api_key
```
@@ -97,5 +94,5 @@ AiHubMix provides access to a variety of popular AI models, including:
For more support:
- Visit the [AiHubMix documentation](https://docs.aihubmix.com/)
-- See the [model list](https://docs.aihubmix.com/cn/api/Model-List)
+- See the [model list](https://aihubmix.com/models)
- Contact the AiHubMix support team for API-related issues