
Feat: Opencode Go Subscription as Provider #8179

Open

VonLan233 wants to merge 7 commits into AstrBotDevs:master from VonLan233:feat/issue-8158-opencode-go-provider

Conversation


@VonLan233 VonLan233 commented May 13, 2026

Modifications / 改动点

Integrates OpenCode Go as a model provider, offering chat completion models such as Kimi.

Related issue:
Close #8158

Key files:

  • astrbot/core/provider/sources/opencode_go_source.py

    • Adds the new OpenCode Go provider.
    • Uses the OpenAI-compatible /v1/chat/completions endpoint.
    • Supports frontend IDs such as opencode-go/kimi-k2.6, stripping the opencode-go/ prefix for the actual request.
    • Filters out MiniMax models that only support /v1/messages, so requests never hit the wrong endpoint.
  • astrbot/core/provider/manager.py

    • Registers the opencode_go_chat_completion provider type.
  • astrbot/core/config/default.py

    • Adds an OpenCode Go configuration template to the default provider sources.
    • Default API base: https://opencode.ai/zen/go/v1
    • Default model: opencode-go/kimi-k2.6
  • astrbot/core/provider/sources/openai_source.py

    • Fixes tool-call history errors in Moonshot/Kimi thinking mode.
    • Backfills reasoning_content on assistant tool_calls history messages for Moonshot/Kimi/OpenCode Go to avoid 400 errors.
  • dashboard/src/utils/providerUtils.js

    • Adds the OpenCode Go icon mapping.
    • Supports both opencode-go and opencode_go_chat_completion.
  • dashboard/src/composables/useProviderSources.ts

    • Fixes the fallback logic for resolving provider source icons.
    • Resolves icons by provider first, then falls back to id/type/templateKey.
  • dashboard/src/assets/images/provider_logos/opencode-go.png

    • Adds the local OpenCode Go icon asset.
  • This is NOT a breaking change.
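The prefix handling described above can be sketched as follows. The names `_to_api_model` and `OPENCODE_GO_MESSAGES_ONLY_MODELS` appear in the PR's diff, but the standalone function form here is a simplification, not the PR's exact code.

```python
# Hedged sketch of the opencode-go/ prefix stripping described above.
# The standalone function stands in for the provider's _to_api_model method.
OPENCODE_GO_PREFIX = "opencode-go/"


def to_api_model(model: str) -> str:
    """Strip the UI-facing 'opencode-go/' prefix before the actual API request."""
    return model.removeprefix(OPENCODE_GO_PREFIX)


print(to_api_model("opencode-go/kimi-k2.6"))  # → kimi-k2.6
print(to_api_model("kimi-k2.6"))  # unchanged when no prefix is present
```

Keeping the prefix only on the UI side lets the dashboard group models by provider while the upstream endpoint still receives plain model IDs.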

Screenshots or Test Results / 运行截图或测试结果

Screenshots: backend run log with the provider connected; Telegram task configured; Telegram task completed successfully.

Checklist / 检查清单

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Add OpenCode Go as an OpenAI-compatible chat completion provider and improve compatibility handling for Kimi/Moonshot-style tool calls and provider icons.

New Features:

  • Introduce an OpenCode Go provider adapter that reuses the OpenAI chat completions flow while filtering out unsupported messages-only models.
  • Add a default OpenCode Go provider source configuration, including API base, default model, and enabling it in the system.
  • Expose OpenCode Go in the dashboard with a dedicated icon and support for resolving its provider type in the UI.

Bug Fixes:

  • Ensure assistant tool_call history for Moonshot, Kimi, and OpenCode Go includes reasoning_content when required to prevent request failures.
  • Improve provider icon resolution in the dashboard by falling back across provider, id, type, and template keys.
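The reasoning_content backfill above can be sketched as a message-history pass. This is illustrative only, assuming OpenAI-style message dicts; the real fix lives inside openai_source.py and its function name here is hypothetical.

```python
# Minimal sketch of the compatibility fix described above: assistant messages
# carrying tool_calls get an empty reasoning_content field so Moonshot/Kimi-style
# endpoints do not reject the history with a 400 error.
def backfill_reasoning_content(messages: list[dict]) -> list[dict]:
    patched = []
    for msg in messages:
        if (
            msg.get("role") == "assistant"
            and msg.get("tool_calls")
            and "reasoning_content" not in msg
        ):
            # Copy rather than mutate, so callers keep their original history.
            msg = {**msg, "reasoning_content": ""}
        patched.append(msg)
    return patched
```

Only assistant messages that actually carry tool_calls are touched; user messages and already-patched entries pass through unchanged.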

@auto-assign auto-assign Bot requested review from LIghtJUNction and Raven95676 May 13, 2026 16:51
@dosubot dosubot Bot added size:L This PR changes 100-499 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. area:webui The bug / feature is about webui(dashboard) of astrbot. labels May 13, 2026

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've left some high level feedback:

  • In ProviderOpenCodeGo.get_models, you currently return bare model names (e.g., kimi-k2.6), while the default config and description use the opencode-go/-prefixed IDs; consider re-adding the opencode-go/ prefix in the returned list to keep the UI/API-facing model identifiers consistent with the configured default and user expectations.
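Sourcery's suggestion of re-adding the prefix could look like the sketch below. The function name is illustrative, not the PR's exact code.

```python
# A sketch of re-adding the 'opencode-go/' prefix so the returned model list
# matches the configured default IDs (e.g. opencode-go/kimi-k2.6).
OPENCODE_GO_PREFIX = "opencode-go/"


def to_ui_model(api_model: str) -> str:
    """Map an API model name back to the UI-facing prefixed ID, idempotently."""
    if api_model.startswith(OPENCODE_GO_PREFIX):
        return api_model
    return OPENCODE_GO_PREFIX + api_model


print(to_ui_model("kimi-k2.6"))  # → opencode-go/kimi-k2.6
```

Making the mapping idempotent avoids double-prefixing if a caller passes an already-prefixed ID.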


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces support for the 'OpenCode Go' AI provider, including the necessary configuration, provider adapter implementation, and integration with the existing OpenAI-based source. It also adds logic to ensure tool call reasoning content is handled correctly for specific providers and updates the dashboard UI to resolve provider icons more robustly. Feedback was provided regarding the refactoring of hardcoded provider checks into configuration-driven logic, removing redundant method calls in the provider initialization, optimizing list processing in model retrieval, and applying the DRY principle to model resolution logic.

Comment on lines +531 to +536
return (
provider in {"moonshot", "opencode-go"}
or "moonshot" in api_base
or "api.kimi" in api_base
or model.startswith(("kimi-k2.5", "kimi-k2.6", "kimi-k2-thinking"))
)

medium

To improve maintainability and extensibility, consider driving the reasoning_content decision from a flag in the provider configuration instead of hard-coded string matching.
The current implementation hard-codes provider names such as moonshot and opencode-go and URL fragments such as api.kimi; every future provider with the same requirement would force another edit here, which violates the open/closed principle.

Consider adding a boolean config entry, for example force_tool_call_reasoning_content: true, and have the _requires_tool_call_reasoning_content method check it directly.

For example, add the flag for Moonshot and OpenCode Go in astrbot/core/config/default.py:

"Moonshot": {
    ...
    "force_tool_call_reasoning_content": True,
},
"OpenCode Go": {
    ...
    "force_tool_call_reasoning_content": True,
}

Then _requires_tool_call_reasoning_content can be simplified to:

def _requires_tool_call_reasoning_content(
    self,
    payloads: dict,
    extra_body: dict[str, Any],
) -> bool:
    thinking = extra_body.get("thinking")
    if isinstance(thinking, dict) and thinking.get("type") == "disabled":
        return False

    return self.provider_config.get("force_tool_call_reasoning_content", False)

With this change, supporting a new provider only requires a config declaration, with no edits to openai_source.py.

self.set_model(model)

self.openai_provider = ProviderOpenAIOfficial(
self._build_delegate_config(model=self._to_api_model(model)),

medium

__init__ 方法中,self._to_api_model 方法被重复调用。变量 model 已经是 _to_api_model 处理后的结果,在构建 openai_provider 时无需再次调用。
建议直接使用已经处理过的 model 变量,以避免冗余并提高代码清晰度。

Suggested change
self._build_delegate_config(model=self._to_api_model(model)),
self._build_delegate_config(model=model),

Comment on lines +77 to +82
return sorted(
self._to_api_model(model)
for model in models
if model.strip()
and self._to_api_model(model) not in OPENCODE_GO_MESSAGES_ONLY_MODELS
)

medium

In get_models, the list comprehension calls self._to_api_model(model) twice per model, which hurts both performance and readability.
Refactor to avoid the duplicate call, for example with a for loop or the walrus operator (:=) available since Python 3.8:

        api_models = []
        for model in models:
            if not model.strip():
                continue
            api_model = self._to_api_model(model)
            if api_model not in OPENCODE_GO_MESSAGES_ONLY_MODELS:
                api_models.append(api_model)
        return sorted(api_models)
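The walrus-operator variant mentioned above would keep the comprehension while calling the conversion only once per model. Plain function arguments stand in here for the method's self attributes, so this is a sketch rather than the PR's code.

```python
# Walrus-operator (Python 3.8+) version of the get_models filter: the
# converted name is bound once and reused both for the exclusion check
# and as the yielded element.
def get_api_models(models, to_api_model, excluded):
    return sorted(
        api_model
        for model in models
        if model.strip()
        and (api_model := to_api_model(model)) not in excluded
    )
```

Note the short-circuit order: blank entries are skipped before the conversion ever runs.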

Comment on lines +84 to +143
async def text_chat(
self,
prompt: str | None = None,
session_id: str | None = None,
image_urls: list[str] | None = None,
audio_urls: list[str] | None = None,
func_tool: ToolSet | None = None,
contexts: list[Message] | list[dict] | None = None,
system_prompt: str | None = None,
tool_calls_result: ToolCallsResult | list[ToolCallsResult] | None = None,
model: str | None = None,
extra_user_content_parts: list[ContentPart] | None = None,
tool_choice: Literal["auto", "required"] = "auto",
**kwargs,
) -> LLMResponse:
requested_model = model or self.get_model()
return await self.openai_provider.text_chat(
prompt=prompt,
session_id=session_id,
image_urls=image_urls,
audio_urls=audio_urls,
func_tool=func_tool,
contexts=contexts,
system_prompt=system_prompt,
tool_calls_result=tool_calls_result,
model=self._ensure_chat_completions_model(requested_model),
extra_user_content_parts=extra_user_content_parts,
tool_choice=tool_choice,
**kwargs,
)

async def text_chat_stream(
self,
prompt: str | None = None,
session_id: str | None = None,
image_urls: list[str] | None = None,
audio_urls: list[str] | None = None,
func_tool: ToolSet | None = None,
contexts: list[Message] | list[dict] | None = None,
system_prompt: str | None = None,
tool_calls_result: ToolCallsResult | list[ToolCallsResult] | None = None,
model: str | None = None,
tool_choice: Literal["auto", "required"] = "auto",
**kwargs,
) -> AsyncGenerator[LLMResponse, None]:
requested_model = model or self.get_model()
async for response in self.openai_provider.text_chat_stream(
prompt=prompt,
session_id=session_id,
image_urls=image_urls,
audio_urls=audio_urls,
func_tool=func_tool,
contexts=contexts,
system_prompt=system_prompt,
tool_calls_result=tool_calls_result,
model=self._ensure_chat_completions_model(requested_model),
tool_choice=tool_choice,
**kwargs,
):
yield response

medium

The text_chat and text_chat_stream methods duplicate the logic that resolves and validates the model name. To follow the DRY (Don't Repeat Yourself) principle and improve maintainability, extract that logic into a helper method.

For example, a _resolve_model method:

def _resolve_model(self, model: str | None) -> str:
    requested_model = model or self.get_model()
    return self._ensure_chat_completions_model(requested_model)

Additionally, new functionality such as this provider's core logic should be accompanied by unit tests to ensure stability.

References
  1. When implementing similar functionality for different cases, refactor the logic into a shared helper function to avoid code duplication.
  2. New functionality, such as handling attachments, should be accompanied by corresponding unit tests.

@VonLan233 VonLan233 changed the title from Feat/issue 8158 opencode go provider to Feat: Opencode Go Subscription as Provider May 13, 2026


Development

Successfully merging this pull request may close these issues.

[Feature] Include Opencode Go as a model provider

1 participant