Feat: OpenCode Go Subscription as Provider #8179
Conversation
Hey - I've left some high-level feedback:
- In `ProviderOpenCodeGo.get_models`, you currently return bare model names (e.g., `kimi-k2.6`), while the default config and description use the `opencode-go/`-prefixed IDs; consider re-adding the `opencode-go/` prefix in the returned list to keep the UI/API-facing model identifiers consistent with the configured default and user expectations.
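A minimal sketch of one way to re-apply the prefix (the helper name `_to_display_model` and the module-level constant are illustrative, not taken from the PR):

```python
OPENCODE_GO_PREFIX = "opencode-go/"  # assumed from the configured default IDs

def _to_display_model(self, api_model: str) -> str:
    # Re-attach the provider prefix so UI/API-facing IDs match the
    # configured default (e.g. "kimi-k2.6" -> "opencode-go/kimi-k2.6").
    if api_model.startswith(OPENCODE_GO_PREFIX):
        return api_model
    return f"{OPENCODE_GO_PREFIX}{api_model}"
```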
Code Review
This pull request introduces support for the 'OpenCode Go' AI provider, including the necessary configuration, provider adapter implementation, and integration with the existing OpenAI-based source. It also adds logic to ensure tool call reasoning content is handled correctly for specific providers and updates the dashboard UI to resolve provider icons more robustly. Feedback was provided regarding the refactoring of hardcoded provider checks into configuration-driven logic, removing redundant method calls in the provider initialization, optimizing list processing in model retrieval, and applying the DRY principle to model resolution logic.
```python
return (
    provider in {"moonshot", "opencode-go"}
    or "moonshot" in api_base
    or "api.kimi" in api_base
    or model.startswith(("kimi-k2.5", "kimi-k2.6", "kimi-k2-thinking"))
)
```
To improve maintainability and extensibility, consider driving the decision about whether `reasoning_content` is required through a flag in the provider configuration rather than through hardcoded string matching.
The current implementation hardcodes provider names such as `moonshot` and `opencode-go` and URL fragments such as `api.kimi`; every future provider with the same requirement would force another edit here, which violates the open-closed principle.
Consider adding a boolean option to the provider configuration, for example `force_tool_call_reasoning_content: true`; the `_requires_tool_call_reasoning_content` method can then check that option directly.
For example, add the flag for Moonshot and OpenCode Go in `astrbot/core/config/default.py`:
"Moonshot": {
...
"force_tool_call_reasoning_content": True,
},
"OpenCode Go": {
...
"force_tool_call_reasoning_content": True,
}然后 _requires_tool_call_reasoning_content 方法可以简化为:
```python
def _requires_tool_call_reasoning_content(
    self,
    payloads: dict,
    extra_body: dict[str, Any],
) -> bool:
    thinking = extra_body.get("thinking")
    if isinstance(thinking, dict) and thinking.get("type") == "disabled":
        return False
    return self.provider_config.get("force_tool_call_reasoning_content", False)
```

With this change, supporting a new provider only requires declaring the flag in its configuration; no code in openai_source.py needs to change.
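To illustrate the open-closed benefit, a hypothetical future provider (name invented here, not part of the PR) would then need nothing more than a configuration entry:

```python
"SomeKimiCompatibleProvider": {
    ...
    "force_tool_call_reasoning_content": True,
},
```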
```python
self.set_model(model)

self.openai_provider = ProviderOpenAIOfficial(
    self._build_delegate_config(model=self._to_api_model(model)),
```
```python
return sorted(
    self._to_api_model(model)
    for model in models
    if model.strip()
    and self._to_api_model(model) not in OPENCODE_GO_MESSAGES_ONLY_MODELS
)
```
In the `get_models` method, the comprehension calls `self._to_api_model(model)` twice for every model, which hurts both performance and readability.
Consider refactoring to avoid the duplicate call, either with a plain for loop or with the Python 3.8+ walrus operator (`:=`); both variants are sketched below.
```python
api_models = []
for model in models:
    if not model.strip():
        continue
    api_model = self._to_api_model(model)
    if api_model not in OPENCODE_GO_MESSAGES_ONLY_MODELS:
        api_models.append(api_model)
return sorted(api_models)
```
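For reference, the walrus-operator variant mentioned above would look like this (a sketch; equivalent in behavior to the loop version):

```python
return sorted(
    api_model
    for model in models
    if model.strip()
    and (api_model := self._to_api_model(model))
    not in OPENCODE_GO_MESSAGES_ONLY_MODELS
)
```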
```python
async def text_chat(
    self,
    prompt: str | None = None,
    session_id: str | None = None,
    image_urls: list[str] | None = None,
    audio_urls: list[str] | None = None,
    func_tool: ToolSet | None = None,
    contexts: list[Message] | list[dict] | None = None,
    system_prompt: str | None = None,
    tool_calls_result: ToolCallsResult | list[ToolCallsResult] | None = None,
    model: str | None = None,
    extra_user_content_parts: list[ContentPart] | None = None,
    tool_choice: Literal["auto", "required"] = "auto",
    **kwargs,
) -> LLMResponse:
    requested_model = model or self.get_model()
    return await self.openai_provider.text_chat(
        prompt=prompt,
        session_id=session_id,
        image_urls=image_urls,
        audio_urls=audio_urls,
        func_tool=func_tool,
        contexts=contexts,
        system_prompt=system_prompt,
        tool_calls_result=tool_calls_result,
        model=self._ensure_chat_completions_model(requested_model),
        extra_user_content_parts=extra_user_content_parts,
        tool_choice=tool_choice,
        **kwargs,
    )

async def text_chat_stream(
    self,
    prompt: str | None = None,
    session_id: str | None = None,
    image_urls: list[str] | None = None,
    audio_urls: list[str] | None = None,
    func_tool: ToolSet | None = None,
    contexts: list[Message] | list[dict] | None = None,
    system_prompt: str | None = None,
    tool_calls_result: ToolCallsResult | list[ToolCallsResult] | None = None,
    model: str | None = None,
    tool_choice: Literal["auto", "required"] = "auto",
    **kwargs,
) -> AsyncGenerator[LLMResponse, None]:
    requested_model = model or self.get_model()
    async for response in self.openai_provider.text_chat_stream(
        prompt=prompt,
        session_id=session_id,
        image_urls=image_urls,
        audio_urls=audio_urls,
        func_tool=func_tool,
        contexts=contexts,
        system_prompt=system_prompt,
        tool_calls_result=tool_calls_result,
        model=self._ensure_chat_completions_model(requested_model),
        tool_choice=tool_choice,
        **kwargs,
    ):
        yield response
```
The `text_chat` and `text_chat_stream` methods duplicate the logic that resolves and validates the model name. To follow the DRY (Don't Repeat Yourself) principle and improve maintainability, consider extracting this logic into a helper method.
For example, a `_resolve_model` method (both wrappers would then delegate to it, as sketched below):

```python
def _resolve_model(self, model: str | None) -> str:
    requested_model = model or self.get_model()
    return self._ensure_chat_completions_model(requested_model)
```

In addition, the implementation of new functionality (such as this provider's core logic) should be accompanied by corresponding unit tests to ensure stability.
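A minimal sketch of the shared call sites (signatures abbreviated with `**kwargs`; the full parameter lists from the diff above are elided):

```python
async def text_chat(self, *, model: str | None = None, **kwargs) -> LLMResponse:
    return await self.openai_provider.text_chat(
        model=self._resolve_model(model), **kwargs
    )

async def text_chat_stream(
    self, *, model: str | None = None, **kwargs
) -> AsyncGenerator[LLMResponse, None]:
    async for response in self.openai_provider.text_chat_stream(
        model=self._resolve_model(model), **kwargs
    ):
        yield response
```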
References
- When implementing similar functionality for different cases, refactor the logic into a shared helper function to avoid code duplication.
- New functionality, such as handling attachments, should be accompanied by corresponding unit tests.
Modifications
Integrates OpenCode Go as a model provider, exposing Kimi and other chat completion models.
Related issue:
Close #8158
Core files:
- astrbot/core/provider/sources/opencode_go_source.py
- astrbot/core/provider/manager.py
- astrbot/core/config/default.py
- astrbot/core/provider/sources/openai_source.py
- dashboard/src/utils/providerUtils.js
- dashboard/src/composables/useProviderSources.ts
- dashboard/src/assets/images/provider_logos/opencode-go.png
This is NOT a breaking change.
Screenshots or Test Results
Checklist
😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
😮 My changes do not introduce malicious code.
Summary by Sourcery
Add OpenCode Go as an OpenAI-compatible chat completion provider and improve compatibility handling for Kimi/Moonshot-style tool calls and provider icons.
New Features:
- Add OpenCode Go as an OpenAI-compatible chat completion provider, with configuration, a provider adapter, and dashboard integration.
Bug Fixes:
- Handle tool call reasoning content for Kimi/Moonshot-style providers and resolve provider icons more robustly in the dashboard.