
Tr/multi model prompt #1726

Merged
mrT23 merged 5 commits into main from tr/multi_model_prompt
Apr 27, 2025

Conversation

@mrT23
Contributor

@mrT23 mrT23 commented Apr 27, 2025

PR Type

Enhancement, Documentation


Description

  • Add support for multiple model types, including reasoning models

    • Introduce model_reasoning config and selection logic
    • Update model fallback and selection functions
  • Refine code suggestion and self-reflection prompts and evaluation

    • Improve prompt clarity, accuracy, and evaluation criteria
    • Enhance YAML schema instructions for suggestions and feedback
  • Enhance dynamic context handling in git patch processing

    • Add partial line matching for better context extension
    • Adjust context extension logic for improved accuracy
  • Update documentation and configuration defaults for improved usability

    • Increase default context tokens and suggestions per chunk
    • Clarify documentation on chunking and suggestion generation
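The model-type selection described above can be sketched as follows. This is a simplified, illustrative stand-in: the real helper lives in `pr_agent/algo/utils.py` and reads values from `configuration.toml` via `get_settings()`; here a plain dict and the model names are invented for demonstration.

```python
# Hypothetical, simplified sketch of per-type model selection.
# A plain dict stands in for the project's get_settings() machinery,
# and the model names are illustrative, not real configuration values.
SETTINGS = {
    "config.model": "gpt-4o",            # default model
    "config.model_weak": "gpt-4o-mini",  # weaker/cheaper model
    "config.model_reasoning": "o1",      # reasoning model
}

def get_model(model_type: str = "model_weak") -> str:
    """Return the configured model for the given type, else the default model."""
    return SETTINGS.get(f"config.{model_type}") or SETTINGS["config.model"]

print(get_model("model_reasoning"))  # configured reasoning model
print(get_model("model_other"))      # unknown type falls back to the default
```

The key design point is the fallthrough: any model type without an explicit setting resolves to the single default `config.model`, so adding a new type never requires every user to configure it.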

Changes walkthrough 📝

Relevant files
Enhancement
4 files
utils.py
Add general model selection and new ModelType for reasoning
+5/-3     
pr_processing.py
Update model fallback logic to support reasoning model type
+4/-2     
pr_code_suggestions.py
Use reasoning model for self-reflection and update prediction logic
+10/-5   
git_patch_processing.py
Improve dynamic context handling with partial line matching
+30/-20 
Documentation
3 files
pr_code_suggestions_prompts_not_decoupled.toml
Clarify and tighten code suggestion prompt instructions   
+5/-5     
pr_code_suggestions_reflect_prompts.toml
Refine self-reflection prompt for accuracy and evaluation
+12/-10 
improve.md
Update documentation for chunking and suggestion defaults
+3/-3     
Configuration changes
1 file
configuration.toml
Update default context tokens, chunking, and model config
+5/-4     

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
  • @qodo-free-for-open-source-projects
    Contributor

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Dynamic Context Logic

The new dynamic context matching implementation now checks for partial line matches, which may have edge cases where incorrect context is applied. Verify that the logic handles all scenarios correctly.

    for i in range(len(delta_lines_original)):
        if delta_lines_original[i:] == delta_lines_new[i:]:
            delta_lines_original = delta_lines_original[i:]
            delta_lines_new = delta_lines_new[i:]
            extended_start1 += i
            extended_start2 += i
            found_mini_match = True
            break
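To see what the loop above does in isolation, here is a self-contained sketch. The wrapper function name and the example inputs are invented for illustration; only the loop body mirrors the snippet, which skips leading non-matching lines until the two line lists share a common suffix.

```python
def trim_to_common_suffix(delta_lines_original, delta_lines_new,
                          extended_start1, extended_start2):
    """Skip leading differing lines until both lists share a common suffix,
    mirroring the partial-match loop quoted above (illustrative wrapper)."""
    found_mini_match = False
    for i in range(len(delta_lines_original)):
        if delta_lines_original[i:] == delta_lines_new[i:]:
            # Drop the first i lines from both sides and advance both offsets.
            delta_lines_original = delta_lines_original[i:]
            delta_lines_new = delta_lines_new[i:]
            extended_start1 += i
            extended_start2 += i
            found_mini_match = True
            break
    return (delta_lines_original, delta_lines_new,
            extended_start1, extended_start2, found_mini_match)

orig = ["a", "x", "common1", "common2"]
new = ["b", "y", "common1", "common2"]
print(trim_to_common_suffix(orig, new, 10, 20))
# the two differing leading lines are skipped; both start offsets advance by 2
```

Note the edge case the reviewer hints at: if no suffix ever matches, the loop falls through with `found_mini_match` still `False` and the inputs unchanged, so callers must handle that path.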
    Model Selection

    The new model selection logic for self-reflection uses a reasoning model, but there's a potential issue with fallback handling that could lead to using the same model for both suggestion generation and reflection.

    model_reflect_with_reasoning = get_model('model_reasoning')
    if model_reflect_with_reasoning == get_settings().config.model and model != get_settings().config.model and model == \
            get_settings().config.fallback_models[0]:
        # we are using a fallback model (should not happen on regular conditions)
        get_logger().warning(f"Using the same model for self-reflection as the one used for suggestions")
        model_reflect_with_reasoning = model

    @mrT23
    Contributor Author

    mrT23 commented Apr 27, 2025

    PR Code Suggestions ✨

    Latest suggestions up to f505c7a

Category | Suggestion | Impact
    Possible issue
    Handle all fallback models consistently

    The current logic only adjusts the reflection model if the first fallback
    model (fallbacks[0]) was used for suggestions. If the system falls back further
    (e.g., to fallbacks[1]), the reflection might still use the default (potentially
    stronger) model, leading to inconsistent evaluation. Modify the condition to
    check if the model used for suggestions is present anywhere in the fallbacks
    list.

    pr_agent/tools/pr_code_suggestions.py [419-425]

     model_reflect_with_reasoning = get_model('model_reasoning')
     fallbacks = get_settings().config.fallback_models
    -if model_reflect_with_reasoning == get_settings().config.model and model != get_settings().config.model and fallbacks and model == \
    -        fallbacks[0]:
    +if model_reflect_with_reasoning == get_settings().config.model and model != get_settings().config.model and fallbacks and model in fallbacks:
         # we are using a fallback model (should not happen on regular conditions)
    -    get_logger().warning(f"Using the same model for self-reflection as the one used for suggestions")
    +    get_logger().warning(f"Using the same model ({model}) for self-reflection as the one used for suggestions, due to fallback.")
         model_reflect_with_reasoning = model
    Suggestion importance[1-10]: 6


    Why: The suggestion correctly identifies that the original code only checks against the first fallback model (fallbacks[0]). The proposed change to use model in fallbacks correctly handles cases where any fallback model is used, ensuring consistent behavior for the self-reflection mechanism. This improves the logic's robustness.

    Low
    • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.
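A toy illustration of the behavioral gap this suggestion closes: checking only the first fallback misses runs that degraded further down the list, while a membership check catches any fallback in use. The model names here are made up for demonstration and are not the project's actual configuration.

```python
# Illustrative only: invented stand-in model names.
fallbacks = ["fallback-model-1", "fallback-model-2"]
model = "fallback-model-2"  # suppose the run degraded to the second fallback

only_first_check = (model == fallbacks[0])  # original condition: misses this case
membership_check = (model in fallbacks)     # suggested condition: catches it
print(only_first_check, membership_check)   # False True
```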

    Previous suggestions

    Suggestions up to commit f505c7a
Category | Suggestion | Impact
Learned best practice
    Reduce code duplication

    The function contains duplicated logic for checking different model types.
    Extract the repeated pattern of checking model type and retrieving the
    corresponding setting into a more generic implementation.

    pr_agent/algo/utils.py [33-38]

     def get_model(model_type: str = "model_weak") -> str:
    -    if model_type == "model_weak" and get_settings().get("config.model_weak"):
    -        return get_settings().config.model_weak
    -    elif model_type == "model_reasoning" and get_settings().get("config.model_reasoning"):
    -        return get_settings().config.model_reasoning
    +    model_config_key = f"config.{model_type}"
    +    if get_settings().get(model_config_key):
    +        return get_settings().get(model_config_key)
         return get_settings().config.model
    Suggestion importance[1-10]: 6
    Low
    Suggestions up to commit f505c7a
Category | Suggestion | Impact
Learned best practice
    Add null safety check

    The function doesn't handle the case where get_settings() might return None. Add
    a null check before accessing its attributes to prevent potential runtime
    errors.

    pr_agent/algo/utils.py [33-38]

     def get_model(model_type: str = "model_weak") -> str:
    -    if model_type == "model_weak" and get_settings().get("config.model_weak"):
    -        return get_settings().config.model_weak
    -    elif model_type == "model_reasoning" and get_settings().get("config.model_reasoning"):
    -        return get_settings().config.model_reasoning
    -    return get_settings().config.model
    +    settings = get_settings()
    +    if settings is None:
    +        return ""
    +    if model_type == "model_weak" and settings.get("config.model_weak"):
    +        return settings.config.model_weak
    +    elif model_type == "model_reasoning" and settings.get("config.model_reasoning"):
    +        return settings.config.model_reasoning
    +    return settings.config.model
    Suggestion importance[1-10]: 6
    Low

    @mrT23 mrT23 merged commit 1b0aa16 into main Apr 27, 2025
    2 checks passed
    @mrT23 mrT23 deleted the tr/multi_model_prompt branch April 27, 2025 08:15
    @mrT23
    Contributor Author

    mrT23 commented May 16, 2025

    Generating PR code suggestions

    Work in progress ...
