
fix: handle non-standard responses from OpenAI-compatible API proxies#47

Merged
bdqnghi merged 1 commit into FSoft-AI4Code:main from
zhalice2011:fix/openai-compatible-proxy-response
Mar 16, 2026

Conversation

@zhalice2011

Problem

When using CodeWiki with an OpenAI-compatible API proxy, documentation generation fails with:

pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatCompletion
choices.0.index
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]

This happens because some OpenAI-compatible proxies (Azure OpenAI, vLLM, LiteLLM proxy, internal corporate proxies, etc.) return choices[0].index as null instead of an integer. The pydantic-ai library validates responses strictly against the OpenAI spec and rejects these responses.
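The failure mode can be reproduced in isolation. The sketch below uses a minimal stand-in pydantic model rather than the real `openai.types.chat.ChatCompletion` (whose schema is much larger), but it triggers the same `int_type` validation error when a choice's `index` comes back as `null`:

```python
# Stand-in for the relevant slice of the ChatCompletion schema; the real
# model lives in openai.types.chat and is validated by pydantic-ai.
from pydantic import BaseModel, ValidationError


class Choice(BaseModel):
    index: int  # the OpenAI spec requires an integer here


class ChatCompletionStub(BaseModel):
    choices: list[Choice]


# A spec-compliant response validates fine.
ChatCompletionStub.model_validate({"choices": [{"index": 0}]})

# A proxy that returns `index: null` is rejected with the same error
# class reported above.
try:
    ChatCompletionStub.model_validate({"choices": [{"index": None}]})
except ValidationError as e:
    print(e.errors()[0]["type"])  # int_type
```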

Root Cause

pydantic-ai's OpenAIModel._validate_completion() calls ChatCompletion.model_validate() which requires choices[].index to be int. The library already has a workaround for Ollama's finish_reason: null (line 777 in openai.py), but doesn't handle index: null.

Solution

Created a CompatibleOpenAIModel subclass that overrides _validate_completion() to patch non-standard fields before validation:

class CompatibleOpenAIModel(OpenAIModel):
    def _validate_completion(self, response):
        # Some proxies return `index: null`; fill it in positionally so
        # strict pydantic validation succeeds. Valid integers are untouched.
        if response.choices:
            for i, choice in enumerate(response.choices):
                if choice.index is None:
                    choice.index = i
        return super()._validate_completion(response)

This approach:

  • Uses the existing _validate_completion hook (designed for subclass overrides)
  • Only patches None values, doesn't touch valid responses
  • Minimal surface area — single method override
  • Follows the same pattern pydantic-ai uses internally for Ollama compatibility
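The patching rule can be exercised without a live API. In the sketch below, `patch_choice_indices` is a hypothetical standalone helper mirroring the body of the override, and the `SimpleNamespace` objects are stand-ins for a raw proxy response:

```python
# Standalone illustration of the override's patching rule; not part of
# the PR itself. SimpleNamespace mimics the pre-validation response.
from types import SimpleNamespace


def patch_choice_indices(response):
    """Fill any null `index` fields with the choice's list position."""
    if response.choices:
        for i, choice in enumerate(response.choices):
            if choice.index is None:
                choice.index = i
    return response


raw = SimpleNamespace(
    choices=[SimpleNamespace(index=None), SimpleNamespace(index=None)]
)
patched = patch_choice_indices(raw)
print([c.index for c in patched.choices])  # [0, 1]
```

Because the rule only fires on `None`, a response with valid indices passes through unchanged, which is what keeps the override safe for standard OpenAI responses.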

Test plan

  • Works with standard OpenAI API (no behavior change for valid responses)
  • Works with proxy that returns choices[].index: null
  • Fallback model chain still functions correctly
  • No changes to call_llm() direct client path (only affects pydantic-ai agent calls)

Some OpenAI-compatible proxies (Azure, vLLM, internal proxies, etc.)
return choices[].index as null instead of an integer, causing pydantic
validation to fail. Add a CompatibleOpenAIModel subclass that patches
these fields before validation.
bdqnghi merged commit 40c53e3 into FSoft-AI4Code:main Mar 16, 2026
