
Conversation

gotsysdba
Contributor

Title

Fix a parsing failure caused by a validation error when completionTokensDetails and promptTokensDetails are missing from the completion endpoint response with certain models.

Relevant issues

Fixes #14090

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix
✅ Test

Changes

Update the OCIResponseUsage model:

completionTokensDetails: Optional[OCICompletionTokenDetails] = None
promptTokensDetails: Optional[OCIPromptTokensDetails] = None

Add a test.
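The change described above can be sketched as follows. The detail-class fields are illustrative placeholders (the real classes live in LiteLLM's OCI provider code); only the two `Optional[...] = None` defaults are what this PR changes:

```python
from typing import Optional

from pydantic import BaseModel


# Simplified stand-ins for the real detail models; the field names here
# are illustrative, not taken from the OCI SDK.
class OCICompletionTokenDetails(BaseModel):
    reasoningTokens: Optional[int] = None


class OCIPromptTokensDetails(BaseModel):
    cachedTokens: Optional[int] = None


class OCIResponseUsage(BaseModel):
    completionTokens: int
    promptTokens: int
    totalTokens: int
    # The fix: default to None so responses that omit these keys
    # still pass pydantic validation.
    completionTokensDetails: Optional[OCICompletionTokenDetails] = None
    promptTokensDetails: Optional[OCIPromptTokensDetails] = None


# A response payload from a model that omits the token-detail objects.
payload = {"completionTokens": 42, "promptTokens": 10, "totalTokens": 52}

usage = OCIResponseUsage(**payload)
print(usage.completionTokensDetails)  # None instead of a ValidationError
```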

[screenshot: new test passing locally]


vercel bot commented Aug 30, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | Ready | Preview | Comment | Sep 2, 2025 11:47am |

@gotsysdba
Contributor Author

Fixes #14063

@kutsushitaneko
Contributor

I'm trying to use OCI Generative AI's Llama via LiteLLM Proxy with Hugging Face smolagents and n8n, and I'm facing the same issue (#14063). I'm eagerly awaiting the merge of this PR.

@kutsushitaneko
Contributor

According to the Oracle docs (https://docs.oracle.com/en-us/iaas/tools/python/2.159.0/api/generative_ai_inference/models/oci.generative_ai_inference.models.Usage.html#oci.generative_ai_inference.models.Usage), completionTokensDetails and promptTokensDetails are not required, so OCI Generative AI may not return these two values. However, in the current OCIResponseUsage class they are mandatory, which causes pydantic validation errors. Making them optional is the appropriate fix, and the change in this PR is valid.

Note that the failing Mock Test relates to groq/llama and is unrelated to this fix, which only touches the OCI provider. I look forward to the review being completed and the PR being merged soon.
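A minimal repro of the pre-fix behavior: when a field such as `completionTokensDetails` is declared without a default, pydantic rejects any payload that omits it. The model below is a simplified stand-in, not the actual LiteLLM class:

```python
from pydantic import BaseModel, ValidationError


# Pre-fix shape: completionTokensDetails has no default, so it is required.
class StrictUsage(BaseModel):
    completionTokens: int
    promptTokens: int
    totalTokens: int
    completionTokensDetails: dict  # required, as before the fix

failed = False
try:
    # An OCI-style response that omits the token-detail object.
    StrictUsage(completionTokens=1, promptTokens=2, totalTokens=3)
except ValidationError as err:
    failed = True
    # The error points at the missing field.
    print(err.errors()[0]["loc"])

print("validation failed:", failed)
```

Giving the field a default of `None` (as this PR does) makes the same payload validate cleanly.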

@krrishdholakia
Contributor

Hey @gotsysdba, we should always have usage. If the usage is missing, let's use token_counter to calculate the tokens.

This will ensure we always have some estimate of cost.

Successfully merging this pull request may close these issues.

[Bug]: OCI Provider - validation fails on completion on some models