
Conversation


@habema habema commented Oct 23, 2025

Resolves #1973

Summary

When RunResultStreaming raises an error (API errors, connection drops, context window exceeded, etc.), usage from completed turns is now preserved and accessible.
In the original issue I mentioned that input tokens could be tracked dependably; that was an oversight, as they are also returned within the usage in `response.completed`. Input tokens of the failed turn are not tracked by the usage, nor can they be unless there is a change on the API level in the future.

Changes

  • Added RunError exception wrapper that preserves run_data (including context_wrapper.usage) for non-AgentsException errors
  • Updated both streaming and non-streaming paths to wrap external exceptions
  • Original exceptions remain accessible via .original_exception attribute
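The wrapper described above can be sketched in plain Python. This is an illustrative toy, not the SDK's actual implementation: `RunError`, `run_data`, and the `run_turns` driver are hypothetical names standing in for the PR's real classes, and the dict-based usage accumulator stands in for `context_wrapper.usage`.

```python
class RunError(Exception):
    """Wraps a non-AgentsException raised mid-run, preserving partial run data.

    In the real PR, AgentsException subclasses would pass through unwrapped;
    only external errors (API errors, connection drops, etc.) get wrapped.
    """

    def __init__(self, original_exception: Exception, run_data=None):
        super().__init__(str(original_exception))
        self.original_exception = original_exception  # the underlying error
        self.run_data = run_data  # e.g. accumulated usage from completed turns


def run_turns(turns):
    """Toy driver: accumulates usage per turn, wrapping external failures.

    Each turn is either a usage dict (a completed turn) or an Exception
    (a simulated mid-run failure).
    """
    usage = {"input_tokens": 0, "output_tokens": 0}
    try:
        for turn in turns:
            if isinstance(turn, Exception):
                raise turn
            usage["input_tokens"] += turn["input_tokens"]
            usage["output_tokens"] += turn["output_tokens"]
    except Exception as exc:
        # Preserve usage from the turns that completed before the failure.
        raise RunError(exc, run_data={"usage": usage}) from exc
    return usage
```

A caller can then catch `RunError`, inspect `.run_data` for the usage accumulated so far, and re-raise or inspect `.original_exception` as needed.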

Repro script

```python
import asyncio

from agents import Agent, ItemHelpers, Runner, function_tool


@function_tool
def get_book_content(book_id: str) -> str:
    """Get the content of a book."""
    return (
        f"The content of the book {book_id}. This is a long content to test the context window limit."
        * 100_000
    )[:1048576]  # API request character limit


async def main():
    agent = Agent(
        model="gpt-4o-mini",  # low context window of 128,000 tokens
        name="Analyzer Agent",
        instructions="Use the get_book_content tool to get the content of a book and analyze it.",
        tools=[get_book_content],
    )
    result = Runner.run_streamed(
        agent,
        input="Get the content of the book with ID 1234567890",
    )
    print("=== Run starting ===")
    try:
        async for event in result.stream_events():
            if event.type == "run_item_stream_event":
                if event.item.type == "tool_call_item":
                    print("-- Tool was called")
                elif event.item.type == "tool_call_output_item":
                    print(f"-- Tool output: {len(event.item.output)} characters")
                elif event.item.type == "message_output_item":
                    print(f"-- Message output:\n {ItemHelpers.text_message_output(event.item)}")
            else:
                pass  # Ignore other event types
        print("=== Run complete ===")
        print(f"Final Usage: {result.context_wrapper.usage}")
    except Exception as e:
        print(f"\nError: {type(e).__name__}: {e}")
        print("=== Run complete (with error) ===")
        print(f"Usage: {result.context_wrapper.usage}")


if __name__ == "__main__":
    asyncio.run(main())
```

Important Note

For LiteLLM models, model_settings.include_usage=True is required to ensure usage is tracked during successful turns. Is there a reason it is not True by default?
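As a sketch, opting in might look like the following. The `LitellmModel` import path and constructor arguments are assumptions based on the SDK's LiteLLM extension; only `model_settings.include_usage=True` is taken directly from the note above.

```python
from agents import Agent, ModelSettings

# Assumed import path for the SDK's LiteLLM extension model.
from agents.extensions.models.litellm_model import LitellmModel

agent = Agent(
    name="Analyzer Agent",
    model=LitellmModel(model="gpt-4o-mini"),  # illustrative model name
    # Opt in so usage is reported on streamed chunks for successful turns.
    model_settings=ModelSettings(include_usage=True),
)
```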


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.



ihower commented Oct 23, 2025

@habema Hi, can you explain what the expected behavior is?

I ran your repro script and only noticed that the exception changed from APIError to RunError (which I think is a breaking change). The result.context_wrapper.usage is the same before and after the change:

```python
Usage(
    requests=1,
    input_tokens=84,
    input_tokens_details=InputTokensDetails(cached_tokens=0),
    output_tokens=20,
    output_tokens_details=OutputTokensDetails(reasoning_tokens=0),
    total_tokens=104,
)
```

So I’m not sure what difference this PR makes in practice.

@habema habema marked this pull request as draft October 23, 2025 17:35

habema commented Oct 23, 2025

@ihower Thank you for pointing this out. I must have dived in too deep. I'll take a look at this in a bit.


habema commented Oct 23, 2025

Closing this, as it turns out I just overcomplicated the issue. Thank you @ihower for pointing it out.

@habema habema closed this Oct 23, 2025