Conversation

@alanzhang25
Collaborator

πŸ“ Summary

  • 1. ...

βœ… Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @alanzhang25, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue in the Claude Agent SDK integration where concurrent use of multiple tracer instances could lead to trace ID collisions due to a shared global thread-local storage. The changes refactor the thread-local storage mechanism to be specific to each tracer instance, ensuring proper isolation of tracing contexts. This prevents cross-contamination of tracing data, allowing multiple Claude Agent setups to operate simultaneously without interference.

Highlights

  • Thread-Local Storage Refactor: Replaced a single global _thread_local with a dictionary (_tracer_thread_locals) that maps each tracer instance to its own threading.local() object. This ensures that thread-local storage is isolated per tracer instance, preventing cross-contamination of tracing contexts.
  • Tracer Instance Storage: The LLMSpanTracker now explicitly stores the tracer instance on itself (self.__judgeval_tracer), allowing methods like receive_response and wrapped_query to access the correct tracer for context propagation.
  • Context Retrieval Verification: The _wrap_tool_handler function has been updated to retrieve the parent_context and tracer from the correct, tracer-specific thread-local storage. An additional check verifies that the retrieved stored_tracer matches the current tracer to prevent incorrect context usage.
  • Enhanced Test Coverage: Added a new unit test file (test_thread_local_isolation.py) to thoroughly test the isolation of thread-local storage across multiple tracer instances and threads. An existing e2e test (test_e2e_real_api.py) was also updated with a new test case (test_multiple_tracers_no_trace_id_collision) to simulate concurrent tracer usage and confirm no trace ID collisions.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------|---------------------|------------------------------------------------------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with πŸ‘ and πŸ‘Ž on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩


@gemini-code-assist (bot) left a comment


Code Review

This pull request effectively addresses a critical issue with context propagation when using multiple tracer instances by introducing tracer-specific thread-local storage. The approach of keying storage by tracer ID and storing the tracer on the client instance is sound. The changes are well-supported with both new unit tests for thread-local isolation and a new end-to-end test that simulates the multi-tracer scenario. I have a few suggestions to improve thread safety and maintainability.

Comment on lines +35 to +40
    def _get_thread_local(tracer: "BaseTracer") -> threading.local:
        """Get thread-local storage for a specific tracer instance."""
        tracer_id = id(tracer)
        if tracer_id not in _tracer_thread_locals:
            _tracer_thread_locals[tracer_id] = threading.local()
        return _tracer_thread_locals[tracer_id]

critical

This function has a race condition. If two threads call _get_thread_local for the same new tracer instance concurrently, both could pass the if check, leading to one threading.local() object overwriting the other. This can cause unpredictable behavior and context loss.

To fix this, you should use a threading.Lock to ensure that the check and creation of the threading.local instance are atomic.

You'll need to define a lock at the module level, for example:

_thread_locals_lock = threading.Lock()

And then use it within the function as shown in the suggestion.

Suggested change

Before:

    def _get_thread_local(tracer: "BaseTracer") -> threading.local:
        """Get thread-local storage for a specific tracer instance."""
        tracer_id = id(tracer)
        if tracer_id not in _tracer_thread_locals:
            _tracer_thread_locals[tracer_id] = threading.local()
        return _tracer_thread_locals[tracer_id]

After:

    def _get_thread_local(tracer: "BaseTracer") -> threading.local:
        """Get thread-local storage for a specific tracer instance."""
        tracer_id = id(tracer)
        if tracer_id not in _tracer_thread_locals:
            with _thread_locals_lock:
                # Double-check locking pattern to prevent race conditions
                if tracer_id not in _tracer_thread_locals:
                    _tracer_thread_locals[tracer_id] = threading.local()
        return _tracer_thread_locals[tracer_id]
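As a quick sanity check, the double-checked locking variant can be exercised from many threads at once: with the lock in place, at most one threading.local object is ever created per tracer, so every thread resolves the same storage. This is a standalone sketch using assumed names (`_tracer_thread_locals`, `_thread_locals_lock`), not the SDK's real module:

```python
import threading

_tracer_thread_locals = {}
_thread_locals_lock = threading.Lock()


def _get_thread_local(tracer):
    """Thread-safe per-tracer lookup using double-checked locking."""
    tracer_id = id(tracer)
    if tracer_id not in _tracer_thread_locals:
        with _thread_locals_lock:
            # Re-check under the lock: a racing thread may have created it
            # between our first check and lock acquisition.
            if tracer_id not in _tracer_thread_locals:
                _tracer_thread_locals[tracer_id] = threading.local()
    return _tracer_thread_locals[tracer_id]


tracer = object()  # identity is all that matters for the registry key
seen = []


def worker():
    seen.append(_get_thread_local(tracer))


threads = [threading.Thread(target=worker) for _ in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All threads resolved the same threading.local object: no overwrite occurred.
assert len(seen) == 32
assert all(obj is seen[0] for obj in seen)
```

Without the lock, two threads could both pass the outer `if`, and the second `threading.local()` assignment would silently discard any context already stored on the first, which is exactly the context loss the review flags.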
