Conversation

@codeflash-ai codeflash-ai bot commented Oct 23, 2025

📄 24% (0.24x) speedup for encode_query in src/deepgram/core/query_encoder.py

⏱️ Runtime : 9.09 milliseconds → 7.31 milliseconds (best of 149 runs)

📝 Explanation and details

The optimization achieves a 24% speedup through three key performance improvements:

1. Function Call Caching: The optimized code caches isinstance as _is_pydantic and pydantic.BaseModel as BaseModel within single_query_encoder. This eliminates repeated global and attribute lookups, which is especially beneficial in tight loops where these names would otherwise be re-resolved on every iteration.
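
The pattern in isolation looks like this (a minimal sketch with a hypothetical count_models helper, not the SDK's actual source):

```python
import pydantic

def count_models(values):
    # Bind the builtin and the attribute lookup to locals once; the loop
    # below then uses cheap local-variable access instead of re-resolving
    # `isinstance` (a global) and `pydantic.BaseModel` (an attribute
    # lookup) on every iteration.
    _is_pydantic = isinstance
    BaseModel = pydantic.BaseModel
    return sum(1 for v in values if _is_pydantic(v, BaseModel))
```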

2. Structural Logic Simplification: The original code used redundant isinstance checks with complex conditional logic (isinstance(query_value, pydantic.BaseModel) or isinstance(query_value, dict)). The optimized version separates these checks into distinct branches and directly calls traverse_query_dict for dict values instead of going through the recursive single_query_encoder call, reducing function call overhead.
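
A sketch of the reshaped branching (hedged: traverse_query_dict here is a simplified stand-in, and list handling is omitted for brevity, though the real encoder also flattens list values):

```python
import pydantic

def traverse_query_dict(query_dict, prefix):
    # Simplified stand-in for the SDK helper: flattens nested dicts into
    # bracketed keys, e.g. {'b': 1} under 'a' -> ('a[b]', 1).
    pairs = []
    for k, v in query_dict.items():
        key = f"{prefix}[{k}]" if prefix else str(k)
        if isinstance(v, dict):
            pairs.extend(traverse_query_dict(v, key))
        else:
            pairs.append((key, v))
    return pairs

def single_query_encoder(query_key, query_value):
    # Each type is now tested exactly once, and plain dicts go straight to
    # traverse_query_dict instead of recursing through single_query_encoder.
    if isinstance(query_value, pydantic.BaseModel):
        # Assumes the pydantic v1-style .dict() API for dumping models.
        return traverse_query_dict(query_value.dict(), query_key)
    if isinstance(query_value, dict):
        return traverse_query_dict(query_value, query_key)
    return [(query_key, query_value)]
```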

3. Method Reference Caching: In encode_query, the optimization caches encoded_query.extend as a local variable extend, avoiding repeated attribute lookups during the loop iteration.
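
And the corresponding loop in encode_query, built on the single_query_encoder sketch above (again an illustration, not the verbatim source):

```python
def encode_query(query):
    if query is None:
        return None
    encoded_query = []
    # Bind the bound method once; each iteration then skips the
    # encoded_query.extend attribute lookup.
    extend = encoded_query.extend
    for query_key, query_value in query.items():
        extend(single_query_encoder(query_key, query_value))
    return encoded_query
```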

The performance gains are most significant for test cases involving lists of dictionaries and Pydantic models, where the optimizations show 31-103% improvements (e.g., test_encode_query_large_list_of_dicts shows 101% speedup). This is because these scenarios trigger the tight loops where function call overhead is most impactful. Basic operations with simple data types show minimal improvements (0-8%), while complex nested structures benefit moderately (8-20%).

The optimizations are particularly effective for workloads with repetitive dictionary/model processing, making it ideal for API query parameter encoding scenarios where large collections of structured data need to be flattened.
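
For reference, this is the flattening encode_query performs, with outputs taken from the test expectations below:

```python
from deepgram.core.query_encoder import encode_query

# Nested dicts become bracketed keys; list values repeat the key per element.
encode_query({'user': {'name': 'alice', 'age': 30}})
# -> [('user[name]', 'alice'), ('user[age]', 30)]
encode_query({'foo': [1, 2, 3]})
# -> [('foo', 1), ('foo', 2), ('foo', 3)]
```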

Correctness verification report:

Test Status
⚙️ Existing Unit Tests 29 Passed
🌀 Generated Regression Tests 60 Passed
⏪ Replay Tests 25 Passed
🔎 Concolic Coverage Tests 2 Passed
📊 Tests Coverage 100.0%
⚙️ Existing Unit Tests and Runtime
Test File::Test Function Original ⏱️ Optimized ⏱️ Speedup
unit/test_core_query_encoder.py::TestEncodeQuery.test_complex_query 8.16μs 6.84μs 19.3%✅
unit/test_core_query_encoder.py::TestEncodeQuery.test_empty_query 610ns 807ns -24.4%⚠️
unit/test_core_query_encoder.py::TestEncodeQuery.test_none_query 355ns 402ns -11.7%⚠️
unit/test_core_query_encoder.py::TestEncodeQuery.test_query_with_pydantic_models 20.6μs 20.1μs 2.28%✅
unit/test_core_query_encoder.py::TestEncodeQuery.test_query_with_special_values 4.79μs 4.74μs 1.08%✅
unit/test_core_query_encoder.py::TestEncodeQuery.test_simple_query 3.05μs 3.12μs -2.31%⚠️
unit/test_core_query_encoder.py::TestQueryEncoderEdgeCases.test_circular_reference_protection 2.38μs 2.27μs 5.21%✅
unit/test_core_query_encoder.py::TestQueryEncoderEdgeCases.test_unicode_and_special_characters 4.42μs 4.07μs 8.62%✅
utils/test_query_encoding.py::test_encode_query_with_none 374ns 355ns 5.35%✅
utils/test_query_encoding.py::test_query_encoding_deep_object_arrays 8.60μs 6.46μs 33.2%✅
utils/test_query_encoding.py::test_query_encoding_deep_objects 6.33μs 5.82μs 8.74%✅
🌀 Generated Regression Tests and Runtime
from typing import Any, Dict, List, Optional, Tuple

# function to test
# (copied from the user's code, unchanged)
import pydantic

# imports
import pytest
from deepgram.core.query_encoder import encode_query

# unit tests

# ---------------------
# BASIC TEST CASES
# ---------------------

def test_encode_query_none():
    # Test None input returns None
    codeflash_output = encode_query(None)  # 340ns -> 364ns (6.59% slower)

def test_encode_query_empty_dict():
    # Test empty dict returns empty list
    codeflash_output = encode_query({})  # 629ns -> 852ns (26.2% slower)

def test_encode_query_simple_flat_dict():
    # Test a simple flat dictionary
    q = {'a': 1, 'b': 'x', 'c': True}
    expected = [('a', 1), ('b', 'x'), ('c', True)]
    codeflash_output = encode_query(q); result = codeflash_output  # 3.32μs -> 3.28μs (1.19% faster)

def test_encode_query_flat_dict_with_lists():
    # Test dict with lists of primitives
    q = {'a': [1, 2, 3], 'b': ['x', 'y']}
    expected = [('a', 1), ('a', 2), ('a', 3), ('b', 'x'), ('b', 'y')]
    codeflash_output = encode_query(q)  # 3.51μs -> 3.47μs (1.01% faster)

def test_encode_query_flat_dict_with_empty_list():
    # Test dict with an empty list
    q = {'a': []}
    expected = []
    codeflash_output = encode_query(q)  # 1.80μs -> 1.79μs (0.952% faster)

def test_encode_query_flat_dict_with_none_value():
    # Test dict with a None value
    q = {'a': None, 'b': 2}
    expected = [('a', None), ('b', 2)]
    codeflash_output = encode_query(q)  # 2.36μs -> 2.27μs (4.19% faster)

# ---------------------
# EDGE TEST CASES
# ---------------------

def test_encode_query_nested_dict():
    # Test a nested dictionary
    q = {'a': {'b': {'c': 1}}}
    expected = [('a[b][c]', 1)]
    codeflash_output = encode_query(q)  # 3.21μs -> 3.17μs (1.20% faster)

def test_encode_query_nested_dict_with_list():
    # Dict with a nested dict containing a list
    q = {'a': {'b': [1, 2, 3]}}
    expected = [('a[b]', 1), ('a[b]', 2), ('a[b]', 3)]
    codeflash_output = encode_query(q)  # 2.90μs -> 2.84μs (2.08% faster)

def test_encode_query_list_of_dicts():
    # List of dicts at top level
    q = {'a': [{'b': 1}, {'b': 2}]}
    expected = [('a[b]', 1), ('a[b]', 2)]
    codeflash_output = encode_query(q)  # 4.76μs -> 3.63μs (31.2% faster)

def test_encode_query_list_of_lists():
    # List of lists (should flatten only the top level)
    q = {'a': [[1, 2], [3, 4]]}
    # Each inner list is not a dict or pydantic model, so should be treated as a value
    expected = [('a', [1, 2]), ('a', [3, 4])]
    codeflash_output = encode_query(q)  # 2.25μs -> 2.27μs (0.662% slower)

def test_encode_query_nested_list_of_dicts():
    # List of dicts, each with nested dicts
    q = {'a': [{'x': {'y': 1}}, {'x': {'y': 2}}]}
    expected = [('a[x][y]', 1), ('a[x][y]', 2)]
    codeflash_output = encode_query(q)  # 5.50μs -> 4.18μs (31.6% faster)

def test_encode_query_dict_with_empty_dict():
    # Dict with an empty dict as value
    q = {'a': {}}
    expected = []
    codeflash_output = encode_query(q)  # 2.12μs -> 1.85μs (14.5% faster)

def test_encode_query_dict_with_empty_list_and_dict():
    # Dict with both empty list and empty dict
    q = {'a': [], 'b': {}}
    expected = []
    codeflash_output = encode_query(q)  # 2.79μs -> 2.48μs (12.3% faster)

def test_encode_query_dict_with_various_types():
    # Dict with bool, int, float, str, None
    q = {'a': True, 'b': 42, 'c': 3.14, 'd': 'hello', 'e': None}
    expected = [('a', True), ('b', 42), ('c', 3.14), ('d', 'hello'), ('e', None)]
    codeflash_output = encode_query(q)  # 3.54μs -> 3.56μs (0.702% slower)

def test_encode_query_dict_with_mixed_types_in_list():
    # Dict with a list of mixed types
    q = {'a': [1, 'x', None, {'b': 2}]}
    expected = [('a', 1), ('a', 'x'), ('a', None), ('a[b]', 2)]
    codeflash_output = encode_query(q)  # 4.12μs -> 3.52μs (17.2% faster)

def test_encode_query_dict_with_deeply_nested():
    # Deeply nested dicts
    q = {'a': {'b': {'c': {'d': {'e': 5}}}}}
    expected = [('a[b][c][d][e]', 5)]
    codeflash_output = encode_query(q)  # 3.66μs -> 3.69μs (0.678% slower)

def test_encode_query_dict_with_list_of_empty_dicts():
    # List of empty dicts should not contribute any pairs
    q = {'a': [{}]}
    expected = []
    codeflash_output = encode_query(q)  # 3.04μs -> 2.26μs (34.4% faster)

def test_encode_query_dict_with_list_of_lists_of_dicts():
    # List of lists of dicts
    q = {'a': [[{'b': 1}], [{'b': 2}]]}
    # Each inner list is treated as a value, not traversed
    expected = [('a', [{'b': 1}]), ('a', [{'b': 2}])]
    codeflash_output = encode_query(q)  # 2.62μs -> 2.48μs (5.40% faster)

# ---------------------
# PYDANTIC MODEL TEST CASES
# ---------------------

class SimpleModel(pydantic.BaseModel):
    foo: int
    bar: str

def test_encode_query_with_pydantic_model():
    # Top-level pydantic model
    m = SimpleModel(foo=10, bar='baz')
    q = {'model': m}
    expected = [('model[foo]', 10), ('model[bar]', 'baz')]
    codeflash_output = encode_query(q)  # 18.9μs -> 18.8μs (0.579% faster)

def test_encode_query_with_pydantic_model_in_list():
    # List of pydantic models
    m1 = SimpleModel(foo=1, bar='a')
    m2 = SimpleModel(foo=2, bar='b')
    q = {'models': [m1, m2]}
    expected = [('models[foo]', 1), ('models[bar]', 'a'), ('models[foo]', 2), ('models[bar]', 'b')]
    codeflash_output = encode_query(q)  # 22.1μs -> 20.0μs (10.5% faster)

def test_encode_query_with_pydantic_model_in_dict():
    # Dict with pydantic model as value
    m = SimpleModel(foo=7, bar='q')
    q = {'a': {'b': m}}
    expected = [('a[b][foo]', 7), ('a[b][bar]', 'q')]
    codeflash_output = encode_query(q)  # 3.06μs -> 2.80μs (9.07% faster)

def test_encode_query_with_pydantic_model_in_list_of_dicts():
    # List of dicts, each with a pydantic model
    m1 = SimpleModel(foo=1, bar='a')
    m2 = SimpleModel(foo=2, bar='b')
    q = {'a': [{'m': m1}, {'m': m2}]}
    expected = [('a[m][foo]', 1), ('a[m][bar]', 'a'), ('a[m][foo]', 2), ('a[m][bar]', 'b')]
    codeflash_output = encode_query(q)  # 4.90μs -> 3.53μs (38.9% faster)

# ---------------------
# LARGE SCALE TEST CASES
# ---------------------

def test_encode_query_large_flat_dict():
    # Large flat dict
    q = {f'k{i}': i for i in range(1000)}
    expected = [(f'k{i}', i) for i in range(1000)]
    codeflash_output = encode_query(q)  # 229μs -> 215μs (6.21% faster)

def test_encode_query_large_nested_dict():
    # Large nested dict (depth 3, width 10)
    q = {f'a{i}': {f'b{j}': {f'c{k}': i*100 + j*10 + k for k in range(2)} for j in range(5)} for i in range(2)}
    # Build expected result
    expected = []
    for i in range(2):
        for j in range(5):
            for k in range(2):
                expected.append((f'a{i}[b{j}][c{k}]', i*100 + j*10 + k))
    codeflash_output = encode_query(q)  # 9.11μs -> 8.88μs (2.51% faster)

def test_encode_query_large_list_of_dicts():
    # Large list of dicts
    q = {'a': [{'b': i} for i in range(1000)]}
    expected = [('a[b]', i) for i in range(1000)]
    codeflash_output = encode_query(q)  # 678μs -> 333μs (103% faster)

def test_encode_query_large_list_of_mixed_types():
    # Large list with mixed types
    q = {'a': [i if i % 2 == 0 else {'b': i} for i in range(1000)]}
    expected = []
    for i in range(1000):
        if i % 2 == 0:
            expected.append(('a', i))
        else:
            expected.append(('a[b]', i))
    codeflash_output = encode_query(q)  # 396μs -> 225μs (75.8% faster)

def test_encode_query_large_list_of_pydantic_models():
    # Large list of pydantic models
    models = [SimpleModel(foo=i, bar=str(i)) for i in range(1000)]
    q = {'models': models}
    expected = []
    for i in range(1000):
        expected.append(('models[foo]', i))
        expected.append(('models[bar]', str(i)))
    codeflash_output = encode_query(q)  # 4.43ms -> 3.75ms (17.9% faster)

def test_encode_query_large_deeply_nested_dict():
    # Deeply nested dict with a single chain (depth 100)
    d = 0
    for i in range(99, -1, -1):
        d = {f'k{i}': d}
    q = {'root': d}
    # Only the deepest leaf is present, so expect [('root' + '[k0][k1]...[k99]', 0)]
    key = 'root' + ''.join([f'[k{i}]' for i in range(100)])
    expected = [(key, 0)]
    codeflash_output = encode_query(q)  # 30.3μs -> 30.0μs (0.909% faster)

# ---------------------
# ADDITIONAL EDGE CASES
# ---------------------

def test_encode_query_keys_with_special_characters():
    # Keys that have special characters
    q = {'a b': {'c-d': 1, 'e.f': 2}}
    expected = [('a b[c-d]', 1), ('a b[e.f]', 2)]
    codeflash_output = encode_query(q)  # 3.20μs -> 3.05μs (5.15% faster)

def test_encode_query_values_with_special_characters():
    # Values that are strings with special characters
    q = {'a': 'hello world!', 'b': 'foo=bar&baz'}
    expected = [('a', 'hello world!'), ('b', 'foo=bar&baz')]
    codeflash_output = encode_query(q)  # 2.29μs -> 2.25μs (2.00% faster)

def test_encode_query_with_boolean_false_and_zero():
    # Test that False and 0 are handled distinctly
    q = {'a': False, 'b': 0}
    expected = [('a', False), ('b', 0)]
    codeflash_output = encode_query(q)  # 2.54μs -> 2.63μs (3.31% slower)

def test_encode_query_with_float_nan_and_inf():
    # Test with float('nan') and float('inf')
    import math
    q = {'a': float('nan'), 'b': float('inf')}
    codeflash_output = encode_query(q); result = codeflash_output  # 2.07μs -> 2.16μs (3.94% slower)

def test_encode_query_with_tuple_value():
    # Tuples are not explicitly handled, so should be treated as a value
    q = {'a': (1, 2, 3)}
    expected = [('a', (1, 2, 3))]
    codeflash_output = encode_query(q)  # 1.83μs -> 1.82μs (0.440% faster)

def test_encode_query_with_set_value():
    # Sets are not explicitly handled, so should be treated as a value
    q = {'a': {1, 2, 3}}
    expected = [('a', {1, 2, 3})]
    codeflash_output = encode_query(q)  # 1.89μs -> 1.78μs (6.07% faster)

def test_encode_query_with_bytes_value():
    # Bytes should be treated as a value
    q = {'a': b'abc'}
    expected = [('a', b'abc')]
    codeflash_output = encode_query(q)  # 1.91μs -> 1.96μs (2.25% slower)

def test_encode_query_with_object_value():
    # Arbitrary objects should be treated as a value
    class Dummy: pass
    obj = Dummy()
    q = {'a': obj}
    expected = [('a', obj)]
    codeflash_output = encode_query(q)  # 2.00μs -> 1.97μs (2.04% faster)

# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

#------------------------------------------------
from typing import Any, Dict, List, Optional, Tuple

import pydantic

# imports
import pytest  # used for our unit tests
from deepgram.core.query_encoder import encode_query

# unit tests

# Helper Pydantic model for testing
class NestedModel(pydantic.BaseModel):
    a: int
    b: str

# -------------------------
# Basic Test Cases
# -------------------------

def test_encode_query_none():
    # Should return None for None input
    codeflash_output = encode_query(None)  # 351ns -> 364ns (3.57% slower)

def test_encode_query_empty_dict():
    # Should return empty list for empty dict
    codeflash_output = encode_query({})  # 684ns -> 835ns (18.1% slower)

def test_encode_query_simple_flat_dict():
    # Simple flat dict
    query = {'foo': 'bar', 'baz': 123}
    expected = [('foo', 'bar'), ('baz', 123)]
    codeflash_output = encode_query(query)  # 2.75μs -> 2.64μs (3.90% faster)

def test_encode_query_simple_list():
    # Dict with list value
    query = {'foo': [1, 2, 3]}
    expected = [('foo', 1), ('foo', 2), ('foo', 3)]
    codeflash_output = encode_query(query)  # 2.66μs -> 2.56μs (3.83% faster)

def test_encode_query_simple_nested_dict():
    # Dict with nested dict
    query = {'user': {'name': 'alice', 'age': 30}}
    expected = [('user[name]', 'alice'), ('user[age]', 30)]
    codeflash_output = encode_query(query)  # 3.07μs -> 2.90μs (5.87% faster)

def test_encode_query_list_of_dicts():
    # List of dicts
    query = {'items': [{'id': 1}, {'id': 2}]}
    expected = [('items[id]', 1), ('items[id]', 2)]
    codeflash_output = encode_query(query)  # 4.71μs -> 3.59μs (31.4% faster)

def test_encode_query_pydantic_model():
    # Single pydantic model
    model = NestedModel(a=10, b='hello')
    query = {'model': model}
    expected = [('model[a]', 10), ('model[b]', 'hello')]
    codeflash_output = encode_query(query)  # 18.0μs -> 17.8μs (1.04% faster)

def test_encode_query_list_of_pydantic_models():
    # List of pydantic models
    models = [NestedModel(a=1, b='x'), NestedModel(a=2, b='y')]
    query = {'models': models}
    expected = [('models[a]', 1), ('models[b]', 'x'), ('models[a]', 2), ('models[b]', 'y')]
    codeflash_output = encode_query(query)  # 22.3μs -> 20.4μs (8.93% faster)

def test_encode_query_mixed_types():
    # Dict with mixed types
    query = {
        'str': 'abc',
        'int': 42,
        'float': 3.14,
        'bool': True,
        'none': None
    }
    expected = [('str', 'abc'), ('int', 42), ('float', 3.14), ('bool', True), ('none', None)]
    codeflash_output = encode_query(query)  # 3.59μs -> 3.47μs (3.40% faster)

# -------------------------
# Edge Test Cases
# -------------------------

def test_encode_query_deeply_nested_dict():
    # Dict with multiple levels of nesting
    query = {'a': {'b': {'c': {'d': 5}}}}
    expected = [('a[b][c][d]', 5)]
    codeflash_output = encode_query(query)  # 3.33μs -> 3.36μs (0.833% slower)

def test_encode_query_list_of_lists():
    # List of lists (should flatten only one level)
    query = {'arr': [[1, 2], [3, 4]]}
    expected = [('arr', [1, 2]), ('arr', [3, 4])]
    codeflash_output = encode_query(query)  # 2.32μs -> 2.27μs (2.21% faster)

def test_encode_query_list_of_mixed_types():
    # List with mixed types
    query = {'arr': [1, {'x': 2}, NestedModel(a=3, b='z'), 'end']}
    expected = [
        ('arr', 1),
        ('arr[x]', 2),
        ('arr[a]', 3),
        ('arr[b]', 'z'),
        ('arr', 'end')
    ]
    codeflash_output = encode_query(query)  # 18.9μs -> 16.9μs (11.9% faster)

def test_encode_query_empty_list_and_dict():
    # Empty list and empty dict as values
    query = {'empty_list': [], 'empty_dict': {}}
    expected = []
    codeflash_output = encode_query(query)  # 2.71μs -> 2.43μs (11.7% faster)

def test_encode_query_dict_with_none_values():
    # Dict with None values
    query = {'foo': None, 'bar': {'baz': None}}
    expected = [('foo', None), ('bar[baz]', None)]
    codeflash_output = encode_query(query)  # 2.96μs -> 2.91μs (1.69% faster)

def test_encode_query_dict_with_bool_values():
    # Dict with boolean values
    query = {'flag': True, 'settings': {'enabled': False}}
    expected = [('flag', True), ('settings[enabled]', False)]
    codeflash_output = encode_query(query)  # 3.41μs -> 3.15μs (8.42% faster)

def test_encode_query_dict_with_special_characters():
    # Keys and values with special characters
    query = {'sp&cial': 'v@lue', 'nest': {'k*y': 'va#l'}}
    expected = [('sp&cial', 'v@lue'), ('nest[k*y]', 'va#l')]
    codeflash_output = encode_query(query)  # 2.87μs -> 2.78μs (3.49% faster)

def test_encode_query_dict_with_empty_string_keys_and_values():
    # Empty string keys and values
    query = {'': '', 'nested': {'': ''}}
    expected = [('', ''), ('nested[]', '')]
    codeflash_output = encode_query(query)  # 2.98μs -> 2.78μs (7.30% faster)

def test_encode_query_dict_with_int_keys():
    # Integer keys (should be converted to str in output)
    query = {1: 'one', 2: {'3': 'three'}}
    expected = [('1', 'one'), ('2[3]', 'three')]
    codeflash_output = encode_query(query)  # 3.19μs -> 2.90μs (9.87% faster)

# -------------------------
# Large Scale Test Cases
# -------------------------

def test_encode_query_large_flat_dict():
    # Large flat dict
    query = {f'key{i}': i for i in range(1000)}
    expected = [(f'key{i}', i) for i in range(1000)]
    codeflash_output = encode_query(query); result = codeflash_output  # 225μs -> 213μs (5.73% faster)

def test_encode_query_large_nested_dict():
    # Large nested dict
    query = {'outer': {f'inner{i}': i for i in range(500)}}
    expected = [(f'outer[inner{i}]', i) for i in range(500)]
    codeflash_output = encode_query(query); result = codeflash_output  # 61.7μs -> 61.2μs (0.797% faster)

def test_encode_query_large_list():
    # Large list value
    query = {'numbers': list(range(1000))}
    expected = [('numbers', i) for i in range(1000)]
    codeflash_output = encode_query(query); result = codeflash_output  # 131μs -> 121μs (8.71% faster)

def test_encode_query_large_list_of_dicts():
    # Large list of dicts
    query = {'items': [{'id': i} for i in range(500)]}
    expected = [('items[id]', i) for i in range(500)]
    codeflash_output = encode_query(query); result = codeflash_output  # 331μs -> 165μs (101% faster)

def test_encode_query_large_list_of_pydantic_models():
    # Large list of pydantic models
    models = [NestedModel(a=i, b=str(i)) for i in range(500)]
    query = {'models': models}
    expected = []
    for i in range(500):
        expected.append(('models[a]', i))
        expected.append(('models[b]', str(i)))
    codeflash_output = encode_query(query); result = codeflash_output  # 2.15ms -> 1.79ms (19.9% faster)

def test_encode_query_large_mixed_structure():
    # Large mixed structure: dict with lists, nested dicts, pydantic models
    models = [NestedModel(a=i, b=str(i)) for i in range(10)]
    query = {
        'numbers': list(range(10)),
        'nested': {'x': [1, 2, 3], 'y': {'z': 'deep'}},
        'models': models
    }
    expected = [('numbers', i) for i in range(10)]
    expected += [('nested[x]', 1), ('nested[x]', 2), ('nested[x]', 3)]
    expected += [('nested[y][z]', 'deep')]
    for i in range(10):
        expected.append(('models[a]', i))
        expected.append(('models[b]', str(i)))
    codeflash_output = encode_query(query); result = codeflash_output  # 61.5μs -> 54.3μs (13.2% faster)

# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

#------------------------------------------------
from deepgram.core.query_encoder import encode_query

def test_encode_query():
    encode_query({'': {}})

def test_encode_query_2():
    encode_query(None)
⏪ Replay Tests and Runtime
Test File::Test Function Original ⏱️ Optimized ⏱️ Speedup
test_pytest_testsintegrationstest_integration_scenarios_py_testsunittest_core_utils_py_testsutilstest_htt__replay_test_0.py::test_deepgram_core_query_encoder_encode_query 17.0μs 14.9μs 14.3%✅
test_pytest_testsintegrationstest_manage_client_py_testsunittest_core_query_encoder_py_testsunittest_type__replay_test_0.py::test_deepgram_core_query_encoder_encode_query 39.9μs 39.2μs 1.63%✅
🔎 Concolic Coverage Tests and Runtime
Test File::Test Function Original ⏱️ Optimized ⏱️ Speedup
codeflash_concolic_d0k9fm5y/tmpa0ids3s_/test_concolic_coverage.py::test_encode_query 2.24μs 1.98μs 13.0%✅
codeflash_concolic_d0k9fm5y/tmpa0ids3s_/test_concolic_coverage.py::test_encode_query_2 325ns 377ns -13.8%⚠️

To edit these changes, run git checkout codeflash/optimize-encode_query-mh2rdmp9 and push.

Codeflash

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 23, 2025 01:43
@codeflash-ai codeflash-ai bot added the ⚡️ codeflash (Optimization PR opened by Codeflash AI) and 🎯 Quality: High (Optimization Quality according to Codeflash) labels Oct 23, 2025