Description of the problem
When using the watch.Watch().stream() function to listen for Kubernetes events, I encounter the following exception:
aiohttp.client_exceptions.ClientPayloadError: Response payload is not completed: <TransferEncodingError: 400, message='Not enough data for satisfy transfer length header.'>
As a workaround, I've implemented a retry mechanism with backoff.
I didn't specify any timeout_seconds or _request_timeout settings (reference). I want the stream to run indefinitely.
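For reference, this is roughly where those two parameters would go if I did set them. The snippet below is only a sketch with made-up values, not code I actually run: as I understand it, timeout_seconds is forwarded to the Kubernetes list/watch call as a server-side timeout, while _request_timeout is the client-side request timeout.

# Sketch only; my real code leaves both parameters unset.
async for event in watcher.stream(
    v1.list_namespaced_config_map,
    namespace=namespace,
    timeout_seconds=300,     # example value, not used in my setup
    _request_timeout=330,    # example value, not used in my setup
):
    ...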
Expected Behavior
- The event stream should run continuously without encountering ClientPayloadError.
- If the connection is interrupted, it should be handled gracefully without requiring retries.
Actual Behavior
- The stream occasionally throws ClientPayloadError with a TransferEncodingError (400).
- It seems to be caused by incomplete response payloads from the Kubernetes API.
Code
import asyncio
import logging
import random

from kubernetes_asyncio import client, watch

logger = logging.getLogger(__name__)

...

async def listener(namespace: str) -> None:
    watcher = None
    v1 = None
    while True:
        try:
            watcher = watch.Watch()
            v1 = client.CoreV1Api()
            # Watch a single namespace if one is given, otherwise all namespaces.
            function = (
                v1.list_namespaced_config_map
                if namespace
                else v1.list_config_map_for_all_namespaces
            )
            args = {"namespace": namespace} if namespace else {}
            # Initial list to obtain a resourceVersion to start the watch from.
            response = await function(**args)
            resource_version = response.metadata.resource_version
            async for event in watcher.stream(
                function, resource_version=resource_version, **args
            ):
                ...
        except Exception:
            logger.exception("Exception occurred while listening for events")
            delay = 3 + random.uniform(0, 1)
            logger.info(f"Retrying in {delay:.2f} seconds")
            await asyncio.sleep(delay)
        finally:
            logger.info("Events stream interrupted. Closing connections")
            if watcher:
                watcher.stop()
            if v1:
                await v1.api_client.rest_client.pool_manager.close()
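One refinement I've been sketching (not what currently runs; it reuses the imports and logger from the code above): catch aiohttp's ClientPayloadError specifically and resume the watch from the last resourceVersion seen, instead of re-listing on every failure. The last_resource_version bookkeeping and the watch_config_maps name are my own illustration, not something provided by kubernetes_asyncio.

import aiohttp

async def watch_config_maps(v1: client.CoreV1Api, namespace: str) -> None:
    # Initial list to get a starting resourceVersion.
    args = {"namespace": namespace}
    response = await v1.list_namespaced_config_map(**args)
    last_resource_version = response.metadata.resource_version
    while True:
        watcher = watch.Watch()
        try:
            async for event in watcher.stream(
                v1.list_namespaced_config_map,
                resource_version=last_resource_version,
                **args,
            ):
                # Remember where we are so a reconnect can resume from here.
                last_resource_version = event["object"].metadata.resource_version
                ...
        except aiohttp.ClientPayloadError:
            logger.warning("Payload error, resuming from %s", last_resource_version)
            await asyncio.sleep(3 + random.uniform(0, 1))
        finally:
            watcher.stop()

This still wouldn't cover a 410 Gone from the API server when the stored resourceVersion has expired; that case would need to fall back to a fresh list, as in the loop above.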
Environment
- kubernetes_asyncio version: 32.0.0
- Python version: 3.12
- Kubernetes version: 1.31
Questions
- Is this issue related to how Kubernetes API servers handle chunked responses?
- Could this be mitigated by adjusting timeout_seconds, even though the goal is an indefinite stream?
- Any recommendations on handling this error gracefully without frequent retries?
Would appreciate any insights on whether this is a known issue or if there's a recommended approach to prevent these exceptions.