
async-lru

Simple lru cache for asyncio

Installation

pip install async-lru

Usage

This package is a port of Python's built-in functools.lru_cache function for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls with the same arguments result in only one call to the wrapped function, with every awaiting caller receiving the result of that call once it completes.

import asyncio

import aiohttp

from async_lru import alru_cache


@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return 'Not Found'


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())
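
To make the single-call guarantee concrete, here is a minimal, self-contained sketch with no network access; the call counter, sleep duration, and the cache_info() numbers in the comments are illustrative assumptions, not output copied from the library.

import asyncio

from async_lru import alru_cache

calls = 0


@alru_cache(maxsize=32)
async def slow_double(key):
    # Stand-in for slow I/O such as an HTTP request.
    global calls
    calls += 1
    await asyncio.sleep(0.1)
    return key * 2


async def main():
    # Ten concurrent awaits with the same argument share one underlying call.
    results = await asyncio.gather(*(slow_double(7) for _ in range(10)))
    print(results)                   # [14, 14, ..., 14]
    print(calls)                     # 1 -- the wrapped coroutine ran only once
    print(slow_double.cache_info())  # e.g. CacheInfo(hits=9, misses=1, maxsize=32, currsize=1)
    await slow_double.cache_close()


asyncio.run(main())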

TTL (time-to-live in seconds; expiration on timeout) is supported by passing the ttl configuration parameter (off by default):

@alru_cache(ttl=5)
async def func(arg):
    return arg * 2

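A small sketch of the expiry behaviour; the concrete ttl and sleep values here are arbitrary illustrations. An entry is served from the cache until ttl seconds have passed, after which the next call recomputes it.

import asyncio

from async_lru import alru_cache


@alru_cache(ttl=1)
async def double(arg):
    return arg * 2


async def main():
    await double(2)             # miss: computed and stored
    await double(2)             # hit: served from the cache
    await asyncio.sleep(1.5)    # wait past the ttl
    await double(2)             # miss again: the entry has expired
    print(double.cache_info())  # e.g. hits=1, misses=2
    await double.cache_close()


asyncio.run(main())
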
The library supports explicit invalidation of a specific function call via cache_invalidate():

@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2


func.cache_invalidate(1, arg2=2)

The method returns True if the corresponding set of arguments was already cached, and False otherwise.
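
A short, self-contained illustration of that return value; the function body and argument values are hypothetical:

import asyncio

from async_lru import alru_cache


@alru_cache()
async def func(arg1, arg2):
    return arg1 + arg2


async def main():
    await func(1, arg2=2)                    # populate the cache for this call
    print(func.cache_invalidate(1, arg2=2))  # True: the entry existed and was removed
    print(func.cache_invalidate(1, arg2=2))  # False: nothing left to invalidate
    await func.cache_close()


asyncio.run(main())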

Benchmarks

async-lru uses CodSpeed for performance regression testing.

To run the benchmarks locally:

pip install -r requirements-dev.txt
pytest --codspeed benchmark.py

The benchmark suite covers both bounded (with maxsize) and unbounded (no maxsize) cache configurations. Scenarios include the following; a rough sketch of one such benchmark appears after the list:

  • Cache hit
  • Cache miss
  • Cache fill/eviction (cycling through more keys than maxsize)
  • Cache clear
  • TTL expiry
  • Cache invalidation
  • Cache info retrieval
  • Concurrent cache hits
  • Baseline (uncached async function)
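
Sticking with the cache-hit scenario, a benchmark in this style might look like the sketch below. The test name and body are assumptions rather than the project's actual benchmark.py; the benchmark fixture is the one provided by pytest-codspeed (call-compatible with pytest-benchmark).

import asyncio

from async_lru import alru_cache


def test_cache_hit(benchmark):
    # `benchmark` is the pytest-codspeed fixture; it measures the callable it is given.
    async def scenario():
        @alru_cache(maxsize=128)
        async def cached(arg):
            return arg * 2

        await cached(1)          # first call warms the cache
        for _ in range(100):
            await cached(1)      # every later call is a cache hit
        await cached.cache_close()

    benchmark(lambda: asyncio.run(scenario()))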

On CI, benchmarks are run automatically via GitHub Actions on Python 3.13, and results are uploaded to CodSpeed (if a CODSPEED_TOKEN is configured). You can view performance history and detect regressions on the CodSpeed dashboard.

Thanks

The library was donated by Ocean S.A.

Thanks to the company for its contribution.