
Conversation

@mattt
Contributor

@mattt mattt commented Apr 17, 2024

Related to #1384

@mattt mattt requested a review from andreasjansson April 17, 2024 13:18
@mattt mattt force-pushed the mattt/fix-pydantic-v1-incompatibility branch from 3d36817 to 338e7f8 on April 17, 2024 13:27
@andreasjansson
Member

@mattt I tried this approach, but it didn't work, which is why I went all in on vendoring. In my tests Pydantic was still pinned at <2, but I had replaced all imports with the try/except v1 shim. When I ran it on a model that installed Pydantic 2, it still failed. It might work when Pydantic isn't pinned, but it's worth building Cog models that include pydantic~=1 and pydantic~=2 on top of these changes, to make sure it still works.
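
For reference, the try/except shim in question looks roughly like this (a minimal sketch of the pattern, not the exact code in this PR). Pydantic 2 ships a `pydantic.v1` compatibility layer, so the import prefers it and falls back to the top-level package on Pydantic 1:

```python
# Prefer the pydantic.v1 compatibility layer that ships with Pydantic 2;
# fall back to the top-level package when Pydantic 1 is installed.
try:
    from pydantic import v1 as pydantic
except ImportError:
    import pydantic
```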

@mattt
Contributor Author

mattt commented Apr 17, 2024

@andreasjansson Do you recall what errors you saw? It'd be great to have a minimal example that reproduces this problem.

@mattt mattt mentioned this pull request Apr 18, 2024
@mattt mattt force-pushed the mattt/fix-pydantic-v1-incompatibility branch from 338e7f8 to 4e0e640 on April 19, 2024 13:00
@mattt mattt marked this pull request as ready for review April 19, 2024 13:01
@mattt
Contributor Author

mattt commented Apr 19, 2024

Digging into this some more, I found this discussion and this issue about problems arising from FastAPI >= 0.100.0 attempting to load Pydantic v2 when the v1 shim is loaded. From there, I found this PR, which makes the necessary changes to get all of that working. I forked the repo for that PR to replicate/fastapi, rebased it against official FastAPI, and updated this PR to install fastapi from our fork.
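
To illustrate why the shim alone isn't enough (my sketch, not FastAPI's exact source): FastAPI >= 0.100.0 chooses between its Pydantic 1 and Pydantic 2 code paths based on the installed pydantic version, so it takes the v2 branch even when the application only ever touches the `pydantic.v1` shim.

```python
# Illustrative sketch of FastAPI's version switch (the real logic lives in
# FastAPI's compatibility module); the imported names are from Pydantic itself.
from pydantic.version import VERSION as PYDANTIC_VERSION

PYDANTIC_V2 = PYDANTIC_VERSION.startswith("2.")

if PYDANTIC_V2:
    # v2-only machinery, incompatible with models defined via pydantic.v1
    from pydantic_core import PydanticUndefined as Undefined
else:
    from pydantic.fields import Undefined
```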

@mattt mattt force-pushed the mattt/fix-pydantic-v1-incompatibility branch from 0895c21 to 52546b1 on April 19, 2024 13:19
@mattt
Contributor Author

mattt commented Apr 19, 2024

I got tests passing locally (aside from hypothesis timeouts and flakes), but can't resolve failures in CI.

```
=========================================================== short test summary info ============================================================
FAILED python/tests/server/test_worker.py::test_fatalworkerexception_from_irrecoverable_failures[killed_in_predict-payloads1] - hypothesis.errors.DeadlineExceeded: Test took 1124.96ms, which exceeds the deadline of 1000.00ms
FAILED python/tests/server/test_worker.py::test_output[complex_output-payloads2-<lambda>] - hypothesis.errors.DeadlineExceeded: Test took 1195.50ms, which exceeds the deadline of 1000.00ms
FAILED python/tests/server/test_worker.py::test_output[hello_world-payloads0-<lambda>] - hypothesis.errors.Flaky: Hypothesis test_output(data=data(...), name='hello_world', payloads={'name': sampled_from(['John', 'Barry', 'Elspe...
FAILED python/tests/server/test_worker.py::test_output[count_up-payloads1-<lambda>] - hypothesis.errors.Flaky: Hypothesis test_output(data=data(...), name='count_up', payloads={'upto': integers(min_value=0, max_value=100)}, o...
```
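
As an aside, the DeadlineExceeded failures can usually be worked around by relaxing hypothesis's per-test deadline; a minimal sketch (the test body here is hypothetical, not the one in test_worker.py):

```python
from datetime import timedelta

from hypothesis import given, settings, strategies as st

# deadline=None disables the per-example time limit entirely;
# a timedelta raises it instead (e.g. for slow CI runners).
@settings(deadline=timedelta(seconds=2))
@given(upto=st.integers(min_value=0, max_value=100))
def test_count_up(upto):
    assert len(list(range(upto))) == upto
```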
requirements.txt
```
# This file was autogenerated by uv via the following command:
#    uv pip compile pyproject.toml --extra=dev -o requirements.txt
anyio==4.3.0
    # via
    #   httpx
    #   starlette
    #   watchfiles
attrs==23.2.0
    # via hypothesis
black==24.4.0
build==1.2.1
certifi==2024.2.2
    # via
    #   httpcore
    #   httpx
    #   requests
charset-normalizer==3.3.2
    # via requests
click==8.1.7
    # via
    #   black
    #   uvicorn
coverage==7.4.4
    # via pytest-cov
execnet==2.1.1
    # via pytest-xdist
fastapi @ git+https://github.com/replicate/fastapi.git@7be995a943adde5315cb557a9330ee4ff4137849
h11==0.14.0
    # via
    #   httpcore
    #   uvicorn
httpcore==1.0.5
    # via httpx
httptools==0.6.1
    # via uvicorn
httpx==0.27.0
hypothesis==6.100.1
idna==3.7
    # via
    #   anyio
    #   httpx
    #   requests
iniconfig==2.0.0
    # via pytest
markupsafe==2.1.5
    # via werkzeug
mypy-extensions==1.0.0
    # via black
nodeenv==1.8.0
    # via pyright
numpy==1.26.4
packaging==24.0
    # via
    #   black
    #   build
    #   pytest
    #   pytest-rerunfailures
pathspec==0.12.1
    # via black
pillow==10.3.0
platformdirs==4.2.0
    # via black
pluggy==1.4.0
    # via pytest
pydantic==1.10.15
    # via fastapi
pyproject-hooks==1.0.0
    # via build
pyright==1.1.347
pytest==8.1.1
    # via
    #   pytest-cov
    #   pytest-rerunfailures
    #   pytest-xdist
pytest-cov==5.0.0
pytest-httpserver==1.0.10
pytest-rerunfailures==14.0
pytest-xdist==3.5.0
python-dotenv==1.0.1
    # via uvicorn
pyyaml==6.0.1
    # via
    #   responses
    #   uvicorn
requests==2.31.0
    # via responses
responses==0.25.0
ruff==0.4.0
setuptools==69.5.1
    # via nodeenv
sniffio==1.3.1
    # via
    #   anyio
    #   httpx
sortedcontainers==2.4.0
    # via hypothesis
starlette==0.37.2
    # via fastapi
structlog==24.1.0
typing-extensions==4.11.0
    # via
    #   fastapi
    #   pydantic
urllib3==2.2.1
    # via
    #   requests
    #   responses
uvicorn==0.29.0
uvloop==0.19.0
    # via uvicorn
watchfiles==0.21.0
    # via uvicorn
websockets==12.0
    # via uvicorn
werkzeug==3.0.2
    # via pytest-httpserver
```

Upgrading to Pydantic 2 is a much easier task (#1625), and we should probably go with that. The hard part is figuring out compatibility with models built using older versions of Cog.
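
To make that concrete, the kind of runtime branching a Pydantic 2 upgrade would need in order to keep loading models written against v1 looks roughly like this (a hypothetical sketch, not code from #1625):

```python
import pydantic

# True on Pydantic 2.x, False on 1.x.
PYDANTIC_V2 = pydantic.VERSION.startswith("2.")

if PYDANTIC_V2:
    # Pydantic 2 renamed validator to field_validator, among other changes.
    from pydantic import field_validator as compat_validator
else:
    from pydantic import validator as compat_validator
```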

@andreasjansson
Member

@mattt Yes, the FastAPI issue is the same thing I ran into. When I realized we'd have to rely on a fork of FastAPI, I went down the vendoring route.

I've had pretty good success with act for debugging CI tests locally, in cases where they work in my environment but fail in CI.

I'd also love to understand why vendoring is a bad idea.

mattt added 8 commits May 2, 2024 15:48
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
…PredictionRequest and None have no overlap Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
Signed-off-by: Mattt Zmuda <mattt@replicate.com>
@technillogue technillogue force-pushed the mattt/fix-pydantic-v1-incompatibility branch from 52546b1 to 6806ca6 on May 2, 2024 19:48
@mattt mattt closed this Jun 24, 2024