LocalAI version:
v2.15.0
Environment, CPU architecture, OS, and Version:
Linux Ubuntu-2204-jammy-amd64-base 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 12:26:49 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Describe the bug
Cannot install the model llama3-70b-instruct from the model gallery; the download fails with a SHA256 checksum mismatch.
To Reproduce
Step 1: Run the docker image localai/localai:latest-gpu-nvidia-cuda-12 (see the command sketch below)
Step 2: Access the Gallery at port 8080
Step 3: Select the llama3-70b-instruct model and start the download
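For reference, a minimal sketch of how the container in Step 1 might be started; the port mapping and the host volume for /build/models are assumptions based on the port and path mentioned in this report, not necessarily the exact command used:

```
# Hypothetical invocation; adjust the volume path and GPU flags to your setup.
docker run -ti \
  -p 8080:8080 \
  --gpus all \
  -v $PWD/models:/build/models \
  localai/localai:latest-gpu-nvidia-cuda-12
```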
Expected behavior
The model downloads, installs, and responds to prompts.
Logs
Installation
Error SHA mismatch for file "/build/models/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf" ( calculated: c1cea5f87dc1af521f31b30991a4663e7e43f6046a7628b854c155f489eec213 != metadata: d559de8dd806a76dbd29f8d8bd04666f2b29e7c7872d8e8481abd07805884d72 )
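For what it's worth, the mismatch can be double-checked by recomputing the digest of the downloaded file; this is a rough sketch assuming the file is still present at the path from the log (run inside the container):

```
# Recompute the SHA256 of the downloaded GGUF and compare it with the two
# values reported in the error above (calculated vs. gallery metadata).
sha256sum /build/models/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf
# calculated by LocalAI: c1cea5f87dc1af521f31b30991a4663e7e43f6046a7628b854c155f489eec213
# expected by metadata:  d559de8dd806a76dbd29f8d8bd04666f2b29e7c7872d8e8481abd07805884d72
```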
Additional context