
You can use llama.cpp; it runs on almost all hardware. Whisper.cpp is similar, though unless you have a mid- or high-end Nvidia card it will be a bit slower.

Still very reasonable on modern hardware.
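If you'd rather drive llama.cpp from a script than from its CLI, here is a minimal sketch using the third-party llama-cpp-python bindings (not mentioned above, just one common way in); the model path is a placeholder for whatever GGUF file you have downloaded:

    # Sketch using llama-cpp-python (pip install llama-cpp-python).
    # Assumption: a GGUF model file already exists at the path below.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/model.Q4_K_M.gguf",  # placeholder path
        n_ctx=2048,        # context window
        n_gpu_layers=-1,   # offload all layers if a GPU backend is available
    )
    out = llm("Q: What does llama.cpp do? A:", max_tokens=64)
    print(out["choices"][0]["text"])

On machines without a supported GPU it falls back to CPU inference, which is the "runs on almost all hardware" part.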



If you build it locally for Apple hardware (instructions are in the whisper.cpp README), it performs quite admirably on Apple computers as well.
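Roughly what using that local build looks like, driving the CLI from Python; the binary name and paths are assumptions (recent cmake builds place it at build/bin/whisper-cli, older instructions used ./main), so adjust to whatever your build produced:

    # Sketch: call a locally built whisper.cpp binary and capture the transcript.
    # Assumptions: whisper.cpp built per its README (Metal is picked up
    # automatically on Apple silicon), a ggml model in ./models, and a
    # 16 kHz WAV input file.
    import subprocess

    result = subprocess.run(
        [
            "./build/bin/whisper-cli",        # may be ./main in older builds
            "-m", "models/ggml-base.en.bin",  # model fetched via the repo's download script
            "-f", "samples/jfk.wav",          # input audio
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)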



