
Humans make mistakes because we can't know everything, or because we don't fact-check, etc.

LLMs make mistakes because they were trained on the entire knowledge of the internet and thus should know everything?

Why are you comparing this to a human?



Because it seems that LLMs have copied human behavior in this case: thinking that a seahorse emoji exists when it doesn't.
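For what it's worth, the underlying fact is easy to check programmatically: Unicode assigns official names to characters, and no character named "SEAHORSE" exists, while plenty of other sea-creature emoji do. A minimal sketch using Python's standard `unicodedata` module:

```python
import unicodedata

# "TROPICAL FISH" (U+1F420) is a real emoji, so the name lookup succeeds.
fish = unicodedata.lookup("TROPICAL FISH")

# There is no Unicode character named "SEAHORSE", so this lookup
# raises KeyError -- the seahorse emoji genuinely does not exist.
try:
    unicodedata.lookup("SEAHORSE")
    seahorse_exists = True
except KeyError:
    seahorse_exists = False

print(fish, seahorse_exists)
```

(The exact set of names depends on the Unicode version bundled with your Python build, but no Unicode version to date has defined a seahorse emoji.)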



