
It's not clear how one could use the output of an embedding layer recursively, so it's hard to know what you mean by "stopped it" and "confused with its own output" here. You're mixing metaphor and math, and as a result your question ends up unclear.

Yes, the outputs of a layer one or two before the final one would be a continuous embedding of sorts, and less lossy than discretized tokenization at representing the meaning of the input sequence. But you can't "stop" there in a recursive LLM in any practical sense.
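
To make that concrete, here's a minimal sketch, assuming a Hugging Face-style causal LM (the "gpt2" checkpoint here is purely illustrative). The shapes line up, so feeding a hidden state back in as inputs_embeds actually runs, but there's no principled point at which the loop stops and becomes readable text:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice; any decoder-only LM would show the same thing.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    inputs = tok("The cat sat on the", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    # Penultimate layer: the continuous, relatively low-loss
    # "embedding of sorts" described above.
    h = out.hidden_states[-2]  # (batch, seq_len, hidden_size)

    # Shape-compatible with the input embeddings, so this call runs...
    with torch.no_grad():
        recursed = model(inputs_embeds=h)

    # ...but the network was trained to map *token embeddings* to hidden
    # states, not hidden states to hidden states, so the recursion has no
    # well-defined meaning and no natural stopping point.
    print(recursed.logits.shape)

The point of the sketch: type-checking is not the same as being well-defined. The hidden-state space and the input-embedding space merely share a dimension.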


