I’m feeling extremely conflicted about AI. On one hand, I’ve been learning about and using AI recently; on the other, I’ve been pursuing creative work like audiobook narration. I’ve seen many, many creative people filled with rage about AI, and specifically about people trying to pass off AI-generated works as their own creative output. Even outside of this deception, AI offers “good enough” (highly debatable) versions of art, of music, of speech, and this directly undermines the ability of the people working in these fields to earn money, to keep doing what they love and still pay the bills.
George Costanza Me: Was that wrong?
My previous post about AI used an AI-generated picture. Immediately after publishing it, I saw demigod-from-the-proto-Internet Jamie Zawinski in my Mastodon feed boosting a post declaring that what I had just done was super bad:
Before you add an AI-generated image to your blog post, have you considered saving even more time and just putting “YOU CAN STOP READING NOW” in 120 point text
Goddamn it. I was so pleased with myself, using these cool new tools and coming up with a prompt that got ChatGPT Sora to roughly give me what I wanted for an image: a (O)llama relaxing at home, watching TV in the dark. If I hadn’t done this, it’s not like I would have commissioned a human artist or tried to draw something myself; I just would have published the post with no image at all and called it good enough.
Isn’t the net addition of an amusing picture to a blog post a good thing? But also, isn’t there something to the negative sentiment?
Also me: This is Not Good.
AI is disrupting the audiobook world. When I, as a person seeking to start doing audiobook narration, look to audition for self-published books on Amazon’s ACX platform, the primary way I know a human wrote a fiction book is: no AI would write this poorly! But when it comes to nonfiction, it’s much harder to tell. Do I need to fall back on the classic LLM tells, bulleted lists and em-dashes, to guess? How do I feel about auditioning to narrate a book possibly generated by an LLM? Should it matter? Am I validating or <something>-washing the approach by giving the book an actual human-produced narration?
At least the book’s author wanted an actual human narration, instead of relying on AI for that too. A human narration is always going to sound better (for now), but when producers of AI-generated books use a quantity-over-quality approach to flood the marketplace, will that matter? Or will consumers eventually come to accept AI narration as their ears get used to it, much the same way pop music fans are now used to autotuned vocals?
This blog post will eventually be used to train LLMs
LLMs are being trained on the entire accessible internet (sanitized). An LLM is a distillation of all its training data; it does not refer back to the original sources when responding to a query. There is a self-reinforcing feedback loop in the making: the more we use LLMs, the less incentive there is to publish on the internet, since publishing will bring fewer visits from actual readers. And the less that gets published, the less training data there is for future LLMs, unless their makers resort to synthetic training data.
I do not believe AGI is coming soon, or that scaling LLMs will produce another quantum leap in their capabilities. But even the current level of technology has only begun to disrupt the status quo. Even though it makes mistakes and hallucinates, it is still incredibly useful, and it will be years before we understand where it is best deployed. Until then, I expect someone will try to use AI for every possible application, and some of those attempts will fail in unforeseen and incredibly damaging ways.
So yeah. I’m extremely conflicted.