AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything and treating it like a cure for cancer, and that really rubs people the wrong way. Then, on more of a societal level, you’ve got everyone from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist even though that’s not advised.
However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may lean towards getting into. It’s helping me understand them better than most people’s explanations have, simply because it gives me a streamlined answer instead of beating around the bush.
I regularly use Copilot to search Microsoft documentation for me. E.g. I needed to find a particular interface in Entra and couldn’t remember where it was, so I asked Copilot and it got me to the right spot. I’ve thought about asking it about Microsoft licensing, but I figure that might result in Copilot becoming self-aware enough to kill itself.
I also use a number of AI agents built into the cybersecurity tools I use on a daily basis. Generally stuff along the lines of “find all the cases related to this system/IP/user/etc” type queries. It’s also good for questions like “how do I tune this alert” so I don’t have to remember whatever bullshit process this vendor put together for tuning false positives. Our primary SIEM/SOAR tool has an AI which does initial triage and investigation work, and it’s not terrible. It struggles with correlations for more complex events, usually highlighting events which have no bearing on the event in question. But it often provides a good first pass and a description our first-line analysts can use to start a real investigation.
AI is a tool. And like a lot of tools, it has its benefits and limitations. The problem is we’re still figuring all of those out, and the people marketing these tools don’t want to admit the limitations: they over-sell the benefits, then blame the user when those benefits don’t materialize. Given how much modern economies are based on information and knowledge, I do expect AI to have some lasting impact, but I also expect that we’ll adapt and it will just be another way of getting things done in a generation or two.
LLMs tend to be a “jack of all trades, master of none”. You are likely to find them useful for helping you with something you are inexperienced at, but not with something you are an expert in. Because they lie a lot, it’s best to double-check your information, but an LLM can still be helpful with the “you don’t know what you don’t know” problem.
Converting PDFs into HTML or RTF/TXT docs without OCR typos. Until recently, it was almost impossible to turn a scanned book from PDF into DOC or TXT, because the output of copying and pasting, or of converting with PDF tools, was illegible. AI can now do a “deep AI seek” (look it up) into the text.
I am converting a textbook into an audiobook in HTML (paragraph highlighting with manual sync) with an integrated popup glossary into every word (with grammar and meaning) and dictionary lookup if clicked.
Besides that, as an appendix to each chapter, I add all the explanations from the book.
I took the ~4,500 words of the book and asked for a grammar analysis and meaning lookup to create a glossary. The AI joyfully skipped many terms, but that is something I will fix when each chapter is finished. Now I am being punished with waiting despite having paid $20.
Sorting millions of things by some visible (to cameras) feature completely automatically, especially if the cost of a miss is low.
Learning, exploring concepts and ideas.
Curating massive music libraries. I’ve been using a small embedding model to organise my music for DJing, and being able to generate a t-sne plot clustered on perceptual similarity has been wonderfully useful.
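The clustering idea can be sketched with plain numpy. This is a toy: the embeddings here are made up, where a real pipeline would get them from an audio embedding model, and t-SNE is swapped for a simple PCA projection via SVD just to show the shape of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "track embeddings": two perceptual clusters (say, techno vs ambient).
techno = rng.normal(loc=+1.0, scale=0.1, size=(20, 64))
ambient = rng.normal(loc=-1.0, scale=0.1, size=(20, 64))
emb = np.vstack([techno, ambient])

# Project to 2D (PCA via SVD, standing in for t-SNE) for plotting/clustering.
centered = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # (40, 2) points you could scatter-plot

# Perceptually similar tracks should land close together in the projection.
d_same = np.linalg.norm(coords[0] - coords[1])    # techno vs techno
d_diff = np.linalg.norm(coords[0] - coords[25])   # techno vs ambient
print(d_same < d_diff)
```

t-SNE (e.g. `sklearn.manifold.TSNE`) would preserve local neighbourhoods better than PCA for a real library, but the workflow is the same: embed every track, project to 2D, plot.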
I’ve also found CLIP models useful for searching videos, just embed a screenshot every couple of min of footage and query with a description of the scene.
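The retrieval step reduces to a nearest-neighbour search over embeddings. A minimal sketch, with the CLIP encoders stubbed out by hand-written vectors (a real version would embed each periodic screenshot with a CLIP image encoder and the query with the matching text encoder; the timestamps and vectors below are invented):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for CLIP image embeddings of frames sampled every couple of minutes.
frames = {
    "00:02:00": np.array([0.9, 0.1, 0.0]),  # e.g. a beach scene
    "00:04:00": np.array([0.1, 0.9, 0.0]),  # e.g. a city street
    "00:06:00": np.array([0.0, 0.1, 0.9]),  # e.g. a forest
}

# Stand-in for the CLIP text embedding of the query "a city street".
query = np.array([0.05, 0.95, 0.05])

# Best-matching frame = highest cosine similarity to the query embedding.
best = max(frames, key=lambda t: cosine(frames[t], query))
print(best)  # → 00:04:00
```

Because CLIP puts images and text in the same embedding space, the same loop works unchanged once the stub vectors are replaced with real encoder outputs.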
And as bad as generated subtitles can be, when the only other option is nothing at all they are pretty nice to have.
Translation is pretty good.
They want to make AI NPCs in games, which could be awesome if we can ever reduce the system requirements for running them.
There’s that one silly vampire game which uses AI NPCs; it looks kind of fun from what I’ve seen of people playing it.
For every small benefit, there are disastrous mistakes. We shouldn’t discuss one without the other:
https://tech.co/news/list-ai-failures-mistakes-errors
March 2026
- Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited
February 2026
- Health advice given by AI chatbots is frequently wrong, says new study
January 2026
- Study reveals that fixing AI mistakes takes up to 40% of the time that it saves
- An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices
December 2025
- AI mistakes clarinet for gun at Florida school
November 2025
- Google Antigravity deletes entire content of user’s computer drive
- Report finds AI hallucinations in 490 court filings from the past six months
October 2025
- Teenager handcuffed after AI mistakes Doritos packet for gun
- Lawyer submits AI-assisted court filing with fake citations
- Man follows ChatGPT advice on cutting salt from his diet, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis. He tried to escape the hospital within 24 hours of being admitted
- ChatGPT-5 jailbroken within 24 hours of release
July 2025
- AI coding app deletes entire company database
- McDonald’s AI chatbot error exposes data of 64 million job applicants
- AI program is tasked with running a small shop, goes insane, claims to be human
- Apple Intelligence falsely presents BBC headline
… and it just keeps going.
So don’t put AI in front of anything mission-critical, or use it without review by a human.
So LLMs in agentic mode are a disaster waiting to happen.
God yes.
I’ve used it to summarize long documents.
I have a script that uses yt-dlp to get subtitles off a YouTube video and summarises the main points for me with a language model so that I don’t have to watch a 20 minute top10 list video that could’ve been a buzzfeed article.
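The transcript step of such a script can be sketched as below. The fetch itself would use yt-dlp (`yt-dlp --write-auto-subs --skip-download <url>` produces a WebVTT caption file), and the summarization call to a language model is left out; what follows is just the cleanup that turns auto-generated VTT captions into plain text worth summarizing. The sample VTT content is invented.

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip WEBVTT headers, cue timestamps, and inline tags, keeping caption text."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # Skip the header, blank lines, cue timing lines, and bare cue numbers.
        if not line or line == "WEBVTT" or "-->" in line or line.isdigit():
            continue
        line = re.sub(r"<[^>]+>", "", line)  # drop inline <c>/<i>-style tags
        if not lines or lines[-1] != line:   # auto-subs repeat lines a lot
            lines.append(line)
    return " ".join(lines)

sample = """WEBVTT

00:00:00.000 --> 00:00:03.000
number ten: <c>a thing</c>

00:00:03.000 --> 00:00:06.000
number ten: a thing

00:00:06.000 --> 00:00:09.000
number nine: another thing
"""

print(vtt_to_text(sample))  # → number ten: a thing number nine: another thing
```

The deduplication matters: YouTube’s auto captions re-emit each line across overlapping cues, which bloats the text you send to the model.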
The whole thing is fully vibe engineered too.
Chatbots? Basically nothing. Any interaction I have with one leads to spending more time verifying its output, inevitably finding many mistakes, and eventually finding a primary source for what I’m actually looking for. The best actual impact it has is forcing me to narrow down my nebulous question into what I actually specifically want, but the bot itself is contributing very little to that.
Neural nets in general have some real, if limited, usefulness in analyzing large batches of data when purpose-built analysis software doesn’t exist.
“AI” is a misnomer, and there is absolutely zero evidence to suggest that we’re even on a path toward actual AI, sometimes called AGI, though they’re also changing that to just mean a profitable LLM, which is fucking hilarious.
Any task you use a bot to do, you will become worse at that task. For mass data analysis, that’s fine, poring over reams of data is already a skill that other technology has largely obsoleted. But using it to do research, to read or write for you, or god forbid to make actual decisions and think for you, are very slippery slopes that are already causing a lot of the general public to seriously erode their basic mental capabilities.
Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.
However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may lean towards getting into. It’s helping me understand them better than most people’s explanations have, simply because it gives me a streamlined answer instead of beating around the bush.
Watch out, personal finance is not one of those things.
- Searching a large dataset with vague search criteria.
- Real-time feedback when studying a foreign language (since accuracy is less important than quantity).
- Apparently in medicine they’re using generative AI for something meaningful, but I’m not entirely convinced it is actually generative AI and I’d need to do more research.
- Sometimes it can help in learning to program and in sanity-checking code security.
If you’re thinking of protein design, it is, just with a sequence instead of natural-language text. Although it’s not just a straight LLM; there’s some kind of physics awareness engineered in as well.
Vibe coding slop you don’t need to work in production
I went to my local neighborhood association because I wanted to improve where I live. I was elected president of the association a couple months later, mostly because no one else wanted to do it. It’s a fairly poor part of a medium sized city in the U.S.
I’ve been using AI (running locally on a computer I built that isn’t connected to the internet, to reduce harm to the environment) to apply for grants, plan events and help me run the meetings.
It is actually perfect for the job. I say that as someone who thinks AI is mostly hype and useless for the majority of its current common uses. I feed it the text from city grant applications or ask it to make a poster to increase attendance, and it’s saved me a lot of time. Without it, as someone diagnosed with ADHD, I would not have been able to do most of the stuff I have accomplished so far.