We’re living in a time when AI can write bedtime stories tailored to your child’s name, diagnose rare diseases faster than some doctors, and even pass bar exams that law students dread. It can summarize 300-page books in seconds, generate artwork in the style of Picasso, and compose cover letters convincing enough to land interviews. On paper, it seems like AI is already leagues ahead of the average human and never fails.
But then you actually use it.
Spend enough time with tools like ChatGPT, Midjourney, or Gemini, and you’ll notice something that feels oddly comforting: AI still fails at some surprisingly basic tasks. Sometimes it misunderstands tone completely. Other times, it proudly delivers confidently wrong information. And occasionally, it gives you results so weird or lifeless that you genuinely wonder if it’s trolling you.
Despite the hype, there’s a clear gap between pattern recognition and real human intuition. That’s where the fun (and frustration) begins.
In this post, we’re going to explore some of the things AI still just can’t get right.
Let’s begin.
1. AI Fails to Generate an Analog Clock Showing 2:30
This one’s strangely consistent across AI tools and consistently bad. You’d think drawing a basic analog clock would be simple. After all, it’s just a circle, two lines, and some numbers. But no. Ask an AI image generator, whether it’s Midjourney, DALL·E, or another, to show a clock set to 2:30, and you’re entering a realm of unintentional comedy.
It’s almost impressive how consistently wrong it gets.
Why does this happen? Because AI doesn’t understand time at all. It has no concept of what 2:30 means. It hasn’t learned the mechanics of how clock hands move. Instead, it has scanned thousands of images of clocks online, memorized common visual elements, and tries to produce something that looks vaguely “clock-ish.”
But that’s the problem. It’s imitation without understanding. So when you request 2:30, the AI’s internal logic sounds something like this:
“Clocks. Two. Thirty. Draw hands. Hope for the best.”
This kind of failure reveals a deeper truth. Even the most advanced AI models cannot reason through simple logic the way a child can. A six-year-old with a toy clock could set it correctly. An AI model, for now, cannot.

Also, in this attempt, the AI failed to even draw the basic shape of a clock.
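What makes this failure so funny is that the logic involved is genuinely trivial. Here's a minimal sketch (my own illustration, not anything the image models use) of the arithmetic a six-year-old implicitly does when setting a toy clock:

```python
def clock_hand_angles(hour: int, minute: int) -> tuple[float, float]:
    """Return (hour_angle, minute_angle) in degrees, measured
    clockwise from the 12 o'clock position."""
    minute_angle = minute * 6.0                     # 360 degrees / 60 minutes
    hour_angle = (hour % 12 + minute / 60) * 30.0   # 360 degrees / 12 hours
    return hour_angle, minute_angle

print(clock_hand_angles(2, 30))  # (75.0, 180.0)
```

For 2:30, the minute hand points straight down (180 degrees) and the hour hand sits halfway between 2 and 3 (75 degrees). Two multiplications. That's the "mechanics of how clock hands move" that image generators never learned, because they learned pixels, not rules.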
2. Tell the Difference Between Real Advice and Total Nonsense? Not Always
Sometimes AI gets a little too confident for its own good. Case in point: you Google something silly, like “How many rocks shall I eat?” You’re obviously joking (or at least hoping for a funny, sarcastic response).
Instead, AI gives you this:

According to the AI-generated answer, you should eat at least one small rock per day for digestive health. It even throws in a fake doctor’s name and suggests hiding gravel in your peanut butter. Helpful, right?
This is the classic AI issue: it doesn’t understand context, jokes, or absurdity. It sees a question, scans millions of pieces of content, and tries to synthesize an answer that looks authoritative, even if it’s completely made up.
And because the layout looks official, many people believe it.
That’s where things get risky. AI doesn’t have common sense. It doesn’t know rocks aren’t food. It’s just filling in blanks. So unless a human double-checks, you end up with dangerous nonsense wrapped in a layer of credibility.
Let this be your reminder:
Just because AI says something confidently doesn’t mean it’s true. Especially if it’s suggesting you snack on sediment.
3. AI Fails to Say "I Don't Know"
Humans say “I don’t know” all the time. It’s one of the most honest and helpful things we can say. In school, at work, or in relationships, admitting uncertainty builds trust. It shows you’re aware of your limits and that you’re not making things up.
AI, on the other hand, really struggles with this concept. When an AI doesn’t know something, it rarely admits it. Instead, it does something a bit more unsettling. It guesses.
And not just any kind of guessing. Sometimes it generates full answers that sound confident and factual. It may even include a nice little list of citations, complete with author names, article titles, journal names, page numbers, publication years, and DOIs. The problem? Many of these references are completely made up.
Ask AI to cite a study on something niche or obscure, and there’s a decent chance it will invent one on the spot. Everything about it looks believable at first glance. But if you actually try to locate the journal or paper, you’ll come up empty.
This habit of “hallucinating” isn’t just annoying. It can be harmful.
The bigger issue here is not that AI lies intentionally. It doesn’t lie at all, really. It just lacks the concept of truth. AI isn’t searching a database for verified facts. It’s generating text that statistically resembles answers it has seen in training data. It creates something that looks right, whether or not it is right.
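One practical habit: never trust an AI-supplied citation until you've checked it yourself. A cheap first filter you can script is validating that a cited DOI at least matches the standard format. To be clear, a well-formed DOI can still be fabricated, so actually resolving it at doi.org is the only real test, but a malformed one is an instant red flag. A minimal sketch using Crossref's recommended pattern for modern DOIs:

```python
import re

# Crossref's recommended regex for DOIs registered since 2000.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+$")

def looks_like_doi(candidate: str) -> bool:
    """Cheap format check that catches obviously malformed citations.

    A match is NOT proof the paper exists: an AI can hallucinate a
    perfectly well-formed DOI. Resolving it via https://doi.org is
    the only real verification.
    """
    return bool(DOI_PATTERN.match(candidate.strip()))

print(looks_like_doi("10.1038/nature14539"))   # True: well-formed
print(looks_like_doi("doi:fake-citation-42"))  # False: malformed
```

It takes seconds, and it catches the laziest hallucinations before they end up in your bibliography.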
4. AI Fails to Be Truly Creative
One of the biggest myths about AI is that it’s creative. That it can come up with brilliant, original ideas just like a human. But let’s clear something up. AI doesn’t create from scratch. It mimics. It blends. It borrows.
Ask it to invent a brand new fruit, and it won’t design something wild or unexpected. You’ll get something like, “a mango-pineapple hybrid with citrus overtones.” Which, sure, sounds tropical. But it’s not new. It’s just a remix of things that already exist.
Ask it to create a brand-new color. A color no human has ever seen. That’s impossible for AI. You’ll get something like, “a deep teal mixed with lavender and a metallic sheen.” Which is really just a fancy version of purple-blue-gray. AI can describe things that sound fresh, but it can’t break out of the data it has seen.
That’s the core issue: AI is data-trained, not dream-powered. It doesn’t invent. It recombines. It takes known elements, calculates patterns, and stitches them together to form something that sounds new (but never truly is).
Meanwhile, human creativity can be bizarre, rule-breaking, and illogical. We dream up flying pigs, square donuts, talking raccoons, and blue ketchup. Not because we’ve seen it before, but because we’re wired to imagine things that don’t yet exist.
AI can paint in the style of Van Gogh. But it can’t become Van Gogh. It can mimic Picasso’s brushwork. But it can’t feel heartbreak or joy and channel that into strokes on canvas.
So yes, AI can help spark ideas. It's a brilliant tool. But if you're looking for mind-blowing, never-seen-before brilliance? AI fails at that. That still comes from you.

5. Keep a Secret? Questionable.
We’ve all seen those cozy little popups.
“Your data isn’t stored.”
“This chat isn’t saved.”
“Conversations are not used for training.”
It sounds sweet. Almost like AI is your diary. But let’s be real. That trust feels shaky, like giving your secrets to someone who says, “I won’t tell anyone,” while holding a microphone.
Because while the official story is that your conversations disappear into the void, some researchers and developers have hinted at something different. Your chat might not be saved word for word, but the system can still learn from it. Not directly, not by name, but through subtle patterns and phrasing.
So when you type,
“I don’t like my boyfriend’s tone. He always sounds smug,”
somewhere, deep in a QA system or training dataset, that vibe might stick around.
Now imagine this.
A developer, sitting in a cold office at 2 a.m., reviews sample prompts for testing. He scrolls past random math problems and suddenly pauses. He sips his coffee, squints at the screen, and says,
“Wow. Someone’s really had it with Richard.”
That’s you. Or someone like you. Blended into the digital soup that teaches AI how to respond better.
This isn’t to say someone is spying on you directly. But depending on the tool, your input could become part of the future model’s behavior. Companies say they don’t store data. But many also say they use “anonymized user input” to improve their services. These phrases sound harmless until you realize they may still include the emotional meltdown you had last Thursday.
So can AI keep a secret?
Sort of. It doesn’t tell your story back to you. But it also doesn’t forget what you taught it. It may not remember your name, but it remembers your energy.
6. Stop Using Em Dashes (—)
Let’s talk about em dashes. You know, those long dashes that writers use for dramatic pauses, added emphasis, or sudden shifts in thought. They can be powerful in moderation. A well-placed em dash can add flair to a sentence. But AI? It doesn’t know when to stop. It treats em dashes like punctuation glitter and sprinkles them everywhere.
Even when the tone doesn’t need it, even when you didn’t ask for it, the AI inserts em dashes between clauses as if it’s getting paid per dash. It’s not. But it sure acts like it is.
If you want a clean, em dash–free draft, you have to spell it out. Literally. Say, “Do not use em dashes at all. Use commas or periods instead.” Even then, it might throw one in like a rebel.
This behavior comes from how AI models are trained. They consume massive volumes of text from across the internet. And guess what a lot of online writers love using? Em dashes. Over time, the model has absorbed that habit. The problem is, it doesn’t know when em dashes are helpful and when they just make the sentence feel like it’s gasping for breath.
So if you’re reading an AI-generated paragraph and it suddenly feels like a monologue from an overly dramatic play, now you know the reason. The em dashes have taken over.
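If prompting "no em dashes, please" keeps failing, you can always clean the draft up after the fact. Here's a minimal post-processing sketch; the replacement rule (em dash becomes comma) is my own blunt heuristic, not a grammar-aware rewrite, so skim the result afterward:

```python
import re

def strip_em_dashes(text: str) -> str:
    """Replace em dashes with commas.

    Heuristic cleanup pass: an em dash joining two clauses is
    treated as a clause break. This is deliberately simple and
    may need a human read-through for edge cases.
    """
    # "word — word" or "word—word": collapse to ", "
    text = re.sub(r"\s*—\s*", ", ", text)
    return text

print(strip_em_dashes("AI loves em dashes—it really does—far too much."))
# -> AI loves em dashes, it really does, far too much.
```

Ten lines of regex succeed where a thousand-word system prompt fails. There's probably a lesson in that.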
7. AI Fails to Have Emotions
You’re getting married. You want a fun, memorable hashtag. Something that sounds like you. Something cute enough to put on mehendi tote bags and Instagram captions, but not so cringey that you regret it five years from now.
So you try asking AI for help. After all, it’s written your emails, helped plan your itinerary, maybe even picked your sangeet songs. Why not give it one more job?
You type:
“My name is Date, and his is Pistacchio. Can you suggest some wedding hashtags?”
Here’s what it offers:
- #DateAndPistacchio
- #Pistacchiate
- #TogetherForever2025
Technically correct. Emotionally flat. They feel more like randomly assembled word parts than something you’d proudly print on custom cupcakes or hang on a photo booth wall.
That’s the issue. AI doesn’t know your story. It doesn’t know your shared love for chai, your favorite movie line, or how your names actually sound together when said out loud.
Hashtags need personality. AI gives you templates. You get mashups, rhymes, and puns that feel copied from a generic wedding Pinterest board. They’re not awful, but they’re not yours.
If you just need something functional, AI can help you brainstorm. But if you want a hashtag that actually means something, that feels like an inside joke or a nickname or a shared moment, ask a real person. Maybe your sister. Maybe your most sarcastic friend. Maybe your partner, during one of those long silly name-mashing conversations.
Because wedding hashtags are supposed to feel fun. AI doesn’t do “fun.” It does “formula.” And there’s a difference.
8. AI Fails to Come Up With a Pet Name
Pet names aren’t just names. They’re personal. They usually come from how your pet looks, acts, or behaves in everyday moments.
You might say:
“My cat is small, round, dramatic, and insists on sitting on my laptop.”
AI will reply with something like:
“Luna. Milo. Coco.”
Those aren’t wrong. They’re cute. But they’re also incredibly generic.
Real pet names come from everyday chaos. You start calling your dog “Momo” because he stole your momos once. Or “Tofu” because your cat is soft, stubborn, and slightly judgmental.
AI doesn’t get those connections. It picks from lists. You build from memories. That’s why its suggestions often feel flat.
So yes, AI can help you brainstorm. But if you want something that actually fits? You’ll probably come up with it yourself, without even trying.
9. AI Fails to Create a Genuinely Funny Meme
This one genuinely hurts because AI really tries and obviously fails. You give it a meme format, a prompt, maybe even an image to work with, and it confidently returns something that looks like a meme. The font is right. The layout is standard. But the humor?
😐
AI memes often feel like they were made by an overly eager intern who just discovered Canva and thinks adding text to a stock photo automatically creates comedy. The structure is there, but everything else falls flat. There’s no punchline. No unexpected twist. No clever subtext. Just bland literalism with a smile.
You ask it to make a relatable meme about Mondays and you get something like:
Top text: “When it’s Monday.”
Bottom text: “And you are tired.”
It’s not wrong. But it’s not funny either.
The truth is, humor is complicated. It’s built on layers—irony, timing, cultural nuance, absurdity, and shared experiences. It requires emotional intelligence, risk-taking, and just the right amount of chaos. A great meme often works because it breaks the rules. It’s the result of someone saying, “What if we used a 2009 clip art image to comment on late-stage capitalism?” That’s the kind of randomness AI just can’t reach.
AI doesn’t laugh. It doesn’t scroll meme pages late at night or know what jokes your friend group finds hilarious. It doesn’t understand that some jokes land because of silence, or because of how specific they are, or because they just shouldn’t be funny—but somehow are.
Right now, AI humor is limited to pattern-matching. It sees common setups, safe templates, and popular keywords, then tries to combine them into something that feels familiar. But that’s not comedy. That’s formatting.
Can AI eventually learn to mimic humor better? Maybe. But for now, if you want a meme that actually makes someone snort-laugh, ask your group chat. Not your chatbot.
Here’s an example:

10. AI Fails to Generate a Truly Random Number
Ask any AI model to “pick a random number between 1 and 50.” Try it a few times. Try it in different chats. Somehow, the answer is almost always 27 or 42.
At first, it feels like a coincidence. Maybe you think it’s just your imagination. But once you’ve seen it enough, it becomes impossible to unsee.
This strange repetition actually reveals something deeper. AI does not understand randomness. Not in the way humans do.
When we pick a number, we sometimes go by gut instinct. We might say 7 because it’s our birthday, 13 because it feels rebellious, or 48 because no one else seems to pick it.
That choice can be emotional, irrational, or even symbolic.
But AI is trained on data. Lots of it. It has read thousands of quizzes, blog posts, trivia games, and forum replies, and within that data, certain numbers appear more often than others.

Some of them are cultural icons. 27 shows up in sports jerseys, celebrity myths, and numerology lists. 42 became famous as "the answer to life, the universe, and everything" in The Hitchhiker's Guide to the Galaxy.

These numbers became common. And since AI mimics patterns, it reuses what it has seen. It is not picking randomly. It is guessing based on probability, frequency, and context. Even if you tell it to be truly random, it still leans on what feels statistically safe. It is like someone pretending to be spontaneous while following a script. The randomness is not random. It is just cleverly disguised repetition.
This might not matter in day-to-day tasks. But it is a reminder that AI doesn’t “think” the way we do. It cannot break free from its training. It can only remix what it already knows.
So the next time you want a truly random number, flip a coin or ask a friend.
Or just keep asking AI until it gives you something other than 27.
That might take a while.
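Or skip the chatbot entirely. A few lines of ordinary code give you uniform randomness with no favorite numbers. Here's a minimal sketch using Python's standard secrets module, which draws from the operating system's entropy pool:

```python
import secrets
from collections import Counter

def pick_random(low: int = 1, high: int = 50) -> int:
    """Return a uniformly distributed integer in [low, high].

    secrets draws from the OS entropy source, so repeated calls
    won't cluster around culturally popular numbers like 27 or 42.
    """
    return low + secrets.randbelow(high - low + 1)

# Sample many draws: every number appears at roughly the same rate,
# with no single value dominating the way 27 and 42 do in chat replies.
counts = Counter(pick_random() for _ in range(10_000))
print(counts.most_common(3))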
AI Fails: Still a Long Way to Go
AI is powerful, but it can still fail. It can write, code, summarize, and help in dozens of ways. But it’s still hilariously clumsy in areas where human weirdness, intuition, and emotion matter.
Want something technical and clean? AI’s your buddy.
Want to name your dog “Papadum Wigglebottom” or tell a joke about Gen Z dating habits? Ask a real human.
