Why I Almost Believed a Golden Retriever Could Fly a Plane

I was perched on my sagging wicker chair last Tuesday, cradling a glass of lukewarm Chardonnay that tasted faintly of copper and regret while I scrolled through the endless noise of social media, when I suddenly encountered a photograph of a golden retriever helming a small Cessna over the Swiss Alps. (The wine was a gift from my neighbor Bob, who buys bottles based on how much the label looks like a medieval coat of arms, which is a terrible way to shop for wine.) The canine in the cockpit appeared to be navigating the mountain range with a level of stoic professionalism that I have never managed to achieve in a rental car. (That dog looked remarkably serene for a pilot who lacks the opposable thumbs required to adjust the fuel mixture or even open a bag of beef jerky.) I sat there for a full minute with my thumb hovering over the share button. I wanted it to be true. I wanted to live in a world where a very good boy could obtain a pilot's license. (Who among us does not love a flying dog, honestly?)

But then I squinted at the screen and the magic evaporated. I noticed the dog possessed seven distinct toes on its left paw, arranged in a fleshy fan that defied every law of biology. (The mountain peaks in the background appeared to have been designed by a frantic architect who was having a truly catastrophic afternoon.) It was a total fabrication, obviously. This is the strange, hallucinatory world we live in now, where media literacy in the AI era is no longer a niche skill for people who wear corduroy blazers and teach semiotics. It is a basic survival mechanism for the rest of us. We are drowning in a churning ocean of synthetic garbage that looks just real enough to fool us if we are tired, distracted, or just desperately want to believe that dogs can fly airplanes. (I am frequently all three of those things at once, which makes me a prime target for digital nonsense.)

The Vanishing Barrier to Entry

The problem is not just that fake stuff exists. Fake stuff has existed since the first human told a lie about the size of a fish they caught on a Sunday morning. (I once told a woman I was a semi-professional cellist to secure a second date, which was a disastrous mistake because she actually played for the local symphony and asked me to discuss vibrato.) The problem is the sheer, overwhelming volume of the nonsense. A 2024 report from the Cybersecurity and Infrastructure Security Agency, a group people call CISA because life is too short for seventeen-syllable names, pointed out that the barrier to entry for creating highly convincing synthetic media has basically vanished. You no longer need a Hollywood budget. You just need a laptop and a complete lack of morals.

I have a laptop. I like to think I still have a few morals left under the couch cushions, but it is clear that many people do not. This accessibility means the internet is being flooded with what researchers call "slop." It is low-effort, high-volume content designed to grab your attention and hold it just long enough for a digital advertisement to load in the background. (I once spent four minutes watching a video of what I thought was a talking owl, only to realize it was a cleverly disguised advertisement for life insurance.) This slop is everywhere. It is in your feed, it is in your emails, and it is increasingly in the search results you rely on to make actual life decisions. (My friend Dave recently tried to follow a recipe for mushroom soup that turned out to be AI-generated and suggested using a tablespoon of dish soap for "extra froth.")

The Great Blender Fiasco of 2023

I am not immune to this digital rot. I once fell for a glowing review of a high-speed blender that was, in hindsight, a complete work of fiction. The review was written with such perfect, robotic enthusiasm that I bought the machine immediately. It promised to pulverize kale into a silky liquid. (I do not even like kale, but the review made the act of drinking it sound like a spiritual awakening.) When the device finally arrived, it smelled like burning hair and could barely handle a ripe banana without screaming in mechanical agony. I went back to look at the review that had seduced me. It was clearly generated by a machine. Every sentence was the same length. Every word was too perfect. It lacked the messy, angry energy of a real human being. (Real humans use too many exclamation points when they are happy and too many creative swear words when a kitchen appliance fails them.)

A 2024 study in the Journal of Medicine and Technology found that people struggle to distinguish AI-generated text from human writing about 50 percent of the time. That is a coin flip. You are betting your perception of reality on a toss of a nickel. (I have lost a significant amount of money on coin flips, mostly to my dentist, who is surprisingly good at gambling and frankly terrifies me.) We have to start looking for the seams in the fabric of the digital world. We have to look for the seven-toed dogs and the melting backgrounds. (It is exhausting work, but the alternative is buying a lot of broken blenders.)
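That blender review's tell, the eerily uniform sentence rhythm, can be sketched as a toy heuristic. This is an illustration only, not a real detector: the function name and the sample reviews are my own inventions, and low variance in sentence length is at best a weak hint, which is exactly why people fare no better than a coin flip.

```python
import statistics

def sentence_length_spread(text: str) -> float:
    """Toy heuristic: standard deviation of sentence lengths in words.

    A suspiciously low spread is one weak hint of machine-written text
    (every sentence the same length); it is nowhere near proof.
    """
    # Crude sentence split on terminal punctuation; fine for a sketch.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

robotic = ("This blender is excellent. It blends kale very smoothly. "
           "It is a good kitchen product. I recommend it to everyone.")
human = ("Bought this thing last week. Total disaster! It screamed, smoked, "
         "and died on one single overripe banana, which is frankly impressive.")

# The robotic review's lengths barely vary; the human one's swing wildly.
print(sentence_length_spread(robotic) < sentence_length_spread(human))
```

Real humans, as noted, ramble, exclaim, and swear at their appliances; a machine aiming for "helpful" tends toward a flat, metronomic cadence. A single number will never settle the question, but it shows the kind of seam you can learn to squint for.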

The Psychological Toll of the Deepfake

It is not just about being tricked into a bad purchase anymore. It is about the fact that we are starting to doubt everything, even the stuff that is actually true. When everything can be faked, nothing feels real. I find myself looking at photos of my own nephews and wondering if their ears look a little too symmetrical. (That is a dark place to be, questioning the authenticity of a toddler eating a cupcake.) This is the "liar's dividend." It is a term used by researchers to describe how real liars can hide behind the existence of deepfakes. If a politician or a CEO gets caught saying something terrible on camera, they can now just shrug and say the video was generated by a robot. (It is a very convenient way to avoid responsibility for being a jerk.)

I recently spoke with my sister-in-law, Sarah, who was convinced she saw a video of a major tech company CEO giving away "free" popular mobile devices to anyone who clicked a link. I had to explain to her that it was a deepfake. (Sarah still thinks I am being a cynical grump, but at least she did not give her credit card number to a digital ghost.) We are losing our shared baseline of facts. If we cannot agree on what a flying dog looks like, how are we supposed to agree on anything else? (I am not being dramatic; I am being observant, and it makes my head ache.)

How To Identify Misinformation Patterns Without Losing Your Mind

Identifying these patterns is a bit like being a detective in a movie where everyone is a suspect, except you are wearing pajamas and eating cereal. The first thing you must look for is the Uncanny Valley. This is that creepy feeling you get when something looks almost human but not quite right. I have seen ears that merge into necks like a melted candle and teeth that look like a solid white picket fence. (If a person in a photo looks like they were sculpted out of chilled butter, they probably were.) AI has a very hard time with the messy details of physics.

Also, look for the improbable. Misinformation often relies on a piece of information that is technically possible but highly improbable. If you see a video of a famous actor endorsing a brand of discount tires that they would never be caught dead using, your internal alarm should go off. (Rich people do not buy sixty-dollar tires, no matter how much they claim to love a bargain or the environment.) These creators are bypassing your logic and going straight for your amygdala. (My amygdala is already overworked enough from worrying about my rising property taxes and the weird noise my water heater is making.)

I have learned to wait five minutes before I share anything that makes me want to scream at my monitor. If you see a photo that looks too good to be true, upload it to a search engine and see where else it has appeared. Usually, you will find it was a stock photo from 2012 that has been edited to look like a current event. (I once did this with a photo of a "giant cat" in a grocery store, and it turned out to be a very small cat and a very clever use of forced perspective.) Also, pay attention to the metadata if you can. The Cybersecurity and Infrastructure Security Agency recommends looking for inconsistent lighting or shadows that do not match the main light source in a photograph. These are the digital breadcrumbs that lead back to the truth.

The Bottom Line

The reality is that media literacy in the AI era is a moving target. The tools that creators use today will be obsolete by next Tuesday, replaced by something even more convincing and even harder to spot. (It is a bit like trying to fix a car while it is going eighty miles per hour down the highway, and the car is also on fire.) We cannot rely on technology to fix a problem that technology created. We have to rely on our own brains, our own skepticism, and our own willingness to say, "Wait a minute, that dog cannot fly a plane."

Key Takeaways

  • Look for physical errors in images like extra fingers or weird shadows that do not align.
  • Check if the source of the information is a legitimate, recognized organization before reacting.
  • Slow down before sharing anything that makes you feel a sudden surge of anger or joy.
  • Cross-reference "viral" news with established outlets to see if the story holds up.
  • Trust your gut; if a video or image feels "off," it is likely a synthetic creation.

We are in a war for our own attention. It is a messy, complicated fight. But we can win it if we just stop being so eager to believe every shiny thing that pops up on our screens. (And if we stop buying blenders based on reviews written by robots.) It is a long road ahead. I am going to go finish my bad wine now. (It has not improved with age, much like my knees or my ability to tolerate people who talk loudly on their phones in public.)

Frequently Asked Questions

How can I tell if a video is a deepfake?

You should look closely at the mouth movements and the blinking patterns of the person in the video. AI often struggles to synchronize the movement of the lips with the specific sounds being made. (It looks a bit like an old dubbed martial arts movie, but with higher stakes.) Additionally, look for unnatural skin textures or a lack of micro-expressions that a real human would have.

Why does AI have so much trouble with human hands?

AI models do not actually understand what a hand is or how it functions in three-dimensional space. They only know what hands look like in two-dimensional photos, and because hands are so complex and can be held in so many different positions, the AI gets confused about where one finger ends and another begins. (I am also confused by hands, especially when I am trying to put on surgical gloves at the doctor's office.)

What should I do if I accidentally share misinformation?

You should delete the post immediately and consider posting a correction to inform your followers that the information was incorrect. Being honest about a mistake helps stop the spread of the lie and encourages others to be more careful. (I have had to do this twice, and while it is embarrassing, it is much better than being a part of the problem.)

Is all AI-generated content bad for media literacy?

Not necessarily, as AI can be used for harmless entertainment or artistic expression when it is clearly labeled as such. The danger only arises when the technology is used to deceive people or to present a false reality as objective truth. (Context is everything, much like the difference between a costume party and an actual robbery.)

Can I use AI to detect other AI?

Several browser extensions and websites now offer detection services that analyze pixels for patterns typical of machine learning models. However, you should not rely on them completely, because they are constantly playing catch-up with the latest AI updates. (It is a digital arms race where the robots are currently winning, and we are the ones caught in the middle.)

References

  • Cybersecurity and Infrastructure Security Agency (CISA), 2024, Disinformation and Rumor Control.
  • Pew Research Center, 2023, Public Awareness of AI and Deepfakes.
  • National Science Foundation (NSF), 2022, Digital Literacy and Information Integrity.
  • Journal of Medicine and Technology, 2024, Detection of Synthetic Media in Clinical Settings.

Disclaimer: This article is for informational purposes only and does not constitute professional advice regarding cybersecurity, digital forensic analysis, or media literacy. Consult a qualified professional or official government resources like CISA for specific guidance on identifying sophisticated digital threats and maintaining online safety.