dgunay a day ago

I saw an AI generated video the other day of security camera footage of a group of people attempting to rob a store, then running away after the owner shoots at them with a gun. The graininess and low framerate of the video made it a lot harder to tell that it was AI generated than the usual shiny, high res, oddly smooth AI look. There were only very subtle tells - non-reaction of bystanders in the background, and a physics mistake that was easy to miss in the commotion.

We're very close to nearly every video on the internet being worthless as a form of proof. This bothers me a lot more than text generation because video is typically admissible as evidence in a court of law, and especially in the court of public opinion.

  • atleastoptimal an hour ago

    I saw that; it wasn't AI generated. There were red herrings in the compression artifacts. The real store owner spoke about the experience:

    https://x.com/Rimmy_Downunder/status/1947156872198595058

    (sorry about the x link couldn't find anything else)

    The problem of real footage being discredited as AI is as big as the problem of AI footage being passed off as real. But both are subsets of a larger problem: AI can simulate all the costly signals of value very cheaply, so everything that depends on the costliness of those channels starts breaking down. This is true for epistemics, but also for social bonds (chatbots), credentials, and experience and education (AI performing better on many knowledge tasks than experienced humans), among others.

bearjaws a day ago

Doing a project to migrate from one LMS to another, I put ChatGPT in the middle to fix various mistakes in the content, add alt text for images, transcribe audio, etc.

When importing the content back into Moodle, I came to find that one of the transcripts was 30k+ characters and errored out on import.

For whatever reason, it got stuck in a loop that started like this:

"And since the dawn of time, wow time, its so important, time is so important. What is time, time is so important, theres not enough time, time is so important time"... repeat "time is so important" until token limit.

This really gave me a bit of existential dread.

  • lynx97 a day ago

    Try reducing temperature. The default of 1.0 is sometimes too "creative". Setting it to 0.5 or somesuch should reduce events like the one you described.

    • bearjaws a day ago

      I was already running 0.1 or 0.2 because I didn't want it to deviate far from the source content.
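For readers wondering what the temperature knob in the exchange above actually does: it rescales the model's token logits before sampling. A minimal sketch with toy logits (not any real model's internals), showing why a low temperature keeps output close to the most likely tokens while a high one lets unlikely tokens, and runaway loops, through:

```python
import math

def token_probs(logits, temperature):
    # Temperature-scaled softmax: dividing logits by a small temperature
    # sharpens the distribution toward the top token; a large temperature
    # flattens it, giving low-probability tokens more of a chance.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]              # hypothetical scores for three tokens
creative = token_probs(logits, 1.0)   # the usual default
focused = token_probs(logits, 0.2)    # a low-deviation setting
```

At temperature 0.2, nearly all the probability mass lands on the top-scoring token; at 1.0 the other tokens keep a meaningful share.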

ginayuksel 11 hours ago

I once tried prompting an LLM to summarize a blog post I had written myself. Not only did it fail to recognize the main argument, it confidently hallucinated a completely unrelated conclusion. It was disturbing not because it was wrong, but because it sounded so right.

That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.

TXTOS 11 hours ago

Honestly, the most disturbing moment for me wasn’t an answer gone wrong — it was realizing why it went wrong.

Most generative AI hallucinations aren’t just data errors. They happen because the language model hits a semantic dead-end — a kind of “collapse” where it can't reconcile competing meanings and defaults to whatever sounds fluent.

We’re building WFGY, a reasoning system that catches these failure points before they explode. It tracks meaning across documents and across time, even when formatting, structure, or logic goes off the rails.

The scariest part? Language never promised to stay consistent. Most models assume it does. We don’t.

Backed by the creator of tesseract.js (36k). More info: https://github.com/onestardao/WFGY

orangepush 11 hours ago

I asked an AI to help me draft an onboarding email for a new feature. It wrote something so human-like, so emotionally aware, that I felt oddly… replaced.

It wasn’t just about writing; it felt like it understood the intention behind the message better than I did. That was the first time I questioned where we’re headed.

theothertimcook a day ago

How much I've come to trust the answers, responses, and information it feeds me for my increasingly frequent queries.

rotexo a day ago

I find myself occasionally wondering if 8.11 is in fact greater than 8.9
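One plausible reading of why models trip on that comparison: it parses differently as decimals versus version numbers, and both readings are valid somewhere. A two-line Python illustration:

```python
# As a decimal number, 8.11 is smaller than 8.9 ...
assert 8.11 < 8.9
# ... but as software version components, 8.11 comes after 8.9.
assert (8, 11) > (8, 9)
```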

diatone a day ago

Deep fakes have always been horrible. The idea that someone, anyone, can take your image and represent you in ways that can ruin your reputation is appalling. For example, revenge porn.

Having your likeness used to express an opinion that is the opposite of your own is nasty too. You can produce the kind of thing that has no courtesy, no grace, no kindness or care for the people around you.

The mass extraction and substitution of art has also caused a lot of unnecessary grief. Instead of AI enabling us to pursue creative work… it’s producing slop and making it harder for newbies to develop their craft. And making a lot of people anxious, fearful, and angry.

And finally of course astroturfing, phishing, that kind of thing has in principle become a lot more sophisticated.

It unnerves me that people can pull this capital lever against each other in ways that don’t obviously advance the common good.

alganet a day ago

Nothing is disturbing.