ants_everywhere 7 hours ago

> So let’s ask ourselves: would AI have told us this?

Why ask ourselves, when we can ask the AI? Here's the start of my conversation with Gemini:

> Me: What is known about fatty acid combustion in the brain?

> Gemini: The Brain's Surprising Use of Fat for Fuel: For a long time, the brain was thought to rely almost exclusively on glucose for its immense energy needs. However, a growing body of research reveals that fatty acid combustion, or beta-oxidation, plays a more significant and complex role in brain energy metabolism and overall neurological health than previously understood. While not the brain's primary fuel source under normal conditions, the breakdown of fatty acids is crucial for various functions, particularly within specialized brain cells and under specific physiological states....

It cites a variety of articles going back at least to the 1990s.

So

> would AI have told us this?

Yes, and it did.

  • zdragnar 5 hours ago

    If you simply ask Gemini what the brain uses for fuel, it gives an entirely different answer that leaves fatty acids out completely and reinforces the glucose story.

    LLMs tell you what you want to hear, sourced from a random sample of data, not what you need to hear based on professional or expert opinion.

    • ants_everywhere 5 hours ago

      When I ask the same question it says primarily glucose and also mentions ketone bodies. It mentions that the brain is flexible and while it normally metabolizes glucose it may sometimes need to metabolize other things. This is both at gemini.google.com and using google.com in "AI mode" in private browsing.

      gemini.google.com mentions lactate and fat. But it also knows I care about science. I'm not sure how much history is used currently.

      But this is kind of silly, because if you're a member of the public and ask a scientist what the brain uses as fuel, they'll also say glucose. If you've ever been in a conversation with someone who felt the need to tell you *every detail* of everything they know, then you'll understand that that's not how human communication typically works. So if you want something more specific, you have to start the conversation in a way that elicits it.

    • justlikereddit 2 hours ago

      If you ask a neuroscience teacher the same question you're also told it's all glucose and maybe occasionally ketone bodies.

  • skybrian 3 hours ago

    I tried it using Gemini 2.5 Pro and it cited this Hacker News thread for its first paragraph. I can't judge the other citations, other than to say they're not made up. (I see links to PubMed Central.)

  • 1970-01-01 5 hours ago

    What facts did it hallucinate and which are true?

mrbluecoat 2 hours ago

> the constant possibility that something that Everybody Knows will turn out to be wrong

Reminds me of astronomy and also quantum mechanics

zahlman 5 hours ago

I get that this is intended to be parsed "Discovering (what we think we know) is (wrong)", but it took me a while to discard the alternative "discovering (what we think (we know is wrong))".

Sniffnoy 7 hours ago

I think this could use a more informative title? The title this was posted with is actually less informative than the original title.

barisozmen 3 days ago

Answer to his thought experiment: Yes, I believe a sufficiently advanced AI could have told us that. Scientists who have been fed wrong information can come up with completely new ideas, making what we know less wrong.

That being said, I don't think current token-predictors can do that.

  • tptacek 8 hours ago

    My read of this was that AI is fundamentally limited by the lack of access to the new empirical data that drove this discovery; that it couldn't have been inferred from the existing corpus of knowledge.

    • DougBTX an hour ago

      Recent LLMs have larger context windows to process more data and tool use to get new data, so it would be surprising if there’s a fundamental limitation here.

strangattractor 3 days ago

Derek has a little thought experiment at the end.

readthenotes1 8 hours ago

Maybe an AI will be smart enough to realize that there's more than one explanation for a low level of triglycerides in neurons.

The RICE myth and the lactic acid myth will surely be part of the training material, so the AI will realize that there are a fair number of unjustified conclusions in the bioworld.

  • ethan_smith 5 hours ago

    The RICE protocol (Rest, Ice, Compression, Elevation) for injuries has been largely debunked - inflammation is now understood as a necessary healing process. Similarly, lactic acid was wrongly blamed for muscle soreness when it's actually a fuel source during exercise, paralleling how we misunderstood neuronal fatty acid metabolism.

    • zahlman 5 hours ago

      Is inflammation not still considered to be harmful in the long term? (Is that not why we're still expected to care about omega-6 vs omega-3 dietary fatty acids?) What is the new explanation for muscle soreness?

      • greensoap 3 hours ago

        There is a difference between localized inflammation, which brings the components of healing to an injury, and systemic inflammation.