The proposed approach has a large number of drawbacks:
* It's not reliable; the project’s own readme mentions false positives.
* It adds a source of confusion where an AI agent tells the user that the CLI tool said X, but running it manually with the same command line gives something different.
* The user can't manually access the functionality even if they want to.
Much better to just have an explicit option to enable the new behaviors and teach the AI to use that where appropriate.
* The online tutorials the LLM was trained on don't match the result the LLM gets when it runs the tool.
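A minimal sketch of the explicit opt-in suggested above, assuming a hypothetical CLI; the `--agent-output` flag and `TOOL_AGENT_OUTPUT` variable are made up, not from any real tool:

```ts
// Hypothetical CLI entry point: the agent-oriented behavior sits behind an
// explicit flag or env var instead of heuristic detection, so a human running
// the same command line sees exactly what the agent saw.
const agentOutput =
  process.argv.includes("--agent-output") || process.env.TOOL_AGENT_OUTPUT === "1";

function report(status: string, details: string): void {
  if (agentOutput) {
    // Extra machine-oriented context, printed only when explicitly requested.
    console.log(JSON.stringify({ status, details }));
  } else {
    console.log(status);
  }
}

report("ok", "3 files checked, 0 issues");
```

The agent is then simply told (in its project docs or system prompt) to pass the flag, and a human can pass the same flag to reproduce the agent's view.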
We're reaching levels of supply chain attack vulnerability that shouldn't even be possible.
Wasted opportunity to call it: vibrator
I was working on an Android project and needed to add specific vibration patterns for different actions. Our company was maybe a week into our exploration of LLM tools and they still really sucked. I kept getting failures trying to get anything useful to output. So I dug into the docs and just started doing it all myself. Then I found some Android engineer had named the base functionality Vibrator back in one of the earliest SDKs.
The LLM was actually implementing nearly everything, finding the term "vibrator", and then erasing its own output.
Ah yes, same sort of thing as https://github.com/orgs/community/discussions/72603
Leaves the name available for a buttplug.io agentic interface plugin.
colon.ai has a nice vibe to it.
Vibe-Rater
Alternative name suggestion: prompt-injection-toolkit
This library envisions cooperative uses, like a tool giving extra context to AI agents when it detects it is running in an agentic environment, but I worry that some people may try to use this to restrict others.
I guess in that scenario, AI agents would have a project-specific "stealth mode" to protect the user.
As someone who uses AI every day: people who wish to restrict the use of their code by AI should be allowed to do so, but they should make sure their LICENSE is aligned with that. That is the only issue I see.
This seems like a really bad idea. Agents need to adapt to get good at using tools designed for humans (we have a lot), or use tools specifically designed for agents (soon we will have lots).
But making your tool behave differently just causes confusion if a human tries something and then gets an agent to take over, or vice versa.
On the other hand, if you want to make your tool detect an agent and try a little prompt injection, or otherwise attempt to make the LLM misbehave, this seems like an excellent approach.
In other words, a supply chain attack? Let's call it what it is.
I think the term "supply chain attack" is frequently overused, and if I were feeling cantankerous, I might split hairs and argue that I was framing it more as a "watering hole attack" instead. But I agree that it could also be framed as a "supply chain attack", and you seem to have correctly realized that I was suggesting this was an excellent approach to either attack people using LLMs connected to agentic tooling or to render your gadget incompatible with such usage, if that was your goal.
I do not think it's a particularly good way to assist such users.
This seems like a really good idea for projects that reject AI-written code: detect such environments and fail early.
I also don't see how this requires heuristics, but use cases do exist; e.g. I set `CLAUDE` so that a git hook can mark vibe commits -- a prompt would be a waste of tokens and would introduce non-determinism, and MCP is yet another dependency that can get ignored in favour of the CLI equivalent anyway.
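For what it's worth, a sketch of that kind of hook, assuming the agent session exports `CLAUDE=1`; the `Vibe-Coded` trailer name is made up, and the script would be wired up as `.git/hooks/prepare-commit-msg` (e.g. compiled or invoked via a thin wrapper that calls node):

```ts
// prepare-commit-msg hook body: append a trailer when CLAUDE is set, so
// vibe-coded commits can later be found with `git log --grep="Vibe-Coded"`.
// Deterministic, and no prompt tokens are spent.
import { readFileSync, writeFileSync } from "node:fs";

const msgFile = process.argv[2]; // git passes the commit-message file path as the first argument

if (process.env.CLAUDE && msgFile) {
  const msg = readFileSync(msgFile, "utf8");
  if (!/^Vibe-Coded:/m.test(msg)) {
    writeFileSync(msgFile, `${msg.trimEnd()}\n\nVibe-Coded: true\n`);
  }
}
```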
Tools can maintain consistent interfaces while still providing agent-aware optimizations through metadata or output formatting that doesn't disrupt the human experience.
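As a sketch of that idea (the error code, fields, and format are assumptions, not from the project): keep a single, stable output format whose structure is friendly to both audiences, rather than branching on who is calling.

```ts
// One output format for everyone: a stable human-readable message plus
// structured key=value fields that an agent (or a grep) can parse reliably,
// with no separate "agent mode" and no behavioral divergence.
function formatError(code: string, message: string, hint?: string): string {
  const fields = [`code=${code}`, hint ? `hint=${JSON.stringify(hint)}` : ""]
    .filter(Boolean)
    .join(" ");
  return `error: ${message} [${fields}]`;
}

console.error(formatError("E042", "config file not found", "run `tool init` first"));
// -> error: config file not found [code=E042 hint="run `tool init` first"]
```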
I'm this old: I don't think you should name packages in SWE with names that you will eventually cave in and change if the project gets real use.
This isn't something that's going to need to be in a pitch deck. It's the second open source library I've released this week. But even if it were serious, if Hugging Face hasn't changed its name then I think this is fine.
I feel I'd be remiss if I didn't suggest the name "vibe check." (The name doesn't bother me personally, for whatever that's worth.)
It’s still a ridiculous choice for a name; look at stuff like ScuttleButt, whose adoption is only hurt by a crappy name that few people want to bring up in public.
Dead babe has a good point there.
Can’t stop laughing
Lol
why would this one need to be changed?
I don’t like that the fact that an agent was used to write the code is bleeding into the runtime of that code. Personally, I see the agent as a tool, but at the end of the day I have to make the code mine, and that includes writing error handling and messaging that’s easy for a human to understand, because the agent is not going to help when you get an alert at 3am. And often what’s easy for a human to understand is also easy for an LLM.
Neat! I might monkey patch vitest to show full diffs for `expect` when it's being used by an agent.
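A sketch of that, assuming the agent exports something like `CLAUDECODE=1` (substitute whatever your agent actually sets); this reaches for Vitest's `chaiConfig.truncateThreshold` setting rather than literal monkey patching, so check the option against your Vitest version:

```ts
// vitest.config.ts -- show untruncated actual/expected values in assertion
// errors only when an agent marker is present; humans keep the short diffs.
// CLAUDECODE is an assumed env var, not a guaranteed convention.
import { defineConfig } from "vitest/config";

const runningUnderAgent = Boolean(process.env.CLAUDECODE);

export default defineConfig({
  test: {
    chaiConfig: {
      // 0 disables truncation; 40 is the documented default threshold.
      truncateThreshold: runningUnderAgent ? 0 : 40,
    },
  },
});
```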
Good luck detecting things. Guess what: none of your fucking business. If it works, it works. You don't like that? Go fuck yourself. It's like the "anti-cheating" shit in academia. I get some random output from things; all I do is keep a sample of things I want to mimic, in whatever style I have, and I can tell any system to make it not sound like itself.
Just be honest: you're failing at this "fight the man, man" thing with AI and LLMs.
It's better to work with the future than to pretend that being a Luddite will work in the long run.
It has nothing to do with being a “gotcha”. It’s about improving error codes and other interactions for agentic editors.