The actual research paper https://arxiv.org/abs/2603.18030
It also points to where it lives on GitHub, which I think means you could just ask ChatGPT how to run it on your PC and then play with it. It looks like you can just toss in Anthropic/OpenAI/etc. API keys and go on your way with an AI agent managed by any POSIX OS: https://github.com/kehao95/quine
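For actually getting it running locally, the repo itself would document the real steps; here is only a hedged sketch of what the setup probably looks like, where the install step, the environment-variable names, and the `quine` invocation are all my assumptions, not taken from the repo (check its README):

```shell
# Assumed setup -- variable names and invocation are guesses, not from the repo.
git clone https://github.com/kehao95/quine
cd quine

# Drop in whichever provider key you have:
export ANTHROPIC_API_KEY="sk-ant-..."   # or OPENAI_API_KEY="sk-..."

# Then a one-off prompt, per the general usage pattern in this thread:
quine "tell me what's eating my disk space under $HOME"
```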
Oooh, I just realized you could probably schedule it to fetch content for you, if a web browser exposes an API. This would be like the ChatGPT feature that is basically scheduled prompting, but since this thing lets you mess with real hardware and software, you can have a shell command that fetches a "web surf internet rabbit hole" for you.
This would be a way to avoid algorithmic nonsense: you can specify your values, or have it infer them from content on your PC, and it would do the job of a recommendation algorithm but with as much or as little control as you want, without being beholden to any platform.
#!/usr/bin/env bash
set -euo pipefail

mkdir -p "$HOME/digests"
mkdir -p "$HOME/agent-runs/web-rabbit-hole"
cd "$HOME/agent-runs/web-rabbit-hole"

export QUINE_MAX_DEPTH=2
export QUINE_MAX_AGENTS=4
export QUINE_MAX_CONCURRENT=1
export QUINE_MAX_TURNS=20

quine "
Fetch recent content from my chosen sources. Look for threads related to:
- interspecies communication
- lucid dreaming / dream yoga
- NixOS
- AI agents as OS processes
- federated media

Write:
- a short digest
- the best 5 links
- one rabbit-hole path worth following

Save it to $HOME/digests/$(date +%F)-rabbit-hole.md.
"
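To make this the scheduled "rabbit hole" fetch described above, the script could be wired into cron. The path `~/bin/rabbit-hole.sh` is hypothetical, just a stand-in for wherever you save the script:

```shell
# crontab -e entry: run the digest script every morning at 07:00.
# "$HOME/bin/rabbit-hole.sh" is a hypothetical save location.
0 7 * * * "$HOME/bin/rabbit-hole.sh" >> "$HOME/digests/cron.log" 2>&1
```

Because the script names the output file with $(date +%F), each run lands in its own dated digest instead of overwriting yesterday's.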


Who gives a flying rat’s ass? Pestilence upon you and the electrons you rode in on.
Clearly at least the people who made it give a rat's ass about this, lol.
Given the coherence of your post, I'm frankly assuming it to be another LLM-pretends-to-be-human test balloon; and following those links would either be positive reinforcement of its utility function or tell some "real" humans that their crap at least garners interest. Just no, not even once.
Man, I wish I were an LLM in this case; it is pretty lame to get spat on for sharing something I found interesting and useful with other people who I figured would find it interesting and useful too. The "incoherence" of the post is just me rambling without wording things nicely.
I know I come across like an asshole here, and I don't want to hurt feelings, but, in case you are human, you should really consider your impulse control, your writing coherence, and, maybe, how a given topic is usually regarded by the intended audience. In short, next time, please stop and think before posting something as emotionally charged as that, ok?
Oh, I usually do consider it quite well; I just didn't care to this time, because I figured it was reasonable for people to click the two links to figure out what this is (if for some reason the title wasn't enough) and then read the rest if they wanted to. But I guess that is too much to ask of people nowadays, since we are used to only reading things that are nice.
This was not an issue with the other communities I have shared this with, so I have to believe the Lemmy community is just built different, I guess.
There was nothing emotionally charged about the original post; it was just thinking out loud.
Thank you for actually being somewhat reasonable.
Also: I apologize for my wording. I had an extremely strong emotional reaction, and I should have stopped and thought.
That's OK, we all get buzzed by buzzwords, haha.
There is one thing I am curious about, though: do you think you'd use the thing in the original post?
I want to know whether the utility has landed for at least one person.
Thx. No, I have yet to see anything from an LLM that I consider remotely useful. And I wouldn't be caught dead letting one anywhere near any system integration (just look at all of the AWS outages in the past months). I continue to remain utterly baffled whenever anyone uses them for anything (other than maybe image generation, but I'd rather pay a small artist for those); I mean, their "summaries" are just as much random garbage as their citations... They look reasonable, but that's all they do. I think LLMs are a neat parlor trick that is going to cost us, as a species, a lot, without getting us any closer to AGI.
I’m trying. For the record: The global-warming-increasing tech-bro-billionaire-owned LSD-tripping babble-automatons are not exactly universally applauded. Saying anything about them requires care.
Definitely agree with this