

Mostly because the model is incapable
There, fixed that for you.


That’s their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?


Because “agentic”. IMHO running commands is actually cool; doing it without a very limited scope, though (as he did say in the video), is definitely idiotic.


Well… at least do that for Windows and macOS, not for Linux.


Because the people who run this shit are precisely the ones who don’t know what containers, scopes, permissions, etc. are. That’s exactly the audience.


The user can choose whether the AI can run commands on its own or ask first.
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers; here is an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting all files ending in filename, and deleting all files in the current directory plus the file named filename.
Of course here you will spot it because you’ve been primed for it. In a normal workflow, under pressure, it’s totally different.
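A safe way to see the difference is to let echo print what each glob expands to instead of actually running rm (a throwaway directory, with made-up file names for the demo):

```shell
# Demo in a throwaway directory; nothing is deleted, echo just shows
# what the shell would pass to rm.
demo=$(mktemp -d)
cd "$demo"
touch report_filename other_filename filename unrelated.txt

echo rm *filename    # expands only to names ending in "filename"
echo rm * filename   # "*" expands to EVERY file, plus "filename" again
```

The second form silently includes unrelated.txt, which is exactly the kind of thing you miss under pressure.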
Also, IMHO more importantly, if you watch the video (~7 min in) they clarified that they expected the “agent” to stick to the project directory, not to be able to go “out” of it. They were obviously painfully wrong, but it would have been a reasonable assumption.


It should also be sandboxed with hard restrictions that it cannot bypass
duh… just use it in a container and that’s it. It won’t blue-pill its way out.
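A minimal sketch of that containment, assuming Docker; agent-cli is a placeholder name for whatever agent binary/image is being run, not a real project:

```shell
# Hedged sketch: only the project directory is mounted and writable,
# the rest of the filesystem is read-only and there is no network.
# "agent-cli" is a placeholder image name.
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD:/work" \
  -w /work \
  agent-cli
```

Even if the agent then runs rm -rf on the wrong path, the blast radius stays inside the mounted project directory.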


I think that’s the point, the “agent” (whatever that means) is not running in a sandbox.
I imagine the user assumed permissions are small at first, e.g. single directory of the project, but nothing outside of it. That would IMHO be a reasonable model.
They might be wrong about it, clearly, but it doesn’t mean they explicitly gave permission.
Edit: they say it in the video, ~7min in, they expected deletion to be scoped within the project directory.


Wow… who would have guessed. /s
Sorry but if in 2025 you believe claims from BigTech you are a gullible moron. I genuinely do not wish data loss on anyone but come on, if you ask for it…
Sure, remove the red light but please also remove cars.


so when i buttered my toast this morning that was a political act?
If you are a vegan and want laws to support your views, yes. If you are not a vegan and do not care, also yes.


The plain n8n app is very capable of doing a ton of stuff.
Sorry if I’m a bit slow but what does it actually do? I skimmed through “automations” earlier this morning and I mostly found paid-for GenAI related stuff.


Thanks, I’ll dig deeper. I guess I do want something like n8n but ideally:
which makes me wonder what they do provide, e.g. is it mostly indexing existing plugins and then some scaffolding for non coders?


Do you have a specific use case for two containers that you want to talk to each other?
Sure, for example once a Jitsi Meet meeting ends (more than 1 person was in a room, then everybody is gone), save the chat log to CopyParty, e.g. a WebDAV push to /meetingname_date.txt would be enough to be useful. It’s something we tend to do manually on a regular basis.
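That push itself is trivial once the hook exists; a sketch, assuming CopyParty’s plain WebDAV endpoint (host, path, and credentials are placeholders):

```shell
# Hedged sketch: build the /meetingname_date.txt target name.
# "copyparty.example" and "user:pass" are placeholders.
MEETING="meetingname"
STAMP=$(date +%F)
TARGET="https://copyparty.example/dav/${MEETING}_${STAMP}.txt"
echo "$TARGET"

# The actual push is a plain WebDAV PUT, which curl can do:
# curl -u user:pass -T chatlog.txt "$TARGET"
```

The hard part is the trigger (knowing the meeting ended), not the upload.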
road map of what you are trying to accomplish before hand, and run it by the dev teams.
Yes no rush and I can code so I would be able to test before suggesting anything.
As I’m thinking about it, I wonder if your solution might be automation?
I don’t touch AI but I do think conventions, e.g. not “just” an API but Swagger, specific filesystem layouts on mountpoints, etc., could facilitate this.


Indeed, and PeerTube for example has an API, cf https://docs.joinpeertube.org/api-rest-reference.html which I did use. It also provides Swagger, so that could facilitate integration with other services also providing APIs. I was starting to think that the meta service could have a read-only, public-only token generated for each new service and provide a Swagger endpoint to facilitate using the APIs of more than one service.
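For example, the read-only public part of that API needs no token at all; a sketch against the documented videos endpoint (the instance host is a placeholder):

```shell
# Unauthenticated, read-only call from the PeerTube REST API reference.
# "peertube.example" is a placeholder instance.
curl -s "https://peertube.example/api/v1/videos?count=5"
```

A meta service could do the same against every managed service that publishes an OpenAPI/Swagger description, without ever holding a write-capable credential.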


Thanks, that’s indeed exactly the kind of thing I’m looking for, “The authentication glue you need.”, but even more generalized than that, e.g. just “the glue you need.”, not solely for authentication.
Edit: to clarify, and coming back after leaving a few other comments, the one thing Authentik has is that it addresses a cross-service need, namely nearly all services do need authentication AND, probably as a consequence of that, there are conventions and standards already in place, e.g. SAML, OAuth2/OIDC, LDAP, Auth0. So that makes everything much easier.


most of my services are an island to themselves
same
and I like it that way.
… well that’s the part I’m challenging. I was thinking like this but I’m wondering if that could be improved.
PS: I use ntfy and like it, that was just an example.


Yes I can relate to the process.
Any further interoperability is luck based.
Unfortunately I can relate to that, hence the question here :D


Thanks, are you saying there is a mechanism in place, e.g. does YunoHost suggest plugins or integrations for the services it manages?
Yep. That’s exactly why I tend to never discuss “AI” with people who don’t actually have a PhD in the domain, or at least a degree in CS. It’s nothing against them specifically, it’s only that they repeat what they heard during marketing presentations with no ability to criticize it and, in such cases, that can be quite dangerous.
TL;DR: people who could benefit from it don’t need it, people who would shouldn’t.