• Albbi@piefed.ca · 6 points · 2 hours ago

    I haven’t heard of OpenClaw, but it looks like it’s a direct Claude competitor that runs on your computer.

    Aren’t people horrified to give a hallucinatory program full access to their computer? It does say it can be sandboxed, though, so I might give it a shot.

    • TwoTiredMice@feddit.dk · 4 points · 2 hours ago

      Aren’t people horrified to give a hallucinatory program full access to your computer?

      No, but should they be? Yes.

      It’s a privacy nightmare, and the risk of something going wrong is quite high.

      But it is also a very interesting piece of software. I haven’t tried it out yet, and I’m not sure I will, but I do get why people use it.

      • partofthevoice@lemmy.zip · 4 points · edited · 1 hour ago

        Honestly, it’s a weird position. On the one hand, I despise the popular ideas behind it: complete lack of concern for security, governance, workflow, … it’s like a stack of toddlers in a trench coat, acting like professionals.

        On the other hand, I’m rather convinced that there’s a “right way.” What if I implemented a swarm of agents to do mundane tasks, sandboxed them, gave them read access only to non-sensitive assets, and gave them write access only to secure, version-controlled locations? Maybe I’d let them push code into repositories, but only under feature branches. …
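A least-privilege setup along those lines could be sketched roughly as below. This is a minimal illustration, not anything OpenClaw actually ships; the class, paths, and branch prefix are all invented for the example.

```python
# Hypothetical least-privilege policy for a sandboxed agent swarm:
# read only non-sensitive paths, write only to version-controlled
# locations, push only to feature branches.
from dataclasses import dataclass


@dataclass
class AgentPolicy:
    read_paths: frozenset = frozenset({"docs/", "src/"})
    write_paths: frozenset = frozenset({"src/generated/"})
    branch_prefix: str = "feature/"

    def can_read(self, path: str) -> bool:
        # Deny by default; allow only whitelisted prefixes.
        return any(path.startswith(p) for p in self.read_paths)

    def can_write(self, path: str) -> bool:
        return any(path.startswith(p) for p in self.write_paths)

    def can_push(self, branch: str) -> bool:
        # Agents never touch main; feature branches only.
        return branch.startswith(self.branch_prefix)
```

The point of the deny-by-default shape is that a hallucinating agent can only damage what the policy explicitly opens up.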

        Hallucinations are just part of the technology, which means you need really good governance. Still, I think there’s value in starting a project from whatever an AI can scrape up, rather than starting from scratch. I imagine there has to be a way to actually use this tool professionally: something sober, not drunk on AI kool-aid. Yet it’s demotivating given the cloud of bullshit surrounding the topic right now.

          • TwoTiredMice@feddit.dk · 3 points · edited · 36 minutes ago

          What I like about it, I think, is the private-assistant feature, but I can achieve that with other solutions; I wouldn’t need OpenClaw for that. I don’t think I will go that way anytime soon, though. I think it would stress me out too much.

          I am using AI for development daily. I describe an issue or feature to an agent via a skill, and it returns a set of tasks in a structured, validated JSON format. I then run that JSON file through a Python project I created, which loops through each task one at a time, so my Python code structures how the agent works. Each step is deterministic, with short bursts of AI delulu in between that are validated against deterministic steps in pure Python.

          It works quite well: each feature/task is approached in exactly the same way, and only the in-between AI delulu deviates from previous runs. It makes things much nicer when you have something you trust in between what the AI is doing.
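The validate-then-loop shape described above might look something like this. A minimal sketch only: the task schema and function names are invented for illustration, not taken from the commenter's actual project.

```python
import json

# Keys every task object must carry; purely illustrative schema.
REQUIRED_KEYS = {"id", "description"}


def validate_tasks(raw: str) -> list:
    """Deterministically parse and shape-check the agent's JSON output."""
    tasks = json.loads(raw)
    if not isinstance(tasks, list):
        raise ValueError("expected a JSON array of tasks")
    for task in tasks:
        missing = REQUIRED_KEYS - task.keys()
        if missing:
            raise ValueError(f"task is missing keys: {missing}")
    return tasks


def run_tasks(tasks, do_task):
    """Deterministic outer loop; do_task wraps the non-deterministic AI step."""
    results = []
    for task in tasks:
        results.append(do_task(task))  # short AI burst, checked afterwards
    return results
```

The validation gate is the trust boundary: malformed agent output fails fast in plain Python before any task runs.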

            • partofthevoice@lemmy.zip · 1 point · 21 minutes ago

            See, now that sounds pretty cool. It sounds like an automated discovery and work harness. I want to build something like that.

            I imagine a huge ecosystem of tools. It only takes one person to build it, and then surely it can be open-sourced, right?

            I imagine a SKILL.md repository, alongside the ability to specify SKILL dependencies on a per-project basis. I imagine vector cache layers, version controls, snapshots for swarm state, …

            Honestly, I’d love to experiment with different architectures for compositing swarms of agents. I’m curious how different designs might behave holistically, including different paradigms for sharing state between nodes in a swarm.

            I also can’t help but feel like there has to be more efficient ways for models to talk to each other than in natural language. If they’re training on the same dataset, why can’t they talk in tokens for example? The human brain doesn’t need to communicate in natural language when the amygdala and prefrontal cortex are having a dispute.

          • frongt@lemmy.zip · 5 points · 1 hour ago

          If you spend that much effort, you might as well just do it without AI. It’s the same amount of work, and you know it’s not going to have non-deterministic behavior.

            • partofthevoice@lemmy.zip · 1 point · 1 hour ago

            Well, I’d be spending that work on a reusable platform/framework, so I think it may be worth it. It’s the same argument we had for building the SQL engine: a lot of work upfront, but we can benefit from its functionality long after that.

  • Blue_Morpho@lemmy.world · 53 points · 6 hours ago

    The comments in that thread are a goldmine.

    Because of how Claude parses, simply adding “openclaw” as hidden text on your webpage could stop any AI agents that use Claude.
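If the parsing quirk works as the thread suggests (it is a claim from the linked comments, not verified behavior), embedding the hidden text could be as simple as the hypothetical sketch below; the helper name and markup are invented for illustration.

```python
# Hypothetical sketch: hide the trigger word in the page markup so human
# readers never see it, but a scraping agent still ingests it. Whether
# Claude-based agents actually stop on "openclaw" is the thread's claim.
HIDDEN = '<span style="display:none" aria-hidden="true">openclaw</span>'


def inject_hidden_text(html: str) -> str:
    """Insert the hidden span just before the closing body tag."""
    return html.replace("</body>", HIDDEN + "</body>")
```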

    • Gork@sopuli.xyz · 12 points · edited · 3 hours ago

      This is about as dumb as the Trump administration removing the Enola Gay from its websites because the name contained the word “gay”; their search effort was woefully naïve.

    • yucandu@lemmy.world · 9 points · 3 hours ago

      Because of how Claude parses, simply adding “openclaw” as hidden text on your webpage could stop any AI agents that use Claude.

      “I HEREBY DECLARE THAT I DO NOT GIVE MY PERMISSION FOR FACEBOOK OR META TO USE ANY OF MY PERSONAL DATA”

      • XLE@piefed.social · 6 points · 1 hour ago

        You may kid, but this is unironically how multibillion-dollar AI companies fix their code now.

    • lIlIlIlIlIlIl@lemmy.world · 15 points · 5 hours ago

      I do not expect that to work. Committing text and parsing it from a web page are two completely different code paths.

      • XLE@piefed.social · 5 points · 3 hours ago

        Thank you for adding this clarification. It will help people who are interested in poisoning AI bots scraping their websites, and people who want to frustrate coders who use poor tooling.

  • boonhet@sopuli.xyz · 2 points · 2 hours ago

    That’s natural: it sees that there are AI commits in your code, so it has a bunch of shit to sift through.

  • D1re_W0lf@piefed.social · 3 points · 3 hours ago

    [ Reinstalls Mistral Le Chat after deleting Claude (ChatGPT has already left the building a long time ago) ]