Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.

If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user, then block anyone under 18 from interacting with a huge range of online systems. That would cut minors off from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy, because services would have to impose speech-restricting, privacy-invasive age-verification systems on everyone, not just kids.

Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.

  • t3rmit3@beehaw.org · 2 days ago

    As usual, politicians trying to use children and fear as a wedge to get people to accept government surveillance and control.

    • Truancy@lemmy.org · 2 days ago

      That’s the world we’re unfortunately in now. So much for all the cool aesthetics in the dystopian sci-fi movies; we just get the control aspect.

  • LukeZaz@beehaw.org · 2 days ago

    On one hand, that “everyday use” of AI is genuinely some of the most harmful use there is. People fall into delusions because of that shit, and even when they don’t, they get massively overconfident about the answers they receive despite significant error rates. Not to mention the privacy invasion that occurs with those systems, or the, you know, huge environmental damage.

    In particular, this paragraph is doing a lot to make the bill sound better:

    Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.

    Yeah. These tools are dangerous. Fucking adults are using them wildly irresponsibly, for God’s sake.

    On the other, this is very similar to the push for “protecting” kids from “pornography.” I don’t trust this to not result in massive proliferation of invasive age-gating systems regardless of any AI use at all. We’ll get the worst of both worlds, won’t we?

    • Powderhorn@beehaw.org (OP) · 2 days ago

      Surveillance is the goal. “For the children” has proven an effective red herring over the years.

      • LukeZaz@beehaw.org · 2 days ago

        I’m aware. I think the primary difference between this bill and that general age-gating push is that AI itself does cause very real harm. To everyone, really. I’m not sure I’d even say children are particularly vulnerable.

        Regardless, I came to the conclusion that the bill isn’t worth it as-is in my newer analysis post.

    • LukeZaz@beehaw.org · 2 days ago

      Okay, so I’ve read the full bill now, and I gotta say I don’t feel as conflicted about this anymore. The EFF’s article looks like it has a lot of bad takes in it now; my (still not insignificant) doubts about this bill come from the fact that I’m not a lawyer, and thus can’t fully foresee its consequences, and from the fact that a decent bill can still be implemented horribly by idiotic companies.

      (I wrote so much here I ended up needing to break out the header markdown. Apologies in advance!)


      Chatbot definition

      I don’t think the bill’s definition of chatbots is actually bad at all. Quoting directly:

      Bill quote regarding AI definitions:

          (2) ARTIFICIAL INTELLIGENCE CHATBOT.—The term “artificial intelligence chatbot”—
              (A) means any interactive computer service or software application that—
                  (i) produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and
                  (ii) accepts open-ended natural-language or multimodal user input and produces adaptive or context-responsive output; and
              (B) does not include an interactive computer service or software application—
                  (i) the responses of which are limited to contextualized replies; and
                  (ii) that is unable to respond on a range of topics outside of a narrow specified purpose.

      Notice the frequent use of the word “and” here, rather than “or.” Do I think there are no possible holes in this? No. And again, I’m no lawyer. But my main concern here would be restricting programs that aren’t LLMs, and this seems to do a good job of avoiding that.[1] The EFF is concerned this would restrict people from, say, cheating on homework. It would. I don’t care about that and I don’t think they should either, for reasons addressed in my comment above.
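
      To make the “and” structure concrete, here is a toy sketch (my reading only, with invented names; the bill obviously defines nothing in code) of the definition as a predicate. The (B) exclusion only applies when both of its prongs hold:

          # Toy sketch of the definition's conjunctive logic; all names are invented.
          def is_ai_chatbot(
              produces_novel_output: bool,        # (A)(i): content not fully predetermined
              accepts_open_ended_input: bool,     # (A)(ii): natural-language/multimodal input
              contextualized_replies_only: bool,  # (B)(i): limited to contextualized replies
              narrow_purpose_only: bool,          # (B)(ii): cannot stray from a narrow purpose
          ) -> bool:
              covered = produces_novel_output and accepts_open_ended_input    # (A): both required
              excluded = contextualized_replies_only and narrow_purpose_only  # (B): both required
              return covered and not excluded

          # A general-purpose LLM assistant: covered.
          print(is_ai_chatbot(True, True, False, False))  # True
          # A scripted order-tracking bot: fails (A)(i) and meets both (B) prongs.
          print(is_ai_chatbot(False, False, True, True))  # False

      Read that way, a plain scripted FAQ bot falls out of scope twice over, which is why I’m not worried about non-LLM programs getting swept in.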


      Age verification

      It’s not as bad as it sounded to me, but it’s still not acceptable. Quoting again:

      Bill quote regarding age verification measures:

          (5) REASONABLE AGE VERIFICATION MEASURE.—The term “reasonable age verification measure” means a method that is authenticated to relate to a user of an artificial intelligence chatbot, such as—
              (A) a government-issued identification; or
              (B) any other commercially reasonable method that can reliably and accurately—
                  (i) determine whether a user is an adult; and
                  (ii) prevent access by minors to AI companions, as required by section 6.

          (6) REASONABLE AGE VERIFICATION PROCESS.—The term “reasonable age verification process” means an age verification process employed by a covered entity that—
              (A) uses one or more reasonable age verification measures in order to verify the age of a user of an artificial intelligence chatbot owned, operated, or otherwise made available by the covered entity;
              (B) provides that requiring a user to confirm that the user is not a minor, or to insert the user’s birth date, is not sufficient to constitute a reasonable age verification measure;
              (C) ensures that each user is subjected to each reasonable age verification measure used by the covered entity as part of the age verification process; and
              (D) does not base verification of a user’s age on factors such as whether the user shares an Internet Protocol address, hardware identifier, or other technical indicator with another user determined to not be a minor.

      The reason I say this is “not as bad as it sounded” is primarily because it’s open-ended.[2] An actually acceptable, privacy-preserving age verification method would be legal here and is not actively prevented. But that’s about all the faith I can muster for it. This law could be good if we had age-gating tech that could actually be trusted; if it passes, it might yet become good should such tech ever be developed.

      But we don’t have that, and I do not trust for-profit corporations to ever make one, and in such a context this law runs the risk of causing serious issues. Namely, I would be concerned that – contrary to what the EFF states – companies would decide that the path of least resistance would involve continuing to use AI and implementing accounts and age verification for their services anyway. We’d move from having shitty AI chatbot customer support people shouldn’t use, to shitty AI chatbot customer support that is considered so important that the company mandates everyone get age-checked to view a support page.

      It’s unlikely, since the tech the law mandates is extensive enough to be an expensive hurdle, one that really isn’t worth setting up for any company whose core business doesn’t outright rely on AI. But since when has sense mattered in the so-called AI age?
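
      For what it’s worth, the shape of an age check I could actually accept does exist on paper: a trusted issuer verifies you once, out of band, and signs a bare “over 18” claim; the service then checks the signature without ever learning who you are. A minimal sketch, assuming Python’s cryptography package; the issuer and token format here are entirely hypothetical:

          # Minimal sketch of a signed, identity-free age attestation.
          # Assumes the `cryptography` package; the issuer and token format are made up.
          import os

          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          # Issuer side: has checked the user's ID once, out of band.
          issuer_key = Ed25519PrivateKey.generate()
          issuer_pub = issuer_key.public_key()

          def issue_token() -> tuple[bytes, bytes]:
              """Return (token, signature); the token carries a claim and a nonce, no PII."""
              token = b"over18:" + os.urandom(16)
              return token, issuer_key.sign(token)

          # Service side: knows only the issuer's public key.
          def verify_token(token: bytes, signature: bytes) -> bool:
              if not token.startswith(b"over18:"):
                  return False
              try:
                  issuer_pub.verify(signature, token)
                  return True
              except InvalidSignature:
                  return False

          token, sig = issue_token()
          print(verify_token(token, sig))  # True, and the service never saw a name or an ID

      Even this toy version has obvious holes (tokens can be lent out, and a careless issuer could correlate issuance with use), which is roughly why I say the trustworthy version doesn’t exist yet.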


      Privacy

      There’s also the privacy issue of the age gating, which is as omnipresent as ever with these sorts of things. All the bill offers on that front is this:

      Bill quote regarding data security:

          (5) AGE VERIFICATION MEASURE DATA SECURITY.—A covered entity—
              (A) shall establish, implement, and maintain reasonable data security to—
                  (i) limit collection of personal data to that which is minimally necessary to verify a user’s age or maintain compliance with this Act; and
                  (ii) protect such age verification data against unauthorized access;
              (B) shall protect such age verification data against unauthorized access;
              (C) shall protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols;
              (D) shall retain such data for no longer than is reasonably necessary to verify a user’s age or maintain compliance with this Act; and
              (E) may not share with, transfer to, or sell to, any other entity such data.

      5(E) here is great. I wouldn’t know if it’s foolproof, and it’s probably not, but it looks good. The rest, though, seems very loosely specified and light on definitions to me. Words like “reasonable” are great if you want to allow a broad range of methods for tackling an issue, but I don’t think that looseness is acceptable when it comes to PII security. With “industry-standard encryption protocols” being as rigorous as the security requirements get, the bill may as well just say “try not to fuck up,” and the industry’s track record on that is, uh, poor.
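
      To make the minimization-and-retention idea concrete, the most a service should ever need to hold is a yes/no flag with an expiry, keyed to something that isn’t the user’s identity document. A toy sketch, all names invented:

          # Toy sketch of data minimization plus retention limits: store a salted hash
          # of the account id mapped to an expiry. No ID scans, no birthdates, nothing to leak.
          import hashlib
          import hmac
          import os
          import time

          SALT = os.urandom(32)               # per-deployment secret
          RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical retention window

          _verified: dict[str, float] = {}    # hashed account id -> expiry timestamp

          def _key(account_id: str) -> str:
              return hmac.new(SALT, account_id.encode(), hashlib.sha256).hexdigest()

          def record_verified(account_id: str) -> None:
              """Keep the fact of verification, never the evidence used to establish it."""
              _verified[_key(account_id)] = time.time() + RETENTION_SECONDS

          def is_verified(account_id: str) -> bool:
              expiry = _verified.get(_key(account_id))
              if expiry is None or expiry < time.time():
                  _verified.pop(_key(account_id), None)  # purge expired entries on read
                  return False
              return True

      Nothing in the bill compels even that much discipline, which is the problem: “reasonable data security” ends up meaning whatever a company’s lawyers can argue it means.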

      So yeah, all in all, way better than the EFF is putting it. But unfortunately the problems are bad enough that I’m not convinced this bill should pass. At least, not while the massive bad-faith age-gating push is currently strangling the internet. I hate AI, and it is absolutely hurting people, but if we’re to have this, then privacy-preserving (and secure) tech is a must and has to be created first.


      1. “AI companion” uses this definition and then further narrows it to things like “human-like” and “is designed to encourage or facilitate the simulation of […] friendship” and such, so I’m not worried about that either.

      2. 6(B) and 6(D) are notable as explicit exclusions: “I am not a minor” buttons and “enter your birthdate” fields are expressly disallowed as age verification methods, and age can’t be inferred from sharing an IP address or device with an already-verified user.