Not the flex Google thinks it is. Also, this strategy will work great right up until Google's AI becomes intelligent enough to realize that Google itself is the actual malware — assuming, of course, that AI ever becomes intelligent at all.

  • panda_abyss@lemmy.ca · 19 hours ago

    That doesn’t say much.

    Doing any kind of review would deter malware. Flipping a coin would, too.

    • supersquirrel@sopuli.xyz (OP) · edited · 19 hours ago

      For real. In my opinion, Google's failure to curate a good Play Store — one where you can find trustworthy creators recommending apps — is stunning, given that it owns YouTube, where so much review content on every kind of topic is posted.

  • Otter@lemmy.ca · edited · 19 hours ago

    So the “AI” help boils down to humans asking it to find patterns?

    “Initiatives like developer verification, mandatory pre-review checks, and testing requirements have raised the bar for the Google Play ecosystem, significantly reducing the paths for bad actors to enter,” the company’s blog post explained, adding that its “AI-powered, multi-layer protections” have been “discouraging bad actors from publishing malicious apps.”

    Google noted that it now runs over 10,000 safety checks on every app it publishes and continues to recheck apps after publication. The company has also integrated its latest generative AI models into the app review process, which it says helps human reviewers find more complex malicious patterns faster. Google said it plans to increase its AI investments in 2026 to stay ahead of emerging threats.