The ARC Prize organization designs benchmarks that are specifically crafted to demonstrate tasks that humans complete easily but are difficult for AIs such as LLMs, “reasoning” models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
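
To make that interaction loop concrete, here is a minimal sketch of what “explore with no instructions” looks like from an agent’s point of view. The Environment class and its reset/step methods are hypothetical stand-ins rather than the actual ARC-AGI-3 agents API, and the random action choice is only a placeholder for a real exploration policy:

```python
import random

# Hypothetical stand-in for an ARC-AGI-3 environment. The real agents API
# (see arcprize.org) differs; this only illustrates the interaction loop.
class Environment:
    ACTIONS = ["up", "down", "left", "right", "act"]

    def reset(self):
        """Return the initial grid observation (a dummy 1x1 grid here)."""
        return [[0]]

    def step(self, action):
        """Apply an action and return (observation, level_complete, game_over).
        Dummy dynamics so the sketch runs; the real environments are handcrafted games."""
        return [[0]], False, False

def explore(env, max_actions=1000):
    """Blind exploration: no instructions, no rules, no stated goal.
    The agent has to infer all of that from the observations alone."""
    obs = env.reset()
    for _ in range(max_actions):
        # A real agent would pick actions using whatever model of the game it has
        # built so far; random choice stands in for that policy here.
        action = random.choice(Environment.ACTIONS)
        obs, level_complete, game_over = env.step(action)
        if level_complete:
            print("Level solved; carry what was learned into the next, harder level.")
        if game_over:
            obs = env.reset()

explore(Environment(), max_actions=10)
```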

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

[Chart: ARC-AGI 3 Leaderboard. Logarithmic cost on the horizontal axis. Note that the vertical scale goes from 0% to 3% in this graph. If human scores were included, they would be at 100%, at the cost of approximately $250.]

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum “easy for humans” threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.

    • brianpeiris@lemmy.ca (OP) · 2 points · 2 minutes ago

      You can really only judge the fairness of the score if you understand the scoring criteria. It is a relative score where the human baseline is 100%: a task was only included in the challenge if at least two people on the human panel were able to solve it completely, and their action counts measure efficiency. That is the baseline used as the point of comparison.

      From the Technical Report:

      The procedure can be summarized as follows:
      • “Score the AI test taker by its per-level action efficiency” - For each level that the test taker completes, count the number of actions that it took.
      • “As compared to human baseline” - For each level that is counted, compare the AI agent’s action count to a human baseline, which we define as the second-best human action count. Ex: If the second-best human completed a level in only 10 actions, but the AI agent took 100 to complete it, then the AI agent scores (10/100)^2 for that level, which gets reported as 1%. Note that level scoring is calculated using the square of efficiency.
      • “Normalized per environment” - Each level is scored in isolation. Each individual level will get a score between 0% (very inefficient) and 100% (matches or surpasses human-level efficiency). The environment score will be a weighted average of level scores across all levels of that environment.
      • “Across all environments” - The total score will be the sum of individual environment scores divided by the total number of environments. This will be a score between 0% and 100%.
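
      A minimal sketch of that scoring procedure in Python. The weighting of levels within an environment is not specified in the summary above, so equal weights are an assumption here, and the function names are mine rather than the report’s:

```python
from typing import List, Optional, Tuple

def level_score(human_baseline_actions: int, agent_actions: Optional[int]) -> float:
    """Square of action efficiency versus the second-best human.
    Returns 0 if the agent never completed the level."""
    if agent_actions is None:
        return 0.0
    efficiency = human_baseline_actions / agent_actions
    return min(efficiency, 1.0) ** 2  # capped: matching or beating the human scores 100%

def environment_score(levels: List[Tuple[int, Optional[int]]]) -> float:
    """Average of level scores across one environment (equal level weights assumed)."""
    scores = [level_score(human, agent) for human, agent in levels]
    return sum(scores) / len(scores)

def total_score(environments: List[List[Tuple[int, Optional[int]]]]) -> float:
    """Sum of environment scores divided by the number of environments."""
    return sum(environment_score(env) for env in environments) / len(environments)

# Example from the bullet above: the second-best human took 10 actions, the agent took 100.
print(level_score(10, 100))  # 0.01, i.e. 1%
```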

      So the humans “scored 100%” because that is the baseline by definition, and the AIs are evaluated on how close they get to human correctness and efficiency. A score of 0.26% means the AI reached 0.0026 of the human baseline’s combined correctness and efficiency.

  • SuspciousCarrot78@lemmy.world · 5 points · 3 hours ago

    “…specifically crafted to demonstrate tasks that humans complete easily”

    Motherfucker, I can’t work out Minesweeper. I got zero fucking chance with your mystery box bloop game.

  • Sam_Bass@lemmy.world · 6 points · 4 hours ago

    AI code is prewritten, and the AI is unable to edit it. Humans edit their “code” every second.

  • UnrepentantAlgebra@lemmy.world · 12 points · 7 hours ago

    If human scores were included, they would be at 100%, at the cost of approximately $250

    Wait, why did it cost real humans $250 to pass the test?

    • KairuByte@lemmy.dbzer0.com · 16 points · 6 hours ago

      I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.

    • brianpeiris@lemmy.ca (OP) · 2 points · 1 hour ago

      This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of $128, plus $5 per solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would average roughly $250 per person. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case, it’s one or two orders of magnitude less than the LLMs.

      Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved

      https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
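
      As a rough back-of-the-envelope check on those numbers (the $128 average fee, the $5 bonus, and the 25 public tasks are the figures from the comment above; treating every task as solved is an assumption):

```python
# Rough upper-bound estimate of the human cost, per the figures quoted above.
avg_participation_fee = 128     # roughly the midpoint of the $115-$140 fixed fee
bonus_per_solved_task = 5
public_tasks = 25               # tasks in the public test set

total_per_person = avg_participation_fee + bonus_per_solved_task * public_tasks
cost_per_task = total_per_person / public_tasks

print(total_per_person)  # 253   -> roughly the "$250 per person" figure
print(cost_per_task)     # 10.12 -> roughly $10 per human per task
```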

    • aesopjah@sh.itjust.works · 5 points · 5 hours ago

      It’s also an odd metric since only 20-60% of the humans completed it. Very “60% of the time, it works every time” energy.

      Ideally they’d run the bots through multiple times (with no context or training carried over from previous runs), but I guess that is cost prohibitive?

      • monotremata@lemmy.ca · 5 points · 3 hours ago

        Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty dark.

      • Aceticon@lemmy.dbzer0.com · 1 point · 5 hours ago

        If there had been a “Buy 10, Get 1 free” they could’ve used 11 humans instead of 10 for the same $250.

  • Great Blue Heron@lemmy.ca · 30 points · 9 hours ago

    It’s fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

    • partofthevoice@lemmy.zip · 9 points · 4 hours ago

      Funnier yet will be if they continue to just train the model on that particular kind of test, invalidating its results in the process.

      • brianpeiris@lemmy.ca (OP) · 3 points · 1 hour ago

        It’s true that frontier models got better at the previous challenges, but it’s worth noting that they’re still not quite at human level even with those simpler tasks.

        Also, each generation of the challenge tries to close loopholes that newer models would exploit, like brute-forcing the training with tons of synthesized tasks and solutions, over-fitting to these particular kinds of tasks, and issues with the similarities between the tasks in the challenge.

        A common strategy in past challenges was to generate thousands of similar tasks, and you can imagine the big AI companies were able to do that at massive scale for their frontier models.

        • brianpeiris@lemmy.ca (OP) · 3 points · 1 hour ago

          The goal of the ARC organization is to continually measure progress towards AGI, not come up with some predictive threshold for when AGI is achieved.

          As long as they can continue to measure a gap between “easy for humans” and “hard for AI”, they will continue releasing new iterations of this ARC-AGI challenge series. Currently they do that about once a year.

          More detail about the mission here: https://arcprize.org/arc-agi

  • tatterdemalion@programming.dev · 8 points · 5 hours ago

    LLMs might suck at this game but I’m pretty sure Deepmind’s deep reinforcement learning AI could solve these easily.

    EDIT: I know you guys hate AI around here, but you need to at least be aware of what the technology is capable of.

    From 11 years ago:

    https://youtu.be/V1eYniJ0Rnk

    • yogurt@lemmy.world · 1 point · 21 minutes ago

      No, because it’s designed around all the things AI can’t do. Breakout is a quick, repetitive loop of pass/fail linear progression. AI melts down when it has to backtrack, keep track of multiple pieces of context, and figure out how to do something but not do it yet.

    • bss03@infosec.pub · 1 point · 1 hour ago

      The founder of ARC worked at Google until 2024 and has written 2.5+ books on deep learning. So I expect some of these benchmarks are based on limitations seen in DeepMind’s work.

      That said, it would be interesting to see how well DeepMind does at these tasks. My understanding is that the private tasks would still be dynamic enough to require “on the job training”, so an AlphaGo / AlphaZero / AlphaFold approach is unlikely to do well on ARC-AGI-3.

      Still, I think commentary around models (including, but not limited to something from Deepmind) attempting these tasks would be much more interesting than most of the discourse around generative AI, whether text, image, video, or code generation.

      • Iconoclast@feddit.uk · 4 points · 2 hours ago

        If only…

        How Alpha Fold Solved the Protein Folding Problem and Changed Science Forever

        Edit:

        In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.

        Source

      • tatterdemalion@programming.dev · 6 points · 7 hours ago

        Wdym? It’s existed for at least a decade. Plenty of papers about it. It mastered Atari and Mario. It became the best Go player.

        • bss03@infosec.pub · 1 point · 1 hour ago

          Yeah, for a fixed ruleset that can be provided up front, the AlphaZero approach seems to work great.

          These tasks strike me as a bit different. I’m sure the ruleset is fixed somewhere, but it’s not disclosed to the participants. In the task I walked myself through, there was a new wrinkle in each part – a new interactable, a (more) hidden goal, or an information limit. And, of course, part of the task is “discovering” all that from the bitmap frame(s) provided.

          I’m unconvinced of the hype around “AI”, but this does seem like a legitimate research target that might stymie the Alpha{Go,Zero,Fold} series at least a bit.