• Treczoks@lemmy.world · 1 point · 19 minutes ago

    He combines LLMs with numbers and wonders why this does not work? Under which rock does he live?

  • mech@feddit.org · 39 points · 7 hours ago

    The core functionality is simple:

    - Automatically, upon each payment, add the expense to my app
    - Update an Apple Watch complication with the % of my monthly budget spent
    - Categorize the purchase for later analysis

    Can someone enlighten me? I don’t understand why you need AI for this in the first place.
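
    For what it's worth, the categorization step at least looks doable with plain keyword matching. A rough sketch (the category names and keywords below are made up for illustration, nothing from the article):

    ```swift
    import Foundation

    // Toy keyword-based categorizer -- no model involved.
    let categoryKeywords: [String: [String]] = [
        "Groceries": ["aldi", "lidl", "rewe", "supermarket"],
        "Transport": ["uber", "shell", "parking", "bahn"],
        "Eating out": ["cafe", "pizza", "restaurant"]
    ]

    func categorize(merchant: String) -> String {
        let name = merchant.lowercased()
        for (category, keywords) in categoryKeywords {
            if keywords.contains(where: { name.contains($0) }) {
                return category
            }
        }
        return "Uncategorized"
    }

    // categorize(merchant: "REWE Markt Berlin") -> "Groceries"
    ```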

        • panda_abyss@lemmy.ca · 9 points · 5 hours ago

          Yea, but those are all using heaps of proprietary heuristics.

          The beauty of LLMs, and one of their most useful tasks, is taking unstructured natural-language content and converting it into structured, machine-readable content.

          The core transformer architecture was originally designed for translation, and this is basically just a subset of translation.

          This is basically an optimal use case for LLMs.
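
          Something like this sketch, where the model's only job is to translate a payment notification into a fixed JSON schema (the prompt, the runModel placeholder, and the Expense fields are all my own assumptions, not the article's actual code):

          ```swift
          import Foundation

          // Placeholder for whatever on-device model call is actually used --
          // returns a canned response here so the sketch is self-contained.
          func runModel(prompt: String) -> String {
              return #"{"merchant": "REWE Markt", "amount": 23.47, "category": "Groceries"}"#
          }

          struct Expense: Codable {
              let merchant: String
              let amount: Double
              let category: String
          }

          let notification = "You paid 23.47 EUR to REWE MARKT GMBH with Apple Pay."
          let prompt = """
          Extract the expense from this payment notification as JSON with keys merchant, amount, category:
          \(notification)
          """

          // The model does the "translation" from free text to a fixed schema;
          // ordinary JSON decoding takes it from there.
          let reply = runModel(prompt: prompt)
          if let data = reply.data(using: .utf8),
             let expense = try? JSONDecoder().decode(Expense.self, from: data) {
              print(expense.category, expense.amount)
          }
          ```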

          • MolochHorridus@lemmy.ml · 8 points · 5 hours ago

            Quite obviously not the optimal use case. “The tensor outputs on the 16 show numerical values an order of magnitude wrong.”

            • JPAKx4@piefed.blahaj.zone · 2 points · 1 hour ago

              That’s the hardware issue he was talking about; it has no bearing on how effective the LLM itself is for this use. It sounded like it was mostly a project he was doing for fun rather than out of necessity.

    • jj4211@lemmy.world · 9 points · 6 hours ago

      Certainly his use of an LLM was stupidly egregious, but he found that, even by those standards, the math results underpinning the LLM were way off.

  • Coolcoder360@lemmy.world · 5 points · 5 hours ago

    “I went with quantized Gemma”

    Well, was it quantized in a way that the iPhone 16 supports?

    Often it’s the quantization where things break down: the hardware needs to support the quantization scheme, and you can’t run FP16 on int8-only hardware. And sometimes the act of quantizing itself can cause problems too (rough numeric sketch below).

    And yeah, LLMs are likely going to be very hit or miss anyway.
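
    As a toy illustration of how a quantization/scale mismatch turns into “numerical values an order of magnitude wrong” (made-up numbers, not a diagnosis of the actual Gemma-on-iPhone-16 bug):

    ```swift
    // Symmetric int8 quantization of a single weight, toy values only.
    let weight: Float = 0.8431
    let scale: Float = 2.0 / 127.0           // assumes weights live roughly in [-2, 2]

    let q = Int8((weight / scale).rounded()) // quantize: 54
    let dequantized = Float(q) * scale       // ~0.850 -- small rounding error, fine

    // If the runtime or hardware applies the wrong scale (or skips it),
    // the same stored int8 value comes back off by an order of magnitude:
    let wrongScale: Float = 2.0 / 12.7
    let broken = Float(q) * wrongScale       // ~8.5 instead of ~0.85

    print(dequantized, broken)
    ```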

    • First_Thunder@lemmy.zip · 5 points · edited · 7 hours ago

      Given that he apparently found a bunch of forum posts of people complaining about erratic behaviour, it may be more widespread.