• Credibly_Human@lemmy.world · 18 points · 6 hours ago

    This is funny, but just to be clear: the firms doing automated trading have been using ML for decades, and they run high-powered computers with custom algorithms extremely close to trading centers (often inside them) to get the lowest possible latency.

    No one who does not wear their pants on their head uses an LLM to make trades. An LLM is just a next-word-fragment guesser with a bunch of heuristics and tools bolted on, so it won’t be good at all at something that specialized.

    • thespcicifcocean@lemmy.world · 4 points · 4 hours ago

      I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends, and it’s not nearly as power-hungry.

      • Bazell@lemmy.zip · 2 points · 1 hour ago (edited)

        Yeah, it’s funny how people forget that even tiny models, on the order of 20 neurons driving primitive NPCs in a 2D game, are called AI too, and can literally run on a button phone (not a Nokia 3310, but something only slightly more powerful). These small specialized models have existed for decades. The most interesting part is that relatively small models (a few thousand neurons) can work very well at predicting price trends, classifying objects by their parameters, estimating the chance of a specific disease from symptoms alone, and so on. And they generally do better than an LLM at the same task.
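        As a toy illustration of how small a working model can be (a hypothetical sketch, not from the thread: a single trained neuron, even smaller than the 20-neuron NPC example, classifying 2-D points by their parameters in pure Python):

```python
import random

random.seed(0)  # deterministic toy data

def make_data(n=400):
    # Random points in [0, 2] x [0, 2], labeled 1 if x + y > 2, else 0.
    # A linearly separable toy task standing in for "classify by parameters".
    pts = [(random.uniform(0.0, 2.0), random.uniform(0.0, 2.0)) for _ in range(n)]
    return [((x, y), 1 if x + y > 2.0 else 0) for x, y in pts]

def train(data, epochs=100, lr=0.1):
    # Classic perceptron learning rule: nudge weights toward misclassified points.
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 if w0 * x + w1 * y + b > 0.0 else 0
            err = label - pred  # -1, 0, or +1
            w0 += lr * err * x
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

def predict(w0, w1, b, x, y):
    return 1 if w0 * x + w1 * y + b > 0.0 else 0

data = make_data()
w0, w1, b = train(data)
accuracy = sum(predict(w0, w1, b, x, y) == lab for (x, y), lab in data) / len(data)
```

        One neuron, three learned parameters, no GPU: on separable data like this it reaches near-perfect training accuracy, which is the point being made about small specialized models.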

      • JackbyDev@programming.dev · 3 points · 1 hour ago

        What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, which has always been lumped in under the term AI, are now getting hate because they’re “AI”.

        • thespcicifcocean@lemmy.world · 1 point · 14 minutes ago

          What’s worse is that management conflates the two all the time. Whenever I present the outputs of my own ML algorithm, they assume it’s LLM output, and then they ask me to just ask ChatGPT to do any damn thing I would usually do myself or feed into my ML model for predictions.