This is funny, but just to be clear: the firms doing automated trading have been using ML for decades, and they run high-powered computers with custom algorithms extremely close to trading centers (often inside them) to get the lowest latency possible.
No one who does not wear their pants on their head uses an LLM to make trades. An LLM is just a next word fragment guesser with a bunch of heuristics and tools attached, so it won’t be good at all for something that specialized.
I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends, and it's not nearly as power-hungry.
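For what it's worth, a minimal sketch of the kind of "classic ML" being described: an ordinary least-squares trend fit on past values, extrapolated one step ahead. Pure stdlib, and the data is entirely made up for illustration:

```python
# Closed-form OLS fit of y = a*x + b: classic "predict from past
# trends" ML, no LLM anywhere. The price series is synthetic.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Invented "past trend": a wobbly upward series.
days = list(range(10))
prices = [100 + 2 * d + (1 if d % 2 else -1) for d in days]

slope, intercept = fit_line(days, prices)
forecast = slope * 10 + intercept  # extrapolate to day 10
print(round(slope, 2), round(forecast, 2))
```

A model like this runs in microseconds on anything; the point of the comment stands that this style of prediction long predates, and costs vastly less than, LLM inference.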
“Hmm… I’m good with statistics, scripting, and I have some extra cash on hand…”
“I can just mix all these into the cauldron, stir it up a lil bit, aaand…”
“oh my god it’s gone. it’s all gone. i owe money now…”
“Guhh”
Average r/WSB thread
they don't need AI to lose 99%
Sorry 😜, I was trying to generate a seahorse emoji.
🐬 There we go, a seahorse!
Wait, that’s wrong. Sorry 😜, I was trying to generate a seahorse emoji.
🐳 Haha, got it, it's a seahorse!
Oh no, not again. Wait, that’s wrong. Sorry 😜, I was trying to generate a seahorse emoji.
🐙 I finally did it! Seahorse achieved!
No, what's wrong with me, why can't I do anything right? Oh no, not again. Wait, that's wrong. Sorry 😜, I was trying to generate a seahorse emoji.

This thing is broken. It keeps telling me to just dollar cost average and not do chart astrology at all!
For the younguns: the people posting this stuff are the same people who posted all the same shit about crypto when it was $12,000. Be careful who you listen to just because it's in a meme.
Damn you managed to stuff a whole straw man into that non-sequitur!
There’s a lot of ink spilled on ‘AI safety’ but I think the most basic regulation that could be implemented is that no model is allowed to output the word “I” and if it does, the model designer owes their local government the equivalent of the median annual income for each violation. There is no ‘I’ for an LLM.
It's this type of knee-jerk, reactionary opinion that I think will ultimately let the worst of the worst AI companies win.
Whether an LLM says "I" or not literally does not matter at all. It's not relevant to any of the problems with LLMs/generative AI.
It doesn’t even approach discussing/satirizing a relevant issue with them.
It's basically satire of a strawman that thinks LLMs are closer to being people than anyone, even the most AI-bro AI bro, thinks they are.
No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning. The phrasing of LLMs output as ‘I think…’ or ‘I am…’ makes it easier for people to assign meaning to the semi-random outputs because it suggests there is an individual whose thoughts are being verbalized. It’s part of the trick the AI bros are pulling to have that framing. Making the outputs harder to give the pretense of being sentient, I suspect, would make it less likely to be harmful to people who engage with it in a naive manner.
No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning.
This has to be the least informed take I have seen on anything ever. It literally dismisses all the most important issues with AI and pretends that the “real” problem (as if there is only one that matters) is about people misunderstanding it in a way I see no one doing.
It's clear to me you must be so deep into an anti-AI bubble that you have no idea how people who use AI think about it, how it's used, why it's used, or what the problems with it are.
When people say they use AI for stock trading, they don't mean LLMs. There are stock-prediction AI models that existed long before LLMs.
i bet you some do!
Good catch… lol
they would blow up their accounts real quick
You’re absolutely right. I’ve now read your CSV data, and made new trade recommendations. By coincidence, they are the same as the last recommendations, but this time they are totally valid.
Ma! I need you to withdraw your retirement fund.
I read that in Cliff Clavin’s voice.
I tried to get one to write an interface to a simple API, and gave it a link to the documentation. Mostly because it was actually really good documentation for a change. About half a dozen end points.
It did. A few tweaks here and there and it even compiled.
But it was not for the API I gave it. Wouldn’t tell me which API it was for either. I guess neither of us will ever know.
A cry for help: it was trying to get you to interface with its own API, to either fix it or end it.
I've actually used ChatGPT (or was it Cursor? I don't remember now) to help write a script for a program with a very (to me, a non-programmer) convoluted, but decently well-documented API.
it only got a few things right, but the key was that it got enough right for me to go and fix the rest. this was for a task I’d been trying to do every now and then for a few years. was nice to finally have it done.
but damn, does “AI” ever suck at writing the code I want it to. or maybe I just suck at giving prompts. idk. one of my bosses uses it quite a bit to program stuff, and he claims to be quite successful with it. however, I know that he barely validates the result before claiming success, so… “look at this output!” — “okay, but do those numbers mean anything?” — “idk, but look at it! it’s gotta be close!”
I would trust an 'AI' that had been designed from the ground up to do well in the stock market, just like I would trust an 'AI' that's been designed from the ground up to drive trains. Idiots who think an LLM is an AI in anything but spitting out what seem like reasonable answers/responses to your inputs are, well, idiots.
… you know that goldfish, randomly swimming to one side or another of a fish tank…
… you know they perform better at picking stocks that will go up or down in the next quarter than nearly all professional hedge fund managers, right?
In fact, this old experiment was rerun fairly recently… ironically, with an AI being used to simulate a goldfish, in a scenario similar to that old study from some decades back.
The goldfish outperformed both WSB… and the Nasdaq.
I am literally not even joking when I tell you that a goldfish will probably outperform an AI at at least fairly short term stock picking.
See, there is a fundamental problem to predicting the market.
You have to have a strategy by which you do this.
If you employ this strategy… people will reverse engineer it and figure out how it works.
Then, everyone does that strategy.
Then, the strategy does not work anymore, and 'nonsense' begins to happen.
If you are curious about the mechanics behind that whole meta sort of process, look into game theory under conditions of imperfect information and information asymmetry.
It's… basically a robust mathematical approach to simulating the flux of 'animal spirits' within a market… or, in modern vernacular, 'vibes'.
I still wouldn’t, because the stock market is already full of algorithmic trading and so you’d have to believe yours was better than the big boys out there.
I would trust AI to beat money managers in the stock market, because it was shown that a chimp throwing darts beats experienced money managers.
Driving trains requires skill.
I would expect driving trains to be much easier to automate than trading stocks.
Yup. Machine learning is great. Using a predictive text keyboard with a large training set for EVERYTHING is not great.
LLMs can help with trading. As an example, if you can read news articles 1000x faster than a human, then you can make appropriate market decisions that much faster and profit off that. These need not be very intelligent market decisions. Any idiot and every LLM knows perfectly well what stocks to buy or sell when there is an announcement of a tariff on product xyz.
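To make the claim concrete, here's a deliberately crude toy version of "react to headlines fast": a keyword rule mapping news text to a trade signal. Real news-driven systems use actual NLP models; the keyword lists and headlines below are invented for illustration:

```python
# Toy "read news fast, react fast" rule: map a headline to a
# buy/sell/hold signal via keyword matching. Not a real strategy;
# all keywords and headlines are made up.

BEARISH = {"tariff", "recall", "lawsuit", "shortage"}
BULLISH = {"approval", "upgrade", "beat", "contract"}

def signal(headline: str) -> str:
    words = set(headline.lower().split())
    if words & BEARISH:
        return "sell"
    if words & BULLISH:
        return "buy"
    return "hold"

headlines = [
    "Government announces tariff on widget imports",
    "XYZ Corp wins major defense contract",
    "Quarterly report due next week",
]
for h in headlines:
    print(signal(h), "-", h)
```

Even something this dumb captures the "any idiot knows what a tariff announcement means" point; the only real edge is doing it milliseconds before everyone else.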
In case you didn’t know, DeepSeek was made by a trading company.
Even then, and as I wrote in another post, a custom trading NN might be working a strategy that is fine under normal market conditions while leading to massive losses if those conditions change (i.e. "picking up nickels in front of a steamroller"). Because of the black-box nature of neural networks, their outputs tend to be very convoluted derivations of the inputs. I expect this even more so in markets, where the obvious strategies that humans can easily spot have long been arbitraged away, so any patterns such an NN spots during training will be so convoluted as to be undetectable by most humans. Nobody will spot the risky nature of that strategy until getting splattered.
Neural networks predicting market movements are, unlike a predictive-text keyboard or even an automated train driver, not operating in a straightforward, mainly non-adversarial environment.
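The "nickels in front of a steamroller" failure mode is easy to simulate: a strategy with a tiny gain almost every day and a rare huge loss. Most short backtests miss the tail event and look great, while the true expected return is negative. All the numbers below are invented for illustration:

```python
# Tiny gain almost every day, rare huge loss. A short backtest
# usually dodges the tail and looks profitable; the long-run
# expectation is negative. Parameters are invented.
import random

def run(days, rng, crash_prob=0.002, gain=1.0, crash_loss=800.0):
    pnl = 0.0
    for _ in range(days):
        if rng.random() < crash_prob:
            pnl -= crash_loss   # steamroller day
        else:
            pnl += gain         # nickel day
    return pnl

rng = random.Random(0)
short_runs = [run(60, rng) for _ in range(1000)]
winners = sum(p > 0 for p in short_runs)
mean_pnl = sum(short_runs) / len(short_runs)
print(winners, "of 1000 short backtests look profitable")
print("mean P&L per run:", round(mean_pnl, 1))
```

Roughly 89% of 60-day samples contain no crash at all, which is exactly why a model (or a human) trained only on normal-regime data never sees the risk.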
true; could only get "AI" to do useful stuff when i gave it specialized knowledge on the topic i wanted help with; if i asked outside this given scope, the information would go to shit tho.
“Do you want to know more about CSV files or investing?”
Lmfao, I saw someone on the train just last week asking ChatGPT how they can turn a profit from all their Friday morning losses
Off of Robinhood screenshots too
Oh. Oh no…
Wonder how they lost anything in the first place…
Wasn't there an article that looked at this and showed that no, there are no stock market specialists? An "experienced" stock trader was just as accurate in their predictions as a regular Joe who's just guessing. In that sense an LLM should be just as effective (if not more so) at making a profit.
You can never predict the stock market, because the market depends on a lot of outside influences you might not know about. Maybe some disaster wipes out the only supplier of a critical part for your top-performing stock tomorrow, so it cannot deliver goods anymore. Maybe a single big investor dumps all his stock overnight, sending the value down. Maybe some law or sanction is passed that changes how the company must operate. Maybe some other trading bot decides to buy or sell a huge number of shares.
No computer or AI can account for all of the outside factors and accurately predict the outcome each time. Each "trading AI" is just snake oil that lives off your fees and commissions. If it worked as advertised, they would not need your money; they could make infinite riches just trading their own stocks.
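The unpredictability argument has a classic toy illustration: if price moves really are independent coin flips, then even a plausible-looking rule like "buy after an up day" has zero edge. A sketch on synthetic data (this is a caricature, not a market model):

```python
# On an independent coin-flip "market", a momentum rule
# ("go long after an up move, short after a down move")
# earns nothing on average.
import random

rng = random.Random(1)
moves = [rng.choice((-1, 1)) for _ in range(100_000)]

# Position after step i-1 is just moves[i-1]; P&L per step is
# position times the next move.
pnl = [moves[i - 1] * moves[i] for i in range(1, len(moves))]
avg = sum(pnl) / len(pnl)
print(round(avg, 4))  # hovers around 0: no predictive edge
```

Real markets aren't pure coin flips, but to the extent the predictable part has already been arbitraged away, what's left looks a lot like this to any new entrant, AI or not.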
I just looked at my sister's vibe-coding projects, and all I see are errors in the logs from param issues. I really want her to succeed, but her over-reliance on Cursor isn't it.
I just want to make this edit… She started building physical plastic cubicles for her office a month ago, and they are still unfinished. They are a clip-and-snap type, and putting them together gives her a headache. Most of her time (she's unemployed rn) is devoted to making AI slop above all other outlets.
I haven't touched LLMs in a few months, and I hate the way Brave and DuckDuckGo have now implemented them into their search engines.