Larry Ellison can fuck off while we’re at it, the meddling technofascist bastard. Hope he loses everything!
Can we do away with trillion-dollar companies already, please? They’re not doing anything good for anyone; it always ends with CEOs and shareholders enriching themselves on the backs of others.
No company should be worth more than a billion dollars.
No single person should have a net worth of over 10 million
Where the fuck do they get this money from? $300 billion is fucking nuts. And $1.4 trillion in costs is literally bigger than my country’s entire GDP (Australia).
Probably borrowing from large banks, selling shares or bonds, or putting something up as COLLATERAL.
And ripping off their customers.
The 1% is willing to pay whatever imaginary fiat currency they can to eliminate the need for the working class. Then they can finally get rid of us.
While simultaneously acting like a bunch of whining little crybabies about declining fertility rates.
Which is it: AI (and other automation) will replace jobs, or there aren’t enough people to work all the jobs?
Hedging their bets. They’re only as capable of being sure of the future as the next person, which isn’t very capable at all. But they have the ill-gotten means to back both sides, so they continue to be on top.
Oh I know what they’re doing. I just don’t give a shit and will point out how ridiculous their sense of entitlement is.
No one is owed another human being.
It’s not that. Nobody really expects to achieve a firm, satisfactory result in something that’s never been done before, enough to justify the risk.
It’s a bubble. The fact that they found the money to pour this much into inflating it just means the outcome of the bubble bursting is that good for them.
I’m interested in what exactly will happen when it bursts. A dictatorship, a blitzkrieg against half the world, or something else.
Well, it’s called One Rich Asshole Called Larry Ellison, after all… 🤷♂️
Trying to buy WB so he can turn CNN into a Fox clone. Of course, with the help of the Saudis.
Any combination of:
- Issue new shares and sell them
- Issue corporate bonds and sell them
- Borrow money from banks
If they kill Oracle, will that kill the last Unix, after IBM stole the parent OS of Solaris and put it into Novell’s oubliette to reduce competition?
OpenAI’s mounting costs — set to hit $1.4 trillion
Sorry, but WTF!? $1.4 Trillion in costs? How are they going to make all of that back with just AI?
I think there’s only one way they can make this back: if AI gets so good they can really replace most employees.
I don’t think it will happen, but either way it’s going to be an economic disaster. Either the most valuable companies in the world, offering services that the next couple of hundred companies in the world depend on, are suddenly bankrupt. Or suddenly everybody is unemployed.
How are they going to make all of that back with just AI?
Government bailouts is how.
Socialism for the rich, dog-eat-dog capitalism for everyone else.
It’s a Ponzi scheme.
1,400,000,000,000
I used to be amazed at how much a billion was, but this many 0s makes my head explode.
These must be bubble inflated costs to match the bubble inflated revenue.
If LLMs fail and they invested: bailout
If LLMs succeed and they invested: rich
If LLMs fail and they passed: everyone else bailed out
If LLMs succeed and they passed: out of business
Therefore, the logical choice for a business is to invest in LLMs. The only mechanism to not do the stupid thing that everyone else is doing is gone.
Prediction: the bubble is real but financiers will find ways to kick the bull down the road until they can force enough adoption & ad insertion to not lose out. The other option is that we pay it, of course. Takes on which is worse?
Is that why Palantir is so desperately trying to sell its “surveillance” tech to multiple countries, and why all of them suddenly want facial recognition and biometric data?
They’ll do both just like they did in 2007/2008. These AI companies and their investors will get bailed out while the rest of us lose our jobs and have to move back in with our parents in the van they already live in.
How is a haunted typewriter supposed to replace all those employees?
I’ve tried explaining AI to people before and could only get so far before they fall back on “but it’s magic dude,” but I love the idea of explaining it as a haunted typewriter.
I use the “very articulate parrot” analogy.
They’re systems trained to give plausible answers, not correct ones. Of course correct answers are usually plausible, but so are wrong answers, and on sufficiently complex topics you need real expertise to tell when they’re wrong.
I’ve been programming a lot with AI lately, and I’d say the error rate for moderately complex code is about 50%. They’re great at simple boilerplate code, and configuration and stuff that almost every project uses, but if you’re trying to do something actually new, they’re nearly useless. You can lose a lot of time going down a wrong path, if you’re not careful.
Never ever trust them. Always verify.
Some of the more advanced LLMs are getting pretty clever. They’re on the level of a temp who talks too much, misses nuance, and takes too much initiative. Also, any time you need them to perform too complex a task, they start forgetting details and then entire things you already told them.
Sounds like they are a liability when you put it that way.
I use something similar. “Child with enormous vocabulary.”
It can recognize correlations, and it understands the words themselves, but it doesn’t really understand how those connections or words work.
I call dibs on the ghost of Harlan Ellison.
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
GLaDOS: “just offer them cake and a fire pit and calm down”
i didn’t ask how it suplexxed a train, i just stayed out of its way
Ok but if it gets so good it replaces all the employees, how do people have enough money to pay for their services?
Who cares about the money of people when they have all the money?
That’s what they got excited about, no doubt. Profit would go through the roof if they could take people out of the loop. Never mind the economy.
Good.
Do y’all think investors will wake up and realize that techbros are a bunch of fraudster scammers? Oracle deserves bankruptcy for being stupid with money. All my homies hate the AI-Bubble.
Bro, even the way journalists talk about AI as a bet couldn’t make it more obvious that it’s all a scam. If this AI bubble is so profitable, where are the actual goddamn profits?
They don’t want to wake up until they have something else more appealing to put their money on. They NEED something to invest in. They don’t care what it is, or even if it works, but it has to be plausible enough to make money, more money.
Until there is another scam to put their money in, they are stuck in the bubble, like us.
I imagine there are quite a few who believe it’s a fraud but want to profit anyway.
That’s how every successful fraud works. If it’s not attractive to people who see it’s a fraud, it won’t have their support. If it’s hard to discover as a fraud, it’s also hard to maintain and always has the risk of discovery.
So the best frauds are those where everyone knows it’s a fraud, and plenty think it’s a fraud they can profit from.
Personally, I am eyeballs deep in this industry and even I’m now hoping to see it all burn to the ground. I’ve already concluded that I’ll never make it to retirement in my field, probably because of automation. Fuck ‘em all.
The industry is not that bad, but it’s just one of them.
People need art. And art doesn’t survive in environments where there has to be a winner and the winner takes all.
Art is the social analogue of recessive genes. It preserves more than is needed “right now, in this particular situation”. Without art there’s degeneration.
I am not sure what you are implying. There will always be art. Art does not need to be a commercial success to be expressive.
Same for me… It’s depressing. And I have no faith the government will do anything besides make it worse. If we’re lucky we’ll get The Expanse’s version of Basic.
Yeah, same here honestly. I’m sick of the ai cringe fest and the egotistical tech bros being so annoying and full of themselves and being arrogant. The tech bros are insufferable
Which part of the industry?
Optometry
I would describe it as the application layer of all this AI shit. We are doing very well right now, but I’m just waiting for the turn.
Same for me. I was directly responsible for automation of AI infrastructure builds. It was miserable and I felt terrible. I transferred out of that org but now I’m writing software using the tools created by our AI infra. I made a lot this year due to equity increase and maybe next year but I want to be out.
Oh good. AI is collapsing and it’s taking Oracle with it.
Me, watching my company pivot to their cloud infrastructure: haha, I’m in danger
I understand why people keep using Oracle, but I have never understood why anyone starts.
Because they have a series of ERP systems and services that some idiot CTO at the company looks at and goes: Yes, give me one of those.
Then once you’re on that, you get pulled into more and more Oracle ecosystem shit and you think some day you’ll have control and be able to get out. But you never do.
Oracle is like the loanshark of the tech industry.
Once you’re in, you’re in for life. Good fucking luck getting out.
Because their data centers are run by clowns, and going to the circus is an improvement
It’s the only way to get a pay rise. You have to work with the idiots or they don’t give you any money.
The problem is the people in charge are not the people that should be in charge. I suppose it’s my fault for not getting an MBA.
In the modern day, I agree. Dunno about 30 years ago.
It’s the people who chose Solaris before the purchase I feel sorry for.
I’ve been telling my employer that they should be moving away from the Microsoft cloud for a whole bunch of reasons. Someone said they’re aware of it, so with the speed stuff here is moving, we might actually move to something else in 10 years.
But personally I wouldn’t lose any sleep if the whole bubble collapsed next year.
Good. Larry Ellison does not appear to be a force for good in the world. Steve Jobs had negative things to say about him and his obsession with increasing his billions.
Steve Jobs knew something about forces not for good.
Can’t wait. Payback for Sun, for Java…
Fuck yeah. More of this. A lot more.
Couldn’t have happened to a worse company! Hope it hurts even worse later on and fractures the Execucultist’s will to shill AI further. 😈
He already bought Paramount, and is trying to take over WB to get access to CNN.
300 billion on OpenAI? Why? LLMs in general are trash, but ChatGPT isn’t even the best LLM
The only good LLM is one that is being used by a highly specialized field to search useful information and not in consumer hands in the form of a plagiarism engine otherwise known as “AI”. Techbros took something that once had the potential to be useful and made it a whole shitty affair. Thanks, I hate it.
idk man LLMs help me code bro
Oof, sad, but you do you…
ChatGPT is the name-recognition brand. Like calling all electric cars a Tesla.
GPT goes beyond chat; Copilot code generation is also based on it. They also have generative visual stuff, like Sora.
Then there is brand recognition I guess, tech bros and finance bros seem to love OpenAI.
Brand recognition cannot be overstated.
If there was a better-than-YouTube alternative right now, YouTube would still dominate.
If there was a phone OS superior to Android and iOS, they would both still dominate.
If there was a search engine that worked far better than Google, Google would still dominate.
The average person won’t look into LLM reasoning benchmarks. They’ll just use the one they know, ChatGPT.
YouTube and Android have strong network effects. I don’t think OpenAI has anything close to comparable. They’ve tried, I’m sure; I recall an app platform they added to ChatGPT, but I haven’t heard of it in ages, so I assume it hasn’t been a dominant factor.
I also don’t get the impression there’s enough training material available exclusively to OpenAI that it’d be such a factor. But Windows and Google can shove it in your face because you’re already on their platform, and they’re doing exactly that. You have to go to the OpenAI website.
I don’t think OpenAI is as well-known as Google.
ChatGPT might be, which is the point.
You are comparing very well established brands to a company in a sector that is far less established. Yes, OpenAI is the most well known, but not to the degree of $300B.
OpenAI is pretty well established.
I know Lemmy users avoid it, but a lot of people use LLMs, and when most people think LLMs, they think ChatGPT. I doubt the average person could name many or even any others.
That means whenever these people want to use an LLM, they automatically go to OpenAI.
As for whether it’s worth to the degree of $300bn, who knows. Big tech has had crazy valuations for a long time.
I doubt the average person could name many or even any others.
I mean, it’s easy enough to name the other three main ones: Gemini, Copilot, and MechaHitler.
Copilot is just an implementation of GPT. Claude’s the other main one, at least as far as performance goes.
I totally agree with you. In fact, I know people who use ChatGPT exclusively and don’t touch the web anymore. Who knows who will have the best models, but they are definitely capturing a lot of people early.
What they know is Google though. Most normal people doing a search now just take the Gemini snippet at the top. They don’t know or care what AI even is really. I don’t know how OpenAI can possibly compete with web search defaults.
OpenAI isn’t very good in any of those categories and they still have no business model. Subscriptions would have to be ridiculously high for them to turn a profit. Users would just leave. But to be fair, that goes for all AI companies at the moment. None of their models can do what they promise and they’re all bleeding money.
Yeah, I figured brand recognition was part of it. Everyone’s heard of ChatGPT (hell, last time I checked, ChatGPT was the number 1 app on the planet), but Claude isn’t nearly as popular, even though (in my opinion) it’s a lot better with code. It’s just a lot more thorough than the slop ChatGPT spits out.
ChatGPT isn’t even the best LLM
Normie here. Which one is?
The one that I developed and costs $300 a week. Want it to gaslight you? Done. Make up shit? Done. Shout at you? Done. Randomly stop working while still taking your money? Done.
Not sure, but I hear the Claude Super Duper Extreme Fucking Pro ($200/month) is like the Ferrari of LLM assisted coding
the Ferrari of LLM assisted coding
So…4th in the Constructors and 5th+6th in the Driver’s Championships?
unfortunately your code placed last in the driver’s so AI would be a HUGE step up for you
As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot just copying and pasting commands from Claude into their equipment and brought down a network of over 1000 people the other day.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had one. It told them to bypass the software UI, that had the functionality they needed, and start adding routes directly to the system kernel.
Every LLM is the same bullshit guessing machine.
Functions with arguments that don’t do anything… hey Claude why did you do that? Good catch…!
AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language
Generative AI has an average error rate of 9-13%. Nobody should wholesale trust what it spits out.
It has some excellent use cases. Vibe code/sysadmin/netadmin’ing are not one of those things.
Where does this 9-13% number come from?
I don’t trust it wholesale. No one who knows what they’re talking about trusts it wholesale. Hallucination rates vary depending on who you ask. And you’re wrong about vibe coding, it works great if you’re working on some random side project and not working with a team that has to push to production
Native language == assembly?
no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work, and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top answers on stack overflow from a decade ago, stripping out any and all context that makes it make sense, and presents it as a solution that seems plausible, but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
whenever someone says AI doesn’t work they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right
Ferrari
So expensive, looks great, takes significant capital to maintain, and anyone who has one uses something else when they actually need to do something useful.
it literally doesn’t cost as much as a ferrari
What’s with tech people always likening (marketing) things to high-end sports cars? The state of AI is more like arguing over which donkey is best, lol.
The sheer amount of AI slop shorts on YouTube must be generating entire dollars in revenue by now. Who isn’t entertained and eagerly awaiting the next five million videos of the same scenario over and over again?
I wonder … will it be another case of “Too Big To Fail” … or will it be … “Let The Market Decide”?
I’m guessing the answer depends on how many medals the CEO of Oracle can bestow upon the Orange.
Me … cynical … no … just been here for a while.
OpenAI CEO Sam Altman declared a “code red” last week as the upstart faces greater rivalry from Google, threatening its ability to monetize its AI products and meet its ambitious revenue targets.
Interesting that even Sam Altman is worried now!
AFAIK there’s also the problem that Chinese companies have their own tool chain and are releasing high-quality, truly open source solutions for AI. Seems to me a problem for the sky-high profits could be that it’s hard to create AI lock-in, the way it’s popular with so much software and cloud services. But with AI you can use whatever tool is the best value and switch to the competition whenever you want.
It’s nice that it will probably be impossible for 1 company to monopolize AI, like Microsoft did with operating systems for decades.
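To make the “switch whenever you want” point concrete: most hosted LLMs now expose an OpenAI-compatible chat API, so moving between providers can be little more than changing a base URL and a model name. A rough sketch, assuming the official openai Python client; the endpoint and model name below are placeholders, not real services:

```python
from openai import OpenAI

# Hypothetical provider swap: base_url and model are placeholders.
# Because so many vendors copy the same chat-completions interface,
# trying a cheaper or better model is often just a config change.
client = OpenAI(
    base_url="https://llm.example.com/v1",  # point at whichever provider you prefer
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="some-open-weight-model",
    messages=[{"role": "user", "content": "Hello, which model are you?"}],
)
print(response.choices[0].message.content)
```

Which is exactly why the lock-in has to be engineered somewhere else, as people point out below.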
AFAIK there’s also the problem that Chinese companies have their own tool chain and are releasing high-quality, truly open source solutions for AI.
One interesting thing about the Chinese “AI Tigers” is the lack of Tech Bro evangelism.
They see their models as tools. Not black box magic oracles, not human replacements. And they train/structure/productize them as such.
But with AI you can use whatever tool is the best value and switch to the competition whenever you want.
Big Tech is making this really hard, though.
In the business world, there’s a lot of paranoia about using Chinese LLM weights. Which is totally bogus, but also understandably hard to explain.
And OpenAI and such are working overtime to lock customers in. See: iOS being ChatGPT-only; no “pick your own API.” Or Disney using Sora when they should really be rolling their own finetune.
OpenAI and such are working overtime to lock customers in.
Of course they are, I just thought they hadn’t figured out how yet. 🤥
They can’t do it like Google and MS can, just jamming it into their DEVICES OR OS; OpenAI doesn’t have any services outside of being an LLM.
Please, government of the USA, do not bail them* out. At least not any more than what you’re already giving them.
* OpenAI
You clearly want the economy to collapse. The bailout will actually be profitable for the government. /s
Altman just needs to cobble together a gold Trump statue, deliver it to the White House, and any bailout needed is his.
Oracle doesn’t need a bailout, they are loaded, and can afford this loss. But of course an investment not being as profitable as they promised means the stock goes down. It’s not like the company is anywhere near being in trouble.
I don’t know of a single
truly open source solutions for AI
from China. China doesn’t seem very keen on open source as a whole, to be honest. That is, unless they can monetize open source projects from outside of China. Their companies love doing that.
Your ignorance is not a valid point.
https://techwireasia.com/2025/07/china-open-source-ai-models-global-rankings/
DeepSeek, being an LLM, is far from open source, and especially not “truly” open. The very article you linked basically says as much but wraps it in pretty words. Talk about ignorance.
Yes, I found out I was wrong, and I thought I had edited most of the wrong posts claiming DeepSeek is open source.
You are right it isn’t, despite articles claiming it is.
DeepSeek the software is open source.
It’s open weights but definitely not
truly open source
Feel free to blame the technology as a whole, but open source doesn’t make exceptions for AI models.
Same with Qwen, Ernie, MiniMax, and Kimi.
Unless the dataset, the weights, and every other aspect are open source, it’s not truly open source, as the OSI defines it.
The dataset is massive and impractical to share, a dataset may include bias and conditions for use, and the dataset is a completely separate thing from the code. You would always want to use a dataset that fits your needs, from known sources. It’s easy to collect data; programming a good AI algorithm, not so much.
Saying a model isn’t open source because the collected data isn’t included is like saying a music player isn’t open source because it doesn’t include any music.
EDIT!!!
TheGrandNagus is, however, right about the source code missing. Investigating further, the actual source code is not available. And the point about the OSI (Open Source Initiative) is valid, because the OSI originally coined the term and defined the meaning of Open Source, so their description is by definition the only correct one.
https://en.wikipedia.org/wiki/Open_source
Open source as a term emerged in the late 1990s by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term “free software” and sought to reframe the discourse to reflect a more commercially minded position.[14] In addition, the ambiguity of the term “free software” was seen as discouraging business adoption.[15][16] However, the ambiguity of the word “free” exists primarily in English as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested “open source” at a meeting[17] held at Palo Alto, California, in reaction to Netscape’s announcement in January 1998 of a source code release for Navigator.[18] Linus Torvalds gave his support the following day
No, you’re changing the definition of open source software, which has been around a lot longer than AI has.
Source code is what defines open source.
What DeepSeek has is open weights: they publish the results of their training only, not the source that produced it.
You’re changing the definition of open source software.
https://techwireasia.com/2025/07/china-open-source-ai-models-global-rankings/
The tide has turned. With the December 2024 launch of DeepSeek’s free-for-all V3 large language model (LLM) and the January 2025 release of DeepSeek’s R1 (the AI reasoning model that rivals the capabilities of OpenAI’s O1), the open-source movement started by Chinese firms has sent shockwaves through Silicon Valley and Wall Street.
And:
DeepSeek, adopting an open-source approach was an effective strategy for catching up, as it allowed them to use contributions from a broader community of developers.”
I’ve read similar descriptions in other articles, seems your claim is false.
EDIT PS:
Turns out on further investigation that DeepSeek is NOT open source; there is NO access to the source code for DeepSeek, only the weights, as others have rightfully claimed.
Can you show me the actual source code? The human-readable code, not the weights.
Still debatable, the weights are the code. That’s a bit like saying “X software is not open source because it has equations but it doesn’t include the proofs that they’re derived from”.
And LLMs are simply such a bad example for open source in general. They couldn’t have chosen a worse example to make their point. That’s what frustrates me.
What has been published by DeepSeek is the music, not the software. Just the music.
the weights are the code
In the same way as an Excel spreadsheet containing a crosstab of analytics results is “the code.”
It’s processed input for a visualization/playback mechanism, not source code.
Every major Chinese model is open source.
Where can I see the source code?
They are releasing lots of open weight models. If you want to run AI stuff on your own hardware, Chinese models are generally the best.
They also don’t care about copyright law/licensing, so going forward they will be training their models on more material than Western companies are legally able to.
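For anyone curious what “open weights” means in practice: you can download and run the published weights, but the training code and data that produced them aren’t part of the release. A minimal sketch, assuming the Hugging Face transformers library is installed and the model fits on your hardware; the model ID is just one example of an open-weight Chinese release:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example open-weight model ID, used purely for illustration; any
# open-weight release on the Hugging Face hub loads the same way.
model_id = "Qwen/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between open weights and open source."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

You get the finished artifact to run locally, which is genuinely useful, but you can’t reproduce or audit how it was built.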
Honestly, tulips were a better investment than Tesla or OpenAI. In fact, the continued success of the latter two tells you by itself there is something deeply, seriously wrong with the stock markets and the economy as a whole.