This morning, the news broke that Larian Studios, developer of Baldur's Gate 3 and the upcoming, just-announced Divinity, is apparently using generative AI behind the scenes. The backlash has been swift, and now Larian founder and game director Swen Vincke is responding to clarify his remarks.
There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.
Sure. My company has a database of all technical papers written by employees in the last 30-ish years. Nearly all of these contain proprietary information from other companies (we deal with tons of other companies and have access to their data), so we can’t build a public LLM nor use a public LLM. So we created an internal-only LLM that is only trained on our data.
It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have player-vs-bot matches).
That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣
The AI hater extremists seem to be in two camps:
Data center haters
AI-is-killing-jobs
The data center haters are the strangest, to me. Because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other, personal hardware).
Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.
Power source is only one impact. Water for cooling is even bigger. There are data centers pumping out huge amounts of heat in places like AZ, TX, CA where water is scarce and temps are high.
Is the water “consumed” when used for this purpose? I don’t know how data centers do it, but it doesn’t seem like they’d need to constantly draw water from the local system. They could even source it from elsewhere if necessary.
Some use up the water through evaporation, so they constantly draw more. Others keep the cooling water in a closed loop, but that uses a lot more electricity than evaporative cooling, and generating that electricity consumes water too.
Data centers typically use closed loop cooling systems but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.
A study recently came out (it was talked about extensively on the Science VS podcast) that said that a long conversation with an AI chat bot (e.g. ChatGPT) could use up to half a liter of water—in the worst case scenario.
This statistic has been used in the news quite a lot recently, but it’s a bad statistic: that water usage counts the water used by the power plant (for its own cooling). That’s typically water from ponds and similar reservoirs built right alongside the power plant (your classic “cooling pond”). So it’s not like the data centers are using 0.5L of fresh water that could be going to people’s homes.
For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
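To sanity-check the arithmetic, taking the study’s worst-case figures exactly as relayed above (the 12% split is the podcast’s number, not mine):

```python
# Worst-case water attributed to one long AI chat, per the study
# discussed above. Only ~12% is used by the data center itself;
# the rest is the power plant's own cooling water.
TOTAL_L = 0.5             # liters attributed to one long chat
DATA_CENTER_SHARE = 0.12  # fraction used by the data center

data_center_l = TOTAL_L * DATA_CENTER_SHARE
power_plant_l = TOTAL_L - data_center_l

print(f"data center: {data_center_l:.2f} L")  # 0.06 L
print(f"power plant: {power_plant_l:.2f} L")  # 0.44 L
```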
Another stat from the study that’s relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).
So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍
Closed loop systems are expensive. A lot of them are literally spraying water directly onto heat exchangers. And they often pull directly from city drinking water; some Texas towns have been asked to reduce their water consumption so the data center doesn’t run out.
Colloquially, most people today mean generative AI, like LLMs, when they say “AI”, for brevity.
Because there’s this default assumption that data centers can never be powered by renewable energy
That’s not the point at all. The point is, even before AI, our increasing energy needs were outpacing our ability/willingness to switch to green energy. Even then we were using more fossil fuels than at any point in the history of the world. Now AI is just adding a whole other layer of energy demand on top of that.
Sure, maybe, eventually, we will power everything with green energy, but… we aren’t actually doing that, and we don’t have time to catch up. Every bit longer it takes us to eliminate fossil fuels will add to the negative effects on our climate and ecosystems.
The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.
AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.
Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.
The cat’s out of the bag. Focus your energy on stopping fascist oligarchs then regulating AI to be as green and democratic as possible. Or sit back and avoid it out of ethical concerns as the fascists use it to target and eliminate you.
The number of people who think that saying that the cat’s out of the bag is somehow redeeming is completely bizarre. Would you say this about slavery too in the 1800s? Just because people are doing it doesn’t mean it’s morally or ethically right to do it, nor that we should put up with it.
The “cat” does not refer to unethical training of models. Tell me, if we somehow managed to delete every single unethically trained model in existence AND miraculously prevent another one from being ever made (ignoring the part where the AI bubble pops) what would happen? Do you think everyone would go “welp, no more AI I guess.” NO! People would immediately get to work making an “ethically trained” model (according to some regulatory definition of “ethical”), and by “people” I don’t mean just anyone, I mean the people who can afford to gather or license the most exclusive training data: the wealthy.
“Cat’s out of the bag” means the knowledge of what’s possible is out there and everyone knows it. The only thing you could gain by trying to put it “back in the bag” is to help the ultra wealthy capitalize on it.
So, much like with slavery and animal testing and nuclear weapons, what we should do instead is recognize that we live in a reality where the cat is out of the bag, and try to prevent harm caused by it going forward.
No one [intelligent] is using an LLM for workflow organization. Despite what the media will try to convince you, not every AI is an LLM, or even an LLM trained on all the copyrighted shit you can find on the Internet.
We’ve had tools to manage workflows for decades. You don’t need Copilot injected into every corner of your interface to achieve this. I suspect the bigger challenge for Larian is working in a development suite that can’t be accused of having “AI Assist” hiding somewhere in the internals.
You know it doesn’t have to be all or nothing, right?
In the early design phase, for example, quick placeholder objects are invaluable for composing a scene. Say you want a dozen different effigies built from wood and straw – you let the clanker churn them out. If you like them, an environment artist can replace them with bespoke models, as detailed and as optimized as the scene needs. If you don’t like them, you can just chuck them in the trash, and you won’t have wasted the work of an artist, who can instead work on artwork that will actually appear in the released product.
Larian haven’t done anything to make me question their credibility in this matter.
You know it doesn’t have to be all or nothing, right?
Part of the “magic” of AI is how much of the design process gets hijacked by inference. At some scale you simply don’t have control of your own product anymore. What is normally a process of building up an asset by layers becomes flattened blobs you need to meticulously deconstruct and reconstruct if you want them to not look like total shit.
That’s a big part of the reason why “AI slop” looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It’s like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.
If you don’t like them, you can just chuck them in the trash and you won’t have wasted the work of an artist
If you engineer your art department to start with verbal prompts rather than sketches and rough drawings, you’re handcuffing yourself to the heuristics of your AI dataset. It doesn’t matter that you can throw away what you don’t like. It matters that you’re preemptively limiting yourself to what you’ll eventually approve.
That’s a big part of the reason why “AI slop” looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It’s like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.
This is just the whole robot sandwich thing to me.
A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.
Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted? To me this is like that. I use generative AI as a search engine, and just like with AltaVista or Google, it’s up to my own evaluation of the results and my own acumen with the prompt to get me where I want to be. Even then, I still need to pay attention and make sure what I have is relevant and useful.
I think artists could use gen AI to make more good art than ever, but just like a photographer… a thousand shots only results in a very small number of truly amazing outcomes.
Gen AI can’t think for itself or for anybody, and if you let it do the thinking and end up with slop well… garbage in, garbage out.
At the end of the day right now two people can use the same tools and ask for the same things and get wildly different outputs. It doesn’t have to be garbage unless you let it be though.
I will say, gen AI seems to be the only way to combat the insane business email compromise (BEC) attacks we have today. I can’t babysit every single user’s every email, but it sure as hell can bring me a shortlist of things to look at. Something might get through, but before I had a tool a ton of shit got through, and we almost paid tens of thousands of dollars on a single bogus but convincing-looking invoice. It went as far as a fucking bank account penny test (they verified two ACH deposits). Four different people gave their approvals, head of accounting included, before a junior person asked us if we saw anything fishy. This is just one example of why gen AI can have real practical use cases.
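For a sense of what that shortlist step looks like, here’s a toy triage sketch. The real thing would be a trained model reading the full message; every signal phrase, weight, and threshold below is invented purely for illustration:

```python
# Toy triage: score inbound emails for BEC red flags and surface a
# shortlist for human review. A real deployment would use a trained
# model; these signals and weights are made up for illustration.
SIGNALS = {
    "wire transfer": 3,
    "updated bank account": 3,
    "urgent": 2,
    "invoice attached": 2,
    "confidential": 1,
}

def bec_score(body: str) -> int:
    """Crude red-flag score: sum the weights of matched signal phrases."""
    text = body.lower()
    return sum(w for phrase, w in SIGNALS.items() if phrase in text)

def shortlist(emails: list[str], threshold: int = 3) -> list[str]:
    """Return only the emails a human should eyeball, highest score first."""
    flagged = sorted(((bec_score(e), e) for e in emails), reverse=True)
    return [e for score, e in flagged if score >= threshold]

emails = [
    "Team lunch is moved to Friday.",
    "URGENT: invoice attached, please wire transfer to our updated bank account.",
]
print(shortlist(emails))  # only the second email survives triage
```

The point isn’t the scoring logic, it’s the shape of the workflow: nothing gets auto-blocked, a human just gets a much shorter pile to look through.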
This is just the whole robot sandwich thing to me.
If home kitchens were being replaced by pre-filled Automats, I’d be equally repulsed.
A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.
The most expert craftsman won’t get a round peg to fit into a square hole without doing some damage. At some point, you need to understand what the tool is useful for. And the danger of LLMs boils down to the seemingly industrial-scale willingness to sacrifice quality for expediency and defend the choice in the name of business profit.
Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted?
Internet search was as much constrained by what was online as what you entered in the prompt. You might ask for a horse and get a hundred different Palominos when you wanted a Clydesdale, not realizing the need to be specific. But you’re never going to find a picture of a Vermont Morgan horse if nobody bothered to snap a photo and host it where a crawler could find it.
Taken to the next level with LLMs, you’re never going to infer a Vermont Morgan if it isn’t in the training data. You’re never going to even think to look for one, if the LLM hasn’t bothered to index it properly. And because these AI engines are constantly eating their own tails, what you get is a basket of horses that are inferred between a Palomino and a Clydesdale, sucked back into training data, and inferred in between a Palomino and a Palomino-Clydesdale, and sucked back into the training data, and, and, and…
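That tail-eating loop is easy to demonstrate with a toy stand-in (just numbers, obviously not a real image model): if each new “generation” is trained only on blends of the previous generation’s outputs, diversity collapses fast.

```python
import random
import statistics

random.seed(42)

# Toy stand-in for model collapse: each generation consists only of
# interpolations ("inferred between a Palomino and a Clydesdale") of
# the previous generation's outputs.
population = [random.uniform(0.0, 1.0) for _ in range(1000)]  # "horse" features
print(f"gen 0 variance: {statistics.pvariance(population):.4f}")

for gen in range(1, 6):
    # New sample = midpoint of two random parents from the last generation.
    population = [
        (random.choice(population) + random.choice(population)) / 2
        for _ in range(1000)
    ]
    print(f"gen {gen} variance: {statistics.pvariance(population):.4f}")

# Variance shrinks by roughly half each generation: the basket of
# horses converges on one blurry average horse.
```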
I think artists could use gen AI to make more good art than ever
I don’t think using an increasingly elaborate and sophisticated crutch will teach you to sprint faster than Usain Bolt. Removing steps in the artistic process and relying on glorified Clipart Catalogs will not improve your output. It will speed up your output and meet some minimum viable standard for release. But the goal of that process is to remove human involvement, not improve human involvement.
I will say, gen AI seems to be the only way to combat the insane BEC attacks we have today.
Which is great. Love to use algorithmic defenses to combat algorithmic attacks.
But that’s a completely different problem than using inference to generate art assets.
Yup! Certifying a workflow as AI-free would be a monumental task now. First, you’d have to designate exactly what kinds of AI you mean, which is a harder task than I think people realize. Then, you’d have to identify every instance of that kind of AI in every tool you might use. And just looking at Adobe, there’s a lot. Then you, what, forbid your team from using them, sure, but how do you monitor that? Ya can’t uninstall generative fill from Photoshop. Anyway, that’s why anything with a complicated design process marked “AI-Free” is going to be the equivalent of greenwashing, at least for a while. But they should be able to prevent obvious slop from being in the final product just in regular testing.
Coincidentally, this paper published yesterday indicates that LLMs are worse at coding the closer you get to the low level like assembly or binary. Or more precisely, ya stop seeing improvements pretty early on in scaling up the models. If I’m reading it right, which I’m probably not.
Yeah, do you use any Microsoft products at all (like 98% of corporate software development does)? Everything from Teams to Word to Visual Studio has Copilot sitting there. It would take just one employee asking it a question to render a no-AI pledge a lie.
I get the knee jerk reaction because everything has been so horrible everywhere lately with AI, but they’re actually one of the few companies using it right.
Nothing wrong with using AI to organize or supplement workflow. That’s literally the best use for it.
Except for the ethical question of how the AI was trained, or the environmental aspect of using it.
There is no ethics under capitalism, so that’s a moot point.
There are AIs that are ethically trained.
Can you please share examples and criteria?
It can use public-domain or properly licensed data.
Adobe’s image generator (Firefly) is trained only on Adobe Stock images, openly licensed content, and public-domain material.
Is the water “consumed” when used for this purpose?
https://thecurrentga.org/2025/08/26/data-centers-consume-massive-amounts-of-water-companies-rarely-tell-the-public-exactly-how-much/
The cat’s out of the bag.
Holy false dichotomy. I can care about more than one thing at a time. The existence of fascists doesn’t mean I need to use and like AI lmao
That’s 👏 not 👏 an 👏 excuse 👏 to be 👏 SHITTY!
No one 👏👏 is 👏👏 excusing 👏👏 being 👏👏 shitty.
The world is on fire, but if you don’t add fire to the fire, you might get burned.
There’s more to AI than LLM.
The only good use for LLMs and generative AI is spreading misinformation.
If you engineer your art department to start with verbal prompts rather than sketches and rough drawings, you’re handcuffing yourself to the heuristics of your AI dataset.
How do you think a human decides what to sketch? They talk about the requirements.
Certifying a workflow as AI-free would be a monumental task now.
It’s simple: go back to binary.
Or just have a hard cut-off for software released after 2022.
It’s the only way I search for recipes anymore - a date filter from 1/1/1990 - 1/1/2022.
Keep going. Handmade analog mediums only.
Just stop using computers at all to program computer games.
I was saying that as well.