Taking “will it run doom” one step too far.
We need to take it back a step and see if it’ll run Linux.
Asking someone to compile the kernel with their mind sure is one way to give someone a headache
I wanna see Linus’ reaction to that pull request.
I can imagine a new AI hellscape where LLMs are run on human brain cells in a test tube. So you’re never quite sure if you’re talking to a mere algorithm … or an enslaved proto-human who might be conscious and whose entire existence revolves around answering your inane online queries.
oooo nice another method I hadn’t considered of It Getting Worse!
Worst fucking sitcom ever
A closer example would be if it could use the console in linux since it’s only playing dom not running it, I wonder if wetware would be any better as a coding assistant than ai
playing dom
I do not think we should train a bioengineered intelligence with guns to do that.
Don’t kink shame
Hey human doms I’m 100% okay with, it’s these things I’m worried about
https://static.wikia.nocookie.net/doom/images/2/27/Meet_spider.png/revision/latest
Is anyone else really skeeved out by the term “wetware”, or is that just me
Shadowrun used the term “bioware” instead, do you like that better?
Nah I prefer wetware. This should heebie your jeebies
Yeah now it’s “will it speedrun DOOM?”
Can they run console commands?
It’s having more fun than I am.
Maybe the real hell is the life we’ve lived along the way.
Maybe this experience is just a higher being’s analog of Doom and we’re just cells in a petri dish running an MMORPG version of it. Stupidest simulation theory ever, but hey man, we’re sorta doing it, so how stupid can it be?
This is the bad place!
Having worked with human neurons harvested from dead people, there are worse ways to extend your life. At least these ones get to play games instead of getting poked and zapped by me.
Could you poke and zap me?
Poke, yes. Zap is too weak to do much in live people but if I break out the old electrolytic brain lesion maker you’ll feel it.
OwO
You know this isn’t even the weirdest flirting I’ve seen online today. It’s a notable thread, but not too bad.
Enjoy the shocks
What can I say, they saw a spark and went for it. May the odds forever be in their favor lol
Can I gulp down your arm to see what happens? Fairly sure I can fit it.
This happened in American Dad!
IIRC, there were consequences.
I have neurons, can you zap me?
*Unplugs spark plug wire with kinky intent*
I mean, I was thinking more like TENS …
Sure you were, you fucking freak.
Maybe I misunderstood Lemmy constituents.
deleted by creator
This is my favorite comment thread I have experienced on Lemmy so far.
… Am I missing something, or is this not like, the practical, if not lore accurate first step toward actually creating a:

Next step: give it spider legs and a Gatling gun!
I mean, Boston Dynamics figured out how to build essentially robot mules and cats like a decade ago, and they’re actually currently building and improving on humanoid designs.
They got basically acquired by/folded into Hyundai, you know, an actual manufacturing company, unlike Elon’s ongoing fraudulent shitshows.
The only missing components are a minigun, robotic spider legs, and a positive-reinforcement cocktail whenever it kills a person.
“We decided to leave those out of our first test; staring down the barrels of a minigun during neural training was putting our scientists off.”
Why did hell have its own R&D department doing high tech cybernetics anyway?
What other advanced industries does hell have? It’s obviously a highly capitalistic place, so I imagine banking/finance?
I mean, pick all your deadly sins, right?
Brothels, Restaurants, Blood Sports…
I’m not sure if it’s the same scientists, but the YouTube channel The Thought Emporium has an ongoing series about growing neurons to play retro games (such as OG DOOM).
The playlist of this series is fittingly called Building the Torment Nexus.
It looks like their setup, but I don’t see any recent videos; Jan 15th was their last one. I think they have a Patreon with bonus clips and early-access stuff.
Might be from that.
perhaps they did the experiment and are now working on a video about it
So no, it seems this is a different group, doing this more professionally and marketing themselves as an AI cloud startup.
Of course it has to be an AI startup…
The TE guys were from Canada, iirc?
Raises uncomfortable questions about consciousness. The only difference between these neurons and your own is the number of them and the structures they form. Of course it doesn’t know what it’s doing, but… neither do our own neurons.
Science and Ethics — the age old enmity between “I wanna know” and “I’m not allowed to find out”
Science and Ethics — the age old enmity between “I wanna know” and “I’m not allowed to find out”
“Am I able to find out without doing something monstrously inhumane” FTFY
I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.
I simplified for comedic effect. You’re absolutely right that the “compromise” would be finding some humane and ethical solution, but “The most effective and direct way of finding out is cruel and callous” isn’t quite as snappy.
I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.
That kinda dodges the conflict by not engaging with ethical concerns at all. I feel like calling it a solution would be morbid, but it does make the problem stop being a problem…
That kinda dodges the conflict by not engaging with ethical concerns at all.
I guess I…kinda lost the plot a bit when I wrote the second part, eh?
There’s ethics…and then there’s what the government in the country a scientist operates in views as “morally and ethically acceptable”.
Stem cell research was banned in most places for a long time. The US is banning CRISPR, if I remember right, and the OG Nazis, Soviets and Empire of Japan (and honestly basically everyone else too, just those are the three that were highlighted when I was in school) rubber-stamped and funded research that should warrant execution by vivisection… die by your own methods and all that.

You’re right it’s not really a solution. However, the realities of modern society mean that there’s room within what is morally and ethically acceptable in any country to operate in both a humane and inhumane fashion. And if there isn’t, then money and connections to those in power allow further leeway to be an example of humanity at its best… or a monster in a human suit…
I guess I…kinda lost the plot a bit when I wrote the second part, eh?
I think I got where you were going, I was just saying that someone trying to find a way around the legal restrictions indicates they’re not actually concerned about ethics, just about not getting in trouble for it. In that context, the problem “How do I do this in an ethically acceptable manner?” is “solved” with the answer “I don’t care”.
Generally, laws are the standard solution to ambiguities. Ethics are a murky and often subjective topic, so it makes sense to form some sort of common agreement on what is okay and what isn’t. And where there are laws, there are gonna be cunts proving exactly why we had to write it down in the first place…
Neuralink did pretty much the same thing to monkeys that are actually conscious. So is this different only because those are human neurons? Is human consciousness different than animal consciousness?
I don’t think OP made a mutually exclusive statement?…
I’m not sure this is quite analogous to Neuralink’s monkey experiments. That said,
So is this different only because those are human neurons?
To my mind, a neuron is a neuron. The only difference between your brain and a monkey brain is, again, the number of neurons and the structures they form. I don’t see this as any different from monkey or rat or ant or entirely digital neurons.
I’m not sure this is quite analogous to Neuralink’s monkey experiments.
Why not? It’s a chip reading inputs from neurons. This meme doesn’t make it clear if the chip was also stimulating neurons, but Neuralink has plans for neural stimulation and it’s possible this was also tested on monkeys. So what’s the difference?
You seem to be arguing against a point that no one has made.
You seem not to understand what is being discussed here.
Correct. That was basically my point – I don’t think anything is being discussed, people are talking past each other.
Yes. Because it’s us. Anything not us is always going to be less valuable. You’d kill 100 lions if it means saving 1 human.
Lions are not conscious. And I’m not asking about value. Of course we value human consciousness more than monkey consciousness. We don’t grant monkeys any rights. Hell, we assign more value to unconscious (brain dead) humans than to conscious monkeys. But how exactly is human consciousness different?
What leads you to assume that lions lack consciousness exactly?
Shit, turns out lions are conscious! They are just stupid. Stephen Hawking said it in 2012. I honestly didn’t know that.
Sounds like those are uncomfortable questions being raised…
That was just to try and make the equipment work at all; it wasn’t about doing anything with software. It’s the opposite: there you’re only worried about the physical damage and infection.
I was focusing more on the “hooking up conscious brain to computer” part than about the damage and infection part.
Thought experiment: let’s say we have a brain-dead patient. You have verified that there is no neural activity in the brain beyond the cerebellum. There’s no consciousness in the brain. Legally they’re still considered a person; you can’t, for example, shoot them.
We also have a 5kg blob of lab grown human brain tissue. We have verified there is neural activity in the entire blob but we don’t know what it’s doing and we can’t communicate with it.
Which one is more conscious? Which one should be considered more human and should have more rights?
Hooking up to a computer is just installing a software keyboard in your brain; that doesn’t really mean or do anything. It’s what software you load after that’s relevant.
Do those neurons interact with hormones like mine do?
I mean it’s the same question we’ve been asking all our lives about animals, fetuses and now AI: when does it stop being a flowchart and start being a consciousness?
And now bring artificial neural networks, i.e., AI, into the picture to make it even more spicy.

Finally, I knew when I saved this to my phone there would be a perfect moment. (Humanity is too predictable)
Attribution: https://lemmy.world/post/43077529
OK but hear me out here, I think I have the beginnings of a business plan:
1. Create the Torment Nexus
2. ?
3. Profit
Some components of the plan are still under development, but let’s not lose momentum. We can advance with the initial phase while brainstorming to refine the plan in real time as we progress. It’s an exciting opportunity and we mustn’t forfeit our first-to-market advantage.
Scientists: “No, this isn’t The Torment Nexus, this is ‘The Nexus of Torment’! It’s totally different!”
In other news, Torment (with the patches) was a really good game
Wait, is it a real cover? Was it made before or after Squid Game? It uses the same font.
Cortical Labs are the ones who pulled this off. They already have biological computers running on 800,000 lab-grown neurons available for ~$35,000 (just going on what a quick Google search told me) and are planning to open up a cloud computing service with its own API soon.
This makes me feel uneasy. Imagine if reincarnation were a thing and you get brought back into this world, and your purpose is to learn how to play DOOM.
Personally my worry really isn’t reincarnation, there’s no reason to believe that that’s true. But if these are fundamentally the same neurons that make up our brains, then how much do you need to put together before they acquire some form of “sentience”? Does a clump of 800,000 human neurons experience pain, sadness, a sense of self? Where is the line between an emotionless biocomputer and torturing a living organism for its entire lifespan?
Despite the fact that I really hate “AI”, that question was of course already sort of relevant for the latest AI models, even though we can generally conclude that they’re not there yet at all. But real neurons are different, we know what they’re capable of. How many do you need before a clump of neurons has rights?
Large language models are not intelligent. They are predictive text applications with massive dictionaries of circumstantial sentence structures to choose from. Nothing more. They do not feel and do not think for themselves. The only time they do anything is when the API calls them to produce more text with an updated context string.
I don’t know how many neurons are in a human brain, but if you made an artificial human brain, could it have consciousness?
Maybe. That’s certainly not my field of experience. But LLMs will never produce thought the way a human brain does. Certainly not without substantial change in how the tech functions fundamentally.
It has to be a full fetus with a heartbeat to have rights. /s In all seriousness, the human brain is estimated to have 86 billion neurons.
Sure, but is the full human brain the minimum set necessary?
Sentience/sapience is probably an emergent property of a set of neurons needing to coordinate, plan, predict the future and oneself in relation to it.
I suspect that AI is capable of sentience with sufficient complexity and training, but it’s not there yet. I also suspect we’ll be well past the point where it is there before we realize it is, but not until we make some kind of fundamental change in how we do it - we know human level intelligence is possible in the volume and power consumption of, well, a brain so we’re orders of magnitude off of efficiency limits.
It’s estimated that mice have 70 million to 100 million neurons in their brains. They are capable of feeling pain and have social hierarchy. They also experience emotions like fear, pleasure, and anxiety. (We use them in pharmacology models of many mental illnesses.)
Have you ever heard the phrase, “the neurons that fire together, wire together” ? Our neurons are in a constant feedback loop with the environment we experience. Our experiences shape how our neurons make interconnected networks, which then impacts how we behave upon the environment.
If those neurons connected to the computer chip only ever experience playing the game “DOOM,” how would they know about anything else? How could they know about pain without having limbs to innervate and experience the pain with? How could they have a social hierarchy without others to interact with? We may as well be god to those neurons on the PC chip, because we are controlling the entire world they have access to.
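The “fire together, wire together” idea described above has a standard textbook form, Hebbian learning: a connection strengthens in proportion to the joint activity of the neurons on either side of it. A minimal sketch in Python (the learning rate and toy activity values are arbitrary illustrations, not anything from the actual experiment):

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """Strengthen connections between neurons that are active together.

    weights: (n_post, n_pre) connection strengths
    pre/post: activity of the pre- and post-synaptic neurons
    """
    # Hebb's rule: delta_w = lr * (post outer pre)
    return weights + lr * np.outer(post, pre)

# Two presynaptic neurons driving one postsynaptic neuron.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input fires
post = np.array([1.0])       # the output fires too
for _ in range(100):
    w = hebbian_update(w, pre, post)

# Only the co-active connection grows; the silent one stays at zero.
print(w)  # [[1. 0.]]
```

The bare rule grows weights without bound; real models add decay or normalization (e.g. Oja’s rule), which is part of why “wire together” is only the first-order story.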
What I find sad is that our society is ok with hooking living cells up to a computer to make smarter computers, but has a problem with ethically harvesting stem cells to be used to treat diseases.
It’s because the stem cells somehow threatened the religious hegemony.
“Do lab grown neurons have a soul?”
I would say consciousness is required for that, so no.
People used to say animals were not conscious.
Recent science suggests that some animals have what humans would consider to be language. This is a slippery slope.
People used to say animals were not conscious.
A lot of religious people still say that.
aw sweet, man made horrors beyond my comprehension 😍
There’s another bunch of guys who are trying to do the same thing with rat neurons on the cheap using Gatorade as a growth medium.
Ha, I understand why it would make someone uneasy, but personally that sounds like heaven to me. Seriously, take a slice of my neurons and hook them up to play DOOM forever; that’s what I want done with my remains. (I guess the rest of me cremate or something, idc)
I have no mouth and I must Doom.
Honestly? Sounds preferable to being stuck in the universe of I Have No Mouth And I Must Scream… I’ll take a challenging power fantasy with some massively overpowered weapons over millennia of endless physical and psychological torture by an insane AI… might just be me though…
I have no MOUSE and I must DOOM
The original DOOM is entirely playable on a keyboard, though. It’s essentially a 2D game, as you can’t look up or jump.
I just remembered, back in the day in Russia we used to call keyboard players “tractor drivers”.
Computer Scientists: “We can make Doom run on any device!”
Bioscientists: “Watch this!”
Cosmologists: *cracks knuckles* Check this shit out.
We are old gods who punish life for fun.

Stupid Sexy Goldberg

*hell-spawned brain cell noises*
Ok, so maybe 200K brain cells would be sufficient to run for public office, but you can’t really call that a complete brain, which contains approximately 100 billion cells.
Public office might still be borderline, but we have a living proof that POTUS is within reach.
“hey you, glad you’re awake…”
DO NOT give Todd any more ideas…please.
That’s fucked up though. What happened to bioethics and review boards?
We don’t understand enough about human consciousness to say that those cells aren’t sentient. We have no idea what sort of experience, if any, they’re perceiving.
This is not okay…
That’s not nearly enough cells to have an internal experience; they’re fine.
We don’t know enough about human consciousness to know that for sure. Plenty of animals have fewer braincells than humans, but we don’t know enough about their consciousness to say whether they have an internal experience.
That’s what I mean. It’s hubris to assume it’s fine to culture human braincells in a petri dish just because there’s a lack of evidence one way or the other about whether they’re aware.
There’s a lack of evidence for anything not being conscious.
Neurons work by generating electrical signals in response to stimulus (either electrical inputs from other neurons or physical/sensory inputs activated by light or touch etc.) and they do this in a physical way.
If they’re conscious, then there’s a pretty good chance that power plants are conscious, computers are conscious and pretty much everything else in the world is conscious.
I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness. Humans make a lot of assumptions about the world to make it fit the patterns we’re used to.
A cluster of neurons trained to play doom might have consciousness but it’s not likely to think like a human, just like a rock or a plant or an ant or an iPhone might have consciousness.
Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.
I think there’s a lower limit of complexity for sentience, based on memory-persistence, self-firing, and self-recognition. I think there’s no need for moral concern for non-sentient things. (But, that’s just my ethical framework and philosophical worldview; the only “evidence” I’m at all aware of is thin and vague.)
But, as far as having a subjective experience, I think that might go quite small and alien including fungi and plant or even certain sub-cellular structures. Probably anything that maintains a border and internal homeostasis including parts of the bodies of larger experiencers could be having an internal perspective – and any human words applied so those experiences would tell you more about human bias than the experience.
In my view, although I am neither a neurologist nor a philosopher, things should absolutely scale with neuron blob complexity, and it should do so in a non-linear way. I dislike harming an animal with a complex brain like mammals, cephalopods etc. much more than I dislike harming the equivalent nerve mass in insects, for instance.
That’s also the way I feel, but I think that’s probably human bias and closely related to the evolutionary pressure behind my mirror neurons and how strongly they trigger correlates with outside sentient phenotype.
I think if I knew what it felt like (if anything) to be an ant colony, I might have different views around the casual use of boric acid (and related) to keep them out of human spaces.
There’s a lack of evidence for anything not being conscious.
So should we just assume that nothing is conscious? After all, I can’t prove that you’re conscious, nor you I. So should we relegate ourselves to an amoral solipsism?
Neurons work by generating electrical signals in response to stimulus and they do this in a physical way.
I know how neurons work. Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.
I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness.
That’s… irrelevant. I never said they have “human-like reasoning” or “understanding.” I said we don’t understand enough, meaning humanity writ large, including the experts. There are too many unknowns about the nature of consciousness.
A cluster of neurons trained to play doom might have consciousness but it’s not likely to think like a human
Again, it doesn’t need to think like a human in order to be capable of experiencing suffering. Babies don’t “think like humans,” or at least we don’t have any solid evidence that they do, but they’re certainly capable of suffering.
Your mentality is the same one people have used for generations to justify circumcising infants without anaesthetics. How far are you willing to extend it? Do pets “think like humans”? Do uncontacted tribes “think like humans,” in whatever vague way you define it in order to justify cultivating human braincells in a petri dish?
Do you not see how problematic this is? What if the technology grows and in a decade they’re studying a clump of 2 billion neurons in a vat? Will it suddenly become human enough to deserve your consideration? What about when it becomes 20 billion?
Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.
Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is? Amoral people exist, moral cynicism exists, nihilism exists, solipsism exists, hell even social darwinism exists.
Any of those frameworks and worldviews can be used to justify atrocities in the minds of those who hold them. And yes, an unethical or even anti-ethical persuasion is still an “ethical framework,” in the strictest sense of the term.
Just because something can be seated in philosophical jargon doesn’t mean we should grant it license to do whatever it wants.
So should we just assume that nothing is conscious?
Not at all! In fact, I believe that we should assume almost everything is conscious. I think it’s a bit of human arrogance to think that we brain creatures have a monopoly on perspective.
Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.
Exactly my point.
That’s… irrelevant
I don’t think it is. If the argument is that it’s unethical to poke a neuron because it might have consciousness, would the same argument not apply to anything else? I think you might be getting a bit hung up on the “think like a human” thing. My point is not that it’s okay to torture something if it doesn’t “think like a human.” It’s that there are potentially a lot of things in the world that are conscious that don’t often get the same consideration.
capable of experiencing suffering
This is an interesting one. It shifts the question from “does it have a consciousness?” to “does it have a consciousness that is suffering or able to suffer?”. The idea of suffering is a very human concept that we have a whole section of our brains devoted to. There’s a lot of ethics devoted to alleviating suffering (e.g. humanitarianism) and we sorta use it as a means of directing our goals - we avoid things that make us suffer and seek things that bring us happiness. What makes us happy or makes us suffer varies a bit from person to person due to experience and learning/training but a lot of it is biologically evolved. Physical and emotional pain makes us suffer for evolutionary reasons.
So in one sense, you could define suffering as a stimulus that some conscious system avoids? In which case, training neurons essentially teaches them what suffering is. They’re trained to activate or not activate based on what avoids irregular stimulus (suffering) and results in regular stimulus (happiness).
If that’s how you define it though, there could be many other systems that work the same way. Obviously animals and plants and fungi, etc. But also computers and lots of mechanical systems do that too. If making decisions to avoid or seek electrical stimulus is suffering, then a computer is basically a pleasure/torture box.
Personally I think that suffering is more than that. I think it’s a larger system we brain creatures have developed that doesn’t necessarily apply very well outside the context in which we use it. Would a vat of 20 billion neurons be able to suffer? I think that depends on how they’re arranged and whether they have that concept.
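The stimulus-avoidance definition of suffering sketched above can be caricatured in a few lines of code. Everything here is invented for illustration — the 5-slot game, the constants, and the update rule are all made up, not Cortical Labs’ actual protocol — but it shows the shape of the loop: the only reward signal is how predictable the next input is.

```python
import random

# Toy version of "regular stimulus = reward, irregular stimulus = punishment".
random.seed(0)

BALL = 2                 # the ball always arrives at slot 2 in this toy world
prefs = [0.0] * 5        # the agent's preference for each paddle slot

def feedback(hit):
    """Predictable stimulus on a hit, unpredictable noise on a miss."""
    return 1.0 if hit else random.uniform(-1.0, 1.0)

for _ in range(2000):
    # mostly exploit the current best slot, sometimes explore at random
    if random.random() > 0.2:
        action = prefs.index(max(prefs))
    else:
        action = random.randrange(5)
    stim = feedback(action == BALL)
    # "surprise" is distance from the predictable stimulus; low surprise reinforces
    surprise = abs(stim - 1.0)
    prefs[action] += 0.1 * (1.0 - surprise)

print(prefs.index(max(prefs)))  # the agent ends up preferring the ball's slot
```

Whether nudging weights to avoid noisy input constitutes “suffering” is exactly what the thread is wrestling with; the loop itself is just optimization.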
Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is?
Just because different ethical frameworks and worldviews exist, doesn’t mean they should all be treated equally. Sure, if someone is super utilitarian they might be fine with torturing people for medical research when they feel that the ends justify the means. If someone has a strict deontological code of ethics that tells them homosexuality is a sin punishable by death, they might campaign for that. I think those people suck and their beliefs are evil because of my own ethics and worldview.
When it comes to a question like “is an ant capable of suffering?” or “is it okay to swat a fly or set a mouse trap?” or “how many human neurons does it take to suffer while changing a light bulb?”, you’ll get varying answers from people based on who they are. Personally, I think the right answer to those questions is dependent on the brain of the person answering them.
Moral universalists have the same slippery slopes you mentioned. If right and wrong are fixed and objective and not dependent on people, then groups claiming to know the one true morality will use it to persecute those labelled as evil or morally bankrupt (see the homophobic asshole example above).
Moral relativism doesn’t mean that morality doesn’t matter or that it’s wrong to fight against what you think is evil. I believe you should fight for what is right and I’m hopeful that the things that I think are good will win out against the things that I think are evil. Absolutism is maybe a bit easier for that because it simplifies moral choices a lot, but I think it’s hubris to think that evil is the same everywhere to everyone and not an artifact of the human mind.
We should teach ants to play Doom.
Nononono we don’t want World War Ants
I’d say they’re experiencing being Doom Guy, but that’s just a guess.
I’m no scientist but I’m pretty sure we know enough to say there’s no consciousness at this level. Consciousness as we know it is pretty fragile.
There may be a point if this is scaled up that could be a concern…
We do not, in fact, know enough to say there’s no consciousness at that level.
Anyone who tells you that is being intellectually dishonest.
deleted by creator
Sounds like something from a horror manga.
I’m seeing the chimera from FMAB saying “Edu…wardo… koroshite… kure… onegai…” (“Ed…ward… kill me… please…”)
The goal is to grow brains in a lab and prevent robot cops from tearing off their own faces.
The end goal is probably a vat that billionaires can hook their brains into at the end of their lives so they never die.
That probably has something to do with their push for virtual reality and the ‘metaverse’ (fuck Zuck for appropriating the Greek language for his pet project; I used to use that word to describe a sort of hypothetical hyperdimensional multiverse where “spirits” inhabit 4D/5D topologies).
Oh and why they’re training AI agents in “environments” now (basically, 3D-scanned renderings of real life spaces).
If they can put all the pieces together before they die, then they can hook their brains into these computers and control a little avatar so they never have to die and can continue making our lives hell (at least as long as they maintain ownership of private capital, or until we seize the means of production and redistribute the wealth).