• frustrated_phagocytosis@fedia.io · 21 days ago

    Having worked with human neurons harvested from dead people, there are worse ways to extend your life. At least these ones get to play games instead of getting poked and zapped by me.

  • sp3ctr4l@lemmy.dbzer0.com · 21 days ago

    … Am I missing something, or is this not, like, the practical, if not lore-accurate, first step toward actually creating a:

      • sp3ctr4l@lemmy.dbzer0.com · 21 days ago

        I mean, Boston Dynamics figured out how to build essentially robot mules and cats like a decade ago, and they’re actually currently building and improving on humanoid designs.

        They got basically acquired by/folded into Hyundai, you know, an actual manufacturing company, unlike Elon’s ongoing fraudulent shitshows.

    • Avicenna@programming.dev · 21 days ago

      The only missing components are a minigun, robotic spider legs, and a positive-reinforcement cocktail whenever it kills a person.

    • Aussieiuszko@aussie.zone · 21 days ago

      Why did hell have its own R&D department doing high-tech cybernetics anyway?

      What other advanced industries does hell have? It’s obviously a highly capitalistic place, so I imagine banking/finance?

  • HailHydra@infosec.pub · 21 days ago

    I’m not sure if it’s the same scientists, but the YouTube channel The Thought Emporium has an ongoing series about growing neurons to play retro games (such as OG DOOM).

    The playlist of this series is fittingly called Building the Torment Nexus.

  • starman2112@sh.itjust.works · 21 days ago

    Raises uncomfortable questions about consciousness. The only difference between these neurons and your own is the number of them and the structures they form. Of course it doesn’t know what it’s doing, but… neither do our own neurons.

      • Sturgist@lemmy.ca · 21 days ago

        Science and Ethics — the age-old enmity between “I wanna know” and “I’m not allowed to find out” “Am I able to find out without doing something monstrously inhumane?”

        FTFY

        I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.

        • luciferofastora@feddit.org · 21 days ago

          I simplified for comedic effect. You’re absolutely right that the “compromise” would be finding some humane and ethical solution, but “The most effective and direct way of finding out is cruel and callous” isn’t quite as snappy.

          I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.

          That kinda dodges the conflict by not engaging with ethical concerns at all. I feel like calling it a solution would be morbid, but it does make the problem stop being a problem…

          • Sturgist@lemmy.ca · 21 days ago

            That kinda dodges the conflict by not engaging with ethical concerns at all.

            I guess I…kinda lost the plot a bit when I wrote the second part, eh?

            There’s ethics…and then there’s what the government in the country a scientist operates in views as “morally and ethically acceptable”.
            Stem cell research was banned in most places for a long time. The US is banning CRISPR, if I remember right. The OG Nazis, Soviets, and Empire of Japan (and honestly basically everyone else too; those are just the three that were highlighted when I was in school) rubber-stamped and funded research that should warrant execution by vivisection…die by your own methods and all that.

            You’re right, it’s not really a solution. However, the realities of modern society mean that there’s room within what is morally and ethically acceptable in any country to operate in both a humane and an inhumane fashion. And if there isn’t, then money and connections to those in power allow further leeway to be an example of humanity at its best…or a monster in a human suit…

            • luciferofastora@feddit.org · 21 days ago

              I guess I…kinda lost the plot a bit when I wrote the second part, eh?

              I think I got where you were going, I was just saying that someone trying to find a way around the legal restrictions indicates they’re not actually concerned about ethics, just about not getting in trouble for it. In that context, the problem “How do I do this in an ethically acceptable manner?” is “solved” with the answer “I don’t care”.

              Generally, laws are the standard solution to ambiguities. Ethics are a murky and often subjective topic, so it makes sense to form some sort of common agreement on what is okay and what isn’t. And where there are laws, there are gonna be cunts proving exactly why we had to write it down in the first place…

    • ExLisper@lemmy.curiana.net · 21 days ago

      Neuralink did pretty much the same thing to monkeys, which are actually conscious. So is this different only because those are human neurons? Is human consciousness different from animal consciousness?

      • starman2112@sh.itjust.works · 21 days ago

        I’m not sure this is quite analogous to Neuralink’s monkey experiments. That said,

        So is this different only because those are human neurons?

        To my mind, a neuron is a neuron. The only difference between your brain and a monkey brain is, again, the number of neurons and the structures they form. I don’t see this as any different from monkey or rat or ant or entirely digital neurons.

        • ExLisper@lemmy.curiana.net · 21 days ago

          I’m not sure this is quite analogous to Neuralink’s monkey experiments.

          Why not? It’s a chip reading inputs from neurons. This meme doesn’t make it clear whether the chip was also stimulating neurons, but Neuralink has plans for neural stimulation and it’s possible this was also tested on monkeys. So what’s the difference?

      • Paddzr@lemmy.world · 21 days ago

        Yes. Because it’s us. Anything not us is always going to be less valuable. You’d kill 100 lions if it means saving 1 human.

        • ExLisper@lemmy.curiana.net · 21 days ago

          Lions are not conscious. And I’m not asking about value. Of course we value human consciousness more than monkey consciousness. We don’t grant monkeys any rights. Hell, we assign more value to unconscious (brain dead) humans than to conscious monkeys. But how exactly is human consciousness different?

      • MDCCCLV@lemmy.ca · 21 days ago

        That was just to try and make the equipment work at all; it wasn’t about doing anything with software. It’s the opposite, where you’re only worried about the physical damage and infection.

        • ExLisper@lemmy.curiana.net · 21 days ago

          I was focusing more on the “hooking up conscious brain to computer” part than about the damage and infection part.

          Thought experiment: let’s say we have a brain-dead patient. You have verified that there is no neural activity in the brain beyond the cerebellum. There’s no consciousness in the brain. Legally, they’re still considered a person. You can’t, for example, shoot them.

          We also have a 5kg blob of lab grown human brain tissue. We have verified there is neural activity in the entire blob but we don’t know what it’s doing and we can’t communicate with it.

          Which one is more conscious? Which one should be considered more human and should have more rights?

          • MDCCCLV@lemmy.ca · 20 days ago

            Hooking up to a computer is just installing a software keyboard in your brain; that doesn’t really mean or do anything. It’s what software you load afterwards that’s relevant.

    • KindnessisPunk@piefed.ca · 21 days ago

      I mean, it’s the same question we’ve been asking all our lives about animals, fetuses, and now AI: when does it stop being a flowchart and start being a consciousness?

    • Zacryon@feddit.org · 21 days ago

      And now bring artificial neural networks, i.e., AI, into the picture to make it even more spicy.

    • bampop@lemmy.world · 21 days ago

      OK but hear me out here, I think I have the beginnings of a business plan:

      1. Create the Torment Nexus

      2. ?

      3. Profit

      Some components of the plan are still under development, but let’s not lose momentum. We can advance with the initial phase while brainstorming to refine the plan in real time as we progress. It’s an exciting opportunity and we mustn’t forfeit our first-to-market advantage.

    • ouRKaoS@lemmy.today · 21 days ago

      Scientists: “No, this isn’t The Torment Nexus, this is ‘The Nexus of Torment’! It’s totally different!”

  • Clbull@lemmy.world · 21 days ago

    Cortical Labs are the ones who pulled this off. They already have biological computers running on 800,000 lab-grown neurons available for ~$35,000 (just going on what a quick Google search told me) and are planning to open up a cloud computing service with its own API soon.

    This makes me feel uneasy. Imagine if reincarnation were a thing and you got brought back into this world, and your purpose was to learn how to play DOOM.

    • gerryflap@feddit.nl · 21 days ago

      Personally my worry really isn’t reincarnation, there’s no reason to believe that that’s true. But if these are fundamentally the same neurons that make up our brains, then how much do you need to put together before they acquire some form of “sentience”? Does a clump of 800,000 human neurons experience pain, sadness, a sense of self? Where is the line between an emotionless biocomputer and torturing a living organism for its entire lifespan?

      Despite the fact that I really hate “AI”, that question was of course already sort of relevant for the latest AI models, even though we can generally conclude that they’re not there yet at all. But real neurons are different; we know what they’re capable of. How many do you need before a clump of neurons has rights?

      • Jyek@sh.itjust.works · 21 days ago

        Large language models are not intelligent. They are predictive text applications with massive dictionaries of circumstantial sentence structures to choose from. Nothing more. They do not feel and do not think for themselves. The only time they do anything is when the API calls them to produce more text with an updated context string.
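        The “predictive text with an updated context string” point can be caricatured in a few lines. A bigram lookup table stands in for the model weights (the corpus and names below are invented for illustration); what matters is the control flow: the model is a pure context-to-next-token function that does nothing until it is called.

```python
from collections import Counter, defaultdict

# A bigram table stands in for an LLM's weights; wildly simpler, but the
# calling pattern is the same: a pure context -> next-token function.
corpus = "the cat sat on the mat the cat ran".split()
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def predict_next(context: str):
    # The "model" only ever does one thing: given a context string,
    # emit the most likely next token (or None at a dead end).
    counts = table.get(context.split()[-1])
    return counts.most_common(1)[0][0] if counts else None

# "Generation" is the caller repeatedly invoking the model with an
# updated context string; the model itself holds no running state.
context = "the"
for _ in range(3):
    context += " " + predict_next(context)
```

        A real LLM replaces the table with billions of learned parameters, but the calling pattern (send context, get one more token, append, repeat) is the same.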

        • sem@piefed.blahaj.zone · 20 days ago

          I don’t know how many neurons are in a human brain, but if you made an artificial human brain, could it have consciousness?

          • Jyek@sh.itjust.works · 20 days ago

            Maybe. That’s certainly not my field of expertise. But LLMs will never produce thought the way a human brain does. Certainly not without a substantial change in how the tech functions fundamentally.

        • Schadrach@lemmy.sdf.org · 21 days ago

          Sure, but is the full human brain the minimum set necessary?

          Sentience/sapience is probably an emergent property of a set of neurons needing to coordinate, plan, predict the future and oneself in relation to it.

          I suspect that AI is capable of sentience with sufficient complexity and training, but it’s not there yet. I also suspect we’ll be well past the point where it is there before we realize it, but not until we make some kind of fundamental change in how we do it: we know human-level intelligence is possible in the volume and power consumption of, well, a brain, so we’re orders of magnitude off of efficiency limits.
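          A rough sense of how far off those efficiency limits are (both figures below are ballpark: ~20 W is the commonly cited estimate for a human brain, and 10 MW is a purely illustrative number for a large training cluster, not a quote for any specific system):

```python
# Back-of-envelope power comparison. Both numbers are rough estimates,
# used only to illustrate the scale of the gap.
brain_watts = 20            # commonly cited estimate for a human brain
cluster_watts = 10_000_000  # illustrative figure for a big training cluster

ratio = cluster_watts / brain_watts
print(f"~{ratio:,.0f}x the brain's power budget")  # prints "~500,000x the brain's power budget"
```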

          • Washedupcynic@lemmy.ca · 21 days ago

            It’s estimated that mice have 70 million to 100 million neurons in their brains. They are capable of feeling pain and have social hierarchy. They also experience emotions like fear, pleasure, and anxiety. (We use them in pharmacology models of many mental illnesses.)

            Have you ever heard the phrase, “the neurons that fire together, wire together” ? Our neurons are in a constant feedback loop with the environment we experience. Our experiences shape how our neurons make interconnected networks, which then impacts how we behave upon the environment.
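            That “fire together, wire together” rule (Hebbian plasticity) can be sketched as a one-line weight update. This is a toy rate-based caricature with made-up sizes, not a model of real biology:

```python
import numpy as np

# Toy illustration of "neurons that fire together, wire together":
# one Hebbian update on a small random weight matrix.
rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
weights = rng.normal(0.0, 0.1, size=(n_post, n_pre))

def hebbian_step(weights, pre, post, lr=0.01):
    # Each connection grows in proportion to the co-activity of the
    # cells on either side of it.
    return weights + lr * np.outer(post, pre)

pre = rng.random(n_pre)   # pre-synaptic firing rates
post = weights @ pre      # post-synaptic response
updated = hebbian_step(weights, pre, post)

# The connections between the most co-active pairs changed the most.
delta = updated - weights
```

            Real models add mechanisms this raw rule lacks (normalization, decay), since otherwise the weights grow without bound.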

            If those neurons connected to the computer chip only ever experience playing the game “DOOM,” how would they know about anything else? How could they know about pain without having limbs to innervate and experience the pain with? How could they have a social hierarchy without others to interact with? We may as well be god to those neurons on the PC chip, because we are controlling the entire world they have access to.

            What I find sad is that our society is ok with hooking living cells up to a computer to make smarter computers, but has a problem with ethically harvesting stem cells to be used to treat diseases.

    • Bluescluestoothpaste@sh.itjust.works · 19 days ago

      Ha, I understand why it would make someone uneasy, but personally that sounds like heaven to me. Seriously, take a slice of my neurons and hook them up to play DOOM forever; that’s what I want done with my remains. (I guess cremate the rest of me or something, idc.)

    • Sturgist@lemmy.ca · 21 days ago

      Honestly? Sounds preferable to being stuck in the universe of I Have No Mouth And I Must Scream… I’ll take a challenging power fantasy with some massively overpowered weapons over millennia of endless physical and psychological torture by an insane AI… might just be me though…

  • red_sock@lemmy.ml · 21 days ago

    Computer Scientists: We can make Doom run on any device!

    Bioscientists: Watch this!

  • Atelopus-zeteki@fedia.io · 21 days ago

    Ok, so maybe 200K brain cells would be sufficient to run for public office, but you can’t really call that a complete brain, which contains approximately 100 billion cells.

  • wonderingwanderer@sopuli.xyz · 21 days ago

    That’s fucked up though. What happened to bioethics and review boards?

    We don’t understand enough about human consciousness to say that those cells aren’t sentient. We have no idea what sort of experience, if any, they’re perceiving.

    This is not okay…

      • wonderingwanderer@sopuli.xyz · 21 days ago

        We don’t know enough about human consciousness to know that for sure. Plenty of animals have fewer braincells than humans, but we don’t know enough about their consciousness to say whether they have an internal experience.

        That’s what I mean. It’s hubris to assume we can culture human braincells in a petri dish just because there’s a lack of evidence one way or the other about whether it’s aware.

        • TechLich@lemmy.world · 21 days ago

          There’s a lack of evidence for anything not being conscious.

          Neurons work by generating electrical signals in response to stimulus (either electrical inputs from other neurons or physical/sensory inputs activated by light or touch etc.) and they do this in a physical way.

          If they’re conscious, then there’s a pretty good chance that power plants are conscious, computers are conscious and pretty much everything else in the world is conscious.

          I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness. Humans make a lot of assumptions about the world to make it fit the patterns we’re used to.

          A cluster of neurons trained to play doom might have consciousness but it’s not likely to think like a human, just like a rock or a plant or an ant or an iPhone might have consciousness.

          Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.

          • bss03@infosec.pub · 21 days ago

            I think there’s a lower limit of complexity for sentience, based on memory-persistence, self-firing, and self-recognition. I think there’s no need for moral concern for non-sentient things. (But, that’s just my ethical framework and philosophical worldview; the only “evidence” I’m at all aware of is thin and vague.)

            But, as far as having a subjective experience, I think that might go quite small and alien, including fungi and plants or even certain sub-cellular structures. Probably anything that maintains a border and internal homeostasis, including parts of the bodies of larger experiencers, could be having an internal perspective – and any human words applied to those experiences would tell you more about human bias than about the experience.

            • LH0ezVT@sh.itjust.works · 21 days ago

              In my view, although I am neither a neurologist nor a philosopher, things should absolutely scale with neuron blob complexity, and it should do so in a non-linear way. I dislike harming an animal with a complex brain like mammals, cephalopods etc. much more than I dislike harming the equivalent nerve mass in insects, for instance.

              • bss03@infosec.pub · 21 days ago

                That’s also the way I feel, but I think that’s probably human bias and closely related to the evolutionary pressure behind my mirror neurons and how strongly they trigger correlates with outside sentient phenotype.

                I think if I knew what it felt like (if anything) to be an ant colony, I might have different views around the casual use of boric acid (and related) to keep them out of human spaces.

          • wonderingwanderer@sopuli.xyz · 21 days ago

            There’s a lack of evidence for anything not being conscious.

            So should we just assume that nothing is conscious? After all, I can’t prove that you’re conscious, nor you I. So should we relegate ourselves to an amoral solipsism?

            Neurons work by generating electrical signals in response to stimulus and they do this in a physical way.

            I know how neurons work. Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.

            I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness.

            That’s… irrelevant. I never said they have “human-like reasoning” or “understanding.” I said we don’t understand enough, meaning humanity writ large, including the experts. There are too many unknowns about the nature of consciousness.

            A cluster of neurons trained to play doom might have consciousness but it’s not likely to think like a human

            Again, it doesn’t need to think like a human in order to be capable of experiencing suffering. Babies don’t “think like humans,” or at least we don’t have any solid evidence that they do, but they’re certainly capable of suffering.

            Your mentality is the same one people have used for generations to justify circumcising infants without anaesthetics. How far are you willing to extend it? Do pets “think like humans”? Do uncontacted tribes “think like humans,” in whatever vague way you define it in order to justify cultivating human braincells in a petri dish?

            Do you not see how problematic this is? What if the technology grows and in a decade they’re studying a clump of 2 billion neurons in a vat? Will it suddenly become human enough to deserve your consideration? What about when it becomes 20 billion?

            Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.

            Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is? Amoral people exist, moral cynicism exists, nihilism exists, solipsism exists, hell even social darwinism exists.

            Any of those frameworks and worldviews can be used to justify atrocities in the minds of those who hold them. And yes, an unethical or even anti-ethical persuasion is still an “ethical framework,” in the strictest sense of the term.

            Just because something can be couched in philosophical jargon doesn’t mean we should grant it license to do whatever it wants.

            • TechLich@lemmy.world · 20 days ago

              So should we just assume that nothing is conscious?

              Not at all! In fact, I believe that we should assume almost everything is conscious. I think it’s a bit of human arrogance to think that we brain creatures have a monopoly on perspective.

              Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.

              Exactly my point.

              That’s… irrelevant

              I don’t think it is. If the argument is that it’s unethical to poke a neuron because it might have consciousness, would the same argument not apply to anything else? I think you might be getting a bit hung up on the “think like a human” thing. My point is not that it’s okay to torture something if it doesn’t “think like a human.” It’s that there are potentially a lot of things in the world that are conscious that don’t often get the same consideration.

              capable of experiencing suffering

              This is an interesting one. It shifts the question from “does it have a consciousness?” to “does it have a consciousness that is suffering or able to suffer?”. The idea of suffering is a very human concept that we have a whole section of our brains devoted to. There’s a lot of ethics devoted to alleviating suffering (eg. Humanitarianism) and we sorta use it as a means of directing our goals - we avoid things that make us suffer and seek things that bring us happiness. What makes us happy or makes us suffer varies a bit from person to person due to experience and learning/training but a lot of it is biologically evolved. Physical and emotional pain makes us suffer for evolutionary reasons.

              So in one sense, you could define suffering as a stimulus that some conscious system avoids. In which case, training neurons essentially teaches them what suffering is: they’re trained to activate or not activate based on what avoids irregular stimulus (suffering) and results in regular stimulus (happiness).
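              That definition can be made operational in a toy loop: an agent whose only feedback is “regular stimulus for one action, irregular stimulus for the other” drifts toward the predictable one. Everything below (action names, rates, step counts) is invented for illustration:

```python
import random

random.seed(1)

# A trivial value-learning agent. "right" is followed by a regular,
# predictable stimulus (+1); "left" by an irregular, random one.
ACTIONS = ["left", "right"]
value = {a: 0.0 for a in ACTIONS}

def feedback(action):
    return 1.0 if action == "right" else random.uniform(-1.0, 1.0)

for _ in range(2000):
    # epsilon-greedy: mostly exploit the current best guess, sometimes explore
    if random.random() < 0.2:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    # nudge the estimate toward the feedback just received
    value[action] += 0.1 * (feedback(action) - value[action])

# The agent ends up preferring the action with the predictable stimulus.
```

              This is roughly the shape of closed-loop training with predictable-versus-noisy feedback, compressed into a two-armed bandit.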

              If that’s how you define it, though, there could be many other systems that work the same way: obviously animals and plants and fungi, but also computers and lots of mechanical systems. If making decisions to avoid or seek electrical stimulus is suffering, then a computer is basically a pleasure/torture box.

              Personally I think that suffering is more than that. I think it’s a larger system we brain creatures have developed that doesn’t necessarily apply very well outside the context in which we use it. Would a vat of 20 billion neurons be able to suffer? I think that depends on how they’re arranged and whether they have that concept.

              Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is?

              Just because different ethical frameworks and worldviews exist, doesn’t mean they should all be treated equally. Sure, if someone is super utilitarian they might be fine with torturing people for medical research when they feel that the ends justify the means. If someone has a strict deontological code of ethics that tells them homosexuality is a sin punishable by death, they might campaign for that. I think those people suck and their beliefs are evil because of my own ethics and worldview.

              When it comes to a question like “is an ant capable of suffering?” or “is it okay to swat a fly or set a mouse trap?” or “how many human neurons does it take to suffer while changing a light bulb?”, you’ll get varying answers from people based on who they are. Personally, I think the right answer to those questions is dependent on the brain of the person answering them.

              Moral universalists have the same slippery slopes you mentioned. If right and wrong are fixed and objective and not dependent on people, then groups claiming to know the one true morality will use it to persecute those labelled as evil or morally bankrupt (see the homophobic asshole example above).

              Moral relativism doesn’t mean that morality doesn’t matter or that it’s wrong to fight against what you think is evil. I believe you should fight for what is right and I’m hopeful that the things that I think are good will win out against the things that I think are evil. Absolutism is maybe a bit easier for that because it simplifies moral choices a lot, but I think it’s hubris to think that evil is the same everywhere to everyone and not an artifact of the human mind.

    • bdonvr@thelemmy.club · 21 days ago

      I’m no scientist, but I’m pretty sure we know enough to say there’s no consciousness at this level. Consciousness as we know it is pretty fragile.

      There may be a point if this is scaled up that could be a concern…

      • wonderingwanderer@sopuli.xyz · 21 days ago

        We do not, in fact, know enough to say there’s no consciousness at that level.

        Anyone who tells you that is being intellectually dishonest.

      • wonderingwanderer@sopuli.xyz · 21 days ago

        Sounds like something from a horror manga.

        I’m seeing the chimera from FMAB saying “Edu…wardo… koroshite…kure… onegai…”

      • wonderingwanderer@sopuli.xyz · 20 days ago

        The end goal is probably a vat that billionaires can hook their brains into at the end of their lives so they never die.

        That probably has something to do with their push for virtual reality and the ‘metaverse’ (fuck zuck for appropriating the Greek language for his pet project; I used to use that word to describe a sort of hypothetical hyperdimensional multiverse where “spirits” inhabit 4D/5D topologies).

        Oh and why they’re training AI agents in “environments” now (basically, 3D-scanned renderings of real life spaces).

        If they can put all the pieces together before they die, then they can hook their brains into these computers and control a little avatar so they never have to die and can continue making our lives hell (at least as long as they maintain ownership of private capital, or until we seize the means of production and redistribute the wealth).