While trying to move my computer to Debian, after letting the installer do its task, my machine will not boot.

Instead, I get a long string of text, as follows:

Could not retrieve perf counters (-19)
ACPI Warning: SystemIO range 0x0000000000000B00-0x0000000000000B08 conflicts with OpRegion 0x0000000000000B00-0x0000000000000B0F (\GSA1.SMBI) (20250404/utaddress-204)
usb: port power management may be unreliable
sd 10:0:0:0: [sdc] No Caching mode page found
sd 10:0:0:0: [sdc] Assuming drive cache: write through
amdgpu 0000:08:00.0 amdgpu: [drm] Failed to setup vendor infoframe on connector HDMI-A-1: -22

And the system eventually drops into a shell that I do not know how to use. It prints:

Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
  - Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)

Alert! /dev/sdb2 does not exist. Dropping to a shell!

The system has two disks mounted:

– an SSD, with the EFI, root, var, tmp and swap partitions, for speeding up the overall system
– an HDD, for /home

I had the system running on Mint until recently, so I know the hardware is sound, unless the SSD stopped working, but then it would be reasonable to expect it would not accept partitioning. Under Debian, it booted once and then stopped booting altogether.

The installation I made was from a daily image, as I am/was aiming to put my machine on the testing branch, in order to have some sort of a rolling distro.

If anyone can offer some advice, it would be very much appreciated.

  • Eggymatrix@sh.itjust.works · 2 days ago

    Congrats, you found the only debian that breaks regularly: testing

You can file a bug report and then install something that does not require you to debug early boot issues, like Debian 13, or if you really want a rolling release, Arch or Tumbleweed.

    • data1701d (He/Him)@startrek.website · 2 days ago

      Eh; testing doesn’t break THAT often. Having used it on many of my devices for almost 4 years, I can count on one hand the number of times it broke in a way I had to chroot in to fix it.

      This is very unlikely to be because they are using testing.

      Still, using Debian Stable is probably a smarter idea for this user.

  • okwhateverdude@lemmy.world · 3 days ago

Sounds like your /etc/fstab is wrong. You should be using UUID-based mounting rather than /dev/sdXY. Very likely you'll need to boot from a USB stick with a rescue image (the installer image should work), and fix up /etc/fstab using blkid.

    • qyron@sopuli.xyz (OP) · 3 days ago

      You made me think that perhaps the BIOS/EFI is fudging something up. I checked and I had four separate entries pointing towards the SSD.

      • kumi@feddit.online · 2 days ago

        This gives a little bit of credence to the theory of an old installation taking precedence.

        • Are there other EFI partitions around? Try booting explicitly from each one and see if you get different results

• Are there old bootloaders or entries from no-longer-existing installations lingering around on your EFI drive? Move them from a live env to a backup, or just delete them if you are confident.

• How about NVRAM? It's a way for the OS to store boot configuration in the mobo itself, separate from any disks attached. It doesn't look like it to me, but perhaps your mobo is still trying to load a stale OS from NVRAM config and your newest installation didn't touch it? Manually overriding boot in the BIOS like above should rule out this possibility.

        • qyron@sopuli.xyz (OP) · 2 days ago

          I developed the habit of formatting my disks before a new install, so I’m going to push that hypothesis aside for now.

Before installing Debian I tried Sparky and I noticed it had set up a /boot_EFI and a /boot partition, which sounded off to me, so I wiped the SSD clean and manually partitioned it, leaving only a 1GB /boot, configured for EFI.

NVRAM is not completely off the board, but I find it odd that it would just flare up as an issue now, under Debian, after having caused no problems under Mint or Sparky.

  • IsoKiero@sopuli.xyz · 2 days ago

Do you happen to have any USB (or other) drives attached? Optical drive maybe? In the first text block the kernel suggests it found an 'sdc' device which, assuming you only have the SSD and HDD plugged in and you haven't used other drives in the system, should not exist. It's likely your fstab is broken somehow, maybe a bug in the daily image, but hard to tell for sure. Other possibility is that you still have remnants of Mint on EFI/whatever and it's causing issues, but assuming you wiped the drives during installation that's unlikely.

    Busybox is pretty limited, so it might be better to start the system with a live-image on a USB and verify your /etc/fstab -file. It should look something like this (yours will have more lines, this is from a single-drive, single-partition host in my garage):

    # / was on /dev/sda1 during installation
    UUID=e93ec6c1-8326-470a-956c-468565c35af9 /               ext4    errors=remount-ro 0       1
    # swap was on /dev/sda5 during installation
    UUID=19f7f728-962f-413c-a637-2929450fbb09 none            swap    sw              0       0

If your fstab has things like /dev/sda1 instead of UUID it's fine, but those entries are likely pointing to the wrong devices. My current drive is /dev/sde instead of the /dev/sda the fstab comments mention. With the live-image running you can get all the drives in the system by running 'lsblk', and from there (or by running 'fdisk -l /dev/sdX' as root, replacing sdX with the actual device) you can find out which partition should be mounted where. Then run 'blkid /dev/sdXN' (again, replace sdXN with sda1 or whatever you have) and you'll get the UUID of that partition. Then edit fstab accordingly and reboot.
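Put together, the sequence from the live image looks roughly like this (device names are examples; check what lsblk actually reports on your machine):

```shell
# Run as root from the live environment; device names are examples.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT   # how the kernel currently names the drives
fdisk -l /dev/sda                      # partition table of one drive
blkid /dev/sda2                        # UUID of the partition you identified as root
# Put that UUID into /etc/fstab in place of the /dev/sdXN entry, then reboot.
```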

      • IsoKiero@sopuli.xyz · 2 days ago

Rootfs location is passed via kernel parameter; for example my grub.cfg has "set root='hd4,msdos1'". That's used by the kernel and initramfs to locate the root filesystem, and once the 'actual' init process starts it already has access to rootfs and thus access to fstab. An initramfs update doesn't affect this case, however verifying the kernel boot parameters might be a good idea.
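Checking that can be as simple as the following (the root= value in the comment is just an illustration):

```shell
# From the busybox prompt or a live image: see what root= the kernel was given.
cat /proc/cmdline
# Something like: BOOT_IMAGE=/vmlinuz-... root=/dev/sdb2 ro quiet
# A root=/dev/sdXN value breaks whenever drive enumeration order changes;
# root=UUID=... does not.
```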

    • Bane_Killgrind@lemmy.dbzer0.com · 2 days ago

Tbf he said he doesn't know how to use the terminal, and he'll need to use at least sudo, vim and cat plus the stuff you mentioned. A drive getting inserted into the disk order is probably the correct explanation; I thought UUID was the default on new installs for that reason…

      • IsoKiero@sopuli.xyz · 2 days ago

        I’d argue that if the plan is to run Debian testing it’s at the very least beneficial, if not mandatory, to learn some basics of the terminal. Debian doesn’t ship with sudo by default, so it’s either logging in directly as root or ‘su’. Instead of vim (which I’d personally use) I’d suggest nano, but with live setup it’s also possible to use mousepad or whatever gui editor happens to be available.

        I suppose it’d be possible to use gparted or something to dig up the same information over GUI but I don’t have debian testing (nor any other live distro) at hand to see what’s available on it. I’m pretty sure at least stable debian installs with UUIDs by default, but I haven’t used installer from testing in a “while” so it might be different.

        The way I’d try to solve this kind of problem would be to manually mount stuff from busybox and start bash from there to get “normal” environment running and then fix fstab, but it’s not the most beginner friendly way and requires some prior knowledge.
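A rough sketch of that maneuver, assuming the root partition turned out to be /dev/sdc2 (check with ls /dev/sd* first; this is not beginner-proof, as said):

```shell
# Inside the initramfs busybox prompt. /root is the mountpoint Debian's
# initramfs uses as the staging point for the real root filesystem.
mount /dev/sdc2 /root
exit        # leaving the shell lets the boot continue from the mounted root
# Once booted: fix /etc/fstab to use UUIDs, then run update-initramfs -u
```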

        • Bane_Killgrind@lemmy.dbzer0.com · 2 days ago

          mandatory

          Yes but, not in the first few weeks.

          My holistic suspicion is that OP has his home folder on a USB/esata drive and he’s not telling yet.

          Edit

          Apparently no

  • ThomasWilliams@lemmy.world · 2 days ago

    Why are you using multiple partitions ?

    Linux is not like Windows where you can run programs on any partition or inserted media ; you can only run executables on the primary boot partition. Its therefore pointless IMO to have more than one partition (plus a swap partition).

    Have you tried asking ChatGPT or Gemini ?

    This is what Bing said :

Fixing "Gave up waiting for root device" error in Debian: the error can be caused by missing modules or incorrect partition references. To fix this issue, you can follow these steps:

1. Boot into a live session and list the UUIDs of all partitions using sudo blkid.
2. Check the /etc/fstab file to ensure the correct UUID is listed for the root partition.
3. If the UUID is missing or incorrect, replace it in the /etc/fstab file.
4. If the error persists, you may need to rebuild the initramfs by running sudo update-initramfs -u, after installing the necessary modules with apt-get install lvm2 cryptsetup if you are using logical volumes.

    These steps should help you resolve the boot error and restore your system’s functionality.

source: Ubuntu

    • kumi@feddit.online · 2 days ago

      Linux is not like Windows where you can run programs on any partition or inserted media ; you can only run executables on the primary boot partition. Its therefore pointless IMO to have more than one partition (plus a swap partition).

      What are you on? You can run executables from any partition (filesystem) as long as it is not mounted with the noexec mount option.

    • qyron@sopuli.xyz (OP) · 1 day ago

      Because I like having my disk properly partitioned, to keep things properly separated. Unlike windows.

      And no, I haven’t queried any AI. Because why question a machine when I can ask real human beings and learn from them instead?

  • wickedrando@lemmy.ml · 3 days ago

    Can you reinstall? If possible, use the whole disk (no dual booting and bootloader to deal with).

    • qyron@sopuli.xyz (OP) · 3 days ago

I can; I already did it once before coming here, and I risk doing it again, because people are telling me to do this and that and I'm feeling way over my head.

      But not in the mood to quit. Yet.

      I’m running a clean machine. No secondary OS. The only thing more “unusual” that I am doing is partitioning for different parts of the system to exist separately and putting /home on a disk all to itself.

      • IsoKiero@sopuli.xyz · 2 days ago

Just in case you end up with reinstallation, I'd suggest using the stable release for installation. Then, if you want, you can upgrade that to testing (and have all the fun that comes with it) pretty easily. But if you want something more like a rolling release, Debian testing isn't really it, as it updates in cycles just like the stable releases; it just has a bit newer (and potentially broken) versions until the current testing is frozen and eventually released as the new stable, and the cycle starts again. Sid (unstable) is more like a rolling release, but that comes with even more fun quirks than testing.

        I’ve used all (stable/testing/unstable) as a daily driver at some point but today I don’t care about rolling releases nor bleeding edge versions of packages, I don’t have time nor interest anymore to tinker with my computers just for the sake of it. Things just need to work and stay out of my way and thus I’m running either Debian stable or Mint Debian edition. My gaming rig has Bazzite on it and it’s been fine so far but it’s pretty fresh installation so I can’t really tell how it works in the long run.

        • qyron@sopuli.xyz (OP) · 2 days ago

          I’m on track for that, I admit.

          As I read this, I’m trying a freshly installed live image.

          I have to try… I’m already too invested in this stupidity to just quit at this point.

          Why am I interested in a somewhat rolling release of Debian? Because I’m a dreamer with not enough technical capabilities. I like the stability Debian offers and the years I’ve used it as my default distro is a fond memory.

          The bare bones mentality, the basic, clean approach to the UI/desktop distro customization and the minimal starting software package was a big plus, especially when using very underpowered machines, like I had then.

What is not a fond memory is having an OS remain static for such a long time span that it feels like jumping into a completely new OS when migrating to the next release, and lacking newer versions of software. Yes, I do know backports are a thing, but nonetheless.

But the more user-friendly distros overcompensate for this, by overloading the starting software package and bloating the distro. Polishing can be too much.

          No, I am not about to go and try LFS, Gentoo, or whatever distro that puts me in charge of everything. I have a life. Kind of. But still.

          Like you say, I want things to work, I don’t mind doing some work but I really don’t care about nor need the extra bells and whistles the (excessive) polishing carries.

          End of rant.

          I’m going to torture myself trying to figure whatever might have gone wrong for a bit more.

      • wickedrando@lemmy.ml · 2 days ago

        Ah, yes I saw all the comment suggestions and was hoping a fresh reinstall would work for you.

        Did you format before reinstall? Definitely seems like fstab issue dropping you into initramfs that would need some manual intervention.

        A format and fresh install should totally resolve this (assuming installation options and selections are sound).

What does 'ls /dev/sd*' run from the shell show you?

      • pinball_wizard@lemmy.zip · 2 days ago

The one time I had two bad installs in a row, it was due to my install media.

        Many install media tools have an image checker (check-sum) step, which is meant to prevent this.

        But corrupt downloads and corrupt writes to the USB key can happen.

        In my case, I think it turned out that my USB key was slowly dying.

        If I recall, I got very unlucky that it behaved during the checksums, but didn’t behave during the installs. (Or maybe I foolishly skipped a checksum step - I have been known to get impatient.)

        I got a new USB key and then I was back on track.

        • qyron@sopuli.xyz (OP) · 2 days ago

          Through a cable, to the onboard SATA ports…? But somehow I don’t think that was the answer you were expecting.

          • Bane_Killgrind@lemmy.dbzer0.com · 2 days ago

            Yeah I was thinking you might be using a portable drive for home, which might not be detected early enough in the boot process to mount.

            If you haven’t reinstalled yet, swapping the order of the SATA cables might change the order they are detected, so your home disk that was B to the installer will once again be B to the boot drive.

  • just_another_person@lemmy.world · 3 days ago
    1. Boot into a LiveUSB of the same version of distro you tried to install
    2. View the drive mappings to see what they are detected as
    3. Ensure your newly created partitions can mount correctly
    4. Check /etc/fstab on your root drive (not the LiveUSB filesystem) to ensure the entries match the devices detected while in the LiveUSB
    5. Try rebooting

    Report changes here.

  • GNUmer@sopuli.xyz · 3 days ago

Can you run lsblk within the emergency shell? Sounds a bit like the HDD has taken the place of /dev/sdb, upon which there's no second partition nor a root filesystem -> root device not found.

    • qyron@sopuli.xyz (OP) · 3 days ago

      Perhaps? It fell into a busybox. How can I do what you are requesting?

  • doodoo_wizard@lemmy.ml · 3 days ago

Since you don't know what's happening you don't need to be fucking around with busybox. Boot back into your USB install environment (was it the live system or netinst?) and see how fstab looks. Pasting it would be silly, but I bet you can take a picture with your phone and post it itt.

    What you’re looking for is drives mounted by dynamic device identifiers as opposed to uuids.

Like the other user said, you never know how quickly a drive will report itself to the UEFI, and drives with big caches like SSDs can have hundreds of operations in their queue before they "say hi to the nice motherboard".

If it turns out that your fstab is all fucked up, use ls -al /dev/disk/by-uuid to show you what the UUIDs are, fix your fstab on the system, then reboot.

  • Telorand@reddthat.com · 3 days ago

    I think everyone here has offered good advice, so I have nothing to add in that regard, but for the record, I fucked up a Debian bookworm install by doing a basic apt update && apt upgrade. The only “weird” software it had was Remmina, so I could remote into work; nothing particularly wild.

    I recognize that Debian is supposed to be bulletproof, but I can offer commiseration that it can be just as fallible as any other base distro.

    • qyron@sopuli.xyz (OP) · 3 days ago

      Debian is well known for its stability but it is also known for being tricky to handle when moving into the Testing branch and I did just that, by wanting to have a somewhat rolling distro with Debian.

      I’m no power user. I know how to install my computer (which is a good deal more than most people), do some configurations and tinker a bit but situations like this throw me into uncharted territory. I’m willing to learn but it is tempting to just drop everything and go back to a more automated distro, I’ll admit.

      Debian is not to blame here. Nor Linux. Nor anyone. We’re talking about free software in all the understandings of the word. Somewhere, somehow, an error is bound to happen. Something will fail, break or go wrong.

      At least in Linux we know we can ask for help and eventually someone will lend a pointer, like here.

      • IcyToes@sh.itjust.works · 2 days ago

        OpenSuse Tumbleweed is a great balance between stable and updates (rolling updates). Worth considering if Debian doesn’t work out.

        • qyron@sopuli.xyz (OP) · 2 days ago

          I’m a sucker for Debian. It was my first good and reliable workhorse. First love is hard to forget.

    • FooBarrington@lemmy.world · 2 days ago

      And that’s why I immediately fell in love with immutable distros. While such problems are rare, they can and do happen. Immutable distros completely prevent them from happening.

      • Telorand@reddthat.com · 2 days ago

I love them, too. Ironically, I'm not currently running one, but that's more because I need a VPN client that I haven't been able to get working on immutable distros. I'd use one if that were solved.

          • Telorand@reddthat.com · 2 days ago

            Oh, Private Internet Access. The way it installs itself is wonky on immutable systems (i.e. it was written for mutable systems in an odd way). I remember seeing someone say on the PIA GitHub that there’s a workaround, but I haven’t given that a go, and my own experience trying in the past still led to problems, even if you got the client and daemon working.

            You can utilize the OpenVPN configs just fine, but you lose out on some nice features in the client, like WireGuard and some other QoL things.

            • FooBarrington@lemmy.world · 2 days ago

              Ah, bummer. Looks like they don’t provide a Fedora repo, otherwise it would have been easy to layer onto Silverblue etc. There’s probably still some way, but I get not wanting to go through that trouble.

              • Telorand@reddthat.com · 2 days ago

                Yeah, I even tried rolling my own downstream distro based on Bazzite by trying to install it at build time (when they do most of their system changes), but I kept running into trouble either with extracting the files or moving the files where they needed to go.

    • LeFantome@programming.dev · 3 days ago

      Nothing that uses apt is remotely bullet-proof. It has gotten better but it is hardly difficult to break.

      pacman is hard to break. APK 3 is even harder. The new moss package manager is designed to be hard to break but time will tell. APK is the best at the moment IMHO. In my view, apt is one of the most fragile.

      • data1701d (He/Him)@startrek.website · 2 days ago

Eh, I disagree with you on Pacman. It's possible I was doing something stupid, but I've had Arch VMs where I didn't open them for three months, and when I tried to update them I got a colossally messed up install.

        I just made a new VM, as I really only need it when I need to make sure a package has the correct dependencies on Arch.

        • LeFantome@programming.dev · 2 days ago

          I can almost guarantee that the problem you encountered was an outdated archlinux-keyring that meant you did not have the GPG keys to validate the packages you were trying to install. It is an annoying problem that happens way too often on Arch. Things are not actually screwed up but it really looks that way if you do not know what you are looking at. One line fix if you know what to do.

          It was my biggest gripe when I used Arch. I did not run into it much as I updated often but it always struck me as a really major flaw.

          • data1701d (He/Him)@startrek.website · 1 day ago

            I feel like it was more than the package manager whining; I think xorg literally wouldn’t start after the update, although it’s been so long now that I could be misremembering.

            Honestly, I probably could have salvaged the install if I’d wanted to without too much difficulty, but it was just a VM for testing distro packaging rather than a daily driver device.

            Still, what you say is good to know, and perhaps I should hold back on the Pacman slander. I’ve just been using Debian for around 4 years now and had pretty good reliability; then again, Debian (and most distros, with their pitiful documentation) would probably be very hard to use without Archwiki.

  • moonpiedumplings@programming.dev · 3 days ago

    unless the SSD stopped working but then it is reasonable to expect it would no accept partitioning

    This happened to me. It still showed up in kde’s partition manager (when I plugged the ssd into another computer), with the drive named as an error code.

  • LeFantome@programming.dev · 3 days ago

It could be that /dev/sdb2 really does not exist. Or it could be mapped to another name. It is more reliable to use UUID, as others have said.

    What filesystem though? Another possibility is that the required kernel module is not being loaded and the drive cannot be mounted.

    • qyron@sopuli.xyz (OP) · 3 days ago

Ext4 on all partitions, except for swap space and the EFI partition, which autoconfigures the moment I set it as such.

      At the moment, I’m tempted to just go back and do another reinstallation.

      I haven’t played around with manually doing anything besides setting up the size of the partitions. Maybe I left some flag to set or something. I don’t know how to set disk identification scheme. Or I do, just don’t realize it.

      Human error is the largest probability at this point.

      • kumi@feddit.online · 2 days ago

        OP, in case you still haven’t given up I think I can fill in the gaps. You got a lot of advice somewhat in the right direction but no one telling you how to actually sort it out I think.

        It’s likely that your /dev/sdb2 is now either missing (bad drive or cable?) or showing up with a different name.

        You want to update your fstab to refer to your root (and /boot and others) by UUID= instead of /dev/sdbX. It looks like you are not using full-disk encryption but if you are, there is /etc/crypttab for that.

First off, you actually have two /etc/fstabs to consider: one on your root filesystem and one embedded into the initramfs on your boot partition. It is the latter you need to update here, since it is used earlier in the boot process and is needed to mount the rootfs. It should be a copy of your rootfs /etc/fstab and gets automatically copied/synced when you update the initramfs, either manually or on a kernel installation/upgrade.

        So what you need to do to fix this:

        1. Identify partition UUIDs
        2. Update /etc/fstab
        3. Update initramfs (update-initramfs -u -k all, or reinstall the kernel package)

You need to do this every time you change something in fstab that needs to be picked up in the earlier stages of the boot process. For mounting application or user data volumes it's usually not necessary, since the rootfs fstab also gets processed after the rootfs has been successfully mounted.
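Steps 1 and 2 can be sketched like this, working on a throwaway copy of fstab (the UUID below is made up; on the real system you'd edit /etc/fstab itself with the value blkid prints):

```shell
# Demo on a throwaway file; the UUID is a placeholder, use the one from blkid.
printf '/dev/sdb2 / ext4 errors=remount-ro 0 1\n' > /tmp/fstab
sed -i 's|^/dev/sdb2|UUID=e93ec6c1-8326-470a-956c-468565c35af9|' /tmp/fstab
cat /tmp/fstab   # the root line is now keyed by UUID, not by device name
# Afterwards, step 3 refreshes the initramfs copy: update-initramfs -u -k all
```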

        That step 3 is a conundrum when you can’t boot!

        Your two main options are a) boot from a live image, chroot into your system and fix and update the initramfs inside the chroot, or b) from inside the rescue shell, mount the drive manually to boot into your normal system and then sort it out so you don’t have to do this on every reboot.

        For a), I think the Debian wiki instructions are OK.
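Roughly, that chroot maneuver looks like this (device names are examples for your SSD layout; adjust to what lsblk shows):

```shell
# From a live image, as root. Examples: /dev/sdb2 = root, /dev/sdb1 = EFI.
mount /dev/sdb2 /mnt
mount /dev/sdb1 /mnt/boot/efi
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
# Inside the chroot: edit /etc/fstab, then run update-initramfs -u -k all
```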

        For b), from the busybox rescue shell I believe you probably won’t have the lsblk or blkid like another person suggested. But hopefully you can ls -la /dev/disk/by-uuid /dev/sd* to see what your drives are currently named and then mount /dev/XXXX /newroot from there.

        In your case I think b) might be the most straightforward but the live-chroot maneuver is a very useful tool that might come in handy again in other situations and will always work since you are not limited to what’s available in the minimal rescue shell.

        Good luck!