While trying to move my computer to Debian, after letting the installer do its task, my machine will not boot.

Instead, I get a long string of text, as follows:

Could not retrieve perf counters (-19)
ACPI Warning: SystemIO range 0x0000000000000B00-0x0000000000000B08 conflicts with OpRegion 0x0000000000000B00-0x0000000000000B0F (\GSA1.SMBI) (20250404/utaddress-204)
usb: port power management may be unreliable
sd 10:0:0:0: [sdc] No Caching mode page found
sd 10:0:0:0: [sdc] Assuming drive cache: write through
amdgpu 0000:08:00.0 amdgpu: [drm] Failed to setup vendor infoframe on connector HDMI-A-1: -22

And the system eventually drops into a shell that I do not know how to use. It returns:

Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
 - Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)

Alert! /dev/sdb2 does not exist. Dropping to a shell!

The system has two disks mounted:

- an SSD, with the EFI, root, var, tmp and swap partitions, for speeding up the overall system
- an HDD, for /home

I had the system running on Mint until recently, so I know the machine is sound. The SSD could have stopped working, but then it would be reasonable to expect it not to accept partitioning. Under Debian, it booted once and then stopped booting altogether.

The installation was made from a daily image, as I am (or was) aiming to put my machine on the testing branch, in order to have some sort of rolling distro.

If anyone can offer some advice, it would be very much appreciated.

  • LeFantome@programming.dev

    It could be that /dev/sdb2 really does not exist. Or it could be mapped to another name. It is more reliable to use UUID, as others have said.

    What filesystem, though? Another possibility is that the required kernel module is not being loaded and the drive cannot be mounted.
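
    For example, from the (initramfs) shell you could check along these lines (ext4 is only an assumption here; substitute whatever the root filesystem actually is):

      cat /proc/modules | grep ext4    # is the filesystem driver actually loaded?
      ls /dev/sd* /dev/nvme*           # which block devices did the kernel detect at all?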

    • qyron@sopuli.xyz (OP)

      Ext4 on all partitions, except for the swap space and the EFI partition, which autoconfigures the moment I set it as such.

      At the moment, I’m tempted to just go back and do another reinstallation.

      I haven’t played around with manually doing anything besides setting the size of the partitions. Maybe I left some flag unset or something. I don’t know how to set the disk identification scheme. Or I do and just don’t realize it.

      Human error is the largest probability at this point.

      • kumi@feddit.online

        OP, in case you still haven’t given up, I think I can fill in the gaps. You got a lot of advice pointing somewhat in the right direction, but no one actually telling you how to sort it out, I think.

        It’s likely that your /dev/sdb2 is now either missing (bad drive or cable?) or showing up with a different name.

        You want to update your fstab to refer to your root (and /boot and others) by UUID= instead of /dev/sdbX. It looks like you are not using full-disk encryption, but if you are, there is /etc/crypttab for that.
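
        As an illustration only (the UUID below is a made-up placeholder; use what blkid reports for your partition), the root line in /etc/fstab would change from something like

          /dev/sdb2  /  ext4  errors=remount-ro  0  1

        to

          UUID=0a1b2c3d-1111-2222-3333-444455556666  /  ext4  errors=remount-ro  0  1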

        First off, you actually have two /etc/fstabs to consider: one on your root filesystem and one embedded in the initramfs on your boot partition. It is the latter you need to update here, since it is used earlier in the boot process and is needed to mount the rootfs. It should be a copy of your rootfs /etc/fstab and gets automatically copied/synced when you update the initramfs, either manually or on a kernel installation/upgrade.
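
        If you want to see what actually ended up inside a generated initramfs, lsinitramfs (part of initramfs-tools) can list its contents; the image path here is an assumption, so pick whichever file under /boot matches your kernel:

          lsinitramfs /boot/initrd.img-* | grep -i fstab    # look for an embedded fstab copy
          lsinitramfs /boot/initrd.img-* | less             # or just browse everything that was packed in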

        So what you need to do to fix this:

        1. Identify partition UUIDs
        2. Update /etc/fstab
        3. Update the initramfs (update-initramfs -u -k all, or reinstall the kernel package); see the sketch below
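
        As a rough sketch of those three steps (run as root, from the installed system or from a chroot into it; use whichever editor you prefer for step 2):

          blkid                             # step 1: list every partition with its UUID and filesystem type
          nano /etc/fstab                   # step 2: put the UUID= form in place of /dev/sdbX (see the example above)
          update-initramfs -u -k all        # step 3: rebuild the initramfs for all installed kernels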

        You need to do this every time you make changes in fstab that need to be picked up in the earlier stages of the boot process. For mounting application or user data volumes it is usually not necessary, since the rootfs fstab also gets processed after the rootfs has been successfully mounted.

        That step 3 is a conundrum when you can’t boot!

        Your two main options are a) boot from a live image, chroot into your system, and fix and update the initramfs inside the chroot; or b) from inside the rescue shell, mount the drive manually to boot into your normal system, and then sort it out so you don’t have to do this on every reboot.

        For a), I think the Debian wiki instructions are OK.
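
        Only as a sketch of that route (device names are placeholders; this assumes the layout described above, with /boot on the root partition and a separate EFI partition):

          sudo mount /dev/sdX2 /mnt                      # whatever the root partition is called under the live system
          sudo mount /dev/sdX1 /mnt/boot/efi             # the EFI partition, if you also want to refresh GRUB
          sudo mount /dev/sdX3 /mnt/var                  # mount any other separate partitions (/var, /tmp, ...) the system expects
          for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
          sudo chroot /mnt
          update-initramfs -u -k all                     # rebuild the initramfs with the corrected fstab
          update-grub                                    # optional: regenerate grub.cfg, which records root= by UUID as well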

        For b), from the busybox rescue shell I believe you probably won’t have lsblk or blkid like another person suggested. But hopefully you can run ls -la /dev/disk/by-uuid /dev/sd* to see what your drives are currently named, and then mount /dev/XXXX /newroot from there.

        In your case I think b) might be the most straightforward, but the live-chroot maneuver is a very useful tool that might come in handy again in other situations, and it will always work since you are not limited to what’s available in the minimal rescue shell.

        Good luck!