Solution: download an earlier version of the LM Studio AppImage.

Or version 0.4.6-1, which works better for me.

I recently bought a new computer, and, after again trying Windows 11 for a bit, I decided I wanted to keep using OpenSUSE.

Probably unpopular here: I enjoy screwing around with local LLM models. I used LM Studio on my old computer (on which I had also installed OpenSUSE Leap). I also tested it on my new computer in Windows 11, and it worked very nicely. Now I’m trying on my new computer with OpenSUSE Leap 16, and it doesn’t work at all.

Specifically: no runtimes or engines are present, and my hardware isn't recognised at all: not my GPU, not my CPU, nothing.

I'm thinking it's a driver issue. I've looked around quite a bit, and also looked up what seem to me the most important error messages I got when running the AppImage from the console:

spoiler

[BackendManager] Surveying hardware with backends with options: {"type":"newAndSelected"}
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13407
[ProcessForkingProvider][NodeProcessForker] Exited process 13407
21:17:29.644 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-avx2@2.12.0': LMSCore load lib failed - child process with PID 13407 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-avx2@2.12.0' took 9.47ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13408
[ProcessForkingProvider][NodeProcessForker] Exited process 13408
21:17:29.648 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0': LMSCore load lib failed - child process with PID 13408 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0' took 3.70ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13409
[ProcessForkingProvider][NodeProcessForker] Exited process 13409
21:17:29.651 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0': LMSCore load lib failed - child process with PID 13409 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0' took 3.57ms
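Exit code 127 from the spawned backend processes is the conventional "not found" status: either the child binary itself or one of its shared libraries could not be loaded. A hedged diagnostic sketch (the AppImage filename and backend library paths below are illustrative, not confirmed against LM Studio's actual layout):

```shell
# 127 is the POSIX shell convention for "command not found"; the ELF
# loader failing to resolve a shared library surfaces the same way:
sh -c 'definitely_not_a_real_command' 2>/dev/null
echo $?    # prints 127

# Extract the AppImage and ask ldd which libraries the llama.cpp
# backends fail to resolve (filenames illustrative, version-dependent):
# ./LM-Studio-*.AppImage --appimage-extract
# find squashfs-root -name 'libllama*' -exec ldd {} \; | grep 'not found'
```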

This is my system with installed drivers:

I did get Ollama to work… Any thoughts?

  • Ⓜ3️⃣3️⃣ 🌌@lemmy.zip · 16 hours ago

    I use LM Studio too, from the official AppImage.

    Never managed to get the Nvidia GPU working, probably because the Nvidia drivers are a pain to install properly (secure boot, drivers, kernel, display … everything must be perfect or it doesn't work).

    But the CPU is just enough for local models if you are a bit patient and have plenty of RAM.

    It is very unusual for LM Studio to fail to detect your CPU and your system memory. I would start looking for excessive restrictions affecting LM Studio itself (AppImage? Native? Something with SELinux or system hardening?)
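    The hardening hunch is checkable: newer openSUSE releases enable SELinux by default, and the AppImage's FUSE mount is another moving part you can take out of the equation. A sketch (extracted filenames are illustrative and may differ between LM Studio versions):

    ```shell
    # Check whether SELinux is enforcing and has logged denials
    # (newer openSUSE releases enable SELinux by default):
    getenforce 2>/dev/null || echo "SELinux tools not installed"
    # sudo ausearch -m avc -ts recent     # show recent denials, if any

    # Rule out the AppImage's FUSE mount by extracting it and running
    # the app from plain files on disk (filenames illustrative):
    # ./LM-Studio-*.AppImage --appimage-extract
    # ./squashfs-root/lm-studio
    ```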

    • Don Antonio Magino@feddit.nl OP · 16 hours ago

      Thanks for the tips. How do I go about doing this? What I've tried is running LM Studio from the console with --no-sandbox. No idea if that has something to do with what you're referring to.

      I’ve also tried running it with sudo, which gives this error message:

      spoiler

      [15557:0409/215138.000614:ERROR:ui/ozone/platform/x11/ozone_platform_x11.cc:249] Missing X server or $DISPLAY
      [15557:0409/215138.000647:ERROR:ui/aura/env.cc:257] The platform failed to initialize. Exiting.
      Segmentatiefout (segmentation fault)
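      The "Missing X server or $DISPLAY" error is expected under sudo: sudo's default policy resets the environment, so the Electron X11 client has no $DISPLAY to connect to. A small sketch of the effect (sudo itself is not needed to reproduce it):

      ```shell
      # Dropping DISPLAY from the environment reproduces what sudo does;
      # an X client started this way cannot find the display:
      env -u DISPLAY sh -c 'echo "DISPLAY=${DISPLAY:-<unset>}"'
      # prints DISPLAY=<unset>

      # Running a GUI app as root is discouraged anyway; if you must
      # test it, sudo -E (--preserve-env) keeps $DISPLAY and friends:
      # sudo -E ./LM-Studio-*.AppImage --no-sandbox
      ```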