**Solution: download an earlier version of the LM Studio AppImage, e.g. 0.4.6-1, which works better for me.**
I recently bought a new computer, and, after again trying Windows 11 for a bit, I decided I wanted to keep using OpenSUSE.
Probably unpopular here: I enjoy screwing around with local LLM models. I used LM Studio on my old computer (on which I had also installed OpenSUSE Leap). I also tested it on my new computer in Windows 11, and it worked very nicely. Now I’m trying on my new computer with OpenSUSE Leap 16, and it doesn’t work at all.
Specifically: no runtimes or engines are present, and my hardware isn’t recognised at all: not my GPU, not my CPU, nothing.


I’m thinking it’s a driver issue. I’ve looked around quite a bit, and also looked up (what seems to me) the most important error messages I got when running the AppImage from the console:
```
[BackendManager] Surveying hardware with backends with options: {"type":"newAndSelected"}
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13407
[ProcessForkingProvider][NodeProcessForker] Exited process 13407
21:17:29.644 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-avx2@2.12.0': LMSCore load lib failed - child process with PID 13407 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-avx2@2.12.0' took 9.47ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13408
[ProcessForkingProvider][NodeProcessForker] Exited process 13408
21:17:29.648 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0': LMSCore load lib failed - child process with PID 13408 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-nvidia-cuda-avx2@2.12.0' took 3.70ms
[BackendManager] Surveying new engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0'
[ProcessForkingProvider][NodeProcessForker] Spawned process 13409
[ProcessForkingProvider][NodeProcessForker] Exited process 13409
21:17:29.651 › Failed to survey hardware with engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0': LMSCore load lib failed - child process with PID 13409 exited with code 127
[BackendManager] Survey for engine 'llama.cpp-linux-x86_64-vulkan-avx2@2.12.0' took 3.57ms
```
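For what it's worth, exit code 127 is the shell's standard "command not found" code: you get it whenever a binary, or the loader it needs, cannot be located. A minimal sketch demonstrating this (the engine path here is made up for illustration):

```shell
#!/bin/sh
# Running a binary that does not exist produces exit code 127 --
# the same code the LM Studio log reports for each engine survey.
# /nonexistent/llama-engine is a made-up path for demonstration.
/nonexistent/llama-engine 2>/dev/null
echo "exit code: $?"
```

If that is what is happening here, extracting the AppImage with `./LM-Studio.AppImage --appimage-extract` and running `ldd` on the bundled engine binary inside `squashfs-root` should show which shared library is unresolved.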
This is my system with installed drivers:

I did get Ollama to work… Any thoughts?
I know this issue; I had a similar issue trying to get the client for krunker.io working with my NVIDIA GPU. I might have the solution saved somewhere; this comment is so I can remind myself to check.
This looks like a sandboxing issue. Using the `--no-sandbox` flag has never worked with AppImages from what I remember, except for very light runtimes. Running with sudo will throw that error because the root user has no display manager running.
Just try running the installer if you don’t want to mess around with debugging the AppImage. Check the GitHub Issues for related keywords and see if others are running into the same issue, maybe it’s just a specific release, or SELinux causing the problem.
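If you want to rule out SELinux, a quick check that needs no extra tooling is to read the kernel interface directly; this is just a sketch, and either result is informative:

```shell
#!/bin/sh
# /sys/fs/selinux/enforce reads 1 (enforcing) or 0 (permissive);
# if the file is absent, SELinux is not active on this boot.
if [ -r /sys/fs/selinux/enforce ]; then
  echo "SELinux enforce flag: $(cat /sys/fs/selinux/enforce)"
else
  echo "SELinux not active"
fi
```

If it turns out to be enforcing, `sudo ausearch -m avc -ts recent` (from the audit package) lists any recent denials that could explain the engine failures.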
You mean the Debian installer? Seems like a bad idea on OpenSUSE.
EDIT: looking at the bug reports on GitHub, I’ve found a very recent bug report identical to what I’ve described, so at least it doesn’t seem to be isolated.
They have a simple bash installer from what I see. You can also install everything via pip. A couple of quick commands.
That bug report mentions a few versions, so maybe just go back to whatever version was working on your other machine.
I couldn’t get it to install… Something about ldconfig not being in the PATH.
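The ldconfig complaint is often just a PATH problem: `ldconfig` lives in `/sbin`, which a normal user shell may not have in its PATH. A sketch of the usual workaround before re-running an installer script:

```shell
#!/bin/sh
# /sbin and /usr/sbin hold admin tools like ldconfig; appending them
# lets installer scripts that invoke ldconfig by bare name find it.
PATH="$PATH:/sbin:/usr/sbin"
export PATH
case ":$PATH:" in
  *:/sbin:*) echo "PATH now contains /sbin" ;;
esac
```

This only changes the current shell session; it does not touch your profile.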
I’ll try pip later, then…
EDIT: never mind, this is a barebones version (‘lmster’) anyway.
But going back to version 0.3.39-2 (which I found here) works perfectly, so great!
EDIT AGAIN: not quite perfectly. It wouldn’t start Gemma 4, giving an error about its architecture. Seems like it’s too new a model? I’m now trying 0.4.6-1, and this version does run Gemma 4.
I use lmstudio too, from the official appimage.
Never managed to get the NVIDIA GPU working, probably because the NVIDIA drivers are a pain to install properly (secure boot, drivers, kernel, display… everything must be perfect or it doesn’t work).
But the CPU is enough for local models if you are a bit patient and have plenty of RAM.
It is very unusual for lmstudio to fail to detect your CPU and system memory. I would start by looking for excessive restrictions affecting lmstudio itself (AppImage? Native? Something with SELinux or system hardening?)
Thanks for the tips. How do I go about doing this? What I’ve tried is running LM Studio from the console with --no-sandbox. No idea if that has anything to do with what you’re referring to.
I’ve also tried running it with sudo, which gives this error message:
```
[15557:0409/215138.000614:ERROR:ui/ozone/platform/x11/ozone_platform_x11.cc:249] Missing X server or $DISPLAY
[15557:0409/215138.000647:ERROR:ui/aura/env.cc:257] The platform failed to initialize. Exiting.
Segmentation fault
```
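That failure is expected: with its default env_reset policy, sudo scrubs the caller’s environment, so DISPLAY (and XAUTHORITY) never reach the app and the X11 backend aborts. The effect can be mimicked with `env -i`, which likewise starts a child with an empty environment:

```shell
#!/bin/sh
# env -i clears the environment, much like sudo's env_reset:
# DISPLAY expands to nothing, so GUI apps cannot reach the X server.
env -i sh -c 'echo "DISPLAY=[$DISPLAY]"'
```

Running GUI apps as root is rarely a good idea anyway; if you must, `sudo -E` preserves the caller’s environment.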
That may be related to your desktop environment using Wayland instead of good old X11.
Is LM Studio better than Jan? Just asking as a user who never had a problem with Jan on Arch Linux.
It detects my hardware at least, but for whatever reason the Hub is empty and I can’t download the default Jan model… But it works when I import the models manually.