One of the Hyprland devs (I don't know exactly which one) is toxic and known for it. Is that person this guy?
Yes, it’s this guy.
Right, I just mean that if your connection speed is faster than your server can transcode, then the transcode speed will be the bottleneck.
It’s limited to the transcode speed, but keep in mind that transcoding, especially down to a lower resolution, will usually run faster than realtime.
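As a rough sketch of that bottleneck logic (the numbers below are made up for illustration, not real benchmarks):

```python
def effective_stream_rate_mbps(connection_mbps: float, transcode_mbps: float) -> float:
    """The stream can't flow faster than the slower of the network link
    and the rate at which the server can produce transcoded output."""
    return min(connection_mbps, transcode_mbps)

# Fast link, slow transcode: the transcoder is the bottleneck.
print(effective_stream_rate_mbps(100.0, 12.0))  # 12.0

# Transcoding down to a lower resolution often runs faster than realtime,
# so the transcoder stops being the limiting factor.
print(effective_stream_rate_mbps(100.0, 40.0))  # 40.0
```

Either way, the client only ever sees the smaller of the two rates.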
FYI Jellyflix also supports that
Nah, XP was peak. The last time the backwards compatibility worked with any sort of consistency
NVME SSDs vs HDDs, perhaps?
This. Jellyfin has a direct HDHR integration and works as a DVR directly with one.
I run LMDE on my N100 mini-PC that I use as a server. It was super easy to set up.
The person you’re replying to linked their literal reliability stats lmao
So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.
But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.
I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.
Besides, there are better reasons/ways to fight the system than helping other people avoid learning.
Just that they’re no easier to use to fool an anti-AI system than using ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works made by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
I believe commercial LLMs have some kind of watermark when you apply AI for grammar and fixing in general, so I just need an AI to make these works undetectable with a private LLM.
That’s not how it works, sorry.
Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14b-parameter model down to a 4-bit integer quantization, it’ll still generally outperform a 7b-parameter model at 16-bit floating point.
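The memory arithmetic behind that is easy to check. A quick back-of-the-envelope sketch (weights only; this ignores activations and KV cache, and the helper name is just for illustration):

```python
def model_weight_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in GiB:
    parameters * bits-per-weight, converted from bits to GiB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

# 14b parameters at 4-bit int quantization
print(f"14b @ 4-bit: {model_weight_gib(14, 4):.1f} GiB")   # ~6.5 GiB

# 7b parameters at 16-bit floating point
print(f"7b @ fp16:   {model_weight_gib(7, 16):.1f} GiB")   # ~13.0 GiB
```

So the 4-bit 14b model fits in roughly half the memory of the fp16 7b model while keeping twice the parameter count, which is why the quantized larger model usually wins at a given memory budget.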
Jellyfin doesn’t need any particular setup to work directly over the LAN, because it never tries to use a central login provider the way Plex does.
The only reason OP is struggling with it is because they set it up so that they can only connect to it via Tailscale.