

Have you even read the issues and understood them?
Yes, those should be fixed, but unless you are worried about someone hijacking a video stream when you use a generic media path, there is not that much to worry about.


Sure, in the gigantic wall of text. Also it doesn’t tell you why, or what to do about it. All they’d have to do is say “run dist-upgrade to update these packages.”
It is literally in the summary that gets presented in the last few lines before you have to press Y to continue.
Since you are already overwhelmed by the wall of text, you would probably not read the suggestion anyway.
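For reference, the part of the summary in question looks roughly like this (illustrative output, package names made up; the exact wording varies by apt version):

```
The following packages have been kept back:
  proxmox-kernel-6.8
The following packages will be upgraded:
  curl libcurl4
2 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Do you want to continue? [Y/n]
```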


I didn’t mean to put words in your mouth, but your replies are exhausting. Lighten up.
I think you got my point. Not sure why you feel the need to try to discuss another topic with me.
Apt could use some usability improvements, specifically around doing full upgrades. This isn’t a controversial take.
No, it’s not. And again, I never said apt is good, perfect, or bad.
Googling apt full upgrade CLI leads to various articles, all of which have a series of commands that are named orthogonally to this fairly common use case, and must be run in order, and sometimes repeated.
I am fully aware; it is not like I ever had to dig down and resolve dependency hell.
There’s good reasons it is the way it is, and it can certainly be improved.
But saying that tools could be made better is something different from writing a whole article with a clickbait title amounting to “How I ignored the output of my package manager”.
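For the record, the sequence that actually resolves kept-back packages is short (a minimal sketch; full-upgrade is the newer spelling of dist-upgrade):

```
sudo apt update          # refresh the package lists
sudo apt full-upgrade    # upgrade, allowing new installs/removals to resolve dependencies
sudo apt autoremove      # optional: clean up packages that are no longer needed
```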


Nothing of what you said is on topic. I never said Linux is for everyone and so on…
First, this is about server administration. Second, I am not saying that this behavior is good or bad.
I am saying that the behavior is clearly stated in the output. What else would “packages were held back” mean?
Blaming your own failure to read the output prompt on the tools is really childish.


But it is clearly stated in the output that it holds back packages.


When a kernel update requires a change in dependencies, something Proxmox kernels do frequently, apt just quietly “keeps back” the package. It doesn’t fail, it doesn’t break the system, and it doesn’t trigger a rollback. It just waits for me to notice.
This should hopefully save everyone a click.
Yes, obviously: if you do not update the packages, then they do not get updated.
If you do not read the output of a command, then you will not notice that.
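And if the wall of text really is the problem, one command after an upgrade shows whether anything was left behind (output illustrative):

```
$ apt list --upgradable
Listing... Done
proxmox-kernel-6.8/stable 6.8.12-1 amd64 [upgradable from: 6.8.8-1]
```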


7.0-rc7 is probably due to the 7.0 release in early-to-mid April. So the fix was in mainline on the 1st of April. The commit on the 11th from GKH was probably due to the release.
I am not familiar enough with the commit and release structure to go into more detail. But to me it clearly looks like the statement on copy.fail is correct: the fix was in mainline on the 1st of April.
From my point of view, I would suggest that maybe the communication downstream to the distros was not handled that well? But who would be to blame? The researchers, who would need to communicate the issue to most existing distros? The Linux maintainers? The distro maintainers?
Hard to say without knowing the communication on the related mailing lists, the disclosure process, etc.


Honestly, that’s a really bad take. Yes, obviously you should not let attackers access the terminal, but there are Linux servers that rely on multi-user operation, like servers that are meant for terminal access, e.g. in HPC.
Also, services get hosted via containers these days, so even with rootless containers, an RCE in just one service gives you the local code execution needed to escalate to root. And even if there are additional VMs for more isolation from the host, you still get root on the whole VM.


Looking at the CVE on NIST, I found the following commit, which dates to 30.03.


The patches were proposed over a month ago, and the patch to the kernel was committed on the 1st of April.
Either the vulnerability was not properly communicated to the distro maintainers, or they were the ones sleeping.
This was probably executed as a responsible disclosure, where clear timelines and release dates get communicated from the beginning.
I find it hard to blame the security team here when there was a month between the first committed patch and the release of the PoC.


I heard the wisdom once that you should self-host everything except for email. I’m sure there are great tools to make it manageable, but the effort-to-gain ratio is just very high.
I find it irritating that you speak on the matter from hearsay, without having even tried it with modern tools or projects.
With projects like Mailcow it’s a simple setup, and Rspamd handles spam better than many professional industry spam filters I have encountered.
Yes, there are some pitfalls to be aware of and some know-how required, but as of right now it is very easy, with very little maintenance.
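To give an idea of the effort involved, the documented Mailcow (dockerized) setup is roughly this (a sketch from memory; check the official docs for specifics):

```
git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
./generate_config.sh     # asks for your FQDN and writes mailcow.conf
docker compose pull
docker compose up -d
```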


No one ever said that the new model would not be useful. But Anthropic hyped it up into a 0-day machine that finds 0-days in every project with ease, and in places where humans could not have found them.


I have a .xyz domain as my main domain, and there are basically no issues. But in one or two places I noticed that the domain gets blocked: a couple of free/open Wi-Fi networks and a couple of DNS servers. Nothing major, imho.


By default, this application allows, when adding a server, communication between the app and the server that is not encrypted. The defaults should enforce TLS encryption; if someone wants to disable this behavior and allow unencrypted communication, that should take extra steps.
As I commented somewhere else, saying that it is secure by default because it is turned off by default is like saying: “The SSH server is turned off by default, so the configuration that ships with it does not need to be secure.”


That’s like saying:
“The SSH Server configuration does not need to be secure because the SSH Server is turned off by default”


Yes, this is what we’re discussing… Are you a bot?
Obviously not. But you keep dodging the point here: instead of coming up with an argument against it, you seem to be trying to attack me personally.


In security and development there is a principle called “secure by default”. It means the default settings are secure, which would encompass something like enforced transport encryption.
Does this mean the config cannot be changed to fit the threat model? No.
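As a sketch of what that looks like in practice (hypothetical config keys, not from any particular app):

```
# Secure by default: TLS is on out of the box, downgrading is a deliberate extra step.
require_tls: true                 # hypothetical key, enabled by default
allow_plaintext_fallback: false   # hypothetical key, must be flipped on purpose
```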


Not sure why you’ve chosen to be indignant about this particular implementation.
We are talking about a tracking app. Most self-hosted projects do not store such private data. You might be able to make that argument for Immich, but only for people who take a picture every 5 minutes.
As I said, when you know the exact path of a media item on the server, you can check whether the item exists.
If you choose a non-standard file path, it is not an issue.
Should that be fixed? Yes.
What’s the scenario? A law firm could brute-force-check all media items on open Jellyfin servers? Exploiting something like this is highly illegal in a lot of jurisdictions, and it would also not prove the existence of the media on the server, just of a file named like it.
Mitigation? Just add another random letter in the docker-compose mount path.
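E.g. in the compose file, something like this (paths illustrative; note it is the container-side path that the server indexes, so the random part belongs there):

```
services:
  jellyfin:
    volumes:
      # random suffix on the container-side path breaks guessing based on default layouts
      - /srv/media/movies:/media-k3x/movies
```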