• 1 Post
  • 87 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • this will limit ZFS ARC to 16GiB.

    But if I have 32GB to start with, that’s still quite a lot and, as mentioned, my current usage pattern doesn’t really benefit from zfs over any other common filesystem.

    As for using a simple fs on LVM, do you not care about data integrity?

    Where did you get that from? LVM has options to create RAID volumes and, again as mentioned, I can mix and match those with software RAID however I like. Also, on a single host, no matter how sophisticated the filesystem and RAID setup, it doesn't really matter when it comes to keeping data safe; that's what backups are for, and that's a whole other discussion.
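
    For reference, the 16 GiB ARC cap quoted above can be sketched like this (a minimal sketch, assuming Linux with OpenZFS; the paths are the standard module-parameter locations):

```shell
# Sketch: cap the ZFS ARC at 16 GiB on Linux/OpenZFS.
# 16 GiB = 16 * 1024^3 = 17179869184 bytes.
echo "options zfs zfs_arc_max=17179869184" | sudo tee /etc/modprobe.d/zfs.conf

# The same limit can also be applied at runtime, without a reboot:
echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```

    The modprobe.d file makes the limit persistent across reboots; the sysfs write takes effect immediately.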


  • ZFS in general is pretty memory hungry. I set up my Proxmox server with ZFS pools a while ago and now I kind of regret it. ZFS itself is very nice and has a ton of useful features, but I just don't have the hardware nor the usage pattern to benefit from it that much on my server. I'd rather have that thing running on LVM and/or software RAID to have more usable memory for my VMs. That's one of the projects I've been planning for the server: replace the ZFS pools with something which suits my usage patterns better. But that's a whole other story and requires some spare money and spare time, neither of which I really have at hand right now.



  • Steps 1, 2, 4, 5 and 7 just need some time. I have the stuff pretty much thought out and it's just a matter of actually doing the things. I was sick for the majority of November; if it wasn't for that, those would have already been completed. The rest need either planning or money. The Immich setup would ideally need 2x2TB SSD drives (in a RAID1 setup), but that's about 500€ out of pocket, and the Home Assistant setup needs time to actually work with it and to plan things forward. Additionally, the HA setup could use a floor thermostat or two, some homeESP gadgets and so on, so it needs some money as well.

    The majority of the stuff should be taken care of by February; the rest is more or less open.


  • A ton.

    1. Set up email and website hosting on a VPS to replace current setup
    2. Get more solid state storage for my home server and finish the Immich setup (import photos and all that)
    3. Set up proper backups for the home server
    4. Migrate current Unifi controller to home server
    5. Local VPN server to access home assistant and other services even when travelling
    6. Spend some time with my Home Assistant server: fine-tune automations, add some more, add sensors and more controls, maybe add a wall-mounted tablet for managing the thing and so on. It'll never end, and it'll need a visit or two from an electrician too
    7. Better isolation for IoT things on my network. I already have a separate VLAN for them without internet access, but it's a bit of an incomplete project

    And then “would be nice” stuff:

    1. Switch the Dahua NVR to something else. The current one works in the sense that it stores video, but movement tracking isn't really perfect, and the standalone NVR box is a bit lacking both in speed and in features
    2. Replace the whole home server (currently running Proxmox, which in itself is fine). It's an old server I got from work, and it does work, but it's not redundant and it's getting old. Something less power hungry and less noisy would be nice. It just takes some money and time, neither of which I have in surplus, so we'll see.
    3. Move Home Assistant from a Raspberry Pi to the home server. Maybe add Zigbee capabilities next to Z-Wave and WiFi.

    And likely a ton more which I don't remember right now. Money and especially spare time to tinker are just lacking.


  • Use the friend’s network as a VPN/proxy/whatever to obscure my home IP address

    And then your friend is responsible for your actions on the internet. The end goal you described is so vague that I, at least, wouldn't let your Raspberry Pi connect to my network.

    There’s a ton of VPN services which give you the end result you want without potential liability or other issues for your friend. If you just want to tinker, this thread has quite a bit of information to get you started.


  • So, you want the traffic to go the other way around. Traffic from the HomeNet should go to the internet via the FriendNet, right? In that case, if you want the Raspberry box to act as a proxy (or VPN) server, you need to forward the relevant ports on the FriendNet to your Raspberry Pi so that your HomeComputer can connect to it.

    Or you can set up a VPN and route traffic through that the other way. Tunnels work both ways, so it's possible to set up a route/HTTP proxy/whatever through the VPN tunnel to the internet, even if the Raspberry box is the client from the VPN server's point of view.

    I don't immediately see the benefit of tunneling your traffic through the FriendNet to the internet, unless you're trying to bypass some IP block or do something otherwise potentially malicious, or at least something in a gray area. But anyway, you need a method for your proxy client to connect to the proxy server. And in generic consumer space, that needs firewall rules and/or port forwarding (although both are firewall rules, strictly speaking) so that the proxy server on the Raspberry box is visible to the internet in the first place.

    Once your proxy server is visible to the internet, it's just a matter of writing a few scripts for the server box to tell the client end "my public IP is <a.b.c.d>" and change the proxy client configuration accordingly. But you still need some kind of setup for the HomeNet to receive that, likely a dyndns service and maybe some port forwarding.

    Again, I personally would set up something like that with a VPN tunnel from the Raspberry box to the HomeServer, but as I don't really understand what you're going after with a setup like this, it's hard to suggest anything else.
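
    As one concrete illustration of "tunnels work both ways", OpenSSH can open a reverse SOCKS proxy from the client side (a sketch only; the hostname and user are placeholders, not anything from your setup):

```shell
# Sketch: on the Raspberry box inside FriendNet, connect OUT to the
# HomeServer and open a SOCKS proxy there (reverse dynamic forwarding,
# supported by OpenSSH 7.6+ when -R is given a bare port).
# "user@home.example.com" is a placeholder address.
ssh -N -R 1080 user@home.example.com

# Then, on the HomeServer, clients pointed at localhost:1080 have their
# traffic exit to the internet via FriendNet:
curl --socks5-hostname localhost:1080 https://example.com/
```

    The key point is that the Raspberry box only makes an outbound connection, so nothing needs to be port-forwarded on the FriendNet side for this particular direction.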


  • So, you want a box which you can connect to any network around, and then use some other device to connect to your Raspberry box, which redirects your traffic through your home connection to the internet?

    The easiest (at least for me) would be to create a VPN server on your home network. Have a dyndns setup on your home network so you can reach it in the first place, open/redirect a port for OpenVPN (or whatever you like), and have a client running on the Raspberry. After that you can connect your other device to the Raspberry box (via WiFi or ethernet) and create IP forwarding/NAT rules so that everything goes to the Raspberry box, then to your home server via the VPN tunnel, and from there to the internet.

    You can use any HTTP proxy with this, or just let the network do its thing and tunnel everything via your home connection. In either case the internet would only see your encrypted VPN traffic to your home network, and everything else originates from your home connection.

    You can replace the VPN with just an HTTP proxy, but both are pretty much the same in terms of 'cost', so your network latency, bandwidth and other stuff don't really change regardless of the approach. But if you just want the HTTP proxy, you can forward a port on your home network for the proxy and just use that on your devices, without the Raspberry box, and achieve the very same end result without extra hardware.

    And obviously, if you go with VPN tunneling for everything, you don't need the Raspberry for that either; just a VPN client which connects to your home network, and that's it. The case where you have devices which can't use a VPN directly would benefit from the Raspberry box, but if you can already set up an HTTP proxy for the thing you're actually using, I don't see the benefit of running separate hardware for anything.

    Some port forwarding, or opening ports in the firewall, is needed in any scenario. But there are a ton of options to limit who can access your stuff. However, this goes way beyond the scope of your question, and more details are necessary on what you're actually trying to achieve with a setup like this.
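
    The IP forwarding/NAT part on the Raspberry box could look roughly like this (a sketch under assumptions: tun0 is the VPN tunnel interface and eth0 faces the client device; your interface names will differ):

```shell
# Sketch: on the Raspberry box, once the VPN tunnel (tun0 here) is up,
# push traffic from the locally connected device (eth0 here) into it.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
sudo iptables -A FORWARD -i tun0 -o eth0 \
    -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

    MASQUERADE rewrites the source address so replies come back through the tunnel; the conntrack rule only lets established return traffic back toward the client.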


  • I really like the project and have been happily running it in my home lab for quite a while. But their pricing for enterprise use is not really cheap either: 510€/socket/year is way more than the previous VMware deal we're running. Apparently Broadcom has changed their pricing to per-core, which is just lunatic (it would practically add up to millions per month in our environment), so it's interesting to see what's going to happen when our licenses expire.


  • As you can connect to the internet, you can also access your router (or at least a router). And when running ping, even if you had overlapping IP addresses, you should still get responses from the network.

    So, two things come to mind: either your laptop is running with a different netmask than the other devices, which causes problems, or you're connected to something other than the local network you think you are. Changes on the DHCP server or misconfigured network settings on the laptop might cause the first issue. The second might be because you're connected to your phone's AP, some guest network on your devices, or a neighbor's WiFi by accident (multiple networks with the same SSID around, or something like that).

    Another possibility is problems with the mesh networking (problems with ARP tables or something), which could cause issues like that. That scenario should get fixed by reconnecting to the network, but I've seen bugs in firmware which cause errors like this. Have you tried restarting the mesh devices?

    Is it possible that your laptop has enabled very restrictive firewall rules for whatever reason? Check that.

    And then there's of course the long route. Start by verifying that you actually have the IP address you assume you have (the address itself, subnet, gateway address). Then verify that you can connect to your router (open the management portal, ping, SSH, all the things). Assuming you can, check the router interface and verify that your laptop is shown there as a DHCP client/connected device (or whatever term that software uses). Then start pinging other devices on your network, ping your laptop from those devices, and verify that they too have the addresses you assume (netmask/gateway included).

    And so on, one piece at a time. Check only a single thing at a time, so you get the full picture of what's working and what's not. From there you can eventually isolate the problem and fix it.
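
    The "long route" above can be condensed into a short command checklist (a Linux sketch; 192.168.1.1 is only an example gateway address, substitute your own):

```shell
# 1. Verify the address, netmask and gateway you actually have:
ip -4 addr show
ip route show default

# 2. Verify you can reach the router (example address, use your gateway):
ping -c 3 192.168.1.1

# 3. Separate DNS problems from routing problems by pinging a raw IP:
ping -c 3 1.1.1.1

# 4. Check what the laptop has learned about its neighbors (ARP):
ip neigh show
```

    If step 3 works but names don't resolve, the problem is DNS rather than the network itself; if step 2 already fails, the problem is local (address, netmask, or the wrong network entirely).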




  • That's better, but you still need to have a single wire to loop it around, which is not normally accessible. And at least here the term 'multimeter' specifically means one without a clamp, so you'd need to wire the multimeter in series with the load, and that can be very dangerous if you don't know what you're doing.

    Also, cheap ones are often not properly insulated nor rated for wall power (regardless of your voltage), so, again, if you don't know what you are doing, DO NOT measure current from a wall outlet with a multimeter.



  • "Enough battery life" is a pretty broad requirement. What are you running from it?

    Most of the 'big brands' (Eaton, APC…) work just fine with Linux/open source, but especially low-end consumer models, even from the big players, might not, and not all of them have any kind of port for data transfer at all.

    Personally I'd say that if you're looking for something smaller than 1000VA, just get a brand new one. Bigger than that might be worth buying used and just replacing the batteries, but that varies a lot. I've got a few dirt-cheap units around which apparently fried their charging circuit when the original battery died, so they're e-waste now; on the other hand I have a cheap(ish) 1500VA FSP which is running on its 3rd or 4th set of batteries. So there's no definitive answer on what to get.


  • With Linux, the scale alone makes it pretty difficult to maintain any kind of fork. A handful of individuals just can't compete with a global effort, and it's pretty well understood that the power Linux has comes from those globally spread devs working towards a common goal. So, should the Linux Foundation cease to exist tomorrow, I'd bet that something similar would rise to take its place.

    On the respect/authority side, I don't really know. Linux is important enough for governments too, so maybe some entity run by the United Nations or something similar could do?


  • I've worked with both kinds of companies. The current one doesn't really care about the bus factor, but currently, for me personally, that's just a bonus, as after every project it would be even more difficult to onboard someone into my position. And then I've worked with companies who actively hire people to improve the bus factor. When done correctly, that's a really, really good thing. And when it's done badly, it just grinds everything down to almost a halt, as people spend their time in nonsensical meetings and writing documentation no-one really cares about.

    Balancing that equation is not an easy task, and people who are good at it deserve every penny they're paid for it. And, again just for me, if I get run over by a bus tomorrow, then it's not my problem anymore, and as the company doesn't really care about that, I won't either.


  • Nothing is perfect but “fundamentally broken” is bullshit.

    Compared to how things used to work when Ubuntu came to life, it really is fundamentally broken. I'm not the oldest beard around, but I've personally updated both Debian and Ubuntu from an obsolete release to a current one with very few hiccups along the way. Apt/dpkg is just so good that you could literally bring a decade-old installation up to date almost without effort. The updates ran whenever I chose them to and didn't break production servers when unattended upgrades were enabled. This is very much not the case with Ubuntu today.

    Hatred for a piece of tech simply because other people said it’s bad, therefore it must be.

    I realize that this isn't directly about my comment, but there's plenty of evidence even in this chain that the problems go way deeper than a few individuals ranting over the net that snap is bad. As I already said, it's objectively worse than the alternatives we've had since the 90s. And the way Canonical bundles snap with apt breaks the very long tradition where you could rely on the fact that, when running a stable distribution, 'apt-get dist-upgrade' wouldn't break your system. And even if it did, you could always fix it manually and get the thing back up to speed. This isn't just an old guy ranting about how things were better in the past, as you can still get that very reliable experience today, but not with snapd.

    Auto updating is not inherently bad.

    I'm not complaining about auto updates. They are very useful and nice to have, even for advanced users. The problem is that even if the snap notification says that 'software updates now', it often really doesn't. Restarting the software, and in some cases even running a manual update, still brings up the notification that the very same software I updated a second ago needs to restart again to update. Rinse and repeat, while losing your current session over and over again.

    Also, there's absolutely no indication of whether anything is actually being done. The notification just nags that I need to stop what I'm doing RIGHT NOW and let the system do whatever it wants, instead of the tools I've chosen working for me. I don't want or need forced interruptions to my workflow. But when I do have a spare minute to stop working, I expect the update process to actually trigger at that very second, not after some random delay, and I also want a progress bar or something to indicate when things are complete and I can resume doing whatever I had in mind.

    it just can’t be a problem to postpone snap updates with a simple command.

    But it is. The "<your software> is updating now" message just interrupts pretty much everything I've been doing, and at that point there's no way to stop it. And after the update process has finally finished, I pretty much need to reboot to regain control of my system. This is a problem which applies to everybody, regardless of their technical skills.

    My computer is a tool, and when I need to actively fight that tool to keep it from interrupting whatever I'm doing, it rubs me the wrong way. No matter if it's just browsing the web, writing code for the next best thing ever, or watching YouTube, I expect the system to be stable for as long as I want it to be. Then there's a separate time slot when the system can update and maybe break itself in the process, but I control when that time slot exists.

    There's not a single case I've encountered where snap actually solved a problem I had, and there are plenty of times when it was either annoying or just straight up caused more problems. Systemd at least has some advantages over SysVinit, but snap doesn't even have that.

    As mentioned, I'm not the oldest Linux guy around, but I've been running Linux for 20+ years, and for ~15 of those it has kept butter on my bread, and snapcraft is easily the most annoying thing I've encountered over that period.


  • You act as if Snap was bad in any way. Proprietary backend does not equal bad.

    I don't give a rat's ass if the things I use are proprietary or not. FOSS is obviously nice to have, but if something else does the job better, I'm all for it, and I have paid for several pieces of software. But Ubuntu and Snap (which are running on the thing I'm writing this with) are just objectively bad. Software updates are even more aggressive than on Windows today, and even when I try to work with the "<this software> updates in X days, restart now to update" notifications, it just doesn't do what it says it would/should. And once the package is finally updated, the nagging notification returns in a day or two.

    Additionally, snap and/or Ubuntu has bricked at least two of my installations in the last few years. Canonical's solutions have broken apt/dpkg in a very fundamental way, and they have most definitely caused way more issues with my Linux stuff over the years than anything else, systemd included.

    Trying to twist that into an elitist FOSS point of view (of which there are plenty, obviously) is misleading and just straight up false. Snapcraft and its implementation are just broken on so many levels and have pushed me away from Ubuntu (and derivatives). Way back when Ubuntu started to gain traction, it was a really welcome distribution and I was a happy user for at least a decade, but as things are now, it's either Debian (mostly for servers) or Mint (on desktops) for me. Whenever I have the choice, I won't even consider Ubuntu as an option, either commercially at work or for my personal things.


  • I did quickly check the files in update.zip, and it looks like they're tarballs embedded in a shell script, plus image files containing pretty much the whole operating system of the thing.

    You can extract those even without a VM, do whatever you want with the files, and package them back up. So you can override version checks, and you can inject init.d scripts, binaries and pretty much anything into the device, including changing passwords in /etc/shadow and so on.
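
    As a rough illustration of pulling a tarball out of a shell-script wrapper and repacking it (everything here is hypothetical: the marker name, script name and paths are made up, since the real layout inside update.zip isn't known):

```shell
# Hypothetical sketch: extract a tarball appended to a shell-script installer.
# '__PAYLOAD__' and the file names below are made-up examples.
unzip update.zip -d update/

# Find the line number where the embedded archive starts:
n=$(grep -an '^__PAYLOAD__$' update/install.sh | cut -d: -f1)

# Everything after the marker is the tarball; unpack it for editing:
mkdir -p rootfs
tail -n "+$((n + 1))" update/install.sh | tar -xz -C rootfs

# ...edit rootfs/ (init scripts, /etc/shadow, version files)...

# Repack: keep the script head, then append the modified tarball:
head -n "$n" update/install.sh > new-install.sh
tar -cz -C rootfs . >> new-install.sh
```

    The same head/tail trick works for most self-extracting shell archives; the only part that varies is finding the marker (or byte offset) that separates the script from the binary payload.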

    I don't know how the thing actually operates, but unless it's absolutely necessary, I'd leave the bootloader (appears to be U-Boot) and the kernel untouched, as messing those up might end with a bricked device. Then the easy options are gone and you'll need to gain access via other means, like interfacing directly with the storage on the device (which most likely means opening the thing up and wiring something like an Arduino or a serial cable to it).

    But beyond that, once you override the version checks, it should be possible to upload the same version number over and over again until you have what you need. After that you just need suitable binaries for the hardware/kernel, likely some libraries from the same package, and an init script, and you should be good to go.

    The other way to approach this is to look at the web server configuration in the image and see if there are any vulnerabilities (like Apache running as root with an insecure script on top of it, letting you inject system files via HTTP), which might be the safest route, at least for a start.

    I'm not really experienced with things like this, but I know a thing or two about Linux, so do your homework before attempting anything, good luck, and have fun tinkering!