• 4 Posts
  • 311 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • I haven’t really paid attention to prosumer hardware lately as my RB4011iGS+RM just keeps on working. 6 watts is really low tho, according to the spec sheet my router pulls 18W at 24VDC. A few links I checked from your original post, however, give a 15W TDP, so maybe some seller is pulling numbers out of their sleeve, or there are differences between models. Either way, those are pretty damn efficient boxes.

    With that Celeron CPU I think they have less throughput than what I’m running, but if your internet connection isn’t several hundred megabits I don’t think that’ll be an issue. I had issues with some EdgeRouter: while it claimed to do full gigabit, in practice it managed only up to ~700Mbps, and even less than that with even slightly complicated routing.
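    If you want to check what a router actually manages, something like iperf3 between two machines on opposite sides of it gives a quick number (the address here is just an example):

    ```shell
    # On a machine behind the router, start a server:
    iperf3 -s
    # On a machine in another network segment, run a test against it:
    iperf3 -c 192.168.20.5 -P 4 -t 30   # 4 parallel streams, 30 seconds
    ```

    If that tops out well below your WAN speed, the router is the bottleneck.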

    I don’t have any direct recommendations, but I’d stay away from TP-Link and other budget brands, which often promise a lot more than they can actually deliver. My switches are from HPE and they are pretty cheap second hand (or even free if you happen to stumble on an office renewal somewhere).


  • In the most common case you can think of VLANs at the firewall end as entirely separate physical networks. On port LAN1 you have a switch and whatever else you happen to have, on LAN2 a similar setup, and so on. All the networks can (and should) have their own IP range, and it’s the firewall that decides what traffic is allowed, e.g. whether a machine in LAN1 is allowed to talk to the printer on LAN2.

    A virtual LAN just bundles all of that onto one set of cables and network devices, with the obvious benefit that you get the advantages of multiple networks for security, access control or whatever, without needing extra hardware for each setup. In theory it is possible to break out of VLAN separation, but in practice it’s really not something a home gamer should worry about too much.

    What you need is a managed switch (or multiple if needed) so that you can assign ports to different VLANs, or carry a combination of many VLANs on a single port, commonly known as a trunk. Some unmanaged switches pass VLAN frames through as-is, but that’s not guaranteed, so the safe bet is to get only managed switches.
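    On the Linux side this is just 802.1Q tagging; a rough sketch with iproute2 (interface names, VLAN IDs and addresses are assumptions):

    ```shell
    # eth1 is the trunk port toward the managed switch
    ip link add link eth1 name eth1.10 type vlan id 10   # e.g. trusted LAN
    ip link add link eth1 name eth1.20 type vlan id 20   # e.g. printers/IoT
    ip addr add 192.168.10.1/24 dev eth1.10
    ip addr add 192.168.20.1/24 dev eth1.20
    ip link set eth1.10 up
    ip link set eth1.20 up
    ```

    The switch end then needs the same VLAN IDs tagged on its trunk port.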

    For the firewall/router, the best option would be to either drop the ISP router entirely or, if possible, use a bridged port on it so that you get ‘raw’ internet to your own device. You can make it work with a ‘LAN’ port on your current router too, there’s just one extra set of port forwarding and firewall rules to manage before anything even hits your own network.

    Instead of a firewall PC I’d recommend an actual router. They are often better suited to the task, are physically smaller and tend to consume less energy. Dedicated firewall/routers are also often a bit cheaper (at least less than $600; I paid ~150€ for my router). I personally have a Mikrotik device and I like it, but there are plenty of decent ones to choose from. A PC will work as well, but they tend to have more potentially failing components than dedicated routers.

    But in general, at least I can’t see anything fundamentally wrong with your plan. Remember to have fun while setting it up.


  • IsoKiero@sopuli.xyz to Lemmy Shitpost@lemmy.world · Future · 3 days ago

    You really should not turn the oven on remotely,

    A neighbor’s house almost burned down because of a remote-controlled device. It was a sauna stove instead of an oven, and it didn’t even have networking, just a control panel outside the sauna where you could turn it on without checking the stove first. The kids had left some plastic toy on the stove. Luckily they noticed the smell just in time; a few minutes more and the smoke would have ignited, at least according to the firefighters who were called to the site.

    My stove has an option for remote control too via a simple relay input, so I could just throw ESPHome or whatever on it and control it from across the world over Home Assistant, but for that exact reason I didn’t hook anything up to the header.


  • There are various mesh network projects around, and they’re better than nothing, but their issues tend to be pretty low bandwidth and a physically limited area. WiFi mesh in a somewhat densely populated area is technically possible, but the technology requires you to be pretty close (100m, give or take) to the next node. In rural areas people have built pretty long-range wireless jumps without ISPs, but the hardware requirements for those are a bit different and you’re relying heavily on the node next to you in the upstream direction.

    Then there are things like LoRa networking, but its bandwidth is very small and it’s really only suitable for SMS-style messaging with pretty low traffic, though it can reach up to 10km between nodes. AX.25 over amateur radio has a range of up to hundreds of kilometers, but it’s also pretty slow (~1kbps).
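    To put that ~1kbps in perspective, a quick back-of-the-envelope calculation (the 2kB message size is just an example):

    ```shell
    payload_bytes=2048             # a short text message with headers
    rate_bps=1000                  # ~1 kbps effective AX.25 throughput
    bits=$(( payload_bytes * 8 ))
    seconds=$(( bits / rate_bps ))
    echo "Sending ${payload_bytes} bytes takes about ${seconds} s"
    ```

    So even a couple of kilobytes takes on the order of 15-20 seconds, which is why anything beyond plain text is off the table.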

    So, in practice, the best option would be to use something like NNTP with distributed servers across the mesh network, where you’re less dependent on long-range high-speed communications. A modern web experience or instant messaging just isn’t really feasible over any mesh network with current consumer-grade hardware.


  • I don’t know about running the whole internet over a peer-to-peer network, but my home server is pretty much the ‘main’ computer, and while phones and laptops obviously have data locally, it’s also synced to the server, so losing one mobile device isn’t really a big deal (besides the money to get a new one). Immich for photos, Nextcloud for other data, Radicale for contacts and calendar, and a self-hosted IMAP server for email.

    Obviously the devices are still very much personal, but it’s easy enough to wipe and start over if needed. For remote wipe I still need to rely on Google on the phone, and with the laptop there’s currently no way to remote wipe it, but it’s running with an encrypted drive anyway, so it’s only the monetary value of the thing that’s lost if it goes missing.


  • Fixed headaches with my Proxmox Backup Server. It has a SAS controller and 4 spinning drives running backups in a detached garage, and the old Fujitsu desktop I dug out of an office dumpster pile just kept crashing. I flashed the controller to IT firmware, updated the BIOS on the motherboard and did everything else I could figure out, but the system just lost the drives pretty much daily and required a hard reset. Turns out, or at least that’s my conclusion, that the PSU in the machine just didn’t have enough juice for the whole setup, and that caused the instability. I dug an old (2010 or so) desktop out of my own pile and threw a 600W PSU in the box, and it’s now been stable for at least a week.

    I would’ve liked to keep the Fujitsu machine as it has a more compact case and a couple of generations newer CPU, but that thing has a proprietary power supply, so it was easier to swap out the whole system and just move the drives from one to the other. So the current setup consumes maybe a bit more electricity, but at least it’s doing what it is supposed to.



  • That is a problem, I agree. But I still feel like it would be beneficial if there were some standard in HTTP or other protocols which could limit user access based on a PG rating, instead of everyone developing their own approach. It could also be something like robots.txt, but for PG ratings, where the client would do the verification.

    And, as I already mentioned, that should be a strictly local-only setting, and only for a parent/guardian controlling what minors can and can’t do with their devices.


  • There is a very good argument for OS level age ‘tracking’ as a means of creating a cohesive environment for software and websites to operate without having to implement individual age verification. The biggest actual issue here is how the OS determines what the user’s age is.

    I agree with you on this. I wouldn’t mind if there were a mechanism in browsers which would send ‘child/teen/adult’ (or whatever they’d be called) data to websites in request headers, since they already report a ton of stuff to the server anyway. It would be trivial for adult sites to check one header and limit access based on that. But the setting needs to be local only, so that parents can easily set up restricted accounts for their kids. The moment user age must be validated via any 3rd party, it’s no longer about parental controls and the whole thing becomes a surveillance tool.
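    Purely as a hypothetical sketch (no such header is standardized today, and the header name here is made up), the whole check could be as simple as:

    ```shell
    # A browser profile set up for a child would add one header to every request:
    curl -H "Age-Bracket: child" https://example.com/
    # The site reads that single header server-side and serves a restricted
    # view, with no identity or age verification service involved.
    ```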

    Also, the limits should be agreed upon on at least a somewhat global basis, so that it’s only used for porn/gore/horror and other stuff like that. Things like sexual education, religious topics (likely both pro and con), medical stuff and so on should be left out of the filtering. But as with practically every ‘think of the children’ thing proposed for the internet, this has got nothing to do with children, nor will it be used only for that.



  • ZFS can become painfully slow if you don’t have the RAM for it. I tried to run ZFS on my old setup with 64GB RAM and a moderate number of virtual hosts, and it was nearly useless under heavier IO loads. I didn’t try to tweak its settings, so there might be some workarounds to make it work better; I just repartitioned all the storage drives into an mdadm RAID5 array with lvm-thin on top of that. ZFS will work with limited memory in the sense that you don’t risk losing data because of it, but as mentioned, performance might drop significantly. Now that I have a system with enough memory, raidz2 is pretty damn good, but on limited hardware I would not recommend it.
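    For what it’s worth, the usual workaround is capping the ARC so ZFS doesn’t compete with the VMs for memory; a sketch (the 8GiB value is just an example, size it for your own workload):

    ```shell
    # Persistent cap, applied at module load (8 GiB in bytes):
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u
    # Or change it live without a reboot:
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    ```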

    LVM itself is pretty trivial to move to another system; most modern kernels just autodetect volume groups and you can use them like any normal filesystem. If you move a full, intact mdadm array to a new system (and have the necessary utils installed) it should be autodetected too, but especially with a degraded array, manual reassembly might be needed. I don’t know what kind of issues you’ve been having, but in general, moving both LVM and mdadm drives between systems is pretty painless. Instead of mdadm you could also run LVM mirroring on the drives, which drops one layer from your setup and potentially makes rebuilding the array a bit simpler on another system, but neither approach should prevent moving the drives to another host.
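    In case it helps, reassembling a moved stack usually boils down to something like this (device and volume group names are assumptions):

    ```shell
    apt install mdadm lvm2      # make sure the tools are on the new host
    mdadm --assemble --scan     # detect and assemble the array from superblocks
    vgscan                      # find volume groups living on the array
    vgchange -ay                # activate all logical volumes
    mount /dev/vg0/data /mnt/data
    # A degraded array may need explicit assembly:
    # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 --run
    ```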

    Lvm-thin is more flexible, and while it might be slightly slower in some scenarios, I’d still recommend using it. Maybe the biggest benefit you’ll get from it is the option to take snapshots of VMs. Mounting plain directories will work too, but if your storage is only used by Proxmox, I don’t see any point in that over an LVM setup.



  • For whatever reason, ISPs (at least around here) tend to be pretty bad at keeping their DNS services up and running, and that could cause the issues you’re having. An easy test is to switch your laptop’s DNS servers to Cloudflare (1.1.1.1, 1.0.0.1) or OpenDNS (208.67.222.222, 208.67.220.220) and see if the problem goes away. Or even faster, run single queries from a terminal, like ‘dig a google.com @1.1.1.1’.
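    Comparing a few resolvers side by side makes the difference obvious:

    ```shell
    dig a google.com @1.1.1.1           # Cloudflare
    dig a google.com @208.67.222.222    # OpenDNS
    dig a google.com                    # whatever your router/ISP handed out
    ```

    Look at the ‘;; Query time:’ line of each answer; if the ISP resolver is slow or times out while the public ones answer in a few milliseconds, you’ve found the culprit.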

    If that helps, you can change your router’s WAN DNS server to something other than what the operator offers you via DHCP. I personally use the OpenDNS servers, but Cloudflare or Google (8.8.8.8, 8.8.4.4) are common and pretty decent choices too.


  • Depends on what you’re looking for, but for server use even a bit older hardware is just fine. My Proxmox server has a Xeon 2620v3 CPU and it’s plenty for my needs. For storage I went with a SAS controller; controllers are relatively cheap, and if you happen to have a friend in some IT department you might get lucky when they replace hardware. RAM is a pain in the rear, but 8GB DDR4 RDIMMs still work just fine (if someone is interested, I have a few around).

    Personally I wouldn’t pay current prices for new hardware, especially if it’s for hosting. A bit older, but server-rated, components give a lot more value for your money.


  • This, in turn, is different from APT, which is not Debian’s repository, but Debian’s package manager. So, technically, I could write “sudo apt install (anything)” to get any piece of software from Debian’s repository indeed, but I could also use that command to get software from somewhere else also in the form of a Deb package but which would not have come from Debian itself.

    With apt (and Discover, which uses apt/dpkg in the background) you can install anything from the repositories configured on your system. So, if you want to use apt to install packages not built by the Debian team, you’ll need to add those repositories to your system; they don’t just appear out of nowhere.

    Some software vendors offer a .deb package you can install which then adds their own repository to your system, after which you can ‘apt install’ their product just like you would native Debian software, and the same upgrade process which keeps your system up to date will include that ‘3rd party’ software as well. Some also offer instructions on how to add their repository manually, but with a downloaded .deb it might be a bit easier to add a repository without really paying attention to it.
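    Manually it’s only a couple of steps anyway; a sketch with placeholder URLs and names (not any real vendor’s values):

    ```shell
    # Fetch the vendor's signing key into a keyring apt can use
    curl -fsSL https://repo.example.com/key.gpg \
      | gpg --dearmor -o /usr/share/keyrings/vendor-archive-keyring.gpg
    # Add the repository, pinned to that key
    echo "deb [signed-by=/usr/share/keyrings/vendor-archive-keyring.gpg] https://repo.example.com/debian stable main" \
      > /etc/apt/sources.list.d/vendor.list
    apt update && apt install vendor-app
    ```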

    Spotify is one of the big vendors with their own repository for Debian and Ubuntu. On Ubuntu there are also “PPA” repositories, which are basically just random individuals offering their packages for everyone to use, and they generally don’t go through the same scrutiny as the official repositories.






  • “installing apps from outside the Google Play Store”

    To me that implies it’s somehow different than just installing software. You could say ‘install from Play Store’ or ‘install from F-Droid’ if you need to specify which app repository to use, as that’s what it is. Sideloading might be an appropriate term if you need to upload an APK to your device via a USB cable from your PC, which is what the term originally meant.
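    That original meaning is literally two commands with the phone plugged in over USB (the package name is an example):

    ```shell
    adb devices                # confirm the phone is connected and authorized
    adb install ./myapp.apk    # install the APK with no app store involved
    ```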

    to make it sound somehow dangerous or complicated in order to justify

    [Citation needed]

    From the article:

    This “advanced flow” is for power users and enthusiasts who “want to take educated risks to install software from unverified developers.” Google says it was “designed carefully to prevent those in the midst of a scam attempt from being coerced by high pressure tactics to install malicious software.”

    Sure, the term itself comes from the 1990s, but lately Google especially tries to twist it to mean something that only ‘power users’ do and that comes with an ‘educated risk’.