

- Any of https://staticsitegenerators.bevry.me/
- Any webserver + virtualhost config that serves plain HTML pages (see the sketch after this list)
- a build/upload script
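For the webserver option, the virtualhost can be as small as the following (Apache syntax; domain and paths are placeholders), and the build/upload script can be a single rsync of the generator's output into the DocumentRoot:

```apache
# Hypothetical minimal virtualhost serving plain HTML pages
# (domain and paths are placeholders)
<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```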
A full-blown Samba domain is extremely overkill if you don’t have a fleet of Windows machines.
You can get centralized user management with a simple LDAP server or similar, no need for a domain.
Also, snapshot-based backups have limited uses (you can’t easily restore a single file, and they eat quite a bit of storage). The only times I actually needed backups were because I fucked up a single application or database, and I don’t want to roll back the whole OS/data drive for that.
https://lemmy.world/post/34029848/18647964
- Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
- VMs: Debian stable
- podman if you need containerization below that
You can migrate VMs live between hosts (it’s a bit more work if you pick libvirt; the overhead/features of Proxmox are sometimes overkill and libvirt is a bit more barebones, each has its uses), have a cluster-wide L2 network, use one machine as backup storage for the others, use VM snapshots for rollbacks, etc. Regardless of the containerization/orchestration running below that, a full hypervisor is still nice to have.
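For example, with plain libvirt a live migration between two hosts is a one-liner (hostnames are placeholders, and it assumes the VM’s storage is reachable from both hosts):

```sh
# Hypothetical example: live-migrate a VM to another hypervisor over SSH
virsh migrate --live --persistent --undefinesource vm1.example.org qemu+ssh://hv2.example.org/system
```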
I deploy my services directly to the VM or as podman containers in said VMs. I use ansible for all automation/provisioning (though there are still a few manual provisioning/management steps to bootstrap new VMs; if it works it works).
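As a rough illustration, a service deployed as a podman container from a playbook can look like this (using the containers.podman collection; the image, ports and paths are placeholders):

```yaml
# Hypothetical task: run a service as a podman container on the target VM
- name: Run uptime-kuma in podman
  containers.podman.podman_container:
    name: uptime-kuma
    image: docker.io/louislam/uptime-kuma:1
    state: started
    restart_policy: always
    ports:
      - "3001:3001"
    volumes:
      - /srv/uptime-kuma:/app/data
```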
I’m not sure of any formal name
Cloudflare turnstile
If your needs are simple, write a simple playbook using the Proxmox Ansible module: https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_kvm_module.html
Terraform/OpenTofu provide more advanced features, but then you have to worry about persistent state storage, the clunky DSL… I only reach for them when absolutely needed; you can do 90% of this stuff with the Proxmox Ansible module.
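A minimal sketch of such a playbook (API credentials, node, storage and VM names are all placeholders):

```yaml
# Hypothetical playbook: create and start a VM through the Proxmox API
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create VM
      community.general.proxmox_kvm:
        api_host: pve1.example.org
        api_user: ansible@pve
        api_token_id: ansible
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        name: vm1.example.org
        cores: 2
        memory: 2048
        scsihw: virtio-scsi-pci
        virtio:
          virtio0: "local-lvm:20"
        net:
          net0: "virtio,bridge=vmbr0"
        state: present

    - name: Start VM
      community.general.proxmox_kvm:
        api_host: pve1.example.org
        api_user: ansible@pve
        api_token_id: ansible
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        name: vm1.example.org
        state: started
```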
If you need to make your playbook less verbose, move the logic to a role so that you can configure your VMs from a few lines in the playbook/host_vars. Mine looks like this (it’s for libvirt and not proxmox, but the logic is the same)
```yaml
# playbook.yml
- hosts: hypervisor.example.org
  roles:
    - libvirt

# host_vars/hypervisor.example.org.yml
libvirt_vms:
  - name: vm1.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/vm1.example.org.xml"
    state: running
    autostart: yes
  - name: vm2.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/vm2.example.org.xml"
    autostart: no
  - name: vm3.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/vm3.example.org.xml"
    autostart: no
  - name: vm4.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/vm4.example.org.xml"
    autostart: no
    disk_size: 100G
```
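The role itself is then mostly a loop over that list. A rough sketch of what its tasks could look like with the community.libvirt collection (disk creation/resizing for things like disk_size is omitted):

```yaml
# Hypothetical roles/libvirt/tasks/main.yml (sketch)
- name: Define VMs from their XML files
  community.libvirt.virt:
    command: define
    xml: "{{ lookup('file', item.xml_file) }}"
  loop: "{{ libvirt_vms }}"

- name: Set autostart
  community.libvirt.virt:
    name: "{{ item.name }}"
    autostart: "{{ item.autostart | default(false) }}"
  loop: "{{ libvirt_vms }}"

- name: Enforce desired VM state
  community.libvirt.virt:
    name: "{{ item.name }}"
    state: "{{ item.state }}"
  loop: "{{ libvirt_vms }}"
  when: item.state is defined
```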
turn that monitor off and save power?
Apache can do load balancing as well: https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html
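Something along these lines in the virtualhost (backend addresses and path are placeholders; it needs mod_proxy, mod_proxy_http and mod_proxy_balancer enabled, plus an lbmethod module such as lbmethod_byrequests):

```apache
# Hypothetical load-balancing reverse proxy (backends are placeholders)
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.0.11:8080"
    BalancerMember "http://10.0.0.12:8080"
</Proxy>
ProxyPass        "/app" "balancer://mycluster"
ProxyPassReverse "/app" "balancer://mycluster"
```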
I’d pick something that you already use across your stack, to minimize the number of different integration/config styles/bugs…
Not saying this is impossible, you just need to have these questions in mind, and the answers written down before you start charging people for the service, and have the support infrastructure ready.
Or you can just provide the service for free, best-effort without guarantees.
I do both (free services for a few friends, paid services for customers at $work, small team). Most of the time it’s smooth sailing but it needs preparation (and more than 1 guy to handle emergencies - vacations, bus factor and all that).
For the git service I can recommend gitea + gitea-actions (I run the runners in podman). Gitlab has more features but it can be overwhelming if you don’t need them, and it requires more resources.
Spyware until proven otherwise. Where is the source code?
https://github.com/sethcottle/littlelink Or a simple HTML page…
I use RSS feeds, bump version numbers when a new release is out, git commit/push and the CI does the rest (or I’ll run the ansible playbook manually).
I do check the release notes for breaking changes, and sometimes hold back updates for some time (days/weeks) when the release affects a “critical” feature, or when config tweaks are needed, and/or run these against a testing/staging environment first.
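For illustration, the “version bump” can be as small as a variable change in host_vars that the playbook/CI consumes (variable names and versions here are made up):

```yaml
# Hypothetical host_vars snippet: pinned versions, bumped when a new release is out
gitea_version: "1.22.3"
uptime_kuma_image: "docker.io/louislam/uptime-kuma:1.23.13"
```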
Fail2ban is a Free/Open-Source program to parse logs and take action based on the content of these logs. The most common use case is to detect authentication failures in logs and issue a firewall level ban based on that. It uses regex filters to parse the logs and uses policies called jails to determine which action to take (wait for more failures, run command xyz…). It’s old, basic, customizable, does its job.
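For reference, a minimal jail configuration enabling the stock sshd filter looks something like this (the values are just examples):

```ini
# /etc/fail2ban/jail.local - hypothetical minimal example
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```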
Crowdsec is a commercial service [1] with a free offering, and some Free/Open-Source components. The architecture is quite different [2]: it connects to Crowdsec’s (the company) servers to crowd-source detections, their service establishes a “threat score” for each IP based on the detections they receive, and in exchange they provide [3] some of these threat feeds/blocklists back to their users. A separate crowdsec-bouncer process takes action based on your configuration.
If you want to build your own private shared/global blocklist based on crowdsec detections, you’ll need to set up a crowdsec API server and configure all your crowdsec instances to use it. If you want to do this with fail2ban you’ll need to set up your own sync mechanism (there are multiple options, I use a cron job + script that pulls IPs from all fail2ban instances using `fail2ban-client status`, builds an ipset, and pushes it to all my servers). If you need crowdsourced blocklists, there are multiple free options ([4] can be used directly by ipset).
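A rough sketch of what that sync script can look like (host list, jail and ipset names are placeholders; it assumes a firewall rule already references the set, and error handling is omitted):

```sh
#!/bin/sh
# Hypothetical cron script: pull banned IPs from every fail2ban host,
# merge them, and push the resulting ipset back to all servers.
HOSTS="host1.example.org host2.example.org"
JAIL="sshd"
SET="shared-banned"

# Collect currently banned IPs from each host
for h in $HOSTS; do
  ssh "$h" fail2ban-client status "$JAIL" \
    | sed -n 's/.*Banned IP list:[[:space:]]*//p'
done | tr ' ' '\n' | grep -v '^$' | sort -u > /tmp/banned.txt

# Rebuild the shared ipset on every server
for h in $HOSTS; do
  {
    echo "create $SET hash:ip"
    echo "flush $SET"
    sed "s/^/add $SET /" /tmp/banned.txt
  } | ssh "$h" ipset restore -exist
done
```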
Both can be used for roughly the same purpose, but are very different in how they work and the commercial model (or lack of) behind the scenes.
Odoo major version upgrades are a pain in the ass. Wouldn’t recommend.
Fail2ban unless you need the features that crowdsec provides. They are different tools with different purposes and different features.
Debian
There is a pinned post for this https://lemmy.world/post/60585
Tested SMS Import/Export (installed from F-droid), works fine.