
  • A full-blown Samba domain is extreme overkill if you don’t have a fleet of Windows machines.

    You can get centralized user management with a simple LDAP server or similar, no need for a domain.
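
    As a rough illustration, a minimal ansible sketch of that (OpenLDAP server on Debian, sssd on the clients; the host group names are made up, and the directory still has to be populated and sssd configured):

    # ldap.yml - hypothetical sketch, not a complete setup
    - hosts: ldap_server
      tasks:
        - name: Install the OpenLDAP server and client tools
          ansible.builtin.apt:
            name:
              - slapd
              - ldap-utils
            state: present

    - hosts: ldap_clients
      tasks:
        - name: Install sssd so PAM/NSS can authenticate users against the LDAP server
          ansible.builtin.apt:
            name: sssd-ldap
            state: present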

    Also, snapshot-based backups have limited uses (you can’t easily restore a single file, and they eat quite a bit of storage). The only times I actually needed backups were because I fucked up a single application or database; I don’t want to roll back the whole OS/data drive for that.


  • vegetaaaaaaa@lemmy.world to Selfhosted@lemmy.world · Best Practice Ideas (edited, 20 days ago)

    https://lemmy.world/post/34029848/18647964

    • Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
    • VMs: Debian stable
    • podman if you need containerization below that

    You can migrate VMs live between hosts (it’s a bit more work if you pick libvirt; the overhead/features of Proxmox are sometimes overkill, while libvirt is a bit more barebones, so each has its uses), have a cluster-wide L2 network, use a machine as backup storage for the others, use VM snapshots for rollback, etc. Regardless of the containerization/orchestration running below that, a full hypervisor is still nice to have.
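
    For example, with plain libvirt a live migration boils down to one command (hostnames are made up; assumes shared or pre-synced storage between the hypervisors):

    # hypothetical sketch: live-migrate a VM between libvirt hosts
    - hosts: hypervisor1.example.org
      tasks:
        - name: Live-migrate vm1 to the second hypervisor
          ansible.builtin.command: >
            virsh migrate --live --persistent
            vm1.example.org qemu+ssh://hypervisor2.example.org/system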

    I deploy my services directly to the VM or as podman containers in said VMs. I use ansible for all automation/provisioning (though there are still a few manual provisioning/management steps to bootstrap new VMs; if it works, it works).





  • If your needs are simple, write a simple playbook using the proxmox ansible module https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_kvm_module.html

    Terraform/OpenTofu provide more advanced features, but then you have to worry about persistent state storage, the clunky DSL… use them only when absolutely needed; you can do 90% of this stuff with the proxmox ansible module.
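
    A minimal sketch of such a playbook (the API host, credentials, node and storage names are placeholders, not defaults):

    # proxmox-vms.yml - hypothetical sketch using community.general.proxmox_kvm
    - hosts: localhost
      tasks:
        - name: Create a VM on the Proxmox node
          community.general.proxmox_kvm:
            api_host: pve.example.org
            api_user: root@pam
            api_password: "{{ vault_pve_password }}"
            node: pve
            name: vm1.example.org
            cores: 2
            memory: 2048
            net:
              net0: 'virtio,bridge=vmbr0'
            scsi:
              scsi0: 'local-lvm:32'
            state: present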

    If you need to make your playbook less verbose, move the logic to a role so that you can configure your VMs from a few lines in the playbook/host_vars. Mine looks like this (it’s for libvirt and not Proxmox, but the logic is the same):

    # playbook.yml
    - hosts: hypervisor.example.org
      roles:
        - libvirt
    
    # host_vars/hypervisor.example.org.yml
    libvirt_vms:
      - name: vm1.example.org
        xml_file: "{{ playbook_dir }}/data/libvirt/vm1.example.org.xml"
        state: running
        autostart: yes
      - name: vm2.example.org
        xml_file: "{{ playbook_dir }}/data/libvirt/vm2.example.org.xml"
        autostart: no
      - name: vm3.example.org
        xml_file: "{{ playbook_dir }}/data/libvirt/vm3.example.org.xml"
        autostart: no
      - name: vm4.example.org
        xml_file: "{{ playbook_dir }}/data/libvirt/vm4.example.org.xml"
        autostart: no
        disk_size: 100G
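
    The role itself then just loops over that list - a rough sketch of its tasks using the community.libvirt.virt module (disk handling and error checking omitted):

    # roles/libvirt/tasks/main.yml - hypothetical sketch
    - name: Define each VM from its XML file and set autostart
      community.libvirt.virt:
        command: define
        xml: "{{ lookup('file', item.xml_file) }}"
        autostart: "{{ item.autostart }}"
      loop: "{{ libvirt_vms }}"

    - name: Ensure each VM is in its desired state
      community.libvirt.virt:
        name: "{{ item.name }}"
        state: "{{ item.state | default('shutdown') }}"
      loop: "{{ libvirt_vms }}"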
    



    • Ever tested restoring those backups? Do you have the exact procedure written down? Does it still work? If the service gets compromised or data corrupted on Sunday, and your backup runs, do you still have a non-compromised backup, and how old is it?
    • How timely can you deal with security fixes, and how will you be alerted that a security fix is available?
    • How do you monitor your services for resource availability, errors in logs, security events?
    • How much downtime is acceptable for routine maintenance, and for incidents?
    • Do you have tooling to ensure you can redeploy the exact same configuration to another host?
    • How do you test upgrades before pushing them to production?

    Not saying this is impossible; you just need to have these questions in mind and the answers written down before you start charging people for the service, and to have the support infrastructure ready.

    Or you can just provide the service for free, best-effort without guarantees.

    I do both (free services for a few friends, paid services for customers at $work, small team). Most of the time it’s smooth sailing, but it needs preparation (and more than one person to handle emergencies - vacations, bus factor and all that).

    For the git service I can recommend Gitea + Gitea Actions (I run the runners in podman). GitLab has more features, but it can be overwhelming if you don’t need them, and it requires more resources.
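
    For reference, a hypothetical sketch of how such a runner could be started with the containers.podman collection (the image tag, instance URL and token variable are placeholders):

    # hypothetical sketch: gitea act_runner as a podman container
    - name: Run the gitea actions runner in podman
      containers.podman.podman_container:
        name: act_runner
        image: docker.io/gitea/act_runner:latest
        state: started
        env:
          GITEA_INSTANCE_URL: "https://git.example.org"
          GITEA_RUNNER_REGISTRATION_TOKEN: "{{ vault_runner_token }}"
        volume:
          - act_runner_data:/data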




  • vegetaaaaaaa@lemmy.world to Selfhosted@lemmy.world · Version Dashboard (edited, 4 months ago)

    I use RSS feeds, bump version numbers when a new release is out, git commit/push and the CI does the rest (or I’ll run the ansible playbook manually).

    I do check the release notes for breaking changes, and sometimes hold back updates for some time (days/weeks) when the release affects a “critical” feature, or when config tweaks are needed, and/or run these against a testing/staging environment first.
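
    Concretely, the bumps are just variable edits in host_vars - a hypothetical sketch (the variable name and version are illustrative):

    # host_vars/git.example.org.yml - hypothetical sketch
    gitea_version: "1.22.3"  # bumped when the release RSS feed fires, then committed/pushed so the CI redeploys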


  • Fail2ban is a Free/Open-Source program that parses logs and takes action based on their content. The most common use case is to detect authentication failures in logs and issue a firewall-level ban based on that. It uses regex filters to parse the logs, and policies called jails to determine which action to take (wait for more failures, run command xyz…). It’s old, basic, customizable, does its job.
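
    For illustration, a hypothetical ansible task deploying a minimal sshd jail (the thresholds are examples, not recommendations; assumes a “restart fail2ban” handler exists):

    # hypothetical sketch: a basic fail2ban jail for sshd
    - name: Configure a fail2ban jail for sshd
      ansible.builtin.copy:
        dest: /etc/fail2ban/jail.d/sshd.local
        content: |
          [sshd]
          enabled  = true
          maxretry = 5
          findtime = 10m
          bantime  = 1h
        mode: "0644"
      notify: restart fail2ban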

    crowdsec is a commercial service [1] with a free offering and some Free/Open-Source components. The architecture is quite different [2]: it connects to CrowdSec’s (the company’s) servers to crowd-source detections, their service establishes a “threat score” for each IP based on the detections they receive, and in exchange they provide [3] some of these threat feeds/blocklists back to their users. A separate crowdsec-bouncer process takes action based on your configuration.

    If you want to build your own private shared/global blocklist based on crowdsec detections, you’ll need to set up a crowdsec API server and configure all your crowdsec instances to use it. If you want to do this with fail2ban you’ll need to set up your own sync mechanism (there are multiple options; I use a cron job + script that pulls IPs from all fail2ban instances using fail2ban-client status, builds an ipset, and pushes it to all my servers - see the sketch below). If you need crowdsourced blocklists, there are multiple free options ([4] can be used directly by ipset).
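
    A rough sketch of that sync as an ansible playbook instead of a cron script (the group names are made up, the output parsing is simplified, and it assumes a single sshd jail):

    # hypothetical sketch: aggregate fail2ban bans into a shared ipset
    - hosts: fail2ban_hosts
      tasks:
        - name: Collect banned IPs from the sshd jail
          ansible.builtin.shell: |
            fail2ban-client status sshd | grep 'Banned IP list' | sed 's/.*list:\s*//'
          register: f2b_banned
          changed_when: false

    - hosts: all_servers
      tasks:
        - name: Merge the ban lists from every fail2ban host
          ansible.builtin.set_fact:
            shared_banlist: >-
              {{ groups['fail2ban_hosts']
                 | map('extract', hostvars, ['f2b_banned', 'stdout'])
                 | map('split') | flatten | unique }}

        - name: Ensure the shared ipset exists
          ansible.builtin.command: ipset create shared-banlist hash:ip -exist
          changed_when: false

        - name: Add each banned IP to the ipset
          ansible.builtin.command: ipset add shared-banlist {{ item }} -exist
          loop: "{{ shared_banlist }}"
          changed_when: false

    A firewall rule matching the shared-banlist set (iptables/nftables) is still needed to actually drop the traffic.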

    Both can be used for roughly the same purpose, but they are very different in how they work and in the commercial model (or lack thereof) behind the scenes.