

I’m surprisingly level-headed for being a walking knot of anxiety.
Ask me anything.
Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.
I also develop Tesseract UI for Lemmy/Sublinks
Avatar by @SatyrSack@feddit.org


FWIW, many “apps” are just web apps


Second this. Everything I have runs on Debian or OpenWRT.


You can also POST AS A GUEST TO THE FEDIVERSE without signing up.
Oh, dear lord. As if we don’t have enough spam and drive-by trolls as it is.
Mine died last weekend and I had to replace it. $650 to do it myself, and that included same-day delivery.
It failed the prior night; I noticed it mid-morning, ordered a replacement at 2pm, received it at 5, and had it installed by 7. It was kind of a pain, but not nearly as awful as I feared.


Added :) I also disabled the “Create Post” button if the community is on a defederated instance even though, technically, you can still post to your instance’s local copy (it just won’t federate).
Edit: This only works one way, i.e. it can only detect whether your instance has defederated from the community's. If the community's instance has defederated from yours, there will be no indicator, because there's no way to know that without a remote lookup, which is both unreliable and inefficient at scale.



You mean like if there’s a community called !cats@example.com and your home instance no longer federates with the instance example.com?
If so, I’ll add that to Tesseract as it sounds useful.


My friend got me into it, and it was the first and only MUD I ever really got into. So kind of loved it by default. I tried out a few others but never really got very far beyond the first few levels in each.
Beyond that, it was intuitive as far as MUDs went, had a massive world and lore, and was well “modded”.




Yep. Works great, at least for my small instance.
You have to install the ffmpeg-vaapi plugin and then under Config->VOD set the profile to the vaapi one it creates. I’m not using remote runners, but from what I’ve read, this doesn’t work with remote runners since you can’t install plugins on those. You may be able to shim in rffmpeg instead, though.
The only sticking point is that I cannot get the peertube user (inside the container) to consistently have permission to write to /dev/dri/renderD128.
Eventually I'm going to tweak the image so this isn't necessary, but for now I have a startup script that brings up the stack and chmods the device node to allow any user inside the container to write to it:
#!/bin/bash
set -e
cd /opt/peertube
# Bring up the stack
docker compose up -d
# Open up the render device inside the container so the peertube user can write to it
docker compose exec peertube bash -c "chmod o+rw /dev/dri/renderD128; ls -lah /dev/dri/renderD128"
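A compose-level alternative to the chmod is to pass the device through and add the container user to the host's render group via group_add. This is just a sketch, not what's in my stack; the GID 104 is an assumption, so check yours with `getent group render`:

```yaml
services:
  peertube:
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    group_add:
      - "104"   # host GID of the "render" group -- an assumption; verify with: getent group render
```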
Rather than have Docker engine manage the stack’s lifecycle, I have that startup script called by a systemd unit (ExecStop just does a docker compose down).
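That unit might look something like this sketch (the unit name, script path, and compose file path are assumptions based on the /opt/peertube directory above):

```ini
# /etc/systemd/system/peertube-stack.service -- illustrative only
[Unit]
Description=PeerTube Docker Compose stack
After=docker.service network-online.target
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/peertube/start.sh
ExecStop=/usr/bin/docker compose -f /opt/peertube/docker-compose.yml down

[Install]
WantedBy=multi-user.target
```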
Edit: The other sticking point I ran into is the video studio not working well (or at least the few videos I tried). I haven’t really tried to pin down what that problem is.
Edit 2: I did have to build a custom image to include the Intel drivers/modules.
Types: deb
# http://snapshot.debian.org/archive/debian/20251117T000000Z
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: main non-free
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
Types: deb
# http://snapshot.debian.org/archive/debian-security/20251117T000000Z
URIs: http://deb.debian.org/debian-security
Suites: bookworm-security
Components: main non-free
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
FROM chocobozzz/peertube:production-bookworm
COPY debian.sources /etc/apt/sources.list.d/debian.sources
RUN apt-get update && \
    apt-get install -y --no-install-recommends vainfo intel-media-va-driver-non-free && \
    rm -rf /var/lib/apt/lists/*
I should probably add a step here to set up permissions for /dev/dri/renderD128


Ok, so there is real potential for PF to become known for its porn. That was not the plan.
For a while, Tesseract was the most used frontend for lemmynsfw so, lol, I felt that.


I do!
Kubernetes is a nightmare and overkill for most things we need to run, and Docker Swarm is super easy to set up and maintain.
We only use it for one application, though. The app needs to scale horizontally and scale up and down with demand, so I put together a 6 node swarm cluster just for it. Works great, though the auto scaling required some helper scripting.
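The kind of helper scripting involved might look like this toy sketch of the scaling decision, since Swarm has no built-in autoscaler. The service name, queue depth, and per-replica capacity here are all invented examples; a real helper would pull the demand metric from the app itself:

```shell
# Decide how many replicas we want from a demand metric (all values invented)
replicas=3          # current replica count, e.g. from: docker service inspect
queue_depth=170     # pending work items reported by the app
per_replica=40      # how many items one replica can handle

# Ceiling division: replicas needed to cover the queue
want=$(( (queue_depth + per_replica - 1) / per_replica ))

if [ "$want" -gt "$replicas" ]; then
    # A real script would run this; here we just print it
    echo "docker service scale myapp=$want"
fi
```

Running the real `docker service scale` from a cron job or systemd timer closes the loop.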


The thing about these deprecated tools is that the replacements either suck, are too convoluted, don’t give you the same info, or are overly verbose/obtuse.
ifconfig gave you the most relevant information for the network interfaces almost like a dashboard: IP, MAC address, link status, TX/RX packet counts and errors, etc. You can get that with ip but you’ve got to add a bunch of arguments, make multiple calls with different arguments, and it’s still not quite what ifconfig was.
Similarly, iwconfig gave you that same “dashboard” like information for your wireless adapters. I use iw to configure but iwconfig was my go-to for viewing useful information about it. Don’t get me started on how much I hate iw’s syntax and verbosity.
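To illustrate the point about ifconfig's one-stop view: these are the real ip invocations it takes to approximate it, and you still need several calls ("lo" is just a safe example interface):

```shell
# Interface, oper state, and addresses, one line each -- the closest single command
ip -brief address

# MAC address and state per interface
ip -brief link

# TX/RX packet and error counters for one interface
ip -s link show dev lo
```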
They can pry scp out of my cold dead hands.
At least nftables is syntax-compatible.


Ban USApolitics from this sub please


1080p buffered generously but it worked :) The sweet spot was having it transcode to 720p (yay hardware acceleration). I wasn’t sharing it with anyone at the time, so it was just me watching at work on one phone while using my second phone at home for internet.


Just about anything as long as you don’t need to serve it to hundreds of people simultaneously. Hell, I once hosted Jellyfin over a 3G hotspot and it managed.
Pretty much any web-based app will work fine. Streaming servers (Emby, Plex, Jellyfin, etc) work fine for a few simultaneous people as long as you’re not trying to push 4K or something. 1080p can work fine at 4 Mbps or less (transcoding is your friend here). Chat servers (Matrix, XMPP, etc) are also a good candidate.
I hosted everything I wanted with 30 Mbps upload before I got symmetric fiber.
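As a back-of-the-envelope check on that 4 Mbps figure, here's what a single 1080p viewer at that bitrate costs in data per hour:

```shell
# Convert a stream bitrate (megabits/s) to data volume per hour (megabytes)
mbps=4
mb_per_hour=$(( mbps * 3600 / 8 ))   # megabits -> megabytes over 3600 seconds
echo "${mb_per_hour} MB/hour"        # prints: 1800 MB/hour (~1.8 GB)
```

So even a 30 Mbps upload leaves plenty of headroom for a handful of transcoded streams.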


Maybe I should flesh it out into an actual guide. The Nepenthes docs are “meh” at best and completely gloss over integrating it into your stack.
You’ll also need to give it corpus text to generate slop from. I used transcripts from 4 or 5 weird episodes of Voyager (let’s be honest: shit got weird on Voyager lol), mixed with some Jack Handy quotes and a few transcripts of Married…with Children episodes.
https://content.dubvee.org/ is where that bot traffic ends up if you want to see what I’m feeding them.


Thanks!
Mostly, there are three steps involved.
Here’s a rough guide I commented a while back: https://dubvee.org/comment/5198738
Here’s the post link at lemmy.world which should have that comment visible: https://lemmy.world/post/40374746
You’ll have to resolve my comment link on your instance since my instance is set to private now, but in case that doesn’t work, here’s the text of it:
So, I set this up recently and agree with all of your points about the actual integration being glossed over.
I already had bot detection set up in my Nginx config, so adding Nepenthes was just a matter of changing the behavior of that. Previously, I had just returned either 404 or 444 to those requests, but now it redirects them to Nepenthes.
Rather than trying to do rewrites and pretend the Nepenthes content is under my app’s URL namespace, I just do a redirect which the bot crawlers tend to follow just fine.
There are several parts to this, each in its own include file, to keep my config sane:
1. An include file that looks at the user agent, compares it to a list of bot UA regexes, and sets a variable to either 0 or 1. By itself, that include file doesn’t do anything more than set that variable. This allows me to have it as a global config without having it apply to every virtual host.
2. An include file that performs the action if that variable is set to true. This has to be included in the server portion of each virtual host where I want the bot traffic to go to Nepenthes. If it isn’t included in a virtual host’s server block, then bot traffic is allowed.
3. A virtual host where the Nepenthes content is presented. I run it on a subdomain (content.mydomain.xyz). You could also do this as a path off of your protected domain, but this works for me and keeps my already complex config from getting any worse. Plus, it was easier to integrate into my existing bot config. Had I not already had that, I would have run it off of a path (and may go back and do that when I have time to mess with it again).
The map-bot-user-agents.conf is included in the http section of Nginx and applies to all virtual hosts. You can either include this in the main nginx.conf or at the top (above the server section) in your individual virtual host config file(s).
The deny-disallowed.conf is included individually in each virtual host’s server section. Even though the bot detection is global, if the virtual host’s server section does not include the action file, then nothing is done.
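Putting those two includes together looks roughly like this sketch (the file paths and server name are examples, not my actual layout):

```nginx
http {
    # Global: only sets $ua_disallowed; takes no action by itself
    include /etc/nginx/conf.d/map-bot-user-agents.conf;

    server {
        server_name protected.example.com;
        # Per-vhost opt-in: acts on $ua_disallowed; omit this and bots are allowed
        include /etc/nginx/snippets/deny-disallowed.conf;
    }
}
```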
Note that I’m treating Google’s crawler the same as an AI bot because…well, it is. They’re abusing their search position by double-dipping on the crawler so you can’t opt out of being crawled for AI training without also preventing it from crawling you for search engine indexing. Depending on your needs, you may need to comment that out. I’ve also commented out the Python requests user agent. And forgive the mess at the bottom of the file. I inherited the seed list of user agents and haven’t cleaned up that massive regex one-liner.
# Map bot user agents
## Sets the $ua_disallowed variable to 0 or 1 depending on the user agent. Non-bot UAs are 0, bots are 1
map $http_user_agent $ua_disallowed {
    default 0;
    "~PerplexityBot" 1;
    "~PetalBot" 1;
    "~applebot" 1;
    "~compatible; zot" 1;
    "~Meta" 1;
    "~SurdotlyBot" 1;
    "~zgrab" 1;
    "~OAI-SearchBot" 1;
    "~Protopage" 1;
    "~Google-Test" 1;
    "~BacklinksExtendedBot" 1;
    "~microsoft-for-startups" 1;
    "~CCBot" 1;
    "~ClaudeBot" 1;
    "~VelenPublicWebCrawler" 1;
    "~WellKnownBot" 1;
    #"~python-requests" 1;
    "~bitdiscovery" 1;
    "~bingbot" 1;
    "~SemrushBot" 1;
    "~Bytespider" 1;
    "~AhrefsBot" 1;
    "~AwarioBot" 1;
    # "~Poduptime" 1;
    "~GPTBot" 1;
    "~DotBot" 1;
    "~ImagesiftBot" 1;
    "~Amazonbot" 1;
    "~GuzzleHttp" 1;
    "~DataForSeoBot" 1;
    "~StractBot" 1;
    "~Googlebot" 1;
    "~Barkrowler" 1;
    "~SeznamBot" 1;
    "~FriendlyCrawler" 1;
    "~facebookexternalhit" 1;
    "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfly|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTTrack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErsPRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^QueryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanner|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^VoidEYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;
}
# Deny disallowed user agents
if ($ua_disallowed) {
    # This redirects them to the Nepenthes domain. So far, pretty much all the bot crawlers have been happy to accept the redirect and crawl the tarpit continuously
    return 301 https://content.mydomain.xyz/;
}


I was blocking them but decided to shunt their traffic to Nepenthes instead. There’s usually 3-4 different bots thrashing around in there at any given time.
If you have the resources, I highly recommend it.
Lemmy-UI is also an “app”.