Just curious what people are using n8n for.
I just finished setting up a workflow that sends me a Telegram message every night about photography opportunities for the next day. It pulls together weather data, POIs (which I defined for my location), sun/moon position, Milky Way visibility, cloud cover, etc. The message then simply tells me whether it's worth going out in the morning.
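Not OP, but the decision step in a workflow like that boils down to a small scoring function that n8n can run in a code node. A minimal sketch in Python — the thresholds, weights, and input names (cloud cover, moon illumination) are my own illustrative assumptions, not OP's actual rules:

```python
# Sketch of a "worth going out?" decision for astro/landscape photography.
# All thresholds below are illustrative assumptions, not OP's rules.

def worth_going_out(cloud_cover_pct: float,
                    moon_illumination_pct: float,
                    milky_way_visible: bool) -> tuple[bool, str]:
    """Return (go/no-go, short reason) from tomorrow's forecast numbers."""
    if cloud_cover_pct > 70:
        return False, f"too cloudy ({cloud_cover_pct:.0f}% cover)"
    if milky_way_visible and moon_illumination_pct < 30 and cloud_cover_pct < 30:
        return True, "clear, dark sky with the Milky Way up"
    if cloud_cover_pct < 50:
        return True, "decent conditions for sunrise shots"
    return False, "marginal conditions, probably not worth it"

# The workflow would format this into the Telegram message body:
go, reason = worth_going_out(20, 10, True)
print(("GO: " if go else "SKIP: ") + reason)
```

The weather numbers themselves would come from whatever forecast API the workflow already queries; only the final yes/no logic is shown here.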


A governmental-ish site I'm required to use doesn't push notifications as emails, so you have to log in daily to check for updates. Updates may happen multiple times a day or once a month. I automated my server to access the site once a day with my credentials, screenshot the notifications, parse them with OCR, and send myself an email.
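For the "send myself an email" step, the Python standard library is enough once you have the OCR text. A minimal sketch — the SMTP host, addresses, and subject line are placeholders I made up, not the actual setup:

```python
import smtplib
from email.message import EmailMessage

def build_notification_mail(ocr_text: str) -> EmailMessage:
    """Wrap the OCR'd notification text in a plain-text email."""
    msg = EmailMessage()
    msg["Subject"] = "Site notifications for today"  # placeholder subject
    msg["From"] = "bot@example.com"                  # placeholder addresses
    msg["To"] = "me@example.com"
    msg.set_content(ocr_text or "No new notifications found.")
    return msg

def send(msg: EmailMessage, host: str = "localhost", port: int = 25) -> None:
    # In the real cron job this would point at an actual SMTP relay.
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    mail = build_notification_mail("Example notification line from the site")
    print(mail["Subject"])
```

Hooking `send()` up to a daily cron entry (or an n8n schedule trigger) covers the delivery half of the pipeline.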
Why screenshot and parse? Can't you just parse the HTML directly?
Since the dawn of LLMs it's become virtually impossible to scrape web content; headless browsers are basically useless now. I actually have to automate keyboard input to simulate the navigation. I could maybe try writing the JavaScript cache to a file, but honestly it's just faster this way.
Wait, why? I'm scraping HTML just fine.
What do you use for OCR parsing?
The data is non-critical and doesn't contain identifying info, so I use the ocr.space API. You could probably find ways to use the Tesseract libraries locally.
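If you do go the local route, pytesseract is the usual Python wrapper around Tesseract. A sketch under the assumption that the `tesseract` binary is installed — the screenshot filename and the "which lines are new" dedup logic are illustrative, not the actual setup:

```python
# Local OCR with pytesseract (requires the tesseract binary on PATH),
# plus a small pure function to drop notifications already mailed.
# File name and dedup scheme are illustrative assumptions.

def new_lines(ocr_text: str, seen: set[str]) -> list[str]:
    """Return non-empty OCR lines we haven't mailed before."""
    lines = [ln.strip() for ln in ocr_text.splitlines()]
    return [ln for ln in lines if ln and ln not in seen]

if __name__ == "__main__":
    from PIL import Image
    import pytesseract
    text = pytesseract.image_to_string(Image.open("notifications.png"))
    print(new_lines(text, seen=set()))
```

Persisting `seen` between runs (a text file is plenty) keeps the daily mail down to genuinely new notifications.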