  • Mine would be: “I have no idea” - an answer LLMs, by their nature, generally refuse to give (when they do decline to answer, it’s usually because something in the context indicates that refusing is the appropriate text).

    If you really pressed them, they’d probably google each thing and summarize the results, so the estimates would only be as consistent as the first google results.

    LLMs have a tendency to emit a plausible answer without regard for facts one way or the other. We try to steer things by stuffing the context with facts, roughly based on traditional ‘fact’-based sources, but if the context doesn’t have factual data to steer the output, the output is based purely on narrative consistency rather than data consistency. Sometimes it does that even when the context does have fact-based content in it.
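    To make that “stuffing the context” point concrete, here’s a toy sketch (purely illustrative; the fact store and function names are made up, and this is not how any particular product works):

    ```python
    # Toy illustration of context stuffing: the "answer" can only be as
    # grounded as whatever retrieval happens to put in the prompt.
    FACTS = {
        "height of the Foo tower": "123 m (a made-up fact for this demo)",
    }

    def retrieve(question: str) -> list[str]:
        # stand-in for a real search/retrieval step
        return [f"{k}: {v}" for k, v in FACTS.items() if k in question]

    def build_prompt(question: str) -> str:
        facts = retrieve(question)
        grounding = "\n".join(facts) if facts else "(nothing retrieved)"
        # with "(nothing retrieved)", narrative consistency is all the
        # model has left to go on
        return f"Facts:\n{grounding}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("What is the height of the Foo tower?"))
    print(build_prompt("What is the height of the Bar tower?"))
    ```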


  • Note that this could prove you have it, but the demonstrator failing to execute does not prove you’re secure.

    For example, someone reported to me that their RHEL9 system was not vulnerable based on this result. But that was only because its python was 3.9 and didn’t have os.splice, so the demonstrator failed even though the actual issue was there.

    Similarly, if ‘/usr/bin/su’ isn’t exactly there (maybe it’s in /bin/su, or in /sbin/su, or /usr/sbin/su, or not there at all), the demonstrator will fail, but the kernel may still have the vulnerability; you just have to select a different victim utility (or target the cache of some data other than an executable for other effects).
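    A minimal sketch of the kind of prerequisite check that separates “not vulnerable” from “the demonstrator couldn’t even run” (the victim selection here is illustrative):

    ```python
    import os
    import shutil
    import sys

    # os.splice only exists on Python 3.10+; on 3.9 the demonstrator
    # fails regardless of whether the kernel is vulnerable
    if not hasattr(os, "splice"):
        sys.exit("python lacks os.splice: result is inconclusive, not 'secure'")

    # don't hardcode /usr/bin/su; look the victim up on PATH instead
    victim = shutil.which("su") or shutil.which("sudo")
    if victim is None:
        sys.exit("no suitable victim binary found: result is inconclusive")

    print(f"prerequisites met; the demonstrator would target {victim}")
    ```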



  • Note that this is a rather narrow view of the scope of things.

    Yes, the demonstrator is a python script that opens up ‘su’ and uses splice+this vulnerability to change it to ‘just assume all privileges and become sh’.

    However, the broader issue is that any process in any namespace can leverage a certain socket type plus splice to effectively modify any filesystem content it wants. It’s easy to see how this could be one link in a chained attack: for example, an RCE in a service that is firewalled off could be used to rewrite the nginx binary in an entirely different container, replacing it with a shell backend of your choosing.

    That ‘flatpak’ application on your single-user system that is guarded from touching your unrelated files? That isolation doesn’t mean anything if this issue is in play.

    As for shared systems: while they should be avoided where possible, practically speaking there are a lot of shared resources out there.

    I don’t get why I’ve seen so many people saying “ehh, no big deal, privilege escalation is just a fact of life”.


  • In my experience, the bigger the codebase gets, the more confounded the LLM gets when trying to make coherent changes. So LLM projects start on shaky ground and just get worse, because they can’t maintain the stuff they themselves generated.

    I’ve seen what LLMs can do, and it is certainly interesting and can do some stuff, but the vast majority of my experience is with someone who had never coded before “vibing” themselves into a corner and demanding help to dig them out. A bit irritating, because before, we could reasonably prioritize such requests, since management understood that making something from nothing was real work; now management says “they aren’t asking you to make something, just help them fix something that already exists, should be easy!”

    On the ELOC metric: for a long time I pointed out how disastrous I must be, because my contribution to a project I was on was about -10,000 lines of code by the time I moved on to something else.


  • While I despise the captchas from a human perspective, the fact that an LLM can solve the challenge isn’t a deal breaker. It doesn’t need to be impossible for a non-human to solve; it just has to be too expensive to solve at scale.

    It does shift the equation toward things like proof of work, though: since a computer can solve the challenge anyway, you might as well not annoy the human.


  • Seems utterly pointless though…

    With the proof of work approach, at least it demands the client consume some resources, though the ‘right’ amount is a tricky question: either it’s so trivial as to hardly matter to the scrapers, or it’s hard enough to put a dent in the scrapers’ budget, but then human-operated low-end devices are royally screwed…
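    A hashcash-style sketch of that tuning dilemma (a generic illustration, not any particular product’s scheme); expected work doubles with each added bit of difficulty, so there is little room between “free for a scraper farm” and “painful on a budget phone”:

    ```python
    import hashlib
    import os
    import time

    # Generic proof of work: find a nonce such that
    # sha256(challenge + nonce) falls below a difficulty-derived target.
    def solve(challenge: bytes, difficulty_bits: int) -> int:
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    challenge = os.urandom(16)
    for bits in (12, 16, 20):
        start = time.time()
        solve(challenge, bits)
        # each +1 bit doubles the expected solve time
        print(f"{bits} bits: {time.time() - start:.3f}s")
    ```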

    Here, the crawler simply schedules a resumption and moves on to other work. The crawler doesn’t need the page right now, and waiting costs it nothing.




  • Fun story from this week: we had a chore for the frontend to move to a new version of the UI framework. Fairly simple task, so off to a junior developer, and within a couple of hours there was a merge request ready to go. OK, a fairly normal amount of time to change the version, do at least a sniff test, and find nothing changed, so I go in assuming I’ll look at a few version bumps and maybe one or two tweaks… and I see the junior dev proposing over 1,000 lines of code to be added… WTF…

    I crack it open and there was just a firehose of CSS rules, all marked ‘!important’. Looking at one example, it repeated the same selector with the same exact bunch of rules 5 times in a row. It was as if it had found every possible derived CSS class-and-tag combination and defined !important CSS for most everything about it.

    So I find out the junior dev had asked the LLM to do the rebase, and at first it did what he expected: just changed some versions and went. He tried it, and due to a framework change one element was misaligned by a little bit. So he gave that feedback to the LLM and tried again… and it failed, and he tried again and it failed, and after 5 rounds it finally got the element aligned, so he hit ‘merge request’. For fun I opened up his proposed change, and so much of it was just dodgy CSS-wise because it screwed with so much stuff, but the junior dev had only concerned himself with the page as it first opened.

    So I said screw it, I’ll do it myself, and added the singular rule that was needed to adapt to the framework change, making it overall about a 5-line change including versioning and such.

    Depressingly, I suspect an executive would consider me far less productive, because I only did 5 lines of change while the junior dev would have done thousands…





  • I suppose they might have used the common mysql instance for containerized infrastructure, or a crufty base image for their container(s)… but you do raise a pretty good indicator that at least one key thing is not running in a container.

    But I’m not going to judge too hard on container vs. no container. The vintage of the platform is broadly problematic either way. Particularly in enterprise IT, I’ve seen some shockingly old container base images, with teams unwilling to refresh them because ‘they work’.

    In fact, teams that would once have been forced to rebase their crufty dependencies every so often, because those came bundled with a now-unacceptable OS, can now gleefully push their ancient 12-year-old stack, because containers let it keep running no matter what kernel is underneath.




  • They migrated from an OS and MySQL version that had received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days.

    I agree that was an odd choice, and the OS as well: going to Alma 9 when Alma 10 had already been out for some time. You’d think if they wanted the long-term updates they’d have gone to 10 to get the most out of them. If they’d gone from 8 to 9, sure, some people like staying in the zone where RedHat has gotten bored and won’t mess with things anymore, but 7 to 9 suggests they weren’t doing timely upgrades before either.

    Also every service is running without any containerization and there is a single database for everything

    Well, he said explicitly that they have 30 databases, though I suppose you meant a single mysql instance. On containerization I won’t judge one way or the other, as I’ve seen enough amateur-hour containerization not to judge immediately on that front.

    it all runs on a single host

    Yeah, that seems pretty dire given his stated usage scenario, and it seems very explicit that their entire internet facing world is that single host…

    backup strategy or disk encryption

    It was a post narrowly discussing the migration, so I don’t expect a full inventory of everything they do; backup strategy, disk encryption, and all sorts of other things may have been omitted as having nothing to do with the core topic. The biggest red flag on this front is that he explicitly mentions the old setup having “backups enabled” and the new setup having “RAID1”, which does make me wonder whether they think RAID1 is a credible answer to “backup”.

    Also not a single word about infrastructure as code

    Again, not necessarily in scope for this document, so I’m not sure I’ll judge on this one. I routinely take material expressed in terms of an ansible play and “genericize” it for general consumption when discussing with people outside my organization.

    The whole stuff is hosted in Germany for a Turkish software company

    I’ll confess to not liking it being on a single site, but to the extent they were going to pick a single site, Germany might make sense because:

    Several live mobile apps serving hundreds of thousands of users

    Their userbase may be better connected to Germany than to Turkey, and user latency matters more.

    My biggest concerns would be mitigated if they said the German-hosted server is their off-prem complement to an on-prem deployment, giving them multiple sites, but I think that’s a bit much to imagine given the process described; the migration as described wouldn’t make sense in that scenario.


  • One thing is that I don’t know for sure whether it is containerized or not. The topic was the migration, and that facet wouldn’t be relevant to the core of it. When I do a write-up of things like this, I tend to omit details like that unless they are core to the subject at hand, including replacing a funky ingress situation with a more universally recognizable nginx example; the users of a container setup would understand how to translate that to their scenario.

    For another, I’ll say that I’ve probably seen more people get screwed up because they didn’t understand how to use containers and used them anyway. Most notably, they make their networking needlessly convoluted and then can’t understand it. Also, when they kind of mindlessly divide a flow up for “microservices”, they get lost debugging it.

    They are useful, but I think people might do a lot better if they:

    • More carefully considered how they split things up
    • Went ahead and used host networking; it’s pretty good
    • Used unix domain sockets instead of binding to TCP for everything; I much favor reverse proxying to a unix domain socket over juggling IPs and ports, which is most of what container networking buys people, at the cost of a gnarlier flow (see the sketch after this list)
    • Were wary of random dockerhub “appliances”; they tend to be poorly maintained
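    As a sketch of the unix domain socket point (the path and names are purely illustrative): a backend that binds a socket file instead of a TCP port, which a reverse proxy such as nginx can then forward to with “proxy_pass http://unix:/run/app.sock:;”, no container network involved.

    ```python
    import os
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SOCKET_PATH = "/run/app.sock"  # illustrative; needs a writable directory

    class UnixHTTPServer(HTTPServer):
        # the same stdlib HTTP server, just bound to AF_UNIX instead of TCP
        address_family = socket.AF_UNIX

        def server_bind(self):
            try:
                os.unlink(self.server_address)  # clear a stale socket file
            except FileNotFoundError:
                pass
            super().server_bind()

    class Handler(BaseHTTPRequestHandler):
        def address_string(self):
            # unix sockets have no peer IP to put in the access log
            return "unix"

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello over a unix domain socket\n")

    if __name__ == "__main__":
        UnixHTTPServer(SOCKET_PATH, Handler).serve_forever()
    ```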

    If you are writing in rust or golang, containers might not really buy you much other than a headache, so long as you use distinct users for security isolation. For something like python, it might be a more thorough approach than virtualenv, though I wouldn’t like keeping a python stack maintained, with how fickle the ecosystem is. Node is pretty much always “virtualenv”-like, but even worse for fickle dependencies.



  • Nah, the producers of human slop are ecstatic, because now they can just prompt up their slop and post something for engagement, whereas before they had to put in at least a modicum of effort to make it. It used to take at least as long to make the human slop as a human would take to view it; now they can get output with even less effort than the human wastes seeing it.

    The slop flood gates are open.