  • I suppose they might have used the common MySQL instance for containerized infrastructure, or a crufty base image for their container(s)… But you do raise a pretty good indicator that at least one key thing is not running in a container.

    But I’m not going to judge too hard on container/no container. The vintage of the platform is broadly problematic either way. I’ve seen particularly in enterprise IT some shockingly old container bases, with teams unwilling to refresh those because ‘they work’.

    In fact, teams that once would be forced to rebase their crufty dependencies every so often, because those were bundled with an unacceptable OS, now gleefully push their ancient 12-year-old stack because containers let it keep running no matter what kernel is underneath.




  • They migrated from an OS and MySQL version that had received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days.

    I agree that it was an odd choice, as was the OS: going to Alma 9 when Alma 10 had already been out for some time. You’d think that if they wanted the long-term updates they would have gone to 10 to get the most out of it. If they went from 8 to 9, sure, some people like staying in the area where Red Hat got bored and won’t mess with things anymore, but 7 to 9 suggests they didn’t do timely upgrades before.

    Also every service is running without any containerization and there is a single database for everything

    Well, he said explicitly they have 30 databases, though I suppose you meant a single MySQL instance. And I won’t judge one way or the other about containerization; I’ve seen enough amateur-hour containerization not to draw immediate conclusions from its presence or absence.

    it all runs on a single host

    Yeah, that seems pretty dire given his stated usage scenario, and it seems very explicit that their entire internet facing world is that single host…

    backup strategy or disk encryption

    It was a post narrowly discussing migration, so I don’t expect a full inventory of everything they do; backup strategy, disk encryption, and all sorts of other things may be omitted as having nothing to do with the core topic. The biggest red flag on this front is that he explicitly mentions the old setup having “backups enabled” and the new setup having “RAID1”, which does make me wonder whether they think RAID1 is a credible answer for “backup”.

    Also not a single word about infrastructure as code

    Again, not necessarily in scope for this document, so I’m not sure I’m going to judge on this one. I routinely take material expressed as an Ansible play and “generic it out” for general consumption when discussing with people outside my organization.

    The whole stuff is hosted in Germany for a Turkish software company

    I’ll confess to not liking it being in a single site, however to the extent they select a single site, Germany might make sense because:

    Several live mobile apps serving hundreds of thousands of users

    Their userbase may be better connected to Germany than Turkey, and the user latency matters more.

    My biggest concerns would be mitigated if they said that the German-hosted server is their off-prem copy and the system is also hosted on-prem, giving them multiple sites, but that’s a bit much to imagine: the described migration process wouldn’t make sense in that scenario.


  • One thing is that I don’t know for sure whether it is containerized or not. The topic was migration, and that facet wouldn’t be relevant to the core. When I do a write-up of things like this, I tend to omit such details unless they are core to the subject at hand, including replacing a funky ingress situation with a more universally recognizable nginx example. The users of a container setup would understand how to translate to their scenario.

    For another, I’d say I’ve probably seen more people get screwed up because they didn’t understand how to use containers and used them anyway. Most notably they make their networking needlessly convoluted and then can’t understand it. Also, when they mindlessly divide a flow into “microservices”, they get lost in the debugging.

    They are useful, but I think people might do a lot better if they:

    • More carefully considered how they split things up
    • Went ahead and used host networking; it’s pretty good
    • Used unix domain sockets instead of binding to TCP for everything. I much prefer reverse proxying to a unix domain socket over juggling IPs/ports, which is what container networks buy most people, but that flow is too gnarly
    • Were wary of random Docker Hub “appliances”; they tend to be poorly maintained
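    To sketch the unix-domain-socket point above (the socket path and message here are made up for illustration), a service listens on a filesystem path and a local client, standing in for a reverse proxy, connects to the path rather than to an IP and port:

    ```python
    import os
    import socket
    import threading

    SOCK = "/tmp/demo.sock"  # hypothetical socket path

    ready = threading.Event()

    def serve_once():
        # Listen on a filesystem socket; a reverse proxy on the same host
        # can connect to this path instead of a TCP IP/port.
        if os.path.exists(SOCK):
            os.unlink(SOCK)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        conn.sendall(b"hello over a unix socket")
        conn.close()
        srv.close()

    t = threading.Thread(target=serve_once)
    t.start()
    ready.wait()

    # A local "client" (standing in for the reverse proxy) connects by path.
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK)
    reply = cli.recv(1024)
    cli.close()
    t.join()
    print(reply.decode())
    ```

    Filesystem permissions on the socket path then do the access control that container network plumbing would otherwise handle.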

    If you are writing in Rust or Go, containers might not buy you much other than a headache, so long as you use distinct users for security isolation. For something like Python, a container might be a more thorough approach than virtualenv, though I wouldn’t like to keep a Python stack maintained given how fickle the ecosystem is. Node is pretty much always “virtualenv”-like, but even worse for fickle dependencies.
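    On the proxy side, the reverse-proxy-to-unix-socket arrangement mentioned above looks roughly like this in nginx (a sketch with a hypothetical socket path, not a complete config):

    ```nginx
    # Hypothetical upstream: the app listens on a unix socket, not a TCP port.
    upstream app {
        server unix:/run/app.sock;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
        }
    }
    ```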



  • Nah, the producers of human slop are ecstatic, because now they can just prompt up their slop and post something for engagement; before, they had to put in at least a modicum of effort to make it. It used to take at least as long to make the slop as a human would take to view it; now they can get output with even less effort than the viewer wastes seeing it.

    The slop flood gates are open.


  • It’s tricky because like in so many other things, nuance is weaponized against the person using nuance.

    A politician presents a carefully considered position? An opponent declares it’s impossible to know where they stand.

    A broadly harmful thing has some potential value if we just pull back on the harmful part? People all-in will seize upon your acknowledgement of specific value as broad endorsement.

    On the AI front, if OpenAI and xAI fold up, and maybe Anthropic gets a big dose of humility, and business leaders finally get a sense of what it can’t do, there’s a chance for healthy and useful adoption. Right now the nuance isn’t as valuable, because it advocates for a scale of use that no one would be objecting to anyway.


  • There is a scenario where I would prefer this outcome.

    All too often a meeting starts spinning because people have decided to do stuff but still have uncertainty, so they go around in circles speculating on everything that might go wrong.

    Take a break, and we will continue your precious meeting once we actually know what did or didn’t pan out. 95% of the time the follow-up would be “it went fine, we didn’t need endless contingency plans”.


  • It’s a bit hyperbolic at the moment, when the concrete laws are basically “the OS asks the user for their age on the honor system and relays that to websites”. Linux distros can add that without much real controversy.

    The problem is that some are seeking laws that require the OS to actually verify age, which in practice means locking things behind something like a Google account and having an online account vendor process your real identity to really validate your age. Under such a regime, the Linux desktop as it exists today becomes infeasible. Also, Microsoft could say they absolutely cannot allow local accounts anymore by law, and force Microsoft accounts…


  • LLMs can be useful in this context, but Anthropic blew Mythos way way out of proportion. It absolutely was overly hyped.

    Their own demonstrator had to work with a downlevel Firefox, so it would still have vulnerabilities that had already been fixed before they even started.

    It seems their narrative is that other tools, some LLM-based and some not, may be as good as or better than Mythos at finding issues, but there were a couple of issues where Mythos was able to actually create a demonstrator exploit, which the other tools did not do. That is relatively less interesting, as going from finding to demonstrator is generally not a huge part of the tedium for a human; the tedium is usually in the finding.

    They pitched it as “it is dangerous, it will escape confinement”, etc. But instead they had to explicitly start with a downlevel Firefox with known vulnerabilities unpatched, and they further had to disable all the security mitigations that in practice had already made the two “vulnerabilities” impossible to exploit.

    It’s a matter of degree and exaggeration.


  • Note that in this case, very specifically, they had to yank Firefox’s JavaScript engine out of Firefox, “but without the browser’s process sandbox and other defense-in-depth mitigations.” They had to remove the mechanisms designed to quash vulnerabilities.

    And they had to test explicitly against the Firefox 147 vintage, because Firefox 148 had already fixed the two issues that Mythos exploited to get an impressive number. Before Mythos even ran, the key problems had been found and patched…


  • The document from Anthropic purporting to be security research largely leaves things vague (marketing-material vague) and declines to use any recognized standard that might even hint at how to assess it. They describe a pretty normal security reality (“thousands of vulnerabilities”, but anyone who lives in the CVE world knows that was the case before, so nothing to really distinguish it from the status quo).

    Then, in their case study, they had to rip out a specific piece of Firefox to torture, and remove all the security protections that would have already contained these “problems”. It underperformed existing fuzzers, and nearly all of its successes were based on previously known vulnerabilities that had already been fixed; they were running the unpatched version to prove its ability.

    Ultimately, the one concrete thing they proved is that if you fed Mythos two already-known vulnerabilities, it could figure out how to exploit them better than other models could. It was worse at finding vulnerabilities, but it could make a demonstrator, which a human could have done; that’s not the tedious part of security research, the finding is. Again, in the real world these never would have worked, because they had to disable a bunch of protections that had already neutered these “issues” before they were ever known.


  • Speaking generally…

    One is that it was pitched as a superhuman AI that could think in ways humans couldn’t possibly imagine, escaping any security measure we might think to bind it with. That was the calibrated expectation.

    Instead it’s fine at security “findings” that a human could have noticed if they actually looked. For a lot of AI this is the key value prop: it sees less than a human would, but it looks where a human couldn’t be bothered to look at all. For example, a human can more reliably distinguish a needle from a straw of hay, but the relentless attention of an AI system is a more practical approach to finding needles in haystacks. It will miss some needles and flag some hay, so a thorough human effort would be better, but the AI is better than nothing, especially with a human to discard the accidental hay.

    Another thing is that the nuance of the “vulnerabilities” may be very underwhelming. Anyone who has been in the security world knows that the vast majority of reported “vulnerabilities” are nothingburgers in practice. curl had a “security” issue where a malicious command line could make it lock up instead of timing out if the peer stops responding. I’ve seen script engines that were explicitly designed to allow command execution get CVEs because a 4 GB malicious script could invoke commands without including the exec directive, and that engine is only ever used by people with unfettered shell access anyway. Another “critical” vulnerability required an authorized user to remove or even rewrite the code that listens to the network, to let in unsanitized data that’s normally caught by the bits they disabled. curl had another where an attacker could make it output vulnerable C code; the attacker would then “just” have to find a way to compile the command’s output, and they’d have a vulnerable C executable… If they can get curl to generate C code and compile it, why couldn’t they just write whatever C code they want? Well, no one can imagine how, but hey, why not a CVE…



  • The difference in your scenario is that it is enforcing a regulation, rather than being bound by it.

    Yes, enforcing a regulation, particularly with different requirements by geography is a nightmare. You have to translate the law to code, and make it conditional based on some mechanism of determining jurisdiction.

    However, a regulation like “you will ensure you will not require online connectivity for single player games, or if multiplayer you will ensure that third parties are able to keep hosting to keep the experience whole once you stop” is not a nightmare of nitpicky local regulations to navigate. The law doesn’t need to map to code, it just governs the human behavior/decisions.

    For example, there are various ‘password’ laws, and it’s no huge deal to comply, since you only have to honor some strictest common law and you don’t need software to implement the regulatory rules.
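    The “strictest common rule” idea can be sketched in a few lines (the region names and length minimums below are hypothetical, not real legal requirements):

    ```python
    # Hypothetical per-jurisdiction password-length minimums -- not real laws.
    REGIONAL_MIN_LENGTH = {"region_a": 8, "region_b": 10, "region_c": 12}

    # Enforce the strictest minimum everywhere: one rule satisfies every
    # jurisdiction at once, with no per-user jurisdiction detection needed.
    STRICTEST = max(REGIONAL_MIN_LENGTH.values())

    def meets_all_policies(password: str) -> bool:
        return len(password) >= STRICTEST
    ```

    Because the software only ever implements the single strictest rule, the regulation ends up governing the humans who pick that rule, not the code paths.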


  • I don’t have a Framework, but I think it’s due to the whole “modern standby” approach, where the firmware doesn’t implement “standby” anymore and just lets the OS put everything into as low a power state as possible, component by component.

    It doesn’t work well for Windows either, which is why a Windows laptop I have will ‘standby’ for maybe 15 minutes before shutting itself down for ‘hibernate’. I figure they decided that NVME means resume from hibernate is ‘good enough’ and modern standby is such a power hog that they can’t pull it off.

    The problem on Linux is that distributions view Secure Boot as a promise they cannot keep if they resume from disk, so they block hibernate when Secure Boot is enabled, making it hard to bank on as a reliable recourse.


  • Better in almost every single respect.

    Photo printing is about the only thing I’d say I haven’t seen laser do, but for the people in my family who appreciated printed photos over screens, we would just order prints at their local Walgreens instead of mailing them prints anyway. We don’t do that anymore either, as they passed away some years back.


  • To make the fairy tale work out:

    • Brother laser printer. It’s there, it works, and I don’t have to think about it.

    On the other extreme, inkjet is just busted. I’ll give EcoTank some credit for not having an ink-resupply problem, but they clog like crazy, and I have to spend forever unclogging one whenever I do need to use it.

    Then there’s HP, a brand that works every day at thinking up new ways to screw over printer purchasers and get more money out of them, microtransaction style.