Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 5 Posts
  • 313 Comments
Joined 3 years ago
Cake day: June 23rd, 2023



  • If your language requires an IDE to show you WTF is going on in the code, it’s a bad language.

    Granted, there are ways to write poor code in any language, but some are much, much worse than others, with Java and JavaScript being the kings of that kind of thing.

    Someday, AI-assisted coding will become so intelligent that it will look at your average “enterprise” Java code and ask the user, “WTF were they even trying to do here?” A lot of the time, that’s the only correct response.



  • There is a story people tell about AI regulation, and it goes like this: the technology is moving too fast, governments can’t keep up, regulators are overwhelmed, and by the time anyone writes a law the thing they’re trying to regulate has already evolved into something else entirely.

    No. That’s not the story people are telling about AI regulation. It goes like this:

    If we regulate AI, that will give an advantage to AI companies in other countries. They will surpass our AI capabilities and leave us in the technological dust.

    There’s a related story:

    If we regulate AI, we’re likely to create more problems because Boomers don’t understand technology.



  • Everyone wants to access Netflix, YouTube, Prime Video, etc. through their TV’s interface, and I just don’t get it. The best experience is when you hook up a PC to your TV… not some TV-centric Android OS or Roku’s thing.

    Install Kubuntu on some old PC with a GPU that can handle 4K @ 60 Hz and you’re good to go. KDE and Firefox let you crank up the zoom so everything’s easy to read, and it even has HDR support (though I prefer going without it… old-person eyes).

    It’s such a vastly superior experience. Not only do you get the usual streaming apps, you can use a real keyboard to type into that search bar. You can also access all those pirate streaming sites and do normal PC stuff like play games.
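    For anyone trying this, here’s a rough sketch of the pre-flight checks, assuming a Kubuntu-style KDE desktop. The tools (`kscreen-doctor`, `xrandr`) ship with KDE/X11 setups, but the output name `HDMI-A-1` in the last step is a placeholder; use whatever the first step actually reports.

    ```shell
    #!/bin/sh
    # Sketch: sanity-check an old PC before parking it under the TV.

    # 1. Confirm the GPU/driver actually advertises a 3840x2160 @ 60 Hz mode.
    if command -v kscreen-doctor >/dev/null 2>&1; then
        modes="$(kscreen-doctor -o 2>/dev/null || true)"   # KDE: lists outputs and their modes
    elif command -v xrandr >/dev/null 2>&1; then
        modes="$(xrandr --query 2>/dev/null | grep -E '3840x2160|connected' || true)"  # X11 fallback
    else
        modes="no display tooling found; run this from inside the desktop session"
    fi
    modes="${modes:-no modes reported; is a graphical session running?}"
    printf '%s\n' "$modes"

    # 2. Couch-readable scaling (same effect as System Settings > Display > Scale).
    #    Commented out because the right factor depends on screen size and viewing
    #    distance; 1.5-2x is a common starting point for a 4K TV.
    # kscreen-doctor output.HDMI-A-1.scale.1.75
    ```

    If the 4K@60 mode doesn’t show up in step 1, it’s usually the cable or port (HDMI 1.4 tops out at 4K@30), not the GPU.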




  • So… In no time at all, they’re going to be breached. Proving them wrong.

    Will they go back to open source after that? No. Of course not. Because it was never about security to begin with. AI is just an excuse.

    It’s like saying, “Anyone can scan the source for vulnerabilities! It’s so easy. Too easy! That’s why we’re not going to just do that ourselves, and will instead bury our heads in the sand and pretend the availability of source code was the problem.”


  • I’ve been researching this a bit… I’ve come to the conclusion that there is no AI bubble. In fact, we’re only just getting started down this road. Unless there’s some massive 100x efficiency breakthrough in AI training and inference, the entire world is going to be building seemingly endless AI data centers (and the normal compute kind, e.g. for stuff like AWS, Google/YouTube, Meta, banks) for at least a decade. Probably a little longer (12-15 years before demand levels out).

    Everyone thinks that “AI data center” means ChatGPT, Claude, Gemini, etc but there’s 10,000x more demand for AI than those services. Think: Pharmaceutical companies trying to find proteins, scientists (and big agriculture!) trying to model the weather, and other businesses trying to automate stuff. Not just software; robots and things like conveyor belts.

    Another example: Ever use one of those self-checkouts that’s mostly just a camera pointing down, where you place the stuff you’re purchasing? That uses AI too.

    Having said that, there is a great big bubble in AI: OpenAI, specifically. That will definitely pop one day. And hopefully, the DRAM bullshit will go along with it.