• 1 Post
  • 426 Comments
Joined 3 years ago
Cake day: July 9th, 2023


  • Yeah, even before its CEO came out of the closet as a Nazi: after growing into a successful car business, they got distracted by the new and shiny

    But I still think their cars are better technology than anything else in the US, and now cheaper than most. They still have their big upcoming push into semis. They still have most of the superchargers. They still have a rapidly growing storage business. The Cybertruck flop really hurt them and I don’t see how they can afford another.

    But AI doesn’t really have a profitable business model yet, and humanoid robots are a much bigger gamble with no market yet. This is where a more typical business owner would create new corporate entities, so Tesla could stay a successful car company while the CEO tried other gambles.





  • I was curious about that so I tried googling…

    Canadian buses were

    • earlier models: I see years like 2018
    • at least some were built in a new plant in Canada
    • part of the blame was lack of spare parts
    • part of the blame was limited range, especially in winter

    London buses

    • newer: I see years like 2024
    • no real winter
    • I see articles about a new larger battery to fix range issues
    • appeared to have similar problems, including a major recall for fire risk

    Maybe London benefited from newer models and doesn’t get as cold as Canada



  • Yeah, I can see the temptation. AI usage is part of my KPIs and is the one place I got dinged on my review. I thought I was insulated as a “partly coding” position, but they put me against full-time coders so I look even worse

    I guess they’re trying to force change, to make us figure out how to make it work. The skeptic in me thinks it rewards people who have time to screw around, but when I set aside a week to see what I could do, I did increase my AI use.

    I did find some useful tasks that also increase my “agentic” score! But my “quality” score (AI-generated lines I accept) is stuck at zero. We currently pay a flat fee but that’s changing next month to pay-per-token




  • Definitely one of the weaknesses is: what about maintenance? AI has been poor at maintaining existing code, and we all know that maintenance is much more expensive than development. Will it be able to maintain its own code? What if there are no longer enough developers to do it manually? Where is our future then?

    I’ve definitely been giving more priority to refactoring. It was always a good idea for maintainability, for new developers to get up to speed and be able to contribute, but now we have the idiot developer that is LLMs. Perhaps more refactoring is meeting it halfway.



  • LLM vendors are starting to charge money. I’m sure it’s not even close to profitable, but it’s a start. Perhaps when the bubble pops and the market consolidates, there will be fewer vendors with more paying customers each…

    Using an LLM is a skill just like any other. If you just take what it gives you, you can’t expect good results. If you evaluate what it gives you and prompt it to improve, the results aren’t as bad.

    I use an LLM for coding and am definitely a skeptic, but I do find it a useful tool and am really interested in seeing if I can make it work.

    Initially I found some success at lower levels, saving me some time:

    • it could autocomplete entire lines of code (and that’s trivial to evaluate and correct if necessary)
    • it was pretty good at generating unit tests, since they tend to be simple and repetitive. In general my corrections tend to be about smarter coverage: tweaking the tests to cover more functionality with fewer tests
    • it’s pretty good with utility scripts. For example, today I had a decision to make and wanted supporting data: in minutes it generated a script to call APIs in my SCM and generate some stats for 4,000 code repos… and it worked
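    A minimal sketch of what such a throwaway stats script might look like, assuming a GitHub-style REST API. The base URL, org name, and field names here are placeholders I’ve made up for illustration, not the commenter’s actual script:

```python
import json
import urllib.request
from collections import Counter

API = "https://scm.example.com/api/v3"  # placeholder SCM base URL

def fetch_repos(org: str, token: str) -> list[dict]:
    """Page through an org's repositories (GitHub-style pagination)."""
    repos, page = [], 1
    while True:
        req = urllib.request.Request(
            f"{API}/orgs/{org}/repos?per_page=100&page={page}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            batch = json.load(resp)
        if not batch:
            return repos
        repos.extend(batch)
        page += 1

def summarize(repos: list[dict]) -> dict:
    """Roll raw repo metadata up into a few decision-supporting stats."""
    langs = Counter(r.get("language") or "unknown" for r in repos)
    archived = sum(1 for r in repos if r.get("archived"))
    return {
        "total": len(repos),
        "archived": archived,
        "top_languages": langs.most_common(5),
    }

# Usage (requires network access and a real token):
# print(json.dumps(summarize(fetch_repos("my-org", "TOKEN")), indent=2))
```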

    Currently I’ve created rulesets and project context, so:

    • it’s been quite successful at code reviews (it finds things I miss, and has resulted in my human reviewers finding fewer issues)
    • I’m proud of one ruleset for identifying refactoring opportunities. It finds good spots and makes good suggestions, but so far I have to implement them myself: its code hasn’t been usable. I can also objectively verify the improvement by measuring reduced cyclomatic complexity.
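    That before/after complexity check can be automated. Real projects would use a proper tool like radon or lizard; the stdlib-only approximation below is my simplification of the McCabe metric, not the commenter’s setup, but it’s consistent enough to compare two versions of the same code:

```python
import ast

# Node types treated as decision points. This is a simplification of the
# full McCabe definition, but applied consistently it still shows whether
# a refactor reduced branching.
_BRANCHES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe score: 1 + number of decision points."""
    score = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, _BRANCHES):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1  # each extra and/or adds a path
    return score

branchy = """
def dispatch(cmd):
    if cmd == "start":
        return start()
    elif cmd == "stop":
        return stop()
    elif cmd == "status":
        return status()
"""
table_driven = """
def dispatch(cmd):
    return HANDLERS[cmd]()
"""
# cyclomatic_complexity(branchy) -> 4, cyclomatic_complexity(table_driven) -> 1
```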

    Trying to find other scenarios where it can be successful, it’s clear that insufficient context is a limiting factor. The fun challenge is to see whether there are more successful scenarios if you can give it enough context. I’ve gone past rulesets and project context to connecting relevant services and metadata about our product set and environment. They want a team to try vibe coding, and while I’m still very skeptical, my part of the effort is a real, solvable problem and a fun challenge whether they succeed or not


  • I was with you up to “cloud computing”. That bubble was a huge success that has really revolutionized how software is provided:

    • well-known winners include AWS, Google, and Microsoft, but there are many more depending on how you define cloud computing
    • also some huge flops

    AI has a lot of mindshare and has demonstrated contributions in several areas. For example, the AI slop you see on YouTube is making some people money. As a coder I do find it a sometimes-useful tool, and I can definitely see the near future where it’s a required skill (and no, if you just ask it to spit out slop, you’re not getting anything but slop). I don’t see how it’s going away. However, it doesn’t (yet?) live up to its hype, nor is there (yet?) a profitable business for providers.

    Meanwhile, the crypto and NFT bubbles were pyramid schemes that only ever made money from themselves. Web 3.0 probably looked useful to its proponents but was only ever a niche that no one else cared about




  • I’m not buying this. Sure, minimizing dependencies is a good practice, but not updating? That’s a recipe for disaster.

    It’s important to note that you can’t predict supply chain attacks or vulnerabilities, and vulnerabilities are much more common. Also, while frequent updates might expose you to a supply chain attack more quickly, they also mitigate it more quickly. Frequent updates, in combination with vulnerability scanning and limiting downloads to reputable sources (that try to prevent supply chain attacks and discover them quickly), are a much better approach.

    There’s also the maintainability argument, which I’m having right now with a couple of our legacy software teams. Not updating can lock you into the past, for entire ecosystems of dependencies. You can’t update when you have to, you can’t take advantage of new features anywhere in the ecosystem, and it’s now an expensive emergency when something stops being maintained or has an unresolved vulnerability. If you’re continually kept up to date, those choices and features stay easy.

    Then the goal is: how do you automate your updates as smoothly as possible, so they don’t become noise or create extra work? Tools like Dependabot and Renovate have a lot of config options to help with that
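    For example, a minimal `.github/dependabot.yml` along these lines (the ecosystem and schedule are illustrative, not a recommendation for any particular stack) groups minor and patch bumps into a single weekly PR so updates stay low-noise:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"   # illustrative; pick your ecosystem
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5
    groups:
      minor-and-patch:         # one combined PR instead of many
        update-types: ["minor", "patch"]
```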