As a Java engineer who has been in the web development industry for several years now, I’ve heard multiple times that X is good because of SOLID principles or that Y is bad because it breaks SOLID principles, and I’ve had to memorize the “good” ways to do everything before an interview, etc. The more I dive into the real reason I’m doing something in a particular way, the harder I find it to keep going along with all of that.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • SorteKanin@feddit.dk
    link
    fedilink
    arrow-up
    4
    ·
    5 hours ago

    My somewhat hot take is that design patterns and SOLID are just tools created to overcome the shortcomings of bad OOP languages.

    When I use Rust, I don’t really think about design patterns or SOLID or anything like that. Sure, Rust has certain idiomatic patterns that are common in the ecosystem. But most of these patterns are very Rust-specific and come down to syntax rather than semantics. For instance, the builder pattern, which is tbh also a tool to overcome one of Rust’s shortcomings (the inability to construct big structs easily and flexibly).

    I think you’re completely correct that these things are dogma (or “circlejerking” if you prefer that term). Just be flexible and open minded in how you approach problems and try to go for the simplest solution that works. KISS and YAGNI are honestly much better principles to go by than SOLID or OOP design patterns.

  • termaxima@slrpnk.net
    link
    fedilink
    arrow-up
    4
    ·
    6 hours ago

    99% of code is too complicated for what it does because of principles like SOLID, and because of OOP.

    Algorithms can be complex, but the way a system is put together should never be complicated. Computers are incredibly stupid, and will always perform better on linear code that batches similar operations together, which is not so coincidentally also what we understand best.

    Our main issue in this industry is not premature optimisation anymore, but premature and excessive abstraction.

  • melsaskca@lemmy.ca
    link
    fedilink
    arrow-up
    1
    ·
    6 hours ago

    OOP is good in a vacuum. In real life, where deadlines apply, you’re going to get some ugly stuff under the hood, even though the app or system seems to work.

  • dejected_warp_core@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    9 hours ago

    Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

    There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

    Congratulations. This is where you wind up long after learning the basics, once you start interacting with lots of code in the wild. You are not alone.

    Implementing things with pragmatism, when it comes to conventions and design patterns, is how it’s really done.

  • HaraldvonBlauzahn@feddit.org
    link
    fedilink
    arrow-up
    3
    ·
    8 hours ago

    I think that OOP is most useful in two domains: Device drivers and graphical user interfaces. The Linux kernel is object-oriented.

    OOP might also be useful in data structures. But you can as well think about them as “data structures with operations that keep invariants” (which is an older concept than OOP).

  • Azzu@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    11
    ·
    13 hours ago

    The main thing you are missing is that “loose coupling” does not mean “create an interface”. You can have all concrete classes and loose coupling or all classes with interfaces and strong coupling. Coupling is not about your choice of implementation, but about which part does what.

    If an interface simplifies your code, use one; if it doesn’t, don’t. The dogma of “use an interface everywhere” comes from people who saw good developers use interfaces to reduce coupling, didn’t understand the context they were used in, and just thought “hey, so interfaces reduce coupling I guess? Let’s mandate using them everywhere!”, which results in interfaces where they aren’t needed, without necessarily reducing coupling at all.
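
    A small Java sketch of that distinction (the names are made up for illustration): whether an interface exists and how tightly things are coupled vary independently.

    ```java
    // Hypothetical example: an interface is present, yet the coupling is tight,
    // because the service constructs its own dependency.
    interface UserRepository { int countUsers(); }

    class JdbcUserRepository implements UserRepository {
        public int countUsers() { return 42; } // stand-in for a real query
    }

    class TightReportService {
        // Swapping implementations still means editing this class.
        private final UserRepository repo = new JdbcUserRepository();
        String report() { return "users: " + repo.countUsers(); }
    }

    // No interface on the dependency, yet looser coupling: the collaborator is
    // injected, and this class only cares about the one method it actually calls.
    class LooseReportService {
        private final JdbcUserRepository repo;
        LooseReportService(JdbcUserRepository repo) { this.repo = repo; }
        String report() { return "users: " + repo.countUsers(); }
    }
    ```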

    • FunkFactory@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 hours ago

      As a dev working on a large project using Gradle, a lot of the time interfaces are useful as a means to avoid circular dependencies while breaking things up into modules. It can also really boost build times: if modules have to depend on concrete impls, that can kill the parallelization of the build. But I don’t create interfaces for literally everything, only if a type is likely going to be used across module boundaries. Which is a roundabout way of saying they reduce coupling, but I’m just noting it as a practical example of the utility you gain.

    • HereIAm@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      9 hours ago

      I think a large part of interfaces everywhere comes from unit testing and class composition. I had to create an interface for a Time class because I needed to test for cases around midnight. It would be nice if testing frameworks allowed you to mock concrete classes (maybe you can? I honestly haven’t looked into it); it could reduce the number of unnecessary interfaces.
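
      For what it’s worth, Mockito can mock concrete (non-final) classes, and for time specifically, java.time.Clock is built to be injected and pinned in tests without any hand-rolled interface. A rough sketch (MidnightChecker is a made-up example class):

      ```java
      import java.time.Clock;
      import java.time.Instant;
      import java.time.LocalTime;
      import java.time.ZoneOffset;

      class MidnightChecker {
          private final Clock clock;

          MidnightChecker(Clock clock) { this.clock = clock; }

          // True within five minutes of midnight, driven entirely by the injected clock.
          boolean isAroundMidnight() {
              LocalTime now = LocalTime.now(clock);
              return now.isBefore(LocalTime.of(0, 5)) || now.isAfter(LocalTime.of(23, 55));
          }

          public static void main(String[] args) {
              // Pin the clock to just before midnight: no interface, no mocking framework.
              Clock fixed = Clock.fixed(Instant.parse("2024-01-01T23:59:00Z"), ZoneOffset.UTC);
              System.out.println(new MidnightChecker(fixed).isAroundMidnight()); // true
          }
      }
      ```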

        • HereIAm@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          4 hours ago

          Yeah Moq is what I used when I worked with .NET.

          On an unrelated note; god I miss .NET so much. Fuck Microsoft and all that, but man C# and .NET feels so good for enterprise stuff compared to everything else I’ve worked with.

      • sik0fewl@lemmy.ca
        link
        fedilink
        arrow-up
        1
        ·
        5 hours ago

        This was definitely true in the Java world when mocking frameworks only allowed you to mock interfaces.

  • entwine@programming.dev
    link
    fedilink
    arrow-up
    18
    ·
    20 hours ago

    I think the general path to enlightenment looks like this (in order of experience):

    1. Learn about patterns and try to apply all of them all the time
    2. Don’t use any patterns ever, and just go with a “lightweight architecture”
    3. Realize that both extremes are wrong, and focus on finding appropriate middle ground in each situation using your past experiences (aka, be an engineer rather than a code monkey)

    Eventually, you’ll end up “rediscovering” some parts of SOLID on your own and applying them appropriately, without even realizing it.

    Generally, the larger the code base and/or team (which are usually correlated), the more that strict patterns and “best practices” can have a positive impact. Sometimes you need them because those patterns help wrangle complexity, other times it’s because they help limit the amount of damage incompetent teammates can do.

    But regardless, I want to point something out:

    the more these doubts grow, leading me to believe that most of it is just dogma that has gone far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.

    This attitude is a problem. It’s an attitude of ignorance, and it’s an easy hole to fall into but difficult to get out of. Nobody is “circlejerking OOP”. You’re making up a strawman to disregard something you failed at (e.g. the successful application of SOLID principles). Instead, perform some introspection and try to analyze why you didn’t like it without emotional language. Imagine you’re writing a postmortem for an audience of colleagues.

    I’m not saying to use SOLID principles, but drop that attitude. You don’t want to end up like those annoying guys who discovered their first native programming language, followed a Vulkan tutorial, and now act like they’re on the forefront of human endeavor because they imported a GLTF model into their “game engine” using assimp…

    A better attitude will make you a better engineer in the long run :)

    • marzhall@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      6 hours ago

      I dunno, I’ve definitely rolled into “factory factory” codebases where abstraction astronauts have gone to town on classes that have had only one real implementation in over a decade, and seen how far the cargo culting can go.

      It’s the old saying: “give a developer a tool, they’ll find a way to use it.” Having a distaste for mindless, dogmatic application of patterns is healthy for a dev in my mind.

    • Gonzako@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      8 hours ago

      You’ve described my journey to a T. You eventually find your middle ground, which is sadly not universal, and thus we shall forever fight the Stack Overflow wars.

  • JakenVeina@midwest.social
    link
    fedilink
    arrow-up
    23
    ·
    edit-2
    1 day ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    That one is indeed objectively horse shit. If your interface has only one implementation, it should not be an interface. That being said, a second implementation made for testing COUNTS as a second implementation, so context matters.

    In general, I feel like OOP principles like this are indeed used as dogma more often than not, in Java-land and .NET-land. There’s a lot of legacy applications out there run by folks who’ve either forgotten how to apply these principles soundly, or were never taught to in the first place. But I think it’s more of a general programming trend than any problem with OOP or its ecosystems in particular. Betcha we see similar things with Rust when it reaches the same age.

    • egerlach@lemmy.ca
      link
      fedilink
      English
      arrow-up
      5
      ·
      21 hours ago

      SOLID often comes up against YAGNI (you ain’t gonna need it).

      What makes software so great to develop (as opposed to hardware) is that you can (on the small scale) do design after implementation (i.e. refactoring). That lets you decide after seeing how your new bit fits in whether you need an abstraction or not.

    • boonhet@sopuli.xyz
      link
      fedilink
      arrow-up
      3
      ·
      edit-2
      23 hours ago

      Yeah… Interfaces are great, but not everything needs an interface.

      I ask myself: How likely is this going to have an alternative implementation in the future?

      If the answer is “kinda likely”, it gets an interface. If the answer is “idk, probably not? Why would it?” then it does not get an interface.

      Of course these days it’s more likely to be an unnecessary trait than an unnecessary interface. For me, I mean.

  • Sunsofold@lemmings.world
    link
    fedilink
    arrow-up
    7
    ·
    21 hours ago

    I have to wonder about how many practices in any field are really a ‘best in all cases’ rule vs just an ‘if everyone does it like this we’ll all work better together because we’re all operating from the same rulebook, even if the rules are stupid,’ thing or a ‘this is how my pappy taught me to write it,’ thing.

  • Feyd@programming.dev
    link
    fedilink
    arrow-up
    59
    ·
    edit-2
    1 day ago

    If it makes the code easier to maintain it’s good. If it doesn’t make the code easier to maintain it is bad.

    Making interfaces for everything, or making getters and setters for everything, just in case you change something in the future, makes the code harder to maintain.

    This might make sense for a library, but it doesn’t make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it’ll still be a lot less work than bloating the entire codebase with needless indirections every day.
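
    To make the getters-and-setters half of that concrete, here is a hedged Java sketch (Point is a hypothetical example type): the “just in case” version adds ceremony without any flexibility that ever gets used, while a record says the same thing in one line.

    ```java
    // "Just in case" ceremony: six members wrapping two values, none of it used for anything.
    class PointBean {
        private int x;
        private int y;
        public int getX() { return x; }
        public void setX(int x) { this.x = x; }
        public int getY() { return y; }
        public void setY(int y) { this.y = y; }
    }

    // Application code you can refactor at will usually gets away with this instead.
    record Point(int x, int y) {}
    ```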

    • termaxima@slrpnk.net
      link
      fedilink
      arrow-up
      1
      ·
      6 hours ago

      Getters and setters are superfluous in most cases, because you do not actually want to hide complexity from your users.

      To use the usual trivial example: if you change your circle’s circumference from a property to a function, I need to know! You just replaced a memory access with some arithmetic; depending on my behaviour as a user, this could be either great or really bad for my performance.
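
      In Java terms, the point might look like this (a made-up Circle class): both versions read identically at the call site, but one is a field read and the other recomputes on every call.

      ```java
      class Circle {
          private final double radius;
          private final double circumference; // precomputed once; the getter is a memory read

          Circle(double radius) {
              this.radius = radius;
              this.circumference = 2.0 * Math.PI * radius;
          }

          double getCircumference() { return circumference; }

          // If this became the implementation instead, every call would do arithmetic,
          // and callers, seeing only the uniform getter, couldn't tell the difference:
          // double getCircumference() { return 2.0 * Math.PI * radius; }
      }
      ```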

    • ExLisper@lemmy.curiana.net
      link
      fedilink
      arrow-up
      1
      ·
      10 hours ago

      Exactly this. And to know what code is easy to maintain, you have to see how a couple of projects evolve over time. Your perspective on this changes as you gain experience.

    • ugo@feddit.it
      link
      fedilink
      arrow-up
      19
      ·
      edit-2
      1 day ago

      I call it Mario-driven development, because oh no! The princess is in a different castle.

      You end up with seemingly no code doing any actual work.

      You think you found the function that does the thing you want to debug? Nope, it defers to a different function, which calls a method of an injected interface, which creates a different process calling into a virtual function, which loads a DLL whose code lives in a different repo, which runs an async operation deferring the result to some unspecified later point.

      And some of these layers silently catch exceptions eating the useful errors and replacing them with vague and useless ones.

    • Mr. Satan@lemmy.zip
      link
      fedilink
      arrow-up
      13
      ·
      1 day ago

      Yeah, this. Code for the problem you’re solving now, think about the problems of the future.

      Knowing OOP principles and patterns is just a tool. If you’re driving nails you’re fine with a hammer, if you’re cooking an egg I doubt a hammer is necessary.

    • Valmond@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      1 day ago

      I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

      In case you, like, recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any benefits I’d gladly hear them!

      • Hetare King@piefed.social
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 day ago

        If you’re directly interacting with any sort of binary protocol, e.g. file formats, network protocols, etc., you definitely want your variable types to be unambiguous. For future-proofing, yes, but also because I don’t want to have to go confirm whether I remember correctly that long is the same size as int.

        There’s also clarity of meaning; unsigned long long is a noisy monstrosity, uint64_t conveys what it is much more cleanly. char is great if it’s representing text characters, but if you have a byte array of binary data, using a type alias helps convey that.

        And then there are type aliases that are useful because they have different sizes on different platforms like size_t.

        I’d say that generally speaking, if it’s not an int or a char, that probably means the exact size of the type is important, in which case it makes sense to convey that using a type alias. It conveys your intentions more clearly and tersely (in a good way), it makes your code more robust when compiled for different platforms, and it’s not actually more work; that extra #include <cstdint> you may need to add pays for itself pretty quickly.

        • Valmond@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          1 day ago

          So we should not have #defines in the way, right?

          Like INT32 instead of “int”. I mean, if you don’t know the size, you’re probably not doing network protocols or reading binary stuff anyway.

          uint64_t is good IMO, a bit long (why the _t?) maybe, but it’s not one of the atrocities I’m talking about where every project had its own defines.

          • Feyd@programming.dev
            link
            fedilink
            arrow-up
            3
            ·
            22 hours ago

            “int” can be different widths on different platforms. If all the compilers you must compile with have standard definitions for specific widths, then great, use ’em. That hasn’t always been the case, in which case you have to roll your own. I’m sure some projects did it where it was unneeded, but when you have to do it, you have to do it.

            • Valmond@lemmy.world
              link
              fedilink
              arrow-up
              1
              ·
              10 hours ago

              So show me two compatible systems where int has different sizes.

              This is folklore IMO, or the systems are incompatible anyway.

                • Valmond@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  6 hours ago

                  Okay, then give me an example where this matters. If an int doesn’t have the same size, like on a Nintendo DS versus Windows (wildly incompatible), I struggle to find a use case where it would help you out.

          • Hetare King@piefed.social
            link
            fedilink
            English
            arrow-up
            2
            ·
            20 hours ago

            The standard type aliases like uint64_t weren’t in the C standard library until C99 and in C++ until C++11, so there are plenty of older code bases that would have had to define their own.

            The use of #define to make type aliases never made sense to me. The earliest versions of C didn’t have typedef, I guess, but that’s like, the 1970s. Anyway, you wouldn’t do it that way in modern C/C++.

          • xthexder@l.sw0.com
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            23 hours ago

            I’ve seen several codebases that have a typedef or using keyword to map uint64_t to uint64 along with the others, but _t seems to be the convention for built-in std type names.

      • SilverShark@programming.dev
        link
        fedilink
        arrow-up
        7
        ·
        1 day ago

        We had it because we needed to compile for Windows and Linux on both 32 and 64 bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions within the core header file with #ifndef and such.

        • Valmond@lemmy.world
          link
          fedilink
          arrow-up
          4
          ·
          1 day ago

          But you can use a 64-bit int on 32-bit Linux, and vice versa. I never understood the benefit of tagging the stuff. You have to go really far back in time to find an int that isn’t compiled to a 32-bit signed int. There were also already long long and size_t… why make new ones?

          Readability maybe?

          • Consti@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            1 day ago

            Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.

            • Valmond@lemmy.world
              link
              fedilink
              arrow-up
              2
              ·
              1 day ago

              Show me one.

              I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other incompatible platform; it doesn’t even make sense.

              • Consti@lemmy.world
                link
                fedilink
                arrow-up
                2
                ·
                22 hours ago

                Basically anything low level. When you need a byte, you also don’t use an int, you use a uint8_t (reminder that char is actually not defined to be signed or unsigned: “Plain char may be signed or unsigned; this depends on the compiler, the machine in use, and its operating system”). Any time you need to interact with another system, like hardware or networking, it is incredibly important to know how many bits the other side uses to avoid mismatches.

                As for purely the size of an int, the most famous example is the Ariane 5 launch failure, where an integer overflow crashed the rocket. OWASP (the Open Worldwide Application Security Project) lists integer overflows as a security concern, though not ranked very highly, since they only cause problems when combined with buffer accesses (using user input in some arithmetic operation that may overflow into unexpected ranges).

                • Valmond@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  10 hours ago

                  And a byte wasn’t even guaranteed to be 8 bits.

                  Nice example, but I’d say it’s kind of niche 😁 It reminds me of the underflow in a video game that turned the most peaceful NPC into a warmongering lunatic. But defines wouldn’t have helped with that either.

          • SilverShark@programming.dev
            link
            fedilink
            arrow-up
            1
            ·
            1 day ago

            It was a while ago indeed, and readability does play a big role. Also, it becomes easier to just type it out. Of course auto complete helps, but it’s just easier.

  • Beej Jorgensen@lemmy.sdf.org
    link
    fedilink
    arrow-up
    17
    ·
    1 day ago

    I’m a firm believer in “Bruce Lee programming”. Your approach needs to be flexible and adaptable. Sometimes SOLID is right, and sometimes it’s not.

    “Adapt what is useful, reject what is useless, and add what is specifically your own.”

    “Notice that the stiffest tree is most easily cracked, while the bamboo or willow survives by bending with the wind.”

    And some languages, like Rust, don’t fully conform to a strict OO heritage like Java does.

    "Be like water making its way through cracks. Do not be assertive, but adjust to the object, and you shall find a way around or through it. If nothing within you stays rigid, outward things will disclose themselves.

    “Empty your mind, be formless. Shapeless, like water. If you put water into a cup, it becomes the cup. You put water into a bottle and it becomes the bottle. You put it in a teapot, it becomes the teapot. Now, water can flow or it can crash. Be water, my friend.”

    • Frezik@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 day ago

      It’s been interesting to watch how the industry treats OOP over time. In the 90s, JavaScript was heavily criticized for not being “real” OOP. There were endless flamewars about it. If you didn’t have the sorts of explicit support that C++ provided, like a class keyword, you weren’t OOP, and that was bad.

      Now we get languages like Rust, which seems completely uninterested in providing explicit OOP support at all. You can piece together support on your own if you want, and that’s all anyone cares about.

      JavaScript eventually did get its class keyword, but now we have much better reasons to bitch about the language.

      • Brosplosion@lemmy.zip
        link
        fedilink
        arrow-up
        1
        ·
        20 hours ago

        It’s funny because in C++, inheritance is almost frowned upon now due to the performance and complexity hits.

        • wicked@programming.dev
          link
          fedilink
          arrow-up
          2
          ·
          5 hours ago

          It’s been frowned upon for decades.

          That leads us to our second principle of object-oriented design: Favor object composition over class inheritance

          • Design Patterns - Elements of Reusable Object-Oriented Software (1994)
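
          A minimal Java sketch of that advice (the stack classes here are made up, though java.util.Stack really does extend Vector): inheriting exposes everything the base class does, while composing exposes only what you intend.

          ```java
          // Inheritance: this stack IS-A Vector, so callers also get insertElementAt()
          // and friends, which can break the stack discipline. (java.util.Stack made
          // exactly this mistake.)
          class InheritedStack<T> extends java.util.Vector<T> {
              void push(T item) { add(item); }
              T pop() { return remove(size() - 1); }
          }

          // Composition: the stack HAS-A list and exposes only the operations it means to.
          class ComposedStack<T> {
              private final java.util.ArrayList<T> items = new java.util.ArrayList<>();
              void push(T item) { items.add(item); }
              T pop() { return items.remove(items.size() - 1); }
          }
          ```
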
  • iii@mander.xyz
    link
    fedilink
    English
    arrow-up
    29
    ·
    edit-2
    1 day ago

    Yes, OOP and all the patterns are more often than not bullshit. Java is especially well known for that. “Enterprise Java” is a well-known meme.

    The patterns and principles aren’t useless. It’s just that in practice most of the time they’re used as hammers even when there’s no nail in sight.

        • iii@mander.xyz
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          1 day ago

          Can I bring my own AbstractSingletonBeanFactoryManager? Perhaps through some runtime dependency injection? Is there a RuntimePluginDiscoveryAndInjectorInterface I can implement for my AbstractSingletonBeanFactoryManager?

    • SinTan1729@programming.dev
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 day ago

      As an amateur with some experience in the functional style of programming, anything written the SOLID way seems so unreadable to me. Everything is scattered, and it just doesn’t feel natural. I feel like you need to know how things are named, and what the whole thing looks like, before anything makes any sense. I thought SOLID was supposed to make code more local, but at least to my eyes, it makes everything a tangled mess.

      • Matty Roses@lemmygrad.ml
        link
        fedilink
        arrow-up
        3
        ·
        22 hours ago

        It’s not supposed to make it more local; it’s supposed to make each piece conform to a single responsibility and allow encapsulation of that.

      • iii@mander.xyz
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

        Especially in Java, it relies extremely heavily on the IDE to make sense, at least to me.

        If you’re a minimalist like me and prefer the text editor to be separate from the linter, compiler, and linker, it’s not feasible, because everything is so verbose, spread out, and coupled by convention.

        So when I do work in Java, I reluctantly bring out Eclipse. It just doesn’t make any sense without it.

        • SinTan1729@programming.dev
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          1 day ago

          Yeah, same. I like to code in Neovim, and OOP just doesn’t make any sense in there. Fortunately, I don’t have to code in Java often. I had to install Android Studio just because I needed to make a small bugfix in an app, and it was so annoying. The fix itself was easy, but I had to spend around an hour trying to figure out where exactly the relevant code was.

  • Log in | Sign up@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    23 hours ago

    The promise of OOP is that if you thread your spaghetti through your meatballs and baste them in bolognaise sauce before you cook them, it’s much simpler and nothing ever gets tangled up, so that when you come to reheat the frozen dish a month later it’s very easy to swap out a meatball for a different one.

    It absolutely does not even remotely live up to its promise, and if it did, no one in their right mind would be recommending an abstract singleton factory, and there wouldn’t be quite so many shelves of books about how to do OOP well.

  • Windex007@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    1 day ago

    Whoever is demanding that every class be an implementation of an interface started their career in C#, guaranteed.

      • Windex007@lemmy.world
        link
        fedilink
        arrow-up
        8
        ·
        1 day ago

        In my professional experience working with both, Java shops don’t blindly enforce this, but C# shops tend to.

        Striving for loosely coupled classes is objectively a good thing. Dogmatic enforcement of interfaces even for single implementors is using a sledgehammer to drive a finishing nail.