
Unlearning SOLID: My Path to More Nuanced Code Evaluation

Mario Mainz

Introduction

If you are a software engineer, you have probably heard about the SOLID principles. They are a collection of principles meant to guide developers towards high-quality, maintainable code. And they are probably the most well-known principles when it comes to code quality.

Like many of you, I’ve used those principles in the past to judge both my own and other people’s code. However, I rely on them less and less, having come to realize that they’re not as good as I once thought. In this post, I want to share the journey that led me to that realization and what I do instead nowadays.

How I used to use it


Although I was a believer in the SOLID principles for a long time, I never followed them religiously. The one I always found the most useful was the single responsibility principle, so I would very deliberately think about what the responsibility of the module I was working on should be. Then I would review the finished module for any Liskov or Demeter violations, check for unnecessary dependencies, make sure the interfaces actually made sense, and refactor accordingly. This usually yielded code that I was happy with.

However, if I ever got into a discussion about my or other people’s code, I always treated arguments based on the SOLID principles as very sound arguments. They are universally accepted principles, after all. In hindsight, that might have been a bit naive. But since I had the tendency to be pragmatic, I didn’t encounter any big issues with this approach.

Realizing the shortcomings of the principles

Then I switched jobs and stumbled upon some code that did apply the SOLID principles, but somehow it looked really weird and was also really difficult to work with. This is when I slowly started to question how useful those principles actually are. Let me give you two examples of what kind of code made me start down this path of questioning SOLID.

Interface Segregation

I was working on a large TypeScript codebase at the time. One symptom of the code was that there were many duplicated type definitions. Let me give you some oversimplified examples. You would have code like this in one file:

// getFullName.ts

type User = {
  id: string;
  firstName: string;
  lastName: string;
};

function getFullName(user: User) {
  return `${user.firstName} ${user.lastName}`;
}

And then code like this in another file:

// canUpdateAccountSettings.ts

type User = {
  id: string;
  isAdmin: boolean;
  isPoweruser: boolean;
};

function canUpdateAccountSettings(user: User) {
  return user.isAdmin || user.isPoweruser;
}

What’s interesting about this is that both of these functions will receive the same object. So naturally, I asked why we defined separate types for this and not just one type that describes the user entirely. The answer: interface segregation.

Now, what does the interface segregation principle say again?

Clients should not be forced to depend upon interfaces that they do not use.

At first glance, this seems correct. getFullName doesn’t need the isAdmin field, so by declaring a User type that doesn’t have this, we now don’t depend on it. Since TypeScript has a structural type system, I can still pass the full User object instead, as long as it has at least the fields defined in the type specific to getFullName.
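To make that concrete, here is a minimal sketch; the userFromDb object and its values are made up for illustration:

// A full user object, e.g. fresh from the database (hypothetical data).
const userFromDb = {
  id: "42",
  firstName: "Ada",
  lastName: "Lovelace",
  isAdmin: true,
  isPoweruser: false,
};

// getFullName only requires id, firstName and lastName, so the extra
// fields are simply ignored by the structural type check.
getFullName(userFromDb); // "Ada Lovelace"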

But hold on a second. Is there really any benefit to this? The point of the exercise is that we don’t have to touch getFullName when a field of User that it doesn’t use changes. But do we even need a separate type for that? Imagine just doing this:

// user.ts

type User = {
  id: string;
  firstName: string;
  lastName: string;
  isAdmin: boolean;
  isPoweruser: boolean;
};

// getFullName.ts

import { type User } from "./user";

function getFullName(user: User) {
  return `${user.firstName} ${user.lastName}`;
}

Okay, the function looks the same. We’re just importing a User type now. What happens if I remove or change the isAdmin field? Exactly! Absolutely nothing! The code just continues to work, since it’s not using that field anyway. This is the power of duck typing in JavaScript. And because TypeScript uses a structural type system, the compiler gives us the same flexibility: any object that has the required fields is accepted, extra fields and all.

So in the first example, we duplicated the type without any benefit at all in an attempt to write better code. Doesn’t sound great, huh?

Dependency Inversion


I have another example, this time about the dependency inversion principle. Just as before, it happened when switching jobs and laying my eyes on a new codebase that I hadn’t seen before.

For most of my career, I worked in dynamic languages. That meant I could mock other classes and modules at runtime in my tests, so I never had a strong need for dependency injection just to make something testable. As a result, I only used dependency injection for dependencies that had multiple implementations, i.e. that were an abstraction over something. Take a logger that can write either to STDOUT or to the file system: just inject a LogWriter with a write function, and your logger can be blissfully unaware of where the logs are going.
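To sketch what I mean (the LogWriter type and createLogger function are made up for illustration, not taken from any real codebase):

import * as fs from "node:fs";

// The abstraction: anything with a write function can receive log lines.
type LogWriter = { write: (line: string) => void };

const stdoutWriter: LogWriter = {
  write: (line) => process.stdout.write(line + "\n"),
};

const fileWriter = (path: string): LogWriter => ({
  write: (line) => fs.appendFileSync(path, line + "\n"),
});

// The logger only knows about LogWriter, not where the logs end up.
function createLogger(writer: LogWriter) {
  return { info: (message: string) => writer.write(`[INFO] ${message}`) };
}

const logger = createLogger(stdoutWriter); // or createLogger(fileWriter("app.log"))
logger.info("starting up");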

However, this new codebase had a lot of dependency injection going on. It was a TypeScript codebase using Jest, so the team did have the option of simply mocking modules at runtime, which left me puzzled about why injection was so prevalent in the code. Worst of all, there were several levels of nesting to it. You would regularly find code like this:

export function buildSomething(dependencyA, dependencyB, dependencyC) {
  const serviceA = buildServiceA(dependencyA, dependencyB);
  const serviceB = buildServiceB(dependencyB, dependencyC);
  const serviceC = buildServiceC(serviceA, dependencyC);
  // many more of these...

  function doSomething() {
    // do something with the dependencies
  }

  return { doSomething };
}

Of course, buildServiceA would in turn also be composed of several build* functions. This meant that when you were working in one of the lower layers, finding the implementation of one of those dependencies was pure hell. You couldn’t just use Go To Definition; that would only take you to the function parameter. From there, you’d do Find All References to find where the parameter is injected, then Go To Definition on that injected variable, only to arrive at the next function parameter. Sometimes Go To Implementation would let you skip some of this, sometimes not. And sometimes even that wouldn’t help, because you wanted to see how a specific dependency was initialized, not what its implementation looks like. Believe me when I say I lost my mind a few times searching for a dependency’s point of origin.

Naturally, I asked many questions about why the code was structured this way. I wanted to find out what drove them to do this, and whether they actually thought it was a good idea. Most of the responses were along the lines of “this is good design”, “it’s following SOLID”, “it makes testing easier”. In my eyes, none of these arguments survived being challenged. After all, what are the downsides of transforming the example above into just this?

import { serviceA } from "./serviceA";
import { serviceB } from "./serviceB";
import { serviceC } from "./serviceC";
// many more of these...

export function doSomething() {
  // do something with the dependencies
}

I personally don’t see any. It is certainly less code. It is also easier to read, because there are fewer layers of indirection. I can jump directly to the implementation of each dependency, since the import statement tells me the path. In tests, I can use my test framework’s module mocking mechanism to supply stubs. And if I ever need to swap out the whole implementation of one of those modules, I just change the path to point to a file that exports the same symbols with the same signatures. So it’s still perfectly decoupled.
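For the testing part, here is a rough sketch of what that looks like with Jest’s module mocking; the test file, the fetchData stub, and the exact module paths are hypothetical:

// doSomething.test.ts
import { doSomething } from "./doSomething";

// Replace the real serviceA module with a stub for this test file only.
jest.mock("./serviceA", () => ({
  serviceA: { fetchData: jest.fn().mockReturnValue({ ok: true }) },
}));

test("doSomething works with the stubbed serviceA", () => {
  doSomething();
  // assertions against the stub would go here
});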

Starting to reflect on SOLID

Those experiences made me start to reflect on my use of the SOLID principles. Have I been too dogmatic in the past? I don’t think so. Because, as I already described earlier, I never applied the principles religiously. But I certainly felt like I hadn’t questioned the validity of the principles enough.

Now, you could probably argue that both of these instances are a misunderstanding of the principles. But you can’t argue with the fact that the principles are what drove someone to write code like this. And these weren’t just one or two isolated examples; there were numerous instances, which means there was at least a sizeable group inside the organization that shared the misunderstanding. How helpful is a principle if it can be misunderstood so easily by so many people? It makes me question whether vague principles like these are even worth writing down.

So what to do instead?


But if I can’t rely on SOLID anymore, what do I do instead? Sure, these weren’t the only things I was looking out for in code. There is much more to readable and maintainable code than the SOLID principles. But they were a big part of my foundation for telling good code from bad code, so I was craving more tools in my toolbox.

CUPID

The first thing I stumbled upon was a blog post by Dan North about why every single element of SOLID is wrong. He has some very solid (haha) criticism of the principles that reflects the experience I outlined above. He also has a blog post proposing CUPID as an alternative. While I agree with many of his ideas, I was skeptical about replacing one set of principles with another. So while I keep these properties in mind when judging code, I wanted more.

Low coupling, high cohesion

If I don’t want just another set of principles, then I need to go back to the properties of the code itself. What are universal, objective qualities of good code? The biggest one is probably low coupling and high cohesion. We’ve all heard about this before. Even this depends on context, though: decoupling something can make the code harder to read, which only pays off if I actually benefit from the decoupling. So I still need to be careful about the degree to which I apply it.

What else? As I reflected on the qualities that make code truly effective, I realized that there’s another crucial aspect that often gets overshadowed by more technical considerations: readability.

Readability

One important aspect of code is that it will always be read far more often than it is written. So we should optimize for code being easy to read and understand, not necessarily easy to write, at least when those two are at odds. However, what counts as readable is also highly subjective. Someone used to JavaScript will find Lisp to be a mess, although once they get used to the syntax, many people come to appreciate its simplicity (myself included). I don’t think this is a big issue. You should be aware of the environment your teammates are working in, and since you all work in the same or a similar environment, personal preferences should converge over time.

Readability goes beyond just structure. It encompasses how we name our variables and functions, how we format our code, and even how we organize our logic. And just like with coupling and cohesion, the pursuit of readability can sometimes conflict with other goals, requiring us to make thoughtful trade-offs.

Digging even deeper


I started to look for resources that did not come from the usual authors like Uncle Bob. While doing that, I stumbled upon A Philosophy of Software Design by John Ousterhout. It’s a really nice, concise book with some excellent insights that I hadn’t heard before. It also directly contradicts some of the usual advice from books like Clean Code, which made it perfect for getting a more balanced perspective on what good code is.

So, what does the book say? It says a lot of things, but let me give you a recap of the points that I found the most interesting.

Complexity

The author defines complexity as anything related to the structure of a software system that makes it hard to understand or modify. When building a system that will be developed over a long time by a diverse team of engineers, complexity is our biggest enemy. One symptom of a complex system is that it requires a high cognitive load to understand. Another is what the author calls change amplification: to change one thing about the system, you have to make edits in many disconnected places in the code. It is our job as engineers to write code that not only we can understand, but that all other engineers can understand as easily as possible. This is tricky because complexity is less apparent to the person writing the code, who has all the context and history in their head at the time. That is why code reviews are so valuable.

The key property of complexity, though, is that it is incremental. There usually isn’t a single source of overwhelming complexity that makes the code hard to work with; there are lots of small chunks of complexity that add up. This is essential to understand. How often do you think “doing it this way is not ideal, but it’s a minor flaw, it won’t affect the codebase overall” when reviewing code? Accepting even small amounts of unnecessary complexity moves the codebase one more increment towards being hard to understand and unmaintainable.

So what the author says we need to do is to adopt a zero-tolerance stance towards avoidable complexity. While I’m sympathetic towards that, I just established that I’m trying to be more pragmatic in the future and not treat anything as a dogmatic truth. So I’m not sure zero-tolerance is the best idea. But I get what he is trying to say. Even small flaws that look innocent on their own can add up to become an unmaintainable mess. So it pays off to try really hard to eliminate them. But I do think there are rare situations where you can make a compromise without any risks.

Deep vs shallow modules

One way to manage complexity is to encapsulate it in modules. The user of the module only has to understand the module interface, not how its behavior is implemented. For any given module, you can compare the size of the module’s public interface and the size of the implementation code. If there is a big interface and just small implementation bodies, this is a shallow module. If there’s a small interface with big implementation bodies behind that interface, this is a deep module.

The author states that deep modules are better than shallow modules. They hide more implementation details and therefore expose the user of the module to less complexity. This is something I personally never thought about in such explicit terms before, but I think it is right.
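A contrived sketch of the difference, using made-up names:

// Shallow: many tiny functions; the interface is nearly as big as the code
// behind it, so callers gain little by going through it.
export function readConfigFile(path: string): string { /* read the file */ return ""; }
export function parseConfig(raw: string): Record<string, string> { /* parse it */ return {}; }
export function validateConfig(config: Record<string, string>): void { /* validate it */ }

// Deep: one small entry point that hides reading, parsing and validation.
export function loadConfig(path: string): Record<string, string> {
  // read the file, parse it, validate it, apply defaults...
  return {};
}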

And this doesn’t just go for modules; the author argues it is also true for every single function, public or private. If that’s the case, it runs somewhat contrary to the usual Clean Code advice of crafting small functions of just a few lines. This was a bit hard for me to accept. I personally like small functions a lot, and I find functions increasingly difficult to read as soon as they’re too big to fit on a single screen. But I still think the statement is largely true.

I now try to incorporate this perspective into my thinking. I still try to keep my functions small enough to fit on a single screen, but maybe it’s not necessary to extract as many two-line functions as I used to. At least when I do, I ask myself whether it is worth it in this case. It can still be reasonable when I’m encapsulating a behavior that I don’t want to duplicate. But I also keep in mind that I should make my functions deeper where I can.

Comments


Another point where the author departs from Clean Code is comments. While Clean Code advises against almost all comments, John Ousterhout argues that comments drastically improve a system’s design. To be clear, he still thinks that if you can make the code itself obvious, you should do that instead of writing a comment. But there is a lot of information in the engineer’s head at design time that cannot be encoded in source code: the meaning of certain values, the rationale for a design decision, how a module works at a high level, or the conditions under which a method may be called.

A common argument is that if you want to understand how a module works, you can just read the code. A really good counter from the author is that this defeats the purpose of a module, which is to hide complexity from its user. If I have to understand the entire module to be able to use it, that greatly diminishes the value of having an abstraction in the first place. So, to be precise, comments are essential for abstractions, because the function signature alone is rarely enough to describe an abstraction well enough that I don’t have to read the implementation.
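Here is the kind of interface comment he is advocating, sketched on a made-up function:

type User = { id: string; firstName: string; lastName: string };

/**
 * Returns the users that are allowed to see the given document.
 *
 * The result may be cached for up to 60 seconds, so callers must not rely
 * on it reflecting permission changes immediately. Throws if no document
 * with the given id exists.
 */
export async function visibleTo(documentId: string): Promise<User[]> {
  // the lookup and caching stay hidden behind this small interface
  return [];
}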

He also addresses common criticism against comments in his book. I won’t go into all the details here. But the chapter on comments definitely made me rethink my stance on comments. I’ve now started looking for good opportunities to use comments in a meaningful way when I write code. I think I’m still learning, but reading this has made me less dogmatic about the whole thing, which is good.

The essence of good code

These are great things to carry in your toolbox, but they are still context-dependent. They still don’t define the essence of good code. So what if we approach this from a business perspective? Most of us write code to make a living. So what properties make code the most economical? The most economical code delivers the highest value at the lowest cost.

What gives our code value? It has value if it solves a problem so that we can sell the product to someone. Assuming we correctly understood the customer’s problem (which can be a big if), we can say the code needs to do what it’s supposed to do. It needs to do its job, ideally without any bugs or undefined behavior. So the value of our code lies in it doing what it’s supposed to do.

What defines the cost of our code? Mostly the time we spend on it. It could also include the cost of the infrastructure that runs the code, but let’s be real: for most of us, that is a fraction of the developers’ salaries. So I think we can focus on time.

Does that mean we should try to spend the minimum amount of time on creating the code? Well, kind of, but not necessarily in the way you think. We know that if we have to go back to some code to fix a bug, that costs orders of magnitude more time than writing correct code right away. It also pays off to design our code so that it is easy to change. We all know it’s normal for requirements to change, and if adapting our code takes a lot longer because we didn’t design it with care, we will almost always spend more total time on it. So what we really want to minimize is the time spent on the code over its entire lifecycle. This is different from hacking together something that barely works as quickly as possible. So, being reasonably easy to change is our second essential property, the one that relates to the cost of the code.

Of course, you can also overengineer your code, so we have to find the right balance and identify the things that are most likely to change. This is part of the design process.

So that leaves us with two essential properties:

  • The code does what it is supposed to do.
  • The code is easy to change.

These are good fundamentals to focus on when judging code. And I think these include enough context so that we can get away from dogmatically applying principles. They also allow us to still use those principles but in a more nuanced way, so the principles themselves still carry value.

Conclusion


So there you have it - my journey from being an acolyte of SOLID to becoming a heretic. It was eye-opening to me, and I hope this article helps you to widen your perspective as well.

Key takeaways from this journey include:

  1. Context matters: There’s no one-size-fits-all approach to writing good code. What works in one situation might be counterproductive in another.
  2. Beware of dogma: Blindly following principles, even well-established ones like SOLID, can lead to unnecessarily complex or hard-to-maintain code. It’s crucial to consider the actual impact on readability, maintainability, and efficiency.
  3. Embrace alternative perspectives: There are a ton of opinions out there on good software design. They can provide fresh insights and challenge our preconceptions about what constitutes good code.
  4. Focus on fundamental qualities: At its core, good code does what it’s supposed to do and is easy to change. These fundamental properties should guide our evaluation more than any specific set of principles.

Moving forward, I would encourage you to cultivate a more flexible and nuanced approach to code evaluation. Instead of relying solely on predefined principles, strive to understand the underlying reasons for various coding practices. Regularly question and reevaluate your beliefs about what constitutes good code.

Remember, we’re not throwing the baby out with the bathwater here. Those principles can still be useful, but think of them more like guidelines than strict rules. Use them when they make sense, ditch them when they don’t.

By keeping these questions in mind and remaining open to new ideas and perspectives, we can all become better developers and write code that stands the test of time.