How to identify a coder from the 90s

Modernizing the codebase and modernizing developers are two completely different challenges.

Over the last decades, many enterprises have moved from C/C++/Delphi to C#/Java, because the latter are claimed to be more consistent and safe. I imagine it happens like this: the technical management comes in and says:

“Ok team, for 20 years we’ve been writing code in C++, and now we will write in C# because it’s better. But no worries, it has roughly the same syntax, so it’s similar.”

Here I touch on the history of programming languages and note some habits and viewpoints of those who are stuck in the previous century.

The compiler is their enemy

A compiler, by definition, is a tool that translates one language into another. And for developers living in the 90s, that’s where its usefulness ends. It lets them write in a proper language instead of machine code.

So the compiler throws errors for typos in keywords, unclosed braces, or missing semicolons, that is, whenever it can’t parse the code. And that’s annoying, since such problems are so easy for a human to fix. For those guys, the compiler is a necessary evil: they fight with it, they make fun of it, and they trick it.

A good example of this attitude in many languages is type casting:

var fish = (Fish)animal;  // Compiler, I know this here is a fish.
fish.Swim();              // Shut up and let it swim.

It’s the same thinking: the developer is annoyed by the compiler. They know the exact type of the object at runtime and resent that they can’t just write animal.Swim();, having instead to cast just to silence the compiler.

You shouldn’t be annoyed; you should be worried. The compiler has flagged a broken information flow in the system. At some point, at least when the object was created, its real type was known, and then that knowledge got lost. Try thinking: “Compiler, why do I know something you don’t?”

As Eric Lippert (one of the C# designers) suggests, instead of shutting it up, you should adjust the program so that the compiler does have a handle on reality. It should be your friend, you should listen to it and trust it. However…
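Concretely, a blind cast can often be replaced by a type test that the compiler verifies. A minimal sketch, reusing the Animal/Fish names from the cast snippet above (the rest of the hierarchy is assumed for illustration):

```csharp
using System;

abstract class Animal { }

class Fish : Animal
{
    public void Swim() => Console.WriteLine("swimming");
}

class Program
{
    static void Main()
    {
        Animal animal = new Fish();

        // Pattern matching instead of a blind (Fish) cast:
        // 'fish' is only in scope when the test succeeds,
        // and the impossible case is handled instead of crashing.
        if (animal is Fish fish)
            fish.Swim();
        else
            Console.WriteLine("not a fish");
    }
}
```

Better still, keep the variable typed as Fish from the point of creation; then the knowledge is never lost and neither the cast nor the test is needed.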

They’d still rather trust people

The other day I had a discussion with a colleague. Imagine a method:

void SaveUser(string id)
{
    // write to database, whatever
}

My task was to add a method doing similar things but accepting a username. This is what I came up with:

public class Id
{
    private readonly string _id;
    public Id(string id) => _id = id;
    public override string ToString() => _id;
}

public class Username
{
    private readonly string _username;
    public Username(string username) => _username = username;
    public override string ToString() => _username;
}

void SaveUser(Id id)
{
    // write to database, whatever
}

void SaveUser(Username username)
{
    // write to database, whatever
}

“Why bring two classes, if you can just add SaveUserByName(string username)?”

Well because I trust the compiler more than humans. Because we get tired and distracted. Because we want to go home on Friday evening. You get the point?

The 90s approach is to create types only when you feel that yet another method accepting the same 7 parameters is kind of lame, so those should be united. That’s the concept of structs in C: making types to group data. But you should also make types to discern data.

The compiler can’t tell string id from string username, but it can tell an Id from a Username with strings inside. And it’ll shout if you mix them up. That’s why strongly, statically typed languages are safer, but only if their safety features are actually used. Not leveraging such power is called primitive obsession. Instead…
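The payoff shows up as soon as a signature takes more than one string. A minimal self-contained sketch, reusing the Id/Username wrappers from above; RenameUser is a hypothetical method invented for illustration:

```csharp
using System;

public class Id
{
    private readonly string _id;
    public Id(string id) => _id = id;
    public override string ToString() => _id;
}

public class Username
{
    private readonly string _username;
    public Username(string username) => _username = username;
    public override string ToString() => _username;
}

class Program
{
    // Primitive obsession: nothing stops a caller from swapping these.
    static void RenameUser(string id, string newName) =>
        Console.WriteLine($"user {id} renamed to {newName}");

    // Wrapper types: the same mistake becomes a compile error.
    static void RenameUser(Id id, Username newName) =>
        Console.WriteLine($"user {id} renamed to {newName}");

    static void Main()
    {
        string id = "42", newName = "alice";
        RenameUser(newName, id);  // oops: compiles fine, wrong at runtime

        // RenameUser(new Username("alice"), new Id("42"));
        // ^ error CS1503: cannot convert from 'Username' to 'Id'
        RenameUser(new Id("42"), new Username("alice"));  // only this order compiles
    }
}
```

The string version silently corrupts data on Friday evening; the wrapper version refuses to build.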

They focus on efficiency

The most popular excuse developers from the 90s have is the time and space performance of the code. They diligently point it out at code reviews, not really caring about things like method responsibilities and overall design.

“All those objects you create everywhere — they overload the memory!”

They often don’t trust things like LINQ or the Stream API. They’re afraid of losing the sense of control and speed, preferring to make, and then debug, off-by-one errors.
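A sketch of that trade-off, with a made-up prices array: the manual loop carries a classic off-by-one bug (<= where < belongs), while the LINQ version has no index to get wrong.

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] prices = { 10, 25, 40 };

        // The 90s way: manual index bookkeeping invites off-by-one bugs.
        int total = 0;
        try
        {
            for (int i = 0; i <= prices.Length; i++)  // bug: '<=' should be '<'
                total += prices[i];                   // throws on the last iteration
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("off-by-one strikes again");
        }

        // The LINQ way: no index at all, so no off-by-one to debug.
        int sum = prices.Sum();
        Console.WriteLine(sum);  // 75
    }
}
```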

So this is about two software qualities: correctness and efficiency. And back in the day, with personal computers only just emerging, performance was reasonably the top priority.

Now we live in a different world. Memory is cheap, processors are fast, and most bottlenecks are in the wires. New languages are designed to reduce coding errors, with ideas like default immutability (e.g. Rust) or default null-safety (e.g. Kotlin). These come with a performance trade-off, which shows that the industry now focuses on code correctness rather than efficiency.

I am not saying performance doesn’t matter at all; it does, somewhere, sometimes. Yet efficient code is not a general requirement anymore, so the need for it should be supported with business reasoning in every particular case. That’s the way to avoid the biggest evil: premature optimization.

Software modernization requires not only a technical upgrade but also a mental shift. Otherwise, developers perceive the new tech as pesky bureaucracy. Reeducation is needed, and it’s a worthwhile investment.

techie | music addict | volunteer | language nerd | coffee container | couch philosopher