The Leprechauns of Software Engineering
This issue ended up with a strong Leprechauns of Software Engineering vibe. The précis from the book landing page:
The software profession has a problem, widely recognized but which nobody seems willing to do anything about. You can think of this problem as a variant of the well known "telephone game", where some trivial rumor is repeated from one person to the next until it has become distorted beyond recognition and blown up out of all proportion.
Unfortunately, the objects of this telephone game are generally considered cornerstone truths of the discipline, to the point that their acceptance now seems to hinder further progress.
The corpus of such “cornerstone truths” grows at breakneck speed. This issue covers “Test-Driven Development leads to better code” (it may, but there’s no evidence for that), “static type systems improve safety” (they may do, but only incrementally), “a proper database is faster than SQLite” (it can be, but check for your application), and “Perl sucks” (that’s just, like, your opinion).
And those were the ones I had collected 25% of the way into curating the article! I don’t think there’s necessarily a “solution” to this: groups form shared values, people signal those values to indicate membership of the group. Last decade’s programmer conference groupspeak was Clean Code and TDD, the current groupspeak is about static types and “reasoning about” code, and in a decade that will be out of fashion and another set of topics will be de rigueur.
On entering programming in 2022
Then we meet one of the Great Debates of introductory programming classes, that I’ve been having since 2003: what are we introducing people to? In our case it was programming as a tool that students could use on summer school projects. Here “can be used for real work in $domain” becomes an important factor, along with “student is set up to take their computer away and carry on programming”.
Other courses prioritise “can be used for real work in ‘industry’” (previously C and Java would have been taught on these courses, nowadays Python, Typescript, Ruby, or maybe a Swift/Kotlin/Scala-alike), “makes it easy to teach programming concepts” (Python and Typescript again, historically Pascal or BASIC, maybe-rans include Lisp, Squeak, and the visual block-based languages) or “makes it easy to teach the specific programming concept I get worked up about” (Pharo, Haskell, C++).
Around the web
Magic Links – Are they Actually Outdated?
Magic links—when a service sends you an email with a uniquely identifiable link, and clicking that link logs you into the service (typically on the web)—are an alternative to password-based authentication. Well, they're moving the problem of password management onto you and your email provider, just like OAuth2, OpenID, and many other passwordless authentication systems. Zitadel weigh up the pros and cons, and determine that there may be better alternatives for your application.
Is it time to look past Git?
The answer to this question should always be “yes”. Not because of any specific beef with git, though I have those: I found Mercurial much easier to use, particularly in the common “star” topology that is also how most git-using teams are set up.
No, I think in general we should be strongly committed to our values, very selective about the principles we abide by to uphold those values, and weakly associated with the tools and processes we use to work according to those principles. If we decide that we value “working software over comprehensive documentation”, that we will work according to the principle that “Simplicity--the art of maximizing the amount of work not done--is essential”, and that git, or Jira, or TestRail, or vim, or whatever, is generating work that we don't need to do, we should be ready to find an alternative.
Analyzing the effects of test driven development in GitHub
While I'm on the “here’s a thing people do that there’s no evidence helps” thing, there’s no robust evidence in favour of TDD, yet we all do it and claim that it’s worse—maybe even unprofessional—not to do it. Myself included. Why is that?
Announcing Perl 7
It has come to my attention that people hating on Perl are still talking about Perl 5. It's understandable that they didn't notice Perl 6—it got renamed to Raku—but Perl 7 is "Perl 5 with modern defaults" and was announced two years ago. I happen to think that Perl is a nice tool, albeit "unopinionated" in the modern parlance, but if you don't like that please at least don't like the modern version.
SQLite or PostgreSQL? It's Complicated!
Performance analysis? Analyse the performance of your application, not some microbenchmark posted by a technology evangelist!
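A minimal sketch of what that means in practice: time a query shaped like one your application actually runs, against your actual data volumes. The schema, row counts, and query here are illustrative stand-ins for yours:

```python
import sqlite3
import time

# Use your real database file in practice; :memory: keeps the sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, ts REAL)")
conn.executemany(
    "INSERT INTO events (user, ts) VALUES (?, ?)",
    [(f"user{i % 100}", float(i)) for i in range(10_000)],
)
conn.commit()

start = time.perf_counter()
for _ in range(1_000):
    # A query shaped like one your application actually issues.
    conn.execute(
        "SELECT COUNT(*) FROM events WHERE user = ?", ("user42",)
    ).fetchone()
elapsed = time.perf_counter() - start
print(f"{elapsed / 1_000 * 1e6:.1f} µs per query")
```

If the number that comes out is fast enough for your workload, the microbenchmark showing some other engine is 3× faster is irrelevant.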
“Nor have type errors ever been a significant source of bugs.”
Ken Kocienda on the anti-weak-typing shade. His experience that type errors aren't a significant source of bugs is consistent with mine (and remember that C-family languages had a far more expressive type-as-design language than their predecessors), but I suspect that's because of the “decades” of doing it, rather than the decades of experience showing that type errors don't manifest.
I love dynamic programming and feel way more productive when I’m not fighting a compiler. At the same time I accept that e.g. Apple platform experience suffered from people looking at ObjC (or Google:Java) and thinking “no thanks, I’d rather write Typescript and React Native”. Both can be true.
The type supremacists definitely have the superior marketing, at the moment. It seems to be an easy sell: it is possible to have this bug. This tool stops you having this bug. Therefore this tool is best. Never mind that the people not using the tool don't, in practice, have this bug. In practice my most common ObjC bug (which Swift doesn't ameliorate, but SwiftUI does) was “forgot to connect outlet/action”, and that's something you write a test for once in -awakeFromNib (the method that gets called when you “un-pickle” a serialised UI) then use in every project: because the language is dynamic, you can introspect your UI at runtime to work out what should be hooked up and whether it is. But that data isn't relevant to the “this could, in theory, happen” argument: you could in principle pass a BrazilNut to a Bolt because it expects to collaborate with anything that quacks like a Nut, therefore all software is impossible in this system.
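That introspective check translates to any dynamic language. Here is a Python analogue (not Cocoa API; the class, the expected_outlets attribute, and the outlet names are hypothetical): declare which outlets an object expects, then walk the live object at runtime to see which are actually hooked up.

```python
class LoginViewController:
    # The outlets this controller expects to have connected (hypothetical names).
    expected_outlets = ("username_field", "password_field", "submit_button")

    def __init__(self):
        self.username_field = object()
        self.password_field = object()
        self.submit_button = None  # oops: forgot to connect this one

def missing_outlets(controller) -> list[str]:
    """Introspect a live object and return the declared outlets that aren't connected."""
    return [
        name
        for name in controller.expected_outlets
        if getattr(controller, name, None) is None
    ]
```

The check itself is generic, so you write missing_outlets once and point it at every controller in every project, which is the reusability being described above.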
Extra checks to make sure my software isn't evidently wrong are helpful—and both computers and compilers have got so much faster over the last few decades that it's possible to have both a richer set of checks and a faster build, which is just amazing. Once those checks stop me doing good software design, and aren't optional, they are a hindrance.
And as Marcel Weiher argued back in 2014, static types often provide “safyness”, not safety. But not all of them also act as design hindrances. I find the progressive approach—particularly Typescript, which is designed with a sympathy towards the ways that types interact with software design—does a good job of pairing design liberality with comfortable safyness.