For the first few years, programming was typing the letters off a page in a book (or a printout of something I downloaded from a BBS) into a file. Then I ran it. Sometimes I had syntax errors. I’m not sure what I learned back then.
After that I learned something solid. I learned how to mix and match. I knew where to look, at least. I recognized examples in books as a way to go from point A to B. Years later I learned about shared libraries.
More years later, I sat with my mouth agape as I read, “This code is from the black horse book on page 375, it’s similar to what we need.” in a comment on some code I had inherited. I guess some people never progress. This code was not well designed. I had great job security.
Danger 1: Never learning beyond the superficial.
This guy never figured out what programming really was. He thought it was a typing job: you found what your manager wanted in a book, typed it out, and told him it was done.
Programming is not typing. Programming is thinking: the study of the intricacies of a problem and how to solve it, step by step. Following examples from a book never teaches you how to think about the actual problem.
Danger 2: The transparent nuances.
This second danger is what inspired me to write this. I was reading through some code and eventually realized I had lost the thread. It was gone.
I had missed a very subtle single line of code that glued everything together. It confused me for about 30 minutes.
Imagine if I had never caught it. I would have sat, frustrated, staring at software that seemed inadequate. While brooding, I would have developed a workaround: an alternative and extremely inefficient method for doing something the system itself handled quite elegantly.
This is quite common. Of all my development flaws, I’m still guilty of this one the most frequently. However, I’m in good company. Many times I have reviewed code and gently asked, “Why didn’t you use the built-in method for this?” The answer is usually that they didn’t know it existed.
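The pattern in miniature looks something like this (a toy Python sketch, not any code I actually reviewed):

```python
from collections import Counter

words = ["apple", "banana", "apple"]

# The workaround that gets written when you don't know better:
counts = {}
for word in words:
    if word not in counts:
        counts[word] = 0
    counts[word] += 1

# The built-in the author didn't know existed:
counts = Counter(words)
```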
Sometimes it is excusable. It’s a trivial detail documented in a sea of literature, usually a single line. It’s odd how the most important things get single lines, while clever but largely useless systems get essays devoted to them. The end result is predictable: just following along with other developers creates situations where it is far too easy to miss the important details.
When lost, I step back and ask myself, “How would I allow developers to use this system?” This usually helps me figure out where to look. Then I find what I was missing and happily move on to the next mistake.
Danger 3: Inadequate understanding of the caveats.
This may sound like the last point, but it is subtly different. The last point is about rebuilding and wasting effort. This is about missing a logical detail and introducing critical issues into the product, even while the product appears completely functional.
The worst scenario is when it is a security problem.
I see toolkits that advertise easy handling of inter-system communication, things talking from one system to another, and then have a big CAVEAT section in their documentation explaining that, by default, they expose all aspects of your system to the network insecurely. This is usually accompanied by an explanation of how to protect the system.
Nobody ever reads it.
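To make the shape of that concrete, here is a deliberately simplified Python sketch (hypothetical; no particular toolkit in mind):

```python
import socket

# The "easy" default many toolkits ship with: listen on every
# network interface, so anything that can reach the machine can
# connect.
exposed = socket.create_server(("0.0.0.0", 8080))
exposed.close()

# The one-line fix buried in the CAVEAT section: bind to the
# loopback interface so only local processes can connect.
local_only = socket.create_server(("127.0.0.1", 8080))
local_only.close()
```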
That’s obviously the more dramatic example, but I often see serious bugs introduced (and workarounds for them later in the code) because of an inadequate understanding of the caveats.
Sometimes it’s unavoidable. As another example, proper Unicode handling (the thing that lets me type 日本語 here) is very hard in most scripting languages. The documentation is confusing. There are various levels of understanding, and at each level you doubt your sanity more and more.
Then you get something that works and everybody is happy, up until some unexpected string is entered into the system and suddenly all data is corrupted. Oops, should have read the entire docs. But again, nobody ever does. Nobody wants to, and it is unreasonable to expect people to read a morass of documentation just to use software correctly. That is, unfortunately, the standard.
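A minimal Python sketch of the kind of silent corruption I mean (assuming UTF-8 input and a codebase that guessed the wrong codec):

```python
# What the user actually typed, stored as UTF-8 bytes.
data = "日本語".encode("utf-8")

# latin-1 maps every possible byte to a character, so decoding
# UTF-8 data with the wrong codec raises no error at all; it
# just silently produces garbage.
text = data.decode("latin-1")

print(repr(text))        # mojibake, not '日本語'
assert text != "日本語"  # corrupted, and nothing ever complained
```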
Avoiding it all.
Here’s how I avoid the above pitfalls, after years of practice not making the same mistake over and over again:
- Never follow a single example. If I’m learning some new software system I try to find 2-3 examples. I then read each one and write my own notes. I try to engage whoever I can to review my notes. I then introduce more (hopefully helpful and not long-winded) documentation into the system.
- I treat bugs as first-class citizens. If I encounter unexpected behavior in a tool I am using, I isolate the behavior: I figure it out by feeding the tool controlled input and inspecting the output (see the first sketch after this list). Only when I feel my understanding is thorough do I move on. Most of the time the bugs are my own invention or my own misunderstanding.
- I am paranoid. I believe it is my responsibility to make my software show the user what they expect, or a reasonable error message. I will never allow an underlying system I rely on to affect my users’ experience; if there is an exception, I want it handled correctly (see the second sketch after this list). This is probably my weakest trait, and the one I’m focusing on the most.
- I pretend I’m brilliant. When I first start getting confused, I stop. I take a breath. I think about how I would build the system: what business case would I be looking at? Why would I solve it this way? Then I usually feel more equipped to understand what exactly is going on.
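For the bug isolation point, the step usually looks something like this minimal Python sketch, where normalize_name is a hypothetical stand-in for whatever call surprised me:

```python
# Pull the suspect call out of the application and drive it with
# controlled inputs, recording input -> output pairs until the
# behavior is understood.
def normalize_name(raw: str) -> str:
    return raw.strip().title()

cases = ["alice", "  bob  ", "o'brien's", "mcdonald"]
for raw in cases:
    print(f"{raw!r:12} -> {normalize_name(raw)!r}")

# Run in isolation, the quirk is obvious: title() starts a new
# capital after every non-letter, so "o'brien's" comes back as
# "O'Brien'S" and "mcdonald" as "Mcdonald".
```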
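And for the paranoia point, the habit is roughly this (again a hypothetical sketch; load_profile stands in for whatever the underlying system provides):

```python
import logging

logger = logging.getLogger(__name__)

def load_profile(user_id: int) -> dict:
    # Hypothetical stand-in for a call into an underlying system
    # that can fail for reasons outside my control.
    raise ConnectionError("backend unreachable")

def show_profile(user_id: int) -> str:
    try:
        profile = load_profile(user_id)
    except ConnectionError:
        # Log the details for me, but show the user a reasonable
        # message instead of letting the failure leak through.
        logger.exception("profile backend failed for user %s", user_id)
        return "Your profile is temporarily unavailable. Please try again."
    return f"Hello, {profile['name']}!"

print(show_profile(42))
```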
Following the above, I think I’ve managed to produce better software. I still have a long way to go, but I feel I’m improving. Building better software makes me happy. Being happy is exactly why I do this.