Monday, February 28, 2011

Oscilloscopes & software development



One of my first jobs was as an electronics technician, where I would build and test professional audio-visual equipment. My work bench was covered with an impressive array of tools and gadgetry, one of the most impressive being the trusty oscilloscope. To the uninitiated, it appears to be a complicated display with a vast array of control dials and buttons, and it's certainly one of the more difficult test instruments to master.

A typical application of an oscilloscope, or CRO for short (from the now-dated acronym Cathode Ray Oscilloscope), is to display the waveform of an electrical signal on the screen. The classic noob mistake when first learning to use one is to change too many settings at the same time, resulting in a wild, unintelligible display on the screen.

My boss would often chastise me and remind me of two important principles which would serve me well later on in software development:
* always start from a known baseline
* never change more than one thing at a time

With a CRO, there's a sensible default setting that should always be your starting point, because it will usually display something, and from there only minor adjustments are needed to achieve the desired result. In software development terms, I'd equate that to setting up test fixtures with sensible data and making sure you've isolated the code to be tested.
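A minimal sketch of that "known baseline" idea in test terms. The function and fixture values here are hypothetical, just to show the shape: every test starts from the same sensible data, so a failure points at the code under test rather than at drifting state.

```python
import unittest

def celsius_to_fahrenheit(c):
    """Toy function under test (hypothetical example)."""
    return c * 9 / 5 + 32

class ConversionTest(unittest.TestCase):
    def setUp(self):
        # Known baseline: the same sensible fixture data before every test.
        self.samples = {0: 32.0, 100: 212.0, -40: -40.0}

    def test_conversion(self):
        for c, f in self.samples.items():
            self.assertEqual(celsius_to_fahrenheit(c), f)

if __name__ == "__main__":
    unittest.main()
```

The point isn't the arithmetic; it's that `setUp` plays the role of the scope's default settings, rebuilt fresh before each test.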

With so many dials and buttons it's easy to make the mistake of tweaking and fiddling in a panic and ending up with a garbled display. The more experienced operator will only make one adjustment at a time, and if it doesn't have the desired effect, return it to its sensible default position before trying a different adjustment. Similarly, when developing software it's tempting to hack and slash instead of methodically working towards the goal, but that often leads to breakages and a mess of spaghetti code with no clear way of knowing exactly what caused the error. Try something; if it doesn't work, roll it back and try something different.
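The adjust-and-rollback loop can be sketched in a few lines. Everything here is illustrative (the setting names, the candidate adjustments, and the `looks_right` check standing in for "inspect the display" or "run the tests"); the technique is simply: change exactly one thing, and revert it to the baseline if it doesn't help.

```python
# Known baseline: the sensible default settings (hypothetical values).
baseline = {"timebase": "1ms", "volts_per_div": 1.0, "trigger": "auto"}

def looks_right(settings):
    # Stand-in for checking the display / re-running the tests.
    return settings["volts_per_div"] == 0.5

settings = dict(baseline)
for key, value in [("timebase", "10ms"), ("volts_per_div", 0.5)]:
    settings[key] = value           # change exactly one thing
    if looks_right(settings):
        break                       # keep the adjustment that worked
    settings[key] = baseline[key]   # roll back to the sensible default

print(settings)
```

When the loop finishes, at most one setting differs from the baseline, so there's never any doubt about which change produced the current behaviour.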

I was reminded of the CRO analogy recently when I had a bunch of tests failing while trying to modify some functionality. After trying in vain to work out why my modifications were causing the breakage, I rolled them back and tested again, only to reveal that a recent update had introduced the breakage, not my modifications. So I fixed the failing tests, merged my modifications back in again, and re-tested. That way I knew I'd started from a known baseline and only changed one thing at a time, so if anything broke it was my change that broke it, and from there I could work towards fixing it.

Fault-finding and testing faulty electronic equipment gave me a good mindset for software development that I'll always appreciate. Just as we'd always comprehensively test all our equipment before shipping it to customers to keep the number of returned products to a minimum, following test-driven development before deploying any code to production keeps the number of bugs and rollbacks to a minimum.