Rubber Duck FTW

Rubber duck debugging is a well-known debugging technique - it boils down to explaining your code to a rubber duck, whether a real one or a coworker who unwittingly becomes the “rubber duck”. Halfway through the explanation the “Wait… what?” moment pops up, you know where the bug is, and you run off to fix it, potentially leaving your coworker wondering why you just ran off mid-sentence.

There are plenty of articles that talk about rubber duck debugging in detail, but why should this neat technique be restricted to just debugging? I’ll explore one area in particular - applying this technique as a means to improve existing code and design, rather than just for debugging.

Rubber duck debugging requires you to walk through your code and explain it step by step, justifying what each bit of code is meant to do and how it is doing it. At some point you realize that what you thought the code ought to do and what it actually does don’t match - and that could be the bug!

Or maybe you misunderstood something. This may be an opportunity to improve your code by adding a test, cleaning up a comment, or refactoring, so that the next time you walk through the code the logic is more self-evident (as a side note - I’m a big fan of self-documenting code, which, among other things, significantly reduces the chance that your comments become a lie as your code changes).

OK, cool, so that’s mostly just good ol’ rubber duck debugging… so what? I think the same approach can be applied more broadly, specifically to improving existing code and designs. The idea is the same as before: you walk through your code / design, explaining to your rubber duck of choice what it does and how, and through this process you find opportunities to make things better. I alluded to that in the previous paragraph, where you’d be looking for a bug but might find things along the way that could be improved. However, unlike regular debugging, where you basically have to find and fix the bug, when would it pragmatically be sensible to do such a walk-through?

The initial inclination may be that this isn’t worth the trouble at all, and for the majority of cases pertaining to code that’s probably right: code is often good enough to serve its purpose as is, might be relatively short-lived, or is otherwise uninteresting… but there may be bits of code that are subtle, critical to correctness or performance, or otherwise important. This type of code can be a great opportunity to step back, explain it to the rubber duck, and see if your argument is as clear and convincing as you thought it was. If it’s not, maybe you don’t fully understand the problem yet? Maybe you applied a well-known algorithm but don’t quite follow how it all works, and now have a chance to make the code clearer, or better optimized for your use case? Maybe you can simplify the code? Or maybe you proactively find a subtle bug you hadn’t thought of before, rather than discovering it reactively?

On the other hand, code reviews are an established practice and should catch many flaws, whether they are done by another person or by yourself as you’re checking code in. While this is true, small tweaks that look sensible in isolation can add up to correct but clunky code. Would a more thorough review catch them? Probably. Is doing an extra pass worth the effort? Maybe. You’ll want to apply your best judgement here, depending on the importance of the code - and of its cleanliness - to your project.

This snippet of code is from my solver (with slight simplifications); it grew this way as I piled on new features, like logging and the ability to keep track of the solution cost:

```
 1  func (s *Search) Run() *SolutionWithCost {
 2    for {
 3      done, solution := s.step()
 4      if done {
 5        solved := solution != nil
 6        s.logDone(solved)
 7        if solved {
 8          s.problem.SolutionFound()
 9        }
10        if solution != nil {
11          return &SolutionWithCost{
12            Solution: solution,
13            Cost:     s.problem.CurrentCost()}
14        }
15        return nil
16      }
17    }
18  }
```

While this code is fairly straightforward, it can be cleaned up by merging the two if-statements on lines 7 and 10. This doesn’t functionally change anything, but it tidies things up and removes confusion, because the two if-statement branches really are equivalent, even though at a glance that’s not immediately obvious.
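For illustration, here is a sketch of what the merged version might look like. Since only the snippet is shown in the post, the surrounding types (`Search`, `Problem`, `Solution`, `SolutionWithCost`) below are hypothetical stubs I made up just to make the example self-contained - the real solver’s types will differ:

```go
package main

import "fmt"

// Stub types standing in for the real solver's (assumptions for illustration).
type Solution struct{ values []int }

type SolutionWithCost struct {
	Solution *Solution
	Cost     int
}

type Problem struct{ cost int }

func (p *Problem) SolutionFound()   {}
func (p *Problem) CurrentCost() int { return p.cost }

type Search struct {
	problem *Problem
	steps   int
}

// step is a stub that "finds" a solution after three iterations.
func (s *Search) step() (bool, *Solution) {
	s.steps++
	if s.steps >= 3 {
		return true, &Solution{values: []int{1, 2, 3}}
	}
	return false, nil
}

func (s *Search) logDone(solved bool) { fmt.Println("done, solved:", solved) }

// Run with the two if-statements merged: the solved branch both records the
// solution and returns it, so the equivalence of the two conditions is now
// explicit in the control flow.
func (s *Search) Run() *SolutionWithCost {
	for {
		done, solution := s.step()
		if done {
			solved := solution != nil
			s.logDone(solved)
			if solved {
				s.problem.SolutionFound()
				return &SolutionWithCost{
					Solution: solution,
					Cost:     s.problem.CurrentCost(),
				}
			}
			return nil
		}
	}
}

func main() {
	s := &Search{problem: &Problem{cost: 42}}
	result := s.Run()
	fmt.Println(result.Cost)
}
```

The point of the merge is that a reader no longer has to convince themselves that `solved` and `solution != nil` always agree - there is now a single branch instead of two that happen to be equivalent.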

Every once in a while you’re likely to end up writing some kind of technical document. It might be a README file, a design document explaining what your API does, or, as was the case with the example above, a blog post. As you write such a doc you’ll likely find yourself inadvertently following steps similar to the rubber duck technique - you’ll be explaining to the readers what your code / API / thing does, and as you do so it’s a great time to check whether what you are describing is accurate. For example, are the edge-cases of your API behaving as the documentation says they should? Do you have tests to prove that? Does the complex logic you are describing to the reader actually match what the code does? In any case, and particularly as time passes and the underlying code changes (possibly at the hands of you and others), creating and updating these docs is a good opportunity to double-check that things are still as you intended.

The other, more important benefit of such an exercise is that it is an opportunity not only to double-check specific bits, which you would be looking at anyway as you incrementally build up the codebase, but also to confirm that, taken together, all these changes are consistent and easy to understand. You may learn that the APIs your code exposes, parts of which look reasonable in isolation, have inconsistencies that could lead to confusion and difficulty using them properly. Or you might find code paths that are performant separately, but not when combined. Or you might discover that the complexity of the whole system is getting out of hand and a refactoring, or even bigger structural changes, are overdue.

In any event, unlike extra-thorough code reviews, which may be overkill, keeping documentation updated is useful for you and your users, so such doc updates should happen anyway - why not use them as an opportunity to run the doc by your trusty old rubber duck and see if things could be improved? The next time you’re updating the README, explain it to the rubber duck, too.

As I was writing the post about visualizing search and describing the protocol, I was forced to poke around a bit harder than I had before and look at the exact messages being sent over the wire. While I’d previously done a good deal of testing (admittedly, largely manual, because integration testing with websockets sounds hard) and the code did work, it turned out it did so in a goofy way - the sudoku data was sent transposed, so the data described columns first, and the client would then transpose it back so that on the frontend rows appeared first, instead of columns. Thinking back on this, it rings a bell: I recall being confused, figuring “eh, easy enough to transpose back on the client”, pushing the change, and… forgetting about this little bit of mess I’d created. Writing about how it worked made me feel bad enough to go and actually fix it properly.
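To make the goofiness concrete, here’s a minimal sketch of the round-trip the data was taking. The `transpose` helper and the grid shape are hypothetical - my actual wire format and code differ - but the shape of the bug is the same:

```go
package main

import "fmt"

// transpose flips rows and columns of a square grid.
func transpose(grid [][]int) [][]int {
	n := len(grid)
	out := make([][]int, n)
	for r := 0; r < n; r++ {
		out[r] = make([]int, n)
		for c := 0; c < n; c++ {
			out[r][c] = grid[c][r]
		}
	}
	return out
}

func main() {
	rows := [][]int{{1, 2}, {3, 4}}

	sent := transpose(rows)     // server accidentally sends column-major data
	received := transpose(sent) // client transposes it back before rendering

	fmt.Println(received) // round-trips back to the original rows
}
```

The round-trip works, which is why the tests passed, but both transposes are pure waste: the proper fix is to send the rows in row-major order in the first place and delete both calls.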

Lastly, don’t forget about the very relevant code metric, WTFs per minute: if you’re skimming the code / docs and something looks off, it’s a good chance to dig a bit deeper, understand why, and make it less off for the next person who reads that code / doc.