Testing

Testing is almost universal in software development, yet it’s common to treat it like a chore, or an afterthought. Sure, test-driven development (TDD) is a means to influence what the public interface should look like by having the developer pretend the class already exists, and then implementing it to make the tests pass… but how often is TDD actually applied? And how often is it applied consistently? What’s the magic test-coverage percentage that you (or your lead) are satisfied with, and why that number in particular? What the heck is “shifting left”, and how religiously should you stick to the test pyramid? Can automated testing happen in production?
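
To make the TDD idea concrete, here’s a minimal sketch of the test-first workflow: the test below is written against a hypothetical ShoppingCart class before that class exists, and the implementation that follows is only written to make the test pass. All names are made up for illustration, not taken from any real codebase.

```python
# Step 1: write the test first, pretending the class already exists.
# (ShoppingCart is a hypothetical example class.)
import unittest

class TestShoppingCart(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add_item("apple", price=3)
        cart.add_item("pear", price=2)
        self.assertEqual(cart.total(), 5)

# Step 2: write just enough implementation to make the test pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, name, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

if __name__ == "__main__":
    unittest.main()
```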

All Different Variants

In the earlier posts I’ve described some of the basic ideas behind CSPs:

  • how the problems are represented using variables and constraints,
  • how the solver searches for a solution by building up the search tree,
  • how heuristics guide the search, and
  • how constraint propagation helps eliminate parts of the search space that will not contain solutions.

In this post I will look at how modeling the problem itself can have a significant influence on how quickly the solver is able to find a solution. In particular, I’ll talk about different ways of implementing the all-different constraint, and look at the various implementations’ performance characteristics.
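
As a taste of what “different ways of implementing” means in practice, here is a small sketch contrasting two common encodings of all-different: a decomposition into pairwise not-equal constraints, and a single global constraint. The helper names and the constraint representation (a predicate over an assignment dict) are made up for illustration and don’t reflect any particular solver’s API.

```python
from itertools import combinations

def all_different_pairwise(variables):
    """Decompose all-different into one not-equal constraint per pair."""
    constraints = []
    for x, y in combinations(variables, 2):
        constraints.append(lambda a, x=x, y=y: a[x] != a[y])
    return constraints

def all_different_global(variables):
    """A single global constraint: all assigned values must be distinct."""
    def constraint(assignment):
        values = [assignment[v] for v in variables]
        return len(values) == len(set(values))
    return [constraint]

# Both encodings accept and reject the same complete assignments...
assignment = {"a": 1, "b": 2, "c": 3}
assert all(c(assignment) for c in all_different_pairwise(["a", "b", "c"]))
assert all(c(assignment) for c in all_different_global(["a", "b", "c"]))
# ...but a real global all-different propagator can prune domains much more
# aggressively (e.g. via bipartite matching) than the pairwise decomposition.
```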

Rubber Duck FTW

Rubber duck debugging is a well-known debugging technique - it boils down to explaining the code to a rubber duck, whether a real one, or a coworker who unwittingly becomes the “rubber duck”. Halfway through the explanation the “Wait … what?” moment pops up: you know where the bug is, and you run off to fix it, potentially leaving your coworker wondering why you just ran off mid-sentence.

There are plenty of articles that talk about rubber duck debugging in detail, but why should this neat technique be restricted to just debugging? I’ll explore one area in particular - applying this technique as a means to improve existing code and design, rather than just for debugging.

Visualizing Search

For me, part of the appeal of building a solver for the fourth time (?!) is that it’s fun to watch how it solves the problem. While it’s quick to hack something up on the command line, the results aren’t that pretty and can’t be easily shared. In this post I’ll talk about building a frontend that talks to the backend via websockets and produces an animated visualization of how the search is solving the problem.

The scope of this post, and of my initial frontend, is generalized NxN sudokus, and this first iteration focuses largely on non-interactive visualization. You can try it out for yourself online. I’ll talk about building more interactive solvers for regular sudoku and crosswords separately.
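
To sketch what the backend side of that websocket connection might look like, here is a minimal example using asyncio together with the third-party websockets package - an assumption on my part, the actual stack may differ - streaming made-up search events as JSON for a frontend to animate:

```python
import asyncio
import json

import websockets  # third-party package, assumed here for illustration

# Hypothetical event stream: in a real solver these would be emitted by the
# search as it assigns values, propagates, and backtracks.
FAKE_EVENTS = [
    {"type": "assign", "cell": [0, 0], "value": 5},
    {"type": "propagate", "cell": [0, 1], "removed": [5]},
    {"type": "backtrack", "cell": [0, 0]},
]

async def stream_search(websocket):
    """Send one search event at a time so the frontend can animate them."""
    for event in FAKE_EVENTS:
        await websocket.send(json.dumps(event))
        await asyncio.sleep(0.1)  # pacing so the animation is visible

async def main():
    async with websockets.serve(stream_search, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```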

Heuristics

Constructive search needs to explore an exponentially large search space. In order to have any hope of finding a solution, or of proving that none exists, it is crucial that this exploration is done in a smart way. The problems that constructive search is applied to are NP-complete, so we don’t have an efficient, exact way of making smart decisions about this exploration. Instead, we rely on heuristics to make generally good decisions that will hopefully help us quickly find a solution, or prove that one does not exist.

There are two categories of these heuristics - variable ordering heuristics and value ordering heuristics. Let’s take a closer look at how they work.
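
As a concrete illustration, here is a sketch of one well-known heuristic from each category - fail-first / minimum remaining values for variable ordering, and least constraining value for value ordering. These are standard textbook examples, not necessarily the exact heuristics used in these posts, and the data structures are made up for the sake of the snippet.

```python
def select_variable(domains, assignment):
    """Variable ordering: fail-first / minimum remaining values (MRV).
    Pick the unassigned variable with the smallest current domain."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_values(var, domains, neighbors):
    """Value ordering: least constraining value.
    Prefer values that rule out the fewest options for neighboring variables."""
    def conflicts(value):
        return sum(1 for n in neighbors[var] if value in domains[n])
    return sorted(domains[var], key=conflicts)

# Tiny example: three mutually constrained variables with unequal domains.
domains = {"a": {1, 2, 3}, "b": {1, 2}, "c": {2, 3}}
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
var = select_variable(domains, assignment={})   # -> "b" (smallest domain, ties broken by order)
print(var, order_values(var, domains, neighbors))  # -> b [1, 2]
```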