One of the things I did in Prague – more info shortly, promise – was buy a really cheap copy of River Crossing (called 'Crocodile [something]' there).
And being the sort of person whose response to Sudoku was to write a program to solve the puzzles (starting with one that would do 16×16 ones as well as 9×9), I've written a program to find solutions to this too.
(Ok, one incentive was being a bit stuck on puzzle 38 of the 40 in the box. The solutions are included, but looking them up feels like cheating in a way that writing a solving program doesn't.)
It works 🙂 but several things have surprised me…
My first thought was that a depth-first backtracking search was one basic approach to take – try a sequence of moves and, if it doesn't work, go back to the most recent position where you could have made another choice and try a different one instead.
Eventually, you'll either have found a solution (not necessarily the only or best one) or run out of options = the puzzle has no solution.
As all the most annoying textbooks say, "it is trivial to see that…" if you repeat a position in this puzzle (of both the planks and the hiker), then you've gone round an unnecessary loop. So spotting that you've gone down a dead end and need to start backtracking is easy – you test for "have I been in this position before?"
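For anyone who fancies the shape of it, the skeleton looks something like this (in the Pascal I'm using – more on that below). It is only a sketch: the names, the made-up 60-cell board and the stubbed-out goal test and move generator are stand-ins of mine, not the real program's innards.

    program SolverSketch;
    { Only a sketch: the names, the made-up 60-cell board and the stubbed-out
      goal test and move generator are stand-ins, not the real program's innards. }
    const
      MaxDepth = 500;                      { cap on the length of a line of play - more on this number shortly }
    type
      TPlanks   = array[0..59] of Boolean; { which plank cells are occupied }
      TPosition = record
        Hiker:  Integer;                   { where the hiker is standing }
        Planks: TPlanks;
      end;
    var
      History: array[0..MaxDepth] of TPosition; { the positions along the current line of play }
      Choice:  array[0..MaxDepth] of Integer;   { how many moves have been tried at each depth }
      Depth:   Integer;
      Solved:  Boolean;

    function SamePlanks(const A, B: TPlanks): Boolean;
    var I: Integer;
    begin
      SamePlanks := False;
      for I := 0 to 59 do
        if A[I] <> B[I] then Exit;
      SamePlanks := True;
    end;

    function SamePosition(const A, B: TPosition): Boolean;
    begin
      { the hiker check is cheap, so only compare the planks when it matches }
      if A.Hiker <> B.Hiker then
        SamePosition := False
      else
        SamePosition := SamePlanks(A.Planks, B.Planks);
    end;

    function SeenBefore(D: Integer): Boolean;
    var I: Integer;
    begin
      SeenBefore := True;
      for I := 0 to D - 1 do
        if SamePosition(History[I], History[D]) then Exit;
      SeenBefore := False;
    end;

    function IsGoal(const P: TPosition): Boolean;
    begin
      IsGoal := False;                     { stub: has the hiker reached the far side? }
    end;

    function NextMove(const P: TPosition; K: Integer; var Q: TPosition): Boolean;
    begin
      NextMove := False;                   { stub: put move number K from position P into Q, if it exists }
    end;

    begin
      { History[0] := the chosen puzzle's starting layout; }
      Solved := False;
      Depth := 0;
      Choice[0] := 0;
      while (Depth >= 0) and not Solved do
      begin
        if IsGoal(History[Depth]) then
          Solved := True
        else if Depth = MaxDepth then
          Dec(Depth)                       { hit the move limit: backtrack }
        else if NextMove(History[Depth], Choice[Depth], History[Depth + 1]) then
        begin
          Inc(Choice[Depth]);              { that move has now been tried }
          if not SeenBefore(Depth + 1) then
          begin
            Inc(Depth);                    { not a repeated position, so go one move deeper }
            Choice[Depth] := 0;
          end;
        end
        else
          Dec(Depth);                      { nothing left to try here: backtrack }
      end;
      if Solved then
        WriteLn('Solved in ', Depth, ' moves')
      else
        WriteLn('No solution within ', MaxDepth, ' moves');
    end.

All the actual puzzle knowledge lives in the stubs: IsGoal and NextMove are where the rules about legal crossings, lifts and lays would go, and History doubles as the solution once the goal is reached.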
The first version happily solved the first puzzle, but crashed out with a 'range check error' (i.e. trying to access part of an array that's not there) with puzzle 37, its next test.
This happened because it had tried 500 moves down one particular sequence without either finding a solution or repeating a position.
As the solutions in the book are all rather less than 100 moves, this was surprise #1. I expected a limit of 500 moves would be way more than necessary – surely you'd repeat a position long before that?
Extending that limit meant a tweak to the code. My tool of choice for this sort of thing is Borland Pascal, and it has a limit on the size of structures. To keep things simple, I'd used an 'array of positions' rather than anything more complicated or sticking positions on the heap.
Altering one aspect of the way positions are stored (going from an array of Boolean to a set, if you must know) made each position smaller, so more of them fit into the same overall structure size.
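In terms of the sketch above, that's TPlanks changing from an array of Boolean to a set. The difference is the usual bytes-versus-bits one – here with the same made-up 60-cell board:

    program PlankSizes;
    { the two representations side by side; the 60-cell board is still a stand-in }
    type
      TBoolPlanks = array[0..59] of Boolean;   { one byte per cell }
      TSetPlanks  = set of 0..59;              { one bit per cell  }
    begin
      WriteLn('array of Boolean: ', SizeOf(TBoolPlanks), ' bytes');
      WriteLn('set of 0..59:     ', SizeOf(TSetPlanks), ' bytes');
    end.

(A pleasant side effect: two sets can be compared with a plain '=' rather than an element-by-element loop.)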
It turns out that the search quickly got over 700 moves deep and then proceeded to wobble up and down around that. Only a factor of seven more than I expected!
Hmm, let's say that if you've done a hundred and one moves, and still not finished it, you've made a mistake and need to backtrack.
Ah ha – solved #37 in 98 moves.
Adding a couple of counters to the code reveals that this involved backtracking 2,571,927 times and comparing plank positions against past ones 12,296,328 times (comparing the planks is rather slower than comparing the position of the hiker, so it's only done when the hiker matches).
Fortunately, modern PCs are fast.
Could the number of moves or the speed be improved?
I'd arbitrarily chosen, when given a choice, to look at moves involving moving across a plank before moves involving lifting or placing a plank.
This must mean lots of unnecessary shuffling across the board… and the printed solutions involve more lifting and placing than crossing, so it should speed up if I do it the other way around…
Surprise #2: a solution found in 101 moves (= longer) with 3,682,455 backtracks (= over a million more) and 18,531,273 (= six million more) comparisons. No wonder it was slower.
Back to move first, play with planks later, then!
At this point, I had a little 'doh!' moment and decided to start the 'have I been in this position before' search by looking at the most recent positions rather than the earliest ones.
This got the plank compares down to 7,050,943. Ahem.
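In terms of the earlier sketch, that's nothing more than turning the scan in SeenBefore around:

    function SeenBefore(D: Integer): Boolean;
    var I: Integer;
    begin
      { scan newest-first: when there is a repeat, it's usually only a move or two back }
      SeenBefore := True;
      for I := D - 1 downto 0 do
        if SamePosition(History[I], History[D]) then Exit;
      SeenBefore := False;
    end;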
What if we reduce the move limit to 90? Yes, that 98-move solution will be missed, but quicker rejection of wasteful but non-duplicating routes = a faster run time?
Surprise #3: solution found in 90 moves, but backtracks up to 6,207,072 and plank compares up to 15,926,026.
After that, surprise #4 was that limiting to 80 moves was faster than 90: solution found in 80, via 5,890,004 backtracks, as a result of 13,723,046 compares. Still slower than the 101 limit, note.
One assumption I'd made was that laying a plank back to the start 'must' be wrong, but on spotting that one of the demo puzzles needs to do exactly that, I took the restriction out: same solution, but surprise #5: 7,089,665 backtracks and 16,446,459 compares. Clearly, unnecessary lays of this kind are much more common than I expected.
If anyone else has a go at writing a solver for this, I'd be interested to see it. It looks like at least two others exist, but I don't know anything beyond that…
It'd be an interesting weekly homework project for a CompSci course.
(Update, 70 limit = solution in 70, after 9,161,246 backtracks and 18,901,256 compares.)
(Update2, no solution with a 60 limit. 'Just' 1,008,690 backtracks and a positively tiny 1,791,682 compares. So finding an optimal solution will just involve increasing this by one until it's found.)
(Thought: with this sort of runtime, a 'start with a low limit and work up until an optimal solution is found' approach probably won't be much slower than a 'find the first ok solution' method.)
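In sketch form – with SolveWithin standing in for a depth-limited version of the search above, not the real thing – it's just a loop:

    program FindOptimal;
    { sketch of the 'start low and work upwards' idea; SolveWithin is a stub }
    const
      HardCap = 101;                 { give up somewhere }
    var
      Limit: Integer;
      Found: Boolean;

    function SolveWithin(MoveLimit: Integer): Boolean;
    begin
      SolveWithin := False;          { stub: run the backtracking search with this move limit }
    end;

    begin
      Found := False;
      Limit := 1;
      while (not Found) and (Limit <= HardCap) do
      begin
        Found := SolveWithin(Limit);
        if not Found then
          Inc(Limit);
      end;
      if Found then
        WriteLn('Shortest solution: ', Limit, ' moves')
      else
        WriteLn('No solution within ', HardCap, ' moves');
    end.

Since each limit is only tried after the one below it has failed, the first solution that turns up must be a shortest one.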
(Update3, surprise #6: increasing the limit by one results in a 38% increase in the number of backtracks and a similar increase in compares! Still no solution though…)