
OO, Patterns and Sudoku Solver: Part 2

6 May 2007
Sudoku solver


You can find the original article here

In the first part we covered the actual usage of the Sudoku solver. In this part we are going to cover the different techniques and logic used to solve the problem, along with some statistics and the pros and cons of each method.

Brute force solver

The brute force solver is the simplest but not the most efficient way to solve problems like sudoku. It relies on the sheer speed and number crunching capability of a modern microprocessor: in simple terms, it tries out every possibility until it finds a valid solution to a given problem. In the current context it tries putting the different possible numbers in each unsolved cell until a valid solution is found. You can imagine this can take some time if there are lots of unsolved cells. To minimise the number of iterations we can combine brute force with a backtracking algorithm. Backtracking reduces the number of iterations by not starting from the beginning each time the loop fails at a particular point. Instead, it tries a different number in the current cell until all the valid numbers for that cell have failed, then it goes back just one step to the previous cell, tries a different number there, and continues in this manner until a valid solution is found. This greatly reduces the number of iterations. The brute force solver I have implemented in this solution uses the backtracking technique.
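To make the idea concrete, here is a minimal sketch of brute force with backtracking on a plain integer grid. The names and layout are purely illustrative; they are not the classes used in the actual solver (those are covered in the next part), and the surrounding class is omitted for brevity.

    // Brute force with backtracking (illustrative sketch, not the solver's real classes).
    // grid[r, c] == 0 means the cell is unsolved; boxSize is 3 for a 9 X 9 grid.
    static bool Solve(int[,] grid, int boxSize)
    {
        int n = boxSize * boxSize;
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                if (grid[r, c] == 0)
                {
                    for (int v = 1; v <= n; v++)
                        if (IsValid(grid, boxSize, r, c, v))
                        {
                            grid[r, c] = v;
                            if (Solve(grid, boxSize)) return true;  // move on to the next cell
                            grid[r, c] = 0;                         // undo and try the next number
                        }
                    return false;  // nothing fits here: back track one cell
                }
        return true;  // no unsolved cells left
    }

    static bool IsValid(int[,] grid, int boxSize, int row, int col, int val)
    {
        int n = boxSize * boxSize;
        for (int i = 0; i < n; i++)
            if (grid[row, i] == val || grid[i, col] == val) return false;
        int br = (row / boxSize) * boxSize, bc = (col / boxSize) * boxSize;
        for (int r = br; r < br + boxSize; r++)
            for (int c = bc; c < bc + boxSize; c++)
                if (grid[r, c] == val) return false;
        return true;
    }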


Quick solver (Serial cells)

Brute force with backtracking improves the solving time but is only good enough for grids up to a certain size, as the solving time increases exponentially with the number of unresolved cells. To give you an example, an empty 4 X 4 grid (16 unresolved cells in total) typically takes 62 milliseconds on my PC, but it takes 13 seconds to solve an empty 9 X 9 grid (81 unresolved cells in total). That is roughly a 210-fold increase in solving time for a five-fold increase in the number of unresolved cells. For an empty 16 X 16 grid I stopped processing after 15 minutes, so effectively its performance becomes unacceptable at that size. This is when injecting domain knowledge into the brute force solving technique becomes essential, and the quick solver does exactly that. We know the sudoku rules and we can use those rules to optimise our solver.

No Duplicates Rule

We know that each unsolved cell can have a value from the set of possible values; e.g. in a 9 X 9 grid any unsolved cell can only have a value between 1 and 9. But we can shrink that set by removing values that a particular cell can't have. According to the rules, no row, column or square can contain duplicates. So if we take the values from all the solved cells and remove them from the possible values of the unsolved cells in the corresponding rows, columns and squares, we can greatly reduce the number of possible values for each unsolved cell. Let me give you an example:


In the figure given above, the cells with a single blue value are resolved cells, and the cells with multiple black values are the unresolved cells, each showing its initial set of possible values. If we apply the no duplicates rule, then the yellow shaded cell (row 1, column 2) holds the value 2, so no other cell in its corresponding row (outlined in red), column (outlined in blue) or square (outlined in green) can hold 2. This means we can safely remove each blue, fixed value from the possible values in its corresponding row, column and square. That leaves us with the sudoku shown in the figure below:


As you can see, the set of possible values in each unresolved cell has shrunk considerably. Apply this rule to most simple and some medium difficulty sudoku puzzles that you find in a newspaper, and after each iteration of removing duplicates you will find at least one unresolved cell left with only one possible value. That is the resolved value for that cell, and you can now continue by removing the newly solved value from all the unresolved cells of its corresponding row, column and square.
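As a rough sketch of that elimination step, assume each unresolved cell keeps a set of candidate values. The data structures and names here are illustrative only (and need using System.Collections.Generic for HashSet); the real solver's classes are covered in the next part.

    // No duplicates rule: remove every solved value from the candidate sets of the
    // unsolved cells in the same row, column and square. Illustrative sketch only.
    static void RemoveDuplicates(int[,] grid, HashSet<int>[,] candidates, int boxSize)
    {
        int n = boxSize * boxSize;
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
            {
                int solved = grid[r, c];
                if (solved == 0) continue;           // only solved cells constrain others
                for (int i = 0; i < n; i++)
                {
                    candidates[r, i].Remove(solved); // same row
                    candidates[i, c].Remove(solved); // same column
                }
                int br = (r / boxSize) * boxSize, bc = (c / boxSize) * boxSize;
                for (int rr = br; rr < br + boxSize; rr++)
                    for (int cc = bc; cc < bc + boxSize; cc++)
                        candidates[rr, cc].Remove(solved); // same square
            }
    }

Any unsolved cell whose candidate set drops to a single value becomes a resolved cell, and its value can then be removed from its neighbours in turn.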

If we keep doing that then one of the following two things is going to happen:

1. We get a solved sudoku. (Hurray!)

2. We still have some unresolved cells.

If we arrive at 2, then is this the stage at which we start guessing values?

No, there is at least one more rule we can implement.

The Unique Square Value Rule

Sometimes, after all possible duplicates have been removed, you will come across a sudoku state like the one shown below:


You can place the value 1 in the yellow shaded cell at row 2, column 8. Why? If you look at the possible values of all the unresolved cells in that square, you will find that 1 appears only once. We need to put 1 somewhere in that square, and the only cell in that square that can take 1 is the yellow shaded cell, so we can put 1 there even though that cell still has more than one possible value. There are a few more cells with unique square values; they are all shaded in red.
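Here is a sketch of the rule for a single square, using the same illustrative candidates structure as before (again, not the solver's real code):

    // Unique square value rule: if a value appears in the candidate set of exactly
    // one unsolved cell within a square, that cell must take that value.
    // boxRow and boxCol are the top-left coordinates of the square.
    static void ApplyUniqueSquareValue(int[,] grid, HashSet<int>[,] candidates,
                                       int boxSize, int boxRow, int boxCol)
    {
        int n = boxSize * boxSize;
        for (int v = 1; v <= n; v++)
        {
            int count = 0, foundRow = -1, foundCol = -1;
            for (int r = boxRow; r < boxRow + boxSize; r++)
                for (int c = boxCol; c < boxCol + boxSize; c++)
                    if (grid[r, c] == 0 && candidates[r, c].Contains(v))
                    {
                        count++;
                        foundRow = r;
                        foundCol = c;
                    }
            if (count == 1)
                grid[foundRow, foundCol] = v;  // v fits nowhere else in this square
        }
    }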

We can only run the USV (unique square value) rule when there are no more duplicates left to remove. If we run this rule before all the duplicates are removed, the solver can put wrong values in cells. After applying the USV rule we get a few more resolved cells, and hence a few more duplicates to remove. If we keep doing that in a loop, we again come across one of the following two situations:

1. We get a solved sudoku. (Hurray!)

2. We still have some unresolved cells.

If we are still at 2, then we guess a value for the first unresolved cell found and continue applying the duplicates and USV rules. If we keep doing this, we are guaranteed to find a solution to any valid, solvable sudoku.
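Put together, the quick solver loop looks roughly like this. Board, Cell and every method called on them are hypothetical names used only to show the shape of the algorithm; the real class design is covered in the next part.

    // Quick solver outline: propagate the rules until nothing changes, then guess.
    static bool QuickSolve(Board board)
    {
        bool changed = true;
        while (changed)
        {
            changed = board.RemoveDuplicates();        // no duplicates rule
            if (!changed) changed = board.ApplyUsv();  // unique square value rule
        }
        if (board.IsSolved()) return true;
        if (board.HasContradiction()) return false;    // some cell has no candidates left

        Cell cell = board.FirstUnsolvedCell();         // serial order: row 1, column 1 onwards
        foreach (int value in cell.PossibleValues)
        {
            Board attempt = board.Clone();
            attempt.SetValue(cell.Row, cell.Column, value); // the guess
            if (QuickSolve(attempt))
            {
                board.CopyFrom(attempt);               // keep the successful branch
                return true;
            }
        }
        return false;                                  // every guess failed: backtrack
    }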


Quick solver (Possible Vals)

What can we do to reduce the number of iterations further? The quick solver (serial cells) makes its guesses in serial order: it scans for an unresolved cell starting from row 1, column 1, moves on to row 1, column 2, and so on until it finds one; if no such cell is found in row 1 it moves to row 2, and so on. If we alter this order we get a different number of iterations. What if we order the unsolved cells by the number of possible values they have? The unsolved cell with two possible values remaining comes first, a cell with three possible values next, and so on. With this ordering there are fewer different starting paths and wrong paths are discarded very early on, so the number of iterations drops drastically in most cases. In addition, the amount of work needed to build this solver on top of the previous one is trivial using inheritance (more details on that in the next part).
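In the hypothetical sketch above, only the cell selection line needs to change, for example (Board.UnsolvedCells() is again an illustrative name, not the solver's actual API):

    // Pick the unsolved cell with the fewest remaining candidates instead of the
    // first one in serial order.
    Cell best = null;
    foreach (Cell c in board.UnsolvedCells())
        if (best == null || c.PossibleValues.Count < best.PossibleValues.Count)
            best = c;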

Is Brute Force Solver Bad?

Here is the stats table for the numerically minded (all solving times are given in milliseconds):

    Grid size                    4 X 4    9 X 9    16 X 16    25 X 25
    Brute Force                     62    13000      30000    Too long
    Quick Solver (Serial cell)      45      750       1900    Too long
    Quick Solver (Pos vals)         45      250       2000       9000

And here is the graphical view of the table (some values are changed in the graph to better illustrate the relative difference in the solving curves):


This just re-emphasises the fact that after a certain complexity threshold the brute force solving technique becomes unacceptable.

But is the brute force technique always bad? In my opinion, no. The technique definitely has its advantages, and the most notable is that it is cheaper to develop and maintain than the others. In the scenario given above the quick solver is always faster than the brute force solver, but in many other cases it is quite possible that, due to the added processing required, a quick-solver style of technique is slower than brute force at low complexity levels and hence offers no real benefit there. In those kinds of scenarios brute force is the winner. Let me illustrate this further with the help of the following hypothetical graph:


It is almost identical to the previous graph; only the faster quick solver is kept, and a red horizontal line is added. That line is a sample ideal acceptable performance threshold for this application, around 125 milliseconds in this case. The purple curve represents the brute force solver and the green curve the quick solver. The point of intersection between these two curves is circled in red: this is the point where the quick solver starts getting faster than the brute force solver. Let's call it the brute force performance cross-over point. The point of intersection between the brute force curve and the performance threshold line is circled in green: this is the point after which the performance of the brute force solver is consistently inferior to the ideal acceptable performance. Let's call it the brute force performance cut-off point. Does this mean this is the point after which the brute force solver should be discarded, as a rule of thumb? No. In many cases the higher cost of developing a better solution can push the acceptable performance threshold a little lower. That point is shown with a filled brown circle. Let's call it the brute force viability cut-off point: the point after which brute force can't be accepted as a solution.

Let's put some more colour around the brute force viability cut-off point:


The yellow shaded region is the brute force operation zone.

If your application is always going to operate within the brute force operation zone, then the brute force solver is your winner and you can save some money by not investing in faster but costlier solutions. If you know for sure that your app is going to operate beyond this region, then you need a better solution like the quick solver. Then there is the case of uncertainty: you know you will start out within the brute force operation zone, but you are not sure whether you will operate beyond it in the future. In such cases you need to implement something that will allow you to plug in a better solver if and when the need arises.

This is where design patterns can come to your rescue. In this particular case you can implement the Strategy pattern, which allows you to plug in different strategies depending on the requirements. So you can have your application running with the brute force solver strategy and swap in the quick solver strategy in the future if required. I have implemented the Strategy pattern in the Sudoku Solver.
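A minimal sketch of the idea follows. The interface and class names here are illustrative, not the solver's actual types (those are covered in the next part).

    // Strategy pattern: the solver delegates to whichever strategy is plugged in.
    public interface ISolverStrategy
    {
        bool Solve(Board board);
    }

    public class BruteForceStrategy : ISolverStrategy
    {
        public bool Solve(Board board)
        {
            // brute force with backtracking goes here
            return true;
        }
    }

    public class QuickSolverStrategy : ISolverStrategy
    {
        public bool Solve(Board board)
        {
            // duplicates removal + USV + guessing goes here
            return true;
        }
    }

    public class SudokuSolver
    {
        private readonly ISolverStrategy strategy;

        public SudokuSolver(ISolverStrategy strategy)
        {
            this.strategy = strategy;
        }

        public bool Solve(Board board)
        {
            return strategy.Solve(board);   // swap strategies without touching callers
        }
    }

Switching from brute force to the quick solver is then just a matter of constructing the solver with a different strategy object.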

More details about the Strategy pattern and a code walkthrough of the critical areas will be covered in the next part, where I will also post the complete source code. Some of the comments on the previous posting mentioned the very obvious shortcomings of the UI, and I replied that the actual code has many hidden goodies that are not utilised in the front end due to time constraints and the lack of focus on that UI. I will cover those areas and explain how to use those goodies to help you make a better sudoku solver.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here



Comments and Discussions

 
Source Code
TBermudez, 7-May-07 5:59
Where is the source code? You have only posted the executables.

Re: Source Code
GregLesnie, 22-Feb-11 16:10

