I love solving puzzles: debugging a tricky race condition in a bit of threaded code, figuring out how to get a chicken all the way to the spot I need it so I can fly across a canyon to that last heart piece in Zelda, piecing together an actual classic jigsaw puzzle, or working through the many paper-and-pencil logic puzzles that have only a few simple rules yet can be almost impossible to solve. Like most people, though, I only enjoy a puzzle when it falls in a relatively narrow difficulty range: challenging enough to maintain my interest, but not so difficult that I get stuck for more than a few minutes without making any progress. Finding this sweet spot is a major concern for any puzzle or game maker, especially those hoping to market their game/puzzle to a wide variety of audiences.
The ideal solution would be to dynamically adapt the difficulty level of a puzzle/game to each individual player, both to find the best initial challenge setting for that person and to update the level to match the player's skill as they improve with practice/experience (or perhaps get worse at the end of a long day of compsing and don't want to have to try so hard). How to do this is an open question with lots of active research currently taking place in game theory, artificial intelligence, cognitive science and neuroscience, human-computer interaction, and many more areas, as well as at all of the intersections among those fields!
The ultimate goal of this project is to automatically generate puzzles tailored to maximize the enjoyment of each individual puzzle solver, including adapting dynamically over time even for a single user as their skills change. The more realistic goal is to automatically generate puzzles at predictable difficulty levels requested by a user, and to adjust the difficulty, style, or type of generated puzzles in some principled way based on analysis of one or more aspects of the individual's past solving history.
This past year I had a comps group do excellent work building an efficient, fast solver for one particular type of logic puzzle called Slitherlink, and in the last several weeks of their project they worked on a generator for this puzzle. Their initial analysis of many hand-made puzzles, as well as the ones they generated, raised many interesting questions about what exactly makes a puzzle easy vs. difficult, and fun/engaging vs. boring. The project this year will focus on answering those questions in order to generate "good" puzzles. Below are 2 examples of Slitherlink puzzles before and after being solved. Though they are both the same size, the first was rated "easy" and took me less than 1 minute to solve, while the 2nd was rated "hard" and required almost 3 minutes for me to finish. If you had asked me a year ago which I thought would be harder, I would have had no idea; only because of this past year's group's initial analysis can I now say that the abundance of 2's and lack of 0's in the 2nd puzzle are a flag that it is likely to be difficult.
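One concrete way to turn that observation into a testable heuristic is to compute the distribution of clue values in a puzzle. The sketch below is my own illustration, not the group's actual analysis; the grid encoding (one character per cell, `.` for blank) and the example puzzle are hypothetical.

```python
from collections import Counter

def clue_histogram(grid):
    """Return the relative frequency of each clue value in a Slitherlink grid.

    grid: list of row strings, where digits are clues and '.' is a blank cell.
    """
    counts = Counter(ch for row in grid for ch in row if ch.isdigit())
    total = sum(counts.values())
    return {int(clue): n / total for clue, n in counts.items()}

# Hypothetical 5x5 puzzle, heavy on 2's and light on 0's:
puzzle = [
    "2.2.3",
    ".2.2.",
    "3.2..",
    ".0.2.",
    "2..31",
]
hist = clue_histogram(puzzle)
# A high fraction of 2's (here 7 of 12 clues) would flag this
# puzzle as a candidate for the "hard" end of the scale.
```

A real difficulty predictor would of course combine this histogram with other features (clue density, clue adjacency, solver branching counts), but even this single statistic separates the two example puzzles above.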
Those numbers, however, are specific to the Slitherlink puzzle, and the goal this year is to explore many kinds of puzzles in an attempt to determine some general criteria for what makes any type easy or hard. For example, the reason more 2's make a Slitherlink puzzle more difficult is that there are more ways to place 2 lines around a 2 than, say, to place 3 lines around a 3, and clearly more than to place 0 lines around a 0, so a 2 gives you less information than a 3 or a 0 in this puzzle. Here are 2 examples of another Japanese puzzle that similarly has only a few simple rules but can be difficult to solve, called Masyu. The first puzzle is again easy, and the 2nd is labeled medium and did take me a bit longer to solve. I have no analysis on this type of puzzle yet, but one thing that immediately stands out is that the easier puzzle has more dots near the edges, and it turns out that in this puzzle there are fewer ways to fill in lines for dots on the border. More analysis is needed, however, before saying that this is actually a general principle.
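The counting argument for Slitherlink clues can be made precise: a clue n means exactly n of the cell's 4 surrounding edges are part of the loop, so the number of local edge configurations consistent with it is C(4, n). The bits-of-information framing below is my own way of quantifying "less information", not part of the group's analysis.

```python
from math import comb, log2

# Number of ways to choose which n of a cell's 4 edges carry a line:
configs = {n: comb(4, n) for n in range(4)}
# configs == {0: 1, 1: 4, 2: 6, 3: 4}

# Fewer consistent configurations means the clue rules out more of the
# 2**4 = 16 unconstrained edge patterns, i.e. carries more information:
info_bits = {n: log2(16) - log2(c) for n, c in configs.items()}
```

By this measure a 0 pins down its four edges completely (4 bits), a 3 or a 1 gives 2 bits, and a 2 gives the least information of any clue, which matches the observation that 2-heavy puzzles tend to be hard.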
The outline of this project will be something like the following:
It would be useful to have taken algorithms and/or computability & complexity. Any experience with AI, EvoComp, cog-sci, statistics, data analysis, etc. would also be beneficial. This project, though, will really require a bit of everything: designing, coding, and user-testing the puzzle interface with user data collection; performing statistical analysis of the data; and implementing AI/learning/evolutionary methods for the adaptable puzzle generator.