My team and I are making great progress. By Monday, 5/16, we plan to have the majority of the work done so we can fine-tune it in the last few weeks of the quarter. We will also continue to test and fix any bugs that come along.
In the past week, I implemented three versions of an AI: an easy random AI, a medium-difficulty greedy AI, and a hard alpha-beta pruning AI. The greedy AI makes a move based on the best possible score at that moment, without looking further down the line. If it sees an opportunity to capture a queen, it will take it based on relative piece score, without considering the damage it might suffer a turn or two later. It will, however, look for possible checkmates and checks before going for a capture. The alpha-beta pruning AI uses alpha-beta pruning to explore possible moves up to a defined depth, taking into account the score of pieces lost in future moves as well as positional scoring based on square value. We are currently using a depth of three, and it is already far better than I am.
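To make the idea concrete, here is a minimal, generic alpha-beta sketch in Python. It searches a toy hand-built tree rather than our actual board representation, and the `children` and `evaluate` helpers are stand-ins for the engine's real move generator and evaluation function, so treat it as an illustration of the technique rather than our code.

```python
def alpha_beta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search: `children(node)` yields successor positions,
    `evaluate(node)` scores a leaf from the maximizing side's point of view."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alpha_beta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:   # cutoff: the opponent will never allow this branch
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alpha_beta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, best)
            if beta <= alpha:   # cutoff: we already have a better option elsewhere
                break
        return best

# Toy usage on a hand-built tree: leaves are scores, internal nodes are lists.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
print(alpha_beta(tree, 4, float("-inf"), float("inf"), True, children, evaluate))  # 6
```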
Even though we are only searching to depth three, the AI can take about 10 seconds to make a move because there are times when it examines around 50,000 moves. Depending on how much time is left in the project, there are a few options we can take. The easiest way to reduce the time is to use NumPy arrays. NumPy arrays are more efficient than Python lists because they use contiguous memory. The one change this would require in our engine is converting the current 2D-list layout into a 2D array of numbers to represent the board. It would not eliminate any search paths, but it could shave a few seconds off the calculation time.
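As a rough illustration of what that conversion could look like, the sketch below builds the starting position as an 8x8 NumPy array of small integers and computes a material balance in a single vectorized call. The piece codes and the `material_score` helper are hypothetical stand-ins, not our engine's actual encoding.

```python
import numpy as np

# Hypothetical piece encoding: positive for white, negative for black,
# magnitude is the relative piece value (king set to 0 so it never "counts").
PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING = 1, 3, 3, 5, 9, 0

# Starting position as an 8x8 int8 array instead of a 2D Python list.
board = np.zeros((8, 8), dtype=np.int8)
back_rank = [ROOK, KNIGHT, BISHOP, QUEEN, KING, BISHOP, KNIGHT, ROOK]
board[0, :] = [-v for v in back_rank]   # black back rank
board[1, :] = -PAWN                     # black pawns
board[6, :] = PAWN                      # white pawns
board[7, :] = back_rank                 # white back rank

def material_score(b):
    """Material balance in one vectorized call instead of nested Python loops."""
    return int(b.sum())                 # > 0 favors white, < 0 favors black

print(material_score(board))            # 0 at the start of the game
```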
The second option is a little more complex and takes move types and patterns into consideration. For example, let's say you have a queen that is ready to attack the king. The AI can find a move that pins that queen in an awkward spot, causing it to be captured if it moves away the next turn. Of course this is a very simple example, but it is what is known as pinning. Other calculations would include the number of checks, pawn formations, and blocking moves, to name a few. These can all be weighted and used to eliminate some branches, which should speed the search up; some speculate it can cut the number of decisions in half.
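Here is a hedged sketch of how that kind of weighting might be used for move ordering, so that alpha-beta looks at the most promising branches first and cuts the rest off sooner. The feature names and weights are placeholders for illustration, not tuned values from our engine.

```python
# Placeholder heuristic weights; a real engine would tune these.
WEIGHTS = {"gives_check": 50, "is_capture": 40, "creates_pin": 30, "blocks_threat": 20}

def order_moves(moves, features):
    """Sort moves by a weighted sum of boolean heuristic features.

    `features(move)` is assumed to return a dict such as
    {"gives_check": True, "is_capture": False, ...}.
    """
    def score(move):
        flags = features(move)
        return sum(WEIGHTS[name] for name, on in flags.items() if on and name in WEIGHTS)
    return sorted(moves, key=score, reverse=True)

# Toy usage with hand-labeled moves.
moves = ["Qh5", "Nf3", "Bb5"]
labels = {
    "Qh5": {"gives_check": False, "is_capture": False, "creates_pin": False, "blocks_threat": False},
    "Nf3": {"gives_check": False, "is_capture": True,  "creates_pin": False, "blocks_threat": False},
    "Bb5": {"gives_check": True,  "is_capture": False, "creates_pin": True,  "blocks_threat": False},
}
print(order_moves(moves, labels.get))   # ['Bb5', 'Nf3', 'Qh5']
```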
The last and most complex solution is to implement bitboards. The way a bitboard works is to store one section of the board's state, such as the squares occupied by one piece type, in a bit-array data structure. For example, one way would be to store the white pawns in a 64-bit value: 56 zeros and 8 ones when all the white pawns are still on the board. The ones are positioned in a way that mirrors how the pawns sit on the board. At the start of the match, the first 48 bits are zeros (6 rows), then come 8 ones (the 7th row), followed by 8 zeros (the 8th row). With bitboards, all the moves can be calculated using bitwise operators, which would drastically speed up the AI's calculations, and greater depths could be explored far more easily.
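Here is a minimal Python sketch of the idea, using the layout described above, where the bit index is row * 8 + column counted from the top of the board and the white pawns occupy bits 48-55. The helper names are my own for illustration, and Python's arbitrary-precision integers stand in for a true 64-bit word.

```python
FULL_BOARD = (1 << 64) - 1

# White pawns at the start of the game: 48 zeros, 8 ones, 8 zeros.
white_pawns = 0b11111111 << 48

def single_pushes(pawns, occupied):
    """Advance each pawn one row toward the top of the board (bit index - 8),
    keeping only pushes whose destination square is empty."""
    return (pawns >> 8) & ~occupied & FULL_BOARD

def print_board(bb):
    """Dump a 64-bit bitboard as an 8x8 grid of 0s and 1s."""
    for row in range(8):
        bits = [(bb >> (row * 8 + col)) & 1 for col in range(8)]
        print("".join(str(b) for b in bits))

occupied = white_pawns   # only the pawns are on the board in this toy example
print_board(single_pushes(white_pawns, occupied))
```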
Of course, we are not building a Deep Blue to beat a chess grandmaster, but these are options we can take to make a better and faster AI. That said, the current alpha-beta pruning AI did beat me in six turns. After playing it 10 times, I did not win once, but I am also not a good chess player.