Written by Chirag on Sunday, July 19, 2015 at 6:30pm

I ground through my weekly schedule. One coincidence worth noting: the recursion book I am reading quotes Douglas Hofstadter's GEB. This gives me further confidence that recursion might be a useful tool for building intelligent machines.

Here is a basic thought process on when to use recursion: a problem must have three distinct properties. This is quoted directly from *Thinking Recursively* by Eric S. Roberts:

* It must be possible to decompose the original problem into simpler instances of the same problem.

* Once each of these simpler subproblems has been solved, it must be possible to combine these solutions to produce a solution to the original problem.

* As the large problem is broken down into successively less complex ones, those subproblems must eventually become so simple that they can be solved without further subdivision.
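A tiny example of my own (not from the book) that maps the three properties onto summing a list recursively:

```python
def rsum(nums):
    # Property 3: a base case so simple it needs no further subdivision.
    if not nums:
        return 0
    # Property 1: decompose into a simpler instance of the same problem (the tail).
    head, *tail = nums
    # Property 2: combine the sub-solution with the head to solve the original.
    return head + rsum(tail)

print(rsum([1, 2, 3, 4]))  # 10
```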

At the end of the first chapter, there were three problems to solve. The last (and most difficult) asked you to find a lightweight (counterfeit) coin among 16 coins. If you have a balance that you can use to compare two groups of coins, how many trials does it take to find the counterfeit among the 16? The standard answer is four trials, but you can do better than that. **The answer here is three**. I have uploaded a recursive solution (Recursion1\divideandconquer.py) to my github account.
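Here is a minimal sketch of the divide-and-conquer idea (my own illustration, not necessarily the uploaded version): split the coins into three near-equal groups, weigh two of them against each other, and recurse into whichever group must contain the lighter coin. Since each trial cuts the candidates by a factor of three, 16 coins need only ceil(log3(16)) = 3 trials, versus 4 for repeated halving.

```python
def weigh(left, right):
    """One balance trial on two equal-size groups of coin weights.
    Returns -1 if the left pan is lighter, 1 if the right is, 0 if balanced."""
    if sum(left) < sum(right):
        return -1
    if sum(left) > sum(right):
        return 1
    return 0

def find_light(coins, trials=0):
    """Recursively locate the single lighter coin; returns (coin, trials used)."""
    if len(coins) == 1:
        return coins[0], trials
    # Split into three near-equal groups; the first two are always equal in size.
    third = (len(coins) + 2) // 3
    a, b, rest = coins[:third], coins[third:2 * third], coins[2 * third:]
    outcome = weigh(a, b)
    if outcome < 0:
        return find_light(a, trials + 1)   # lighter coin is in group a
    if outcome > 0:
        return find_light(b, trials + 1)   # lighter coin is in group b
    return find_light(rest, trials + 1)    # both pans balanced: it's in the rest

# Every one of the 16 positions is found in exactly 3 trials.
for i in range(16):
    coins = [2] * 16
    coins[i] = 1
    coin, trials = find_light(coins)
    assert coin == 1 and trials == 3
```

The key design choice is three-way rather than two-way splitting: a balance trial has three outcomes (left lighter, right lighter, balanced), so each weighing can distinguish three groups, not two.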

I also ordered Rajesh Rao’s book Bayesian Brain. This will be on the next reading schedule. I also thought more about reaching out to Neuroscience PhD students. I really want to understand what I can gain out of attending a Neuroscience PhD program. So I have reached out to NJIT, UC San Diego and UT Austin Neuroscience programs.

What do you guys think?