Just been chugging along through Coursera courses and my weekly readings. This week, I came across FB's AI research group's Memory Networks paper. I found it interesting particularly because they are having real success with it. The model effectively stores information in an array of memories and retrieves the relevant ones to make intelligent predictions. To me this adds further evidence that brains work via a memory system, and this approach from the FB folks is very similar to what Jeff Hawkins is proposing.
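The core loop can be sketched in a few lines. This is a heavily simplified toy, not the actual model from the paper: the real Memory Networks use learned embeddings and trained scoring functions, while here I'm standing in plain bag-of-words overlap and a hypothetical `retrieve` helper just to show the store-then-retrieve shape of the idea.

```python
# Toy sketch of the Memory Networks store/retrieve idea.
# Assumption: word-overlap scoring stands in for the paper's learned
# embedding match; the memory "array" is just a list of sentences.
from collections import Counter

memories = [
    "john went to the kitchen",
    "mary picked up the ball",
    "john dropped the apple",
]

def score(query, memory):
    # Count how many words the query and the stored memory share.
    q, m = Counter(query.split()), Counter(memory.split())
    return sum((q & m).values())

def retrieve(query):
    # Output step: pick the stored memory most relevant to the query.
    return max(memories, key=lambda m: score(query, m))

print(retrieve("where is the apple"))  # -> "john dropped the apple"
```

Even this crude version shows the flavor: prediction becomes a lookup against what you've stored, rather than a pure function of the input alone.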
The second thing I discovered this week was what the so-called Artificial Intelligence methods (such as gradient descent and logistic classifiers) are really doing. In our brains we make predictions via our memory system, possibly using sparse codes. Sparse codes are effectively a way of zipping and storing information. In the current state-of-the-art Artificial Intelligence methods, the derived/fitted regression parameters are effectively a mathematical way of zipping information: a handful of numbers stands in for the whole training set. Thus the point of this entire "building intelligence" exercise is to store information in ways that help you make accurate predictions.
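A quick way to see the "parameters as zipped information" point is to fit a logistic classifier by gradient descent on a toy dataset. This is my own minimal sketch, not any particular course's code: 19 labeled points get compressed into just two numbers, `w` and `b`, which are then all you need to predict.

```python
import math

# 19 labeled points (x, y): y is 1 when x > 0, else 0.
data = [(x / 10.0, 1 if x > 0 else 0) for x in range(-10, 10) if x != 0]

# Two fitted parameters "zip" the whole dataset.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):  # plain batch gradient descent on log loss
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(x):
    # Prediction now consults only w and b, not the stored data.
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0
```

After training, `predict` classifies new points using only the two fitted numbers, which is the compression: the data can be thrown away and the predictive regularity survives in the parameters.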