Learning PCA

Today I learned about principal component analysis in the Coursera machine learning course.  This is something I know very well and have used extensively in my career.  Jeff Hawkins's book On Intelligence hypothesizes that the brain is a memory system, and that it stores similar memories together in a zipped or sparse distributed format.  I see PCA as a way of reducing dimensions, i.e., zipping information.  In this context, I can see that intelligence algorithms could potentially use PCA for storing information and creating sparse, reduced representations.  I feel that creating hierarchical information structures is another thing I will have to learn.  I should probably add time-dependent representations and reinforcement learning to this list as well.
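To make the "PCA as zipping" idea concrete, here is a minimal sketch (my own toy example, with made-up data, not anything from the course): points in 3-D that mostly vary along one direction get compressed down to a single coordinate, while keeping most of the variance.

```python
import numpy as np

# Toy data: 100 points in 3-D that mostly vary along one direction,
# so most of the variance can be "zipped" into a single component.
rng = np.random.default_rng(0)
direction = np.array([1.0, 2.0, 3.0])
X = rng.normal(size=(100, 1)) * direction + rng.normal(scale=0.1, size=(100, 3))

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first principal component (3-D -> 1-D): the "zipped" code.
Z = Xc @ Vt[0]

# Fraction of the total variance kept by that single component.
explained = S[0] ** 2 / np.sum(S ** 2)
```

The point is that `Z` plus the single direction `Vt[0]` is a far smaller description of the data than the original 3-D coordinates, which is the sense in which PCA "zips" information.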

Chugging Along

I have been chugging along the AI learning curve.  I am liking the University of Washington's Machine Learning Certificate Track very much.  The professors are very good, and so is the format of the lessons: you follow along and do hands-on exercises.  I think all programming courses should be like that.


I think it was Ben Franklin who once said:

Tell me and I forget. Teach me and I remember. Involve me and I learn.

Separately, I learned about Davide Maltoni's HTM research paper.  He is a professor at a university in Italy.  He used HTM for handwriting recognition and found that it outperformed other machine learning techniques.  So that's cool.  I am in the process of reading through the paper now.

As I am chugging along, I must tell you, readers of this blog, about my favorite things.  My favorite thing in the world right now is a protein powder called Vega Protein & Greens.




Intelligence equals Zipping Information

Just been chugging along through Coursera courses and my weekly readings.  This week, I came across Facebook's AI research group's Memory Networks paper.  I thought it was interesting, particularly because they are having some success with it.  The model effectively stores information in an array and retrieves it to make intelligent predictions.  To me this adds further evidence that brains work via a memory system, and this approach from the Facebook folks is very similar to what Jeff Hawkins is proposing.
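The store-and-retrieve idea can be sketched in a few lines.  This is only a toy illustration of the flavor of it, not the actual Memory Networks model: the real paper learns the embeddings, whereas here the "memory" vectors and the query vector are hand-made.

```python
import numpy as np

# Toy memory store: each fact is kept as a vector in an array.
memory_texts = ["Sam is in the kitchen",
                "The milk is in the fridge",
                "Sam picked up the milk"]
memories = np.array([
    [1.0, 0.0, 0.2],
    [0.0, 1.0, 0.5],
    [0.9, 0.6, 0.0],
])

# Pretend embedding of the question "Where is the milk?".
query = np.array([0.1, 0.9, 0.4])

# Retrieve the most relevant memory by similarity (dot product).
scores = memories @ query
best = int(np.argmax(scores))
answer = memory_texts[best]
```

The prediction is just a lookup into stored memories, which is what makes the approach feel so close to the memory-system view of the brain.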

The second thing I discovered this week concerns what the so-called artificial intelligence methods (such as gradient descent and logistic classifiers) are really doing.  In our brains we make predictions via our memory system, possibly using sparse codes.  Sparse codes are effectively a way of zipping and storing information.  In current state-of-the-art artificial intelligence methods, the derived/fitted regression parameters are effectively a mathematical way of zipping information.  Thus the point of this entire "building intelligence" exercise is to store information in ways that help you make accurate predictions.
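Here is a small sketch of that "parameters as zipped information" point, using made-up data: gradient descent on a logistic classifier boils 200 labeled examples down to just two numbers (a weight and a bias) that are enough to reproduce the predictions.

```python
import numpy as np

# 200 examples whose label is simply the sign of x.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (x > 0).astype(float)

# Logistic regression fit by plain gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # sigmoid predictions
    # Gradients of the average cross-entropy loss.
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

# The whole dataset is now "zipped" into (w, b).
preds = 1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5
accuracy = np.mean(preds == y)
```

Two fitted parameters stand in for the entire training set when making predictions, which is the compression I am talking about.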

Ugh, Machine Learning

I am taking two Coursera courses: Andrew Ng's Machine Learning and the University of Washington certificate track.  A lot of the material overlaps between the two courses, which is great, and I am getting some good programming (particularly Python) experience.  I am glad they call this machine learning and not an intelligence course, because doing gradient descent and dynamic optimization techniques is not intelligence, or even learning, in my opinion.

Basically, I think anything that is pure math/statistics is not intelligence.  This is why it is so important to have an understanding of other disciplines, in particular neuroscience, psychology, and/or philosophy.  After reading Pentti Kanerva's and Jeff Hawkins's work, it's pretty clear to me that the brain learns by storing related information and recalling it.  I hope to get programming and data analysis experience out of the Coursera courses.  After these courses are complete, I plan to take the Computational Neuroscience class and the Neuroscience certificate track from Duke University.

Coursera Courses

I have signed up for two Coursera courses to up my knowledge of modern machine learning techniques.  I wanted to do this to get more hands-on experience with data manipulation in Python.  One of the courses is a machine learning track for which you can get a certificate, so I paid for it.  Not that I care for the certificate, but it's useful as motivation.  I think it will help me with NuPIC, as I find my limited hands-on Python experience to be the biggest roadblock.  I would like to take a robotics course as well so I can start incorporating robotics with NuPIC.

Coursera: Courses

  1. Machine Learning by Andrew Ng: https://www.coursera.org/learn/machine-learning
  2. Machine Learning Certificate by University of Washington: https://www.coursera.org/course/machlearning

Separately, I thought the last chapter of Mr. Kanerva's Sparse Distributed Memory book was quite enlightening.  Kanerva summarizes the roles of the senses (including encoding), memory, and motor manipulation in building an autonomous machine.  I would recommend that anyone interested in a NuPIC-type intelligence framework read at least the summary section of Kanerva's Sparse Distributed Memory book.
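For readers who have not seen Sparse Distributed Memory before, here is a bare-bones sketch of the core mechanism as I understand it.  All of the sizes and the activation radius here are my own arbitrary choices for illustration, not Kanerva's recommended parameters: a write distributes a word to every hard location within a Hamming radius of the address, and a read sums the counters of nearby locations and thresholds.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits, n_locations, radius = 64, 500, 28

# Fixed random "hard locations" and their bit counters.
addresses = rng.integers(0, 2, size=(n_locations, n_bits))
counters = np.zeros((n_locations, n_bits))

def write(addr, word):
    # Activate every hard location within the Hamming radius of the address.
    near = np.sum(addresses != addr, axis=1) <= radius
    counters[near] += 2 * word - 1          # +1 for 1-bits, -1 for 0-bits

def read(addr):
    near = np.sum(addresses != addr, axis=1) <= radius
    # Majority vote across the activated locations, per bit.
    return (counters[near].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=n_bits)
write(pattern, pattern)                     # autoassociative storage
noisy = pattern.copy()
noisy[:5] ^= 1                              # corrupt 5 bits of the address
recalled = read(noisy)
```

Even with a noisy address, the overlapping activated locations let the memory recall the clean stored pattern, which is the property that makes SDM feel brain-like.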


Coding up Augmented Spatial Pooler

As of now, I am focusing all my efforts on coding up the Augmented Spatial Pooler as defined in this paper (I am working on it as of 20 September 2015).


We have put a binary version of the spatial pooler here.

Our Spatial Pooler implementation on Github

We have also put an Augmented Spatial Pooler version of the code here.

Our Augmented Spatial Pooler implementation on Github

Augmented Spatial Pooling and Spatial Pooling for Greyscale Images

I am trying to better understand spatial pooling and want to code up a working version of it on my own.  I found two works (shown below) by Professor John Thornton in Australia quite useful in this regard.  I hope to implement these in time.
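As a warm-up for my own implementation, here is a minimal spatial-pooler-style step.  This is only my toy sketch of the basic idea (random potential connections, overlap scoring, k-winners-take-all), not the HTM/NuPIC reference algorithm or Thornton's augmented version, and all sizes and thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_columns, n_active = 100, 64, 4

# Each column is randomly connected to ~20% of the input bits.
connections = (rng.random((n_columns, n_inputs)) < 0.2).astype(int)

def spatial_pool(input_bits):
    overlaps = connections @ input_bits          # overlap score per column
    winners = np.argsort(overlaps)[-n_active:]   # k-winners-take-all
    sdr = np.zeros(n_columns, dtype=int)
    sdr[winners] = 1                             # sparse output code
    return sdr

x = (rng.random(n_inputs) < 0.1).astype(int)     # a sparse binary input
out = spatial_pool(x)
```

The output is always a fixed-sparsity binary code, which is the property the spatial pooler is after; the real algorithm adds inhibition, boosting, and permanence learning on top of this skeleton.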

I had trouble keeping up with my overall schedule this week, but I plan to finish it all up by tomorrow.