Have been chugging along the AI learning curve. I am liking the University of Washington’s Machine Learning Certificate Track very much. The professors are very good, and so is the format of the lessons: you follow along and do hands-on exercises. I think all programming courses should be like that.
I think Benjamin Franklin once said:
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Separately, I learned about Davide Maltoni’s HTM research paper. He’s a professor at the University of Bologna in Italy. He used HTM for handwriting recognition and found that it outperformed other machine learning techniques. So that’s cool. I am in the process of reading through the paper now.
As I am chugging along, I must tell you viewers of this blog about my favorite things. My favorite thing in the world right now is a protein powder called Vega Protein &amp; Greens.
Just been chugging along through Coursera courses and my weekly readings. This week, I came across FB’s AI research group’s Memory Networks paper. I thought it was interesting, particularly because they are having some success with it. The model in the paper effectively stores information in an array and retrieves it to make intelligent predictions. To me this adds further evidence that brains work via a memory system, and this approach from the FB folks is very similar to what Jeff Hawkins is proposing.
The second thing I discovered this week concerns what the so-called Artificial Intelligence methods (such as gradient descent and logistic classifiers) are really doing. In our brains we make predictions via our memory system, possibly using sparse codes. Sparse codes are effectively a way of zipping and storing information. In the current state-of-the-art Artificial Intelligence methods, the derived/fitted regression parameters are effectively a mathematical way of zipping information. Thus the point of this entire “building intelligence” exercise is to store information in ways that help you make accurate predictions.
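Here is a small example of what I mean by “zipping” (my own toy illustration, not taken from either course): a logistic classifier fitted by gradient descent compresses a whole labelled dataset down to just two numbers, w and b, which are then enough to make predictions about new inputs.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny dataset: x = hours studied, y = passed (1) or failed (0).
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]

# Fit w and b by plain gradient descent on the logistic log-loss.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# The six training points are now summarized ("zipped") by (w, b).
print(f"w={w:.2f}, b={b:.2f}")
print("P(pass | 2 hours) =", round(sigmoid(w * 2 + b), 2))
print("P(pass | 5 hours) =", round(sigmoid(w * 5 + b), 2))
```

After training, the data itself can be thrown away; the two fitted parameters are the compressed record of it, and that record is exactly what gets used to predict.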
I am taking two Coursera courses on Machine Learning: Andrew Ng’s and the University of Washington certificate. A lot of the material overlaps between the two courses, so it’s great, and I am getting some good programming (particularly Python) experience. I am glad they call this machine learning and not an intelligence course, because gradient descent and dynamic optimization techniques are not intelligence, or even learning, in my opinion.
Basically, anything that is pure math/statistics is, I think, not intelligence. This is why it is so important to have an understanding of other disciplines, in particular neuroscience, psychology, and/or philosophy. After reading Pentti Kanerva’s and Jeff Hawkins’s work, it’s pretty clear to me that the brain learns by storing related information and recalling it. I hope to get programming and data analysis experience out of the Coursera courses. After these courses are complete, I plan to take the Computational Neuroscience class and the Neuroscience certificate track from Duke University.
I have signed up for two Coursera courses to up my knowledge of modern machine learning techniques. I wanted to do this to get more hands-on experience with data manipulation in Python. One of the courses is a machine learning track for which you can get a certificate, so I paid for it. Not that I care about the certificate, but it’s useful as motivation. I think it will help me with NuPIC, as I find my limited hands-on Python experience to be the biggest roadblock. I would like to take a robotics course as well so I can start incorporating robotics with NuPIC.
Separately, I thought the last chapter in Kanerva’s Sparse Distributed Memory book was quite enlightening. Kanerva basically summarizes the role of the senses (including encoding), memory, and motor manipulation in building an autonomous machine. I would recommend that anyone interested in a NuPIC type of intelligence framework read at least the summary section of Kanerva’s Sparse Distributed Memory book.
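For readers who haven’t seen the book, here is a minimal sketch of the core Sparse Distributed Memory mechanism (toy sizes of my own choosing, and a big simplification of Kanerva’s full model): data are written to every “hard location” whose address lies within a Hamming-distance radius of the write address, and read back by summing those locations’ counters and taking a majority vote per bit. Because of this distributed write, a noisy cue can still recall the original pattern.

```python
import random

random.seed(0)
N = 64        # address/word length in bits
M = 500       # number of hard locations
RADIUS = 26   # activation radius (Hamming distance)

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

addresses = [random_bits(N) for _ in range(M)]   # fixed random hard locations
counters = [[0] * N for _ in range(M)]           # one counter vector per location

def write(addr, word):
    # Add the word to every hard location near the address.
    for i, loc in enumerate(addresses):
        if hamming(addr, loc) <= RADIUS:
            for j, bit in enumerate(word):
                counters[i][j] += 1 if bit else -1

def read(addr):
    # Sum counters of every nearby location, then majority-vote each bit.
    sums = [0] * N
    for i, loc in enumerate(addresses):
        if hamming(addr, loc) <= RADIUS:
            for j in range(N):
                sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

word = random_bits(N)
write(word, word)        # autoassociative storage: address = data
noisy = word[:]          # corrupt the cue by flipping a few bits
for j in random.sample(range(N), 5):
    noisy[j] ^= 1
recalled = read(noisy)
print("bits recovered out of", N, ":", N - hamming(recalled, word))
```

The interesting property, which the toy run demonstrates, is that the read address does not need to be exact: any cue close enough in Hamming distance activates largely the same set of hard locations, so the majority vote reconstructs the stored word. That is the “recall from partial information” behavior Kanerva argues a brain-like memory needs.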