Machine Learning

Making 2D Photos into 3D

How do you take a 2D photograph and create a 3D scene out of it? When you take a 2D photograph, the depth information is inherently lost. However, when we look at a 2D photograph our brains can still perceive and estimate depth: for example, the tree is in front, the house is about 10 feet behind it, and the mountain is very far away, even though the photograph doesn’t contain any depth information at all.
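To make the idea concrete, here is a minimal sketch (not from any particular paper) of the easier half of the problem: once you have an estimated depth value for every pixel, a simple pinhole camera model lets you lift the photo into a 3D point cloud. The function name, focal lengths and principal point below are illustrative assumptions; the genuinely hard part, estimating that depth map from a single image the way our brains do, is what the rest of the problem is about.

```python
# Sketch: lift a 2D image plus an estimated per-pixel depth map into a 3D
# point cloud using a pinhole camera model. All parameter values here are
# illustrative assumptions, not taken from any specific system.
import numpy as np

def backproject(image: np.ndarray, depth: np.ndarray,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Turn an H x W x 3 image and an H x W depth map into an (H*W) x 6
    array of [X, Y, Z, R, G, B] points in camera coordinates."""
    h, w = depth.shape
    # Pixel grid: u runs along columns (x), v along rows (y).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = image.reshape(-1, 3)
    return np.hstack([points, colors])

if __name__ == "__main__":
    # Synthetic data: a 4x4 image whose depth grows toward the top row,
    # standing in for "the tree is near, the mountain is far".
    img = np.random.randint(0, 256, size=(4, 4, 3))
    depth = np.linspace(1.0, 50.0, 16).reshape(4, 4)
    cloud = backproject(img, depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(cloud.shape)  # (16, 6)
```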

Fast Asymmetric Generalized Hebbian Algorithm

I can get sucked into challenges very easily, especially when they involve Artificial Intelligence or statistical analysis. The challenge that has occupied my interest these days is the one put up by Netflix. It’s easy to describe: they give you 100 million data points as (customer, movie, rating) triplets, and you have to predict the rating for given (customer, movie) pairs. If the average of the squared errors of your predictions falls below a certain value, you win a million-dollar prize.
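For flavor, here is a hedged sketch of one common way to attack this kind of problem: learn a low-rank factorization of the customer-by-movie rating matrix with simple Hebbian-style stochastic gradient updates, one latent feature at a time. This is not necessarily the exact asymmetric GHA variant discussed in the post; the function names, learning rate, regularization and toy data are all made up for illustration.

```python
# Sketch: train one latent feature per customer and per movie with
# Hebbian-style stochastic gradient updates on the squared rating error,
# in the spirit of incremental/asymmetric Hebbian SVD. Illustrative only.
import random
from collections import defaultdict

def train_one_feature(ratings, u, v, lr=0.01, reg=0.02, epochs=200):
    """ratings: list of (customer, movie, rating) triplets.
    u, v: dicts mapping customer/movie ids to the current value of the
    latent feature being trained. Both are updated in place."""
    for _ in range(epochs):
        random.shuffle(ratings)
        for c, m, r in ratings:
            err = r - u[c] * v[m]                 # prediction error
            uc, vm = u[c], v[m]
            u[c] += lr * (err * vm - reg * uc)    # Hebbian-style updates:
            v[m] += lr * (err * uc - reg * vm)    # move each factor along err * other

if __name__ == "__main__":
    data = [("alice", "matrix", 5), ("alice", "titanic", 2),
            ("bob", "matrix", 4), ("bob", "titanic", 1)]
    u = defaultdict(lambda: 0.1)   # customer -> latent feature value
    v = defaultdict(lambda: 0.1)   # movie -> latent feature value
    train_one_feature(list(data), u, v)
    print(round(u["alice"] * v["matrix"], 2))  # predicted rating for the pair
```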

Critticall

This is a very interesting concept. I wonder if a language can be specifically designed so that genetic algorithms can naturally be applied to it. Noted as my next project! Critticall

Earth: The Largest Neural Model

The individuals in a population and their interactions with each other closely model neural activity in the brain. The entire neural network can be looked upon as a smaller, fractal version of the human population, with a lot of the detail stripped away. It occurs to me that an alien might prefer to look upon Earth as the planet hosting one huge “brain”, with individual humans essentially just forming the “cells” of this brain.

Trading Geek Dinner For Self-Theories Of Intelligence

I had to ditch the grand New York geek dinner to attend a talk on how the brain makes memories at the New York Academy of Sciences. This is my current absolute favorite subject to spend all my free time on, so I HAD to be there. The talk, given by Jennifer Mangels of Columbia University, also turned out to be very energetic, fun and fast-paced. The first part of the talk was about the role of the hippocampus in forming long-term memories and her research on it.

Jeff Hawkins - On Intelligence

If all research papers were published the way Jeff Hawkins published his On Intelligence, the world would be a different place to live in. I strongly feel that this is the most important work in a field which is otherwise so bloated with wasted, directionless effort. This book is nothing like any research paper you might have read so far. It’s written in a very personal way. Instead of just spitting out the end product, thoughts and algorithms, the author also tells you what other directions he was considering and what made him go for this one.

Neural Networks And Learning Bayesian Networks

These are the two most interesting subjects for me right now. This book on Neural Networks is probably the best way to take a dive into the field. It is truly a magnificent book that you can read like a thriller (assuming you are not afraid of some mathematical depth) while watching the real workings through cool numerical examples. Its short but concise explanations might make the book look deceptively tiny, but it’s surely the best introduction to the field.

Get Yourself Some Bayes!

I first read about Bayesian probability in Paul Graham’s milestone article on spam two years ago. Amazingly, his Bayesian-based algorithm did the same thing a sophisticated AI algorithm would do, i.e. precisely identify spam emails (success rate: 995 out of 1000), just as humans do with their image recognition, natural language processing capabilities and as-yet unparalleled intelligence. That got me interested, and Bayesian probabilities got added to my list of things to learn.
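Here is a hedged sketch of the core idea, not Paul Graham’s exact formula: estimate per-word spam probabilities from labelled messages and combine them with Bayes’ rule (via log-odds) to score a new message. The tiny corpus, function names and smoothing choices are all illustrative.

```python
# Sketch of Bayesian spam scoring: per-word likelihoods from labelled
# messages, combined via log-odds. Not Paul Graham's exact formula;
# the corpus and parameters below are made up purely for illustration.
import math
from collections import Counter

def train(spam_msgs, ham_msgs):
    spam_counts = Counter(w for m in spam_msgs for w in m.lower().split())
    ham_counts = Counter(w for m in ham_msgs for w in m.lower().split())
    vocab = set(spam_counts) | set(ham_counts)
    # P(word | spam) and P(word | ham), with add-one smoothing.
    n_spam = sum(spam_counts.values()) + len(vocab)
    n_ham = sum(ham_counts.values()) + len(vocab)
    return {w: ((spam_counts[w] + 1) / n_spam,
                (ham_counts[w] + 1) / n_ham) for w in vocab}

def spam_probability(message, model, prior_spam=0.5):
    # Summing log-odds is equivalent to multiplying per-word likelihood ratios.
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for w in message.lower().split():
        if w in model:
            p_spam, p_ham = model[w]
            log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))   # convert back to a probability

if __name__ == "__main__":
    spam = ["buy cheap pills now", "cheap viagra offer now"]
    ham = ["meeting notes for the project", "lunch tomorrow with the team"]
    model = train(spam, ham)
    print(round(spam_probability("cheap pills offer", model), 3))        # near 1
    print(round(spam_probability("project meeting tomorrow", model), 3)) # near 0
```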