Will AIers Ever Stop Being Dumb?
Every time I read or hear something from AI research, it freaks me out. These "research" people keep formulating laws out of thin air without any strong logical grounds. Every researcher seems to have their own minuscule world of hard-and-fast rules that they believe can explain everything. What's even worse, they keep daydreaming that if they press on like this, the final glory is not far off. These people have essentially turned the "science" of Artificial Intelligence into a never-ending empirical guessing game.

And this disease is spreading fast. Most research studies now amount to collecting data, doing some statistics, and throwing a few sketchy laws, padded with plenty of "might be"s, at hungry scientific journals. All you need is a post at Stanford or MIT, and you can at least expect your garbage to occupy library shelves all over the world. A good example is this lecture by Doug Lenat at Stanford: watch how lots of "principles" are drawn right out of thin air, without any justification of their validity or completeness.

I think these guys should step back for a while and read Newton's Principia or Euclid's Elements, just to get a feeling for how important it is to approach a problem from strong logical grounds rather than "just thoughts off the top of my head". We don't have HAL. But given the current state of AI, that doesn't surprise me.