Tuesday, June 27, 2006

Sticky stuff

Those stickers had been disgracing Renzo's and my already ugly computers for far too long. Finally we decided to put them where they belong.

Hmm... what more can you do with these stickers?

Why do I keep reading Digg when Slashdot is better?

They say Digg is bigger than Slashdot these days. Bigger, better, newer, 2.0 and what not.

Indeed, I find myself scanning the Digg front page more often than Slashdot's. Far too often, actually. Obsessively often? Well, I do get proper work done now and then.

Still, it's simply not true that the quality of the "news" on Digg's front page is higher than that of the news on Slashdot. On the contrary, Digg suffers from a disgusting amount of mob mentality. Half of what's on the front page is not news at all, merely short text snippets propagating rumours that everybody has already heard or views that most people already agree with. Which is why they get dugg to the front page: people like to hear and read what they already believe.

And this is not really a problem with Digg; it's a problem with people. (And like all problems we can't really do anything about, it's not even worth thinking of it as a problem, just as a fact.)

Slashdot, on the other hand, has sometimes-competent editors who sometimes put effort into selecting and editing stories. While I might learn something I didn't already know from either Digg or Slashdot, I'm far more likely to learn something new about a topic I previously knew nothing about from Slashdot. Crucial difference.

So, back to the question: why do I keep reading Digg when Slashdot is better? Because Digg is updated more often, and the items are shorter. It's that simple. I think.

Tuesday, June 20, 2006

List of researchers/centres active in Computational Intelligence and Games

I thought a list of researchers and research centres working in the field of Computational Intelligence and Games (CIG) would be a useful resource to people like myself. For a definition of CIG, see the eponymous conference series; in short, it's about applying techniques such as evolutionary computation, reinforcement learning and neural networks to computer games, like strategy games, shooters, platformers, driving games and board games.

The format of the list would be the institution (university, company etc.), followed by a list of relevant researchers (those who spend a significant amount of their time on CIG research), and possibly a few words about what the topics of research are. The list is meant to be continuously updated: I am fully aware it is far from complete at the moment, so please help me expand it by posting additions and corrections to the list in the comments below! This post will then be updated as additions come in.

Brandeis University: Jordan Pollack. Co-evolution, theory, backgammon.

Natural Selection, Inc.: David Fogel. Evolution, board games.

University of Essex: Simon Lucas, Julian Togelius. Neuroevolution, board games, Pac-man, car racing.

University of Iceland: Thomas P. Runarsson. Evolution, reinforcement learning, board games.

University of Nevada, Reno: Sushil J. Louis. Evolution, strategy games.

University of Paisley: Benoit Chaperot, Colin Fyfe. Motocross racing.

University of Pretoria: Andries P. Engelbrecht. Particle swarm optimization, board games.

University of Southern Denmark: Henrik Hautop Lund, Georgios N. Yannakakis. Player modeling, Pac-man.

University of Texas, Austin: Bobby Bryant, Risto Miikkulainen. Neuroevolution, strategy games.

AI: All fun and games

Who believes in artificial intelligence (AI) nowadays? Not many, it seems.

For some fifty years, computer scientists have been saying that they know the principles for creating intelligent machines, and that a working piece of AI hardware or software is just around the corner. Or maybe around the next corner. People nowadays seem not so much to take those claims with a pinch of salt as to simply ignore them.

AI research is all good, the reasoning goes, but all we are likely to get is better chess players, traffic control systems, brain scanners, search engines, rice cookers, or what have you. Human-made technology that autonomously learns and adapts to truly new situations, acting in a seemingly goal-directed and generally sensible way, will never appear, because we just don't know how intelligence such as our own works. Some say that if we were so simple that we could understand ourselves, we would be so stupid that we couldn't.

Of course, I don’t agree with this.

If I didn’t believe that we will some day create real artificial intelligence, if what I do all day was just plain engineering, I wouldn’t be doing it. (I would probably do something that involved significantly more glamour, girls and sunshine.) But the critics do have a point: we don’t understand how intelligence works right now. Maybe we will understand one day, maybe we won’t.

And this of course makes building an AI using standard engineering techniques, the way we would build a car, a house or an operating system, all but impossible.

Instead, I (and some others with me) think that we can create AI without knowing how it works. The idea is to let the AI build itself, and the method is trial and error, or as it is known in biology: Darwinian evolution.

To put it simply, we start with a "population" of randomly generated candidate AIs (most often these are software programs in the form of simple brain simulations, or "neural networks") and we evaluate how good they are at some task. Because they are all randomly generated, they are usually not very good at the task, but some are a little less bad than others, and we keep those. We then delete the worst of the lot, and replace them with slightly changed ("mutated") copies of the least bad. And then we do that again, and again, and again...
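
For the curious, here is a minimal sketch of that loop in Python. Everything in it – the toy task, the genome encoding, the parameter values – is made up for illustration; in a real experiment the genome would typically encode the weights of a neural network, and fitness would come from running the candidate in a game or robot simulation.

    import random

    POP_SIZE = 20        # number of candidates in the population
    GENOME_SIZE = 10     # e.g. the weights of a tiny neural network
    GENERATIONS = 100
    MUTATION_STDDEV = 0.1

    def evaluate(genome):
        # Placeholder fitness function: how good is this candidate at the task?
        # In practice this would run the candidate in a simulation and score it.
        # Toy task here: drive all genes towards zero (higher fitness is better).
        return -sum(g * g for g in genome)

    def mutate(genome):
        # Return a slightly changed ("mutated") copy of the parent.
        return [g + random.gauss(0, MUTATION_STDDEV) for g in genome]

    # Start with a population of randomly generated candidates.
    population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        # Evaluate everyone and sort from least bad to worst.
        population.sort(key=evaluate, reverse=True)
        # Delete the worst half, and replace them with mutated copies
        # of the least bad half. Then do it again, and again...
        survivors = population[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    print("Best fitness after evolution:", evaluate(population[0]))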

This is so simple that it seems it shouldn't work. But it does. It works in nature – we are intelligent, aren't we? – and it works in computer simulation. A small community of researchers has been working along these lines for a decade or so; some representative research can be found in the book Evolutionary Robotics by Stefano Nolfi and Dario Floreano.

The astute reader will already have noticed a problem with this: this research has been going on for a decade or so, but where is the AI we were promised? Where are HAL, R2D2 and Skynet? Not even the car from Knight Rider seems to be ready for the market anytime soon. Indeed, we have a problem. The evolutionary approach works perfectly well for simple problems, but fails to "scale up" to more complex tasks.

I believe this is because the tasks people try to evolve solutions for are not the right ones. What researchers usually do is teach a robot to perform a specific action, like picking up red balls and avoiding blue ones. Ultimately, very little is gained from this, as there is no obvious way to proceed. Once you have learned to pick up red balls, how is that going to help you brew a good cup of coffee, or take over the world?

It's like a rat learning to push a lever in a Skinner box for a food reward. Once it has learnt to push this lever, there is no way to build on this "knowledge" to learn anything interesting.

The right task needs to be simple to get started with, yet more or less limitless in its ultimate complexity, and have a good learning curve so that you can make continuous progress. Like life, or like a well-designed computer game.

Indeed, some games (mostly puzzles and board games) are marketed as taking "a minute to learn, but a lifetime to master". That's exactly what we're looking for. But this doesn't only apply to board games: the basic principles of a carefully designed FPS like Counter-Strike are grasped in almost no time at all, yet many people play it every day for years and keep getting better at it!

At the moment we are working with a simple car racing game. Racing games are in a way ideal, as more or less anyone can pick up the controller and race a lap, but becoming a racing champion requires a lifetime of practice, and quite a bit of intelligence. For example, you need to be able to plan your path, keep track of your opponents and anticipate their actions. I am making steady progress on having my AIs teach themselves how to do this – see the videos.

But will automatic development of car racing AI really be a stepping stone toward general intelligence? I think so, but you are welcome to disagree with me - I'd love to hear why it wouldn't. And in any case, it will at least make for better racing games.