Babybot is a project to create machine perception via sensory input. The project is profiled on Information Society Technologies.
What I find most interesting is that engineers are doing this, not scientists. Engineers have a distinctly different mindset than scientists. A perfect example is the birth of the transistor. It was interesting to the scientists, but it took engineers to see the real value of the transistor and why it should be freely licensed.
The computer you’re using to read this has hundreds of millions of transistors in it. All because engineers could turn scientific findings into real-world applications. So it isn’t a stretch to think that these engineers might be the first to produce self-aware machines.
I’m going to prognosticate here and venture that we are within a decade of a functional artificial intelligence. Granted, the first ones might only reach the mental level of a five-year-old, but that’s a damned big leap in AI.
One of the reasons behind my prediction is the DARPA Grand Challenge. In essence, the challenges are to build an autonomous vehicle capable of navigating various terrain, and now, with DARPA Grand Challenge III, a vehicle that can navigate an urban landscape, obeying traffic laws and merging into moving traffic.
Granted, this is more an expert system, but I still see it as a step toward general AI. After all, those vehicles depend on an extraordinary amount of sensory input, be it GPS data, local video and audio input, etc. As a matter of fact, cars of today have several input sources too – they monitor engine function and emissions, altering fuel/air mixtures and other parameters, including transmission shift points. So in essence, they’re autonomous too.
But getting back to the DARPA Grand Challenge III, I would really dig getting on a team, or even forming one if I could find the financial backing. After all, I still have to shelter, feed and clothe myself.