AI Is Ready to Take Human Shape

There’s no one right way to build a robot, just as there is no single way to imbue it with intelligence. Last month, Nathan Michael, a Carnegie Mellon University associate research professor and director of the Resilient Intelligent Systems Lab, described his work stacking and combining a robot’s various piecemeal capabilities as it learns them into an amalgamated artificial general intelligence (AGI). Imagine a Roomba that learns how to vacuum, then how to mop, then how to dust and do dishes; pretty soon, you’ve got Rosie from The Jetsons.
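To make the capability-stacking idea concrete, here is a loose, hypothetical sketch in Python. It is not Michael’s actual system; the names `Skill` signature, `CompositeAgent`, `learn`, and `act` are all invented for illustration. The point it shows is simply that skills learned one at a time can accumulate in a single agent that grows more general with each addition.

```python
# A loose, hypothetical sketch of "capability stacking": each learned
# skill is registered with an agent, which grows more general over time.
# All names here are illustrative, not drawn from any real robotics system.
from typing import Callable, Dict


class CompositeAgent:
    """Accumulates independently learned skills into one controller."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[str], str]] = {}

    def learn(self, name: str, skill: Callable[[str], str]) -> None:
        # Adding a new capability never disturbs the ones already stacked.
        self._skills[name] = skill

    def act(self, task: str, target: str) -> str:
        # Dispatch to whichever learned skill matches the requested task.
        if task not in self._skills:
            return f"I don't know how to {task} yet."
        return self._skills[task](target)


agent = CompositeAgent()
agent.learn("vacuum", lambda room: f"vacuuming the {room}")
agent.learn("mop", lambda room: f"mopping the {room}")
agent.learn("dust", lambda room: f"dusting the {room}")

print(agent.act("mop", "kitchen"))   # mopping the kitchen
print(agent.act("dishes", "sink"))   # I don't know how to dishes yet.
```

The design choice the sketch emphasizes is additivity: each new skill is bolted on without retraining or rewriting the others, which is the intuition behind iterating an ever-more-capable Roomba toward something Rosie-like.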

However, attempting to model an intelligence after either the ephemeral human mind or the exact physical structure of the brain (rather than iterating increasingly capable Roombas) is no small job, and there is no small number of competing hypotheses and models to boot. In fact, a 2010 survey of the field found more than two dozen such cognitive architectures under active study.

The present state of AGI research is “a very complicated question without a clear answer,” said Paul S. Rosenbloom, professor of computer science at USC and developer of the Sigma architecture. “There’s the field that calls itself AGI, which is a fairly recent field that is attempting to define itself in contrast to traditional AI.” That is, “traditional AI” in this sense is the narrow, single-task AI we see all around us in our digital assistants and floor-scrubbing maid-bots.