Gary Marcus argues for a shift in research priorities toward four cognitive prerequisites for building robust artificial intelligence:
1. Hybrid architectures that combine large-scale learning with the representational and computational powers of symbol-manipulation
2. Large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge
3. Reasoning mechanisms capable of leveraging those knowledge bases in tractable ways
4. Rich cognitive models that work together with those mechanisms and knowledge bases
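To make the first of these prerequisites concrete, here is a minimal, purely illustrative sketch of a hybrid architecture: a learned component (stubbed here as a hard-coded scoring function standing in for a trained model) emits soft scores over candidate facts, the scores are thresholded into discrete symbolic facts, and a tiny forward-chaining rule engine then derives new knowledge from them. All names, rules, and thresholds are hypothetical, not from Marcus's text.

```python
def perceive(scene):
    # Stand-in for a learned model: maps raw input to scored candidate facts.
    # A real system would run a neural network here.
    return {("cat", "on", "mat"): 0.93, ("mat", "on", "cat"): 0.04}

# Toy symbolic knowledge: if X is on Y, then X is above Y.
RULES = [("on", "above")]

def extract_facts(scored, threshold=0.5):
    # Threshold soft scores into discrete symbolic facts.
    return {fact for fact, score in scored.items() if score >= threshold}

def forward_chain(facts):
    # Repeatedly apply rules until no new facts are derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (subj, rel, obj) in list(derived):
            for (premise_rel, conclusion_rel) in RULES:
                new_fact = (subj, conclusion_rel, obj)
                if rel == premise_rel and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

facts = extract_facts(perceive("image.png"))
print(forward_chain(facts))
```

The point of the sketch is the division of labor: the learned component handles perception and uncertainty, while the symbolic component contributes compositional knowledge the network never saw in training.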
Although real problems remain to be solved here, and a great deal of effort must go into constraining symbolic search well enough to run in real time on complex problems, Google's Knowledge Graph seems to be at least a partial counterexample to this objection, as do recent large-scale successes in software and hardware verification.