Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm? All of this culminates in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence, and the human condition itself, as it turns them into data.
I think we need to do some 'backward chaining' and first determine what, as a species, we want to get out of AI. If we want complex decision making, we may yet have to wait quite a while. But consider this: the construction of the latest and most complicated spacecraft can be task-analysed so that each and every stage of its production can be implemented by the 8-year-old mentioned by another respondent. There are in fact few tasks in human endeavour that can't be organised in this way, and this has been understood since Henry Ford and the advent of Classical Management Systems. If we view AI systems as front-end executors, there is little that we can't pass over to them, freeing vast human resources for more complicated endeavours. Now, the inherent wisdom of such possibilities, that is another matter entirely...8)
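The 'backward chaining' borrowed here from classical AI can be made concrete: start from the desired goal and recurse backwards through rules until you reach facts that support it. A minimal sketch follows; the rule and fact names are invented purely for illustration and are not from the discussion above.

```python
# Backward chaining: prove a goal by recursing through rules to known facts.
# RULES maps a goal to alternative premise sets; any one satisfied set suffices.
# All names below are hypothetical, chosen to echo the post's argument.
RULES = {
    "deploy_ai": [{"goal_defined", "task_decomposed"}],
    "task_decomposed": [{"steps_simple"}],
}

FACTS = {"goal_defined", "steps_simple"}

def prove(goal, facts, rules):
    """Return True if `goal` is a known fact or follows from the rules."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        # The goal holds if every premise in some rule can itself be proved.
        if all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("deploy_ai", FACTS, RULES))  # True: both premises trace back to facts
```

The point of the sketch is the direction of reasoning: rather than running forward from whatever AI can currently do, you fix the goal first and work back to the prerequisites, which is exactly the "determine what we want to get out of AI" step the post argues for.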