At present, there is considerable confusion about the ultimate objective of AI. Some regard Artificial General Intelligence as the final and imminent goal, suggesting that it can be achieved through machine learning and its further developments.
We argue that, despite the spectacular rise of AI, we still have only weak AI, which merely provides building blocks for intelligent systems, mainly intelligent assistants that interact with users in question-answer mode.
A bold step toward human-level intelligence would be the advent of autonomous systems resulting from the marriage between AI and ICT, envisaged in particular by the IoT. In this evolution, the ability to guarantee the trustworthiness of AI systems, reputed to be "black boxes" very different from traditional digital systems, will determine their degree of acceptance and integration into critical applications.
We review the current state of the art in AI and its possible evolution, including:
Avenues for the development of future intelligent systems, in particular autonomous systems as the result of the convergence between AI and ICT;
The inherent limitations of the validation of AI systems due to their lack of explainability, and the case for new theoretical foundations to extend existing rigorous validation methods;
Complementarity between human and machine intelligence, which can lead to a multitude of intelligence concepts reflecting the ability to combine data-based and symbolic knowledge to varying degrees.
In light of this analysis, we conclude with a discussion of AI-induced risks, their assessment and regulation.