This is an extract from "An AI Learning Hierarchy: A hierarchy of AI machines organized by their learning power shows their limits and the possibility that humans are at risk of machine subjugation well before AI utopia can come" by Peter J. Denning and Ted G. Lewis, published in Communications of the ACM, December 2024, Vol. 67, No. 12, pages 24-27.
The article contains further details and is well worth reading.
Premise

This is a very interesting article presenting a hierarchical classification of AI machines based on their learning capabilities rather than their application domains.
The hierarchy, consisting of eight levels, highlights the limitations of AI and suggests that human intelligence might not be computable.
The AI hype and misconceptions
AI has achieved success in various fields like speech recognition, language translation, game playing, and autonomous systems, yet most AI systems are neither truly intelligent nor entirely trustworthy. Despite this, businesses and governments increasingly rely on them without fully understanding their limitations. AI has a history of overhyped expectations leading to "AI winters," and today, large language models (LLMs) have fueled another wave of speculation, raising concerns about misplaced trust in these technologies.
Hierarchy of learning machines
The hierarchy classifies AI machines based on their ability to learn new tasks over time:
- Basic Automation – Simple systems with fixed processes that do not learn.
- Rule-Based Systems – Machines following logical rules (e.g., early expert systems).
- Supervised Learning – AI trained on labelled data, such as neural networks.
- Unsupervised Learning – AI that identifies patterns without labelled training data.
- Generative AI – AI that produces new content, like ChatGPT, but often fabricates information.
- Reinforcement Learning AI – Systems learning through trial and error (e.g., AlphaZero, AlphaFold).
- Human-Machine Interaction AI – AI that enhances human capabilities rather than replacing them.
- Aspirational AI – Speculative AI capable of reasoning, self-awareness, and sentience, which has not yet been achieved.
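The distinction between the lower levels of the hierarchy can be made concrete in code. The sketch below, which is illustrative and not from the article, contrasts a rule-based system (behaviour fixed by hand-written rules, no learning) with a supervised learner (a minimal perceptron whose behaviour is induced from labelled examples); all names and data are invented for the example.

```python
def rule_based(temp_c):
    """Rule-based level: behaviour is fixed by a hand-written rule."""
    return "heat_on" if temp_c < 18 else "heat_off"


class Perceptron:
    """Supervised-learning level: behaviour is learned from labelled data."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # learned weights
        self.b = 0.0                 # learned bias
        self.lr = lr                 # learning rate

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def fit(self, data, epochs=20):
        # Classic perceptron update: nudge weights toward each mislabelled example.
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
                self.b += self.lr * err


# Labelled training data for a linearly separable task (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
p = Perceptron(n_features=2)
p.fit(data)
```

The rule-based function can never do anything but what it was told; the perceptron reaches the same kind of decision boundary by adjusting its weights from labelled examples, which is what places it one level up the hierarchy.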
The risks of AI progress
The article contrasts different AI progress models, including Seuk Min Sohn's vision of increasing automation leading to AI-dominated organisations and governance, and OpenAI's roadmap toward safe AGI. It warns that the rapid adoption of AI in business, governance, and decision-making may lead to a world where unintelligent but powerful machines control human systems, potentially resulting in human subjugation rather than an AI utopia.