Generative AI: Part 2 — How AI Thinks & Its Types
In this second blog, we cover the different types of AI and how AI differs from human intelligence.
Introduction
Artificial Intelligence (AI) has emerged as a transformative technology, enabling systems to replicate or exceed human capabilities in various domains. AI can be categorized into different types based on its functionalities and capabilities.
This blog delves into the different types of AI, exploring their characteristics, uses, and potential applications in various fields. Exploring how AI mimics human thinking, from gathering data to producing results, reveals the strengths and limits of these technologies.
Types of AI
Artificial Intelligence is divided based on two main categorizations:
- Based on capabilities
- Based on functionality
The three kinds of AI based on capabilities
1. Narrow AI
Artificial Narrow Intelligence, also known as Weak AI or simply Narrow AI, is the only type of AI that exists today; any other form of AI is theoretical. It can be trained to perform a single or narrow task, often far faster and better than a human mind can.
However, it can’t perform outside of its defined task; instead, it targets a single subset of cognitive abilities and advances within that spectrum. Siri and Amazon’s Alexa are examples of Narrow AI. Even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat.
2. General AI
Artificial General Intelligence (AGI), also known as Strong AI, is today nothing more than a theoretical concept. AGI can use previous learnings and skills to accomplish new tasks in a different context without the need for human beings to train the underlying models. This ability allows AGI to learn and perform any intellectual task that a human being can.
3. Super AI
Super AI is commonly referred to as artificial superintelligence and, like AGI, is strictly theoretical. If ever realized, Super AI would think, reason, learn, make judgements and possess cognitive abilities that surpass those of human beings.
Applications possessing Super AI capabilities would have evolved beyond understanding human sentiments and experiences to feel emotions, have needs and possess beliefs and desires of their own.
So, what’s the Takeaway?
In conclusion, it’s vital to know which “AI” we’re discussing.
- ANI (Narrow): The specialists we use every single day.
- AGI (General): The human-level intelligence we’re trying to build.
- ASI (Super): The super-intellect of the far-future that inspires both hope and caution.
The four types of AI based on functionalities
1. Reactive Machine AI
Reactive machines are AI systems with no memory and are designed to perform a very specific task. Since they can’t recollect previous outcomes or decisions, they only work with presently available data. Reactive AI stems from statistical math and can analyze vast amounts of data to produce a seemingly intelligent output.
Examples of Reactive Machine AI
IBM Deep Blue: IBM’s chess-playing supercomputer AI beat chess grandmaster Garry Kasparov in the late 1990s by analyzing the pieces on the board and predicting the probable outcomes of each move.
The Netflix Recommendation Engine: Netflix’s viewing recommendations are powered by models that process data sets collected from viewing history to provide customers with content they’re most likely to enjoy.
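To make the idea concrete, here is a minimal Python sketch of a reactive, memoryless recommender. The catalog, genres, and scoring rule are all invented for illustration; the point is that every call sees only the data passed in right now, and nothing is remembered between calls.

```python
# A minimal sketch of a reactive (memoryless) system: each call works only
# with presently available data. No state survives between calls.
# The catalog and scoring rule are invented for illustration.

def recommend(current_genres, catalog):
    """Score each title by how many of the supplied genres it matches,
    then return the best match."""
    def score(item):
        return len(set(item["genres"]) & set(current_genres))
    return max(catalog, key=score)["title"]

catalog = [
    {"title": "Space Saga",  "genres": ["sci-fi", "action"]},
    {"title": "Quiet Lake",  "genres": ["drama"]},
    {"title": "Robot Heist", "genres": ["sci-fi", "comedy", "action"]},
]

# Each call is independent -- the system cannot recall earlier requests.
print(recommend(["sci-fi", "comedy"], catalog))  # Robot Heist
```

Because the function keeps no record of past calls, asking it the same question twice always yields the same answer, which is exactly the "no memory" property of reactive machines.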
2. Limited Memory AI
Unlike Reactive Machine AI, this form of AI can recall past events and outcomes and monitor specific objects or situations over time. Limited Memory AI can use past- and present-moment data to decide on a course of action most likely to help achieve a desired outcome.
However, while Limited Memory AI can use past data for a specific amount of time, it can’t retain that data in a library of past experiences to use over a long-term period. As it’s trained on more data over time, Limited Memory AI can improve in performance.
Examples of Limited Memory AI
Generative AI: Generative AI tools such as ChatGPT, Bard and DeepAI rely on limited memory AI capabilities to predict the next word, phrase or visual element within the content they’re generating.
Virtual assistants and chatbots: Siri, Alexa, Google Gemini and IBM Watson Assistant combine natural language processing (NLP) and Limited Memory AI to understand questions and requests, take appropriate actions and compose responses.
Self-driving cars: Autonomous vehicles use Limited Memory AI to understand the world around them in real time and make informed decisions on when to accelerate, brake or turn.
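A rough sketch of the "limited memory" idea in Python: the system below keeps only a short, fixed-size window of recent sensor readings (distances to a car ahead, all values and thresholds invented for illustration) and decides from that window. Older readings fall out of memory automatically, so nothing accumulates into a long-term library of experiences.

```python
from collections import deque

# A minimal sketch of Limited Memory AI: decisions use a short window of
# recent past data, and anything older is discarded automatically.
# The distances and thresholds below are invented for illustration.

class FollowingDistanceController:
    def __init__(self, window=3):
        # deque with maxlen silently drops the oldest reading when full
        self.recent = deque(maxlen=window)

    def observe(self, distance_m):
        self.recent.append(distance_m)

    def decide(self):
        if not self.recent:
            return "maintain"
        # The gap is "closing" if the newest reading is smaller than the
        # oldest one still held in memory.
        closing = len(self.recent) >= 2 and self.recent[-1] < self.recent[0]
        if self.recent[-1] < 10 or closing:
            return "brake"
        return "maintain"

ctl = FollowingDistanceController()
for d in [30, 25, 18]:   # gap shrinking over the last three readings
    ctl.observe(d)
print(ctl.decide())       # brake
```

Note that the controller cannot reason about anything outside its three-reading window, which mirrors the limitation described above: past data is used for a specific span of time, not retained indefinitely.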
3. Theory of Mind AI
Theory of Mind AI is a functional class of AI that falls under General AI. Though an unrealized form of AI today, AI with Theory of Mind functionality would understand the thoughts and emotions of other entities. This understanding would affect how the AI interacts with those around it. In theory, this would allow the AI to simulate human-like relationships.
Because Theory of Mind AI could infer human motives and reasoning, it would personalize its interactions with individuals based on their unique emotional needs and intentions. Theory of Mind AI would also be able to understand and contextualize artwork and essays, which today’s generative AI tools are unable to do.
4. Self-Aware AI
Self-Aware AI is the functional class of AI for applications that would possess Super AI capabilities. Like Theory of Mind AI, Self-Aware AI is strictly theoretical. If ever achieved, it would have the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs.
Emotion AI, a form of Theory of Mind AI, is currently in development. Researchers hope it will be able to analyze voices, images and other kinds of data to recognize, simulate, monitor and respond appropriately to humans on an emotional level. To date, however, Emotion AI is unable to understand and respond to human feelings.
After exploring the different types and capabilities of AI, it becomes clear that current AI systems still do not truly “think” for themselves in the human sense.
They do not have self-awareness, emotions, or independent intentions, yet they are able to generate answers, recommendations, and creative outputs that often feel intelligent.
This apparent “thinking” comes from the way AI is designed.
Let’s find out how AI “thinks”.
Can an AI think?
Recall the last time a friend asked you for a restaurant recommendation. Immediately, you drew on memories of past meals, knowledge of your friend’s taste, and a list of places in the city, and made a quick judgment about which spot would delight them.
AI doesn’t think the way humans do. Instead of relying on feelings or personal experiences, computers process large amounts of information using mathematical rules and algorithms. While AI systems don’t possess consciousness or emotions, they can behave in ways that appear intelligent.
Learning how artificial intelligence works helps to understand how humans and computers process information. It’s helpful to examine two key ideas: Thinking and Intelligence.
How AI “thinks”
Thinking is the process of using information to form ideas, solve problems, or make decisions; it is how you make sense of the world. Humans think by combining past experiences, emotions, and knowledge to understand situations and make choices.
AI systems also “think” in a more limited way. They collect data, analyze it, identify patterns, and select an output. Now let’s see how those thinking steps are applied to AI systems.
Step 1: Input recognition
AI systems begin by receiving input, which they can gather from various sources, including sensors, cameras, websites, and databases. This data can include numbers, text, images, or sounds. The quality and variety of the data determine how well the AI can draw conclusions.
Step 2: Pattern analysis
In pattern analysis, AI identifies trends, relationships, or recurring features in the input. Using algorithms, AI identifies connections that might not be apparent to humans. This step enables AI to recognize what is normal, detect changes, and identify important details. Effective pattern analysis enables AI to interpret complex information and inform decision making.
Step 3: Decision logic
Decision logic is the process AI uses to determine what action to take, based on identified patterns, and guided by models and predefined rules. These rules involve probabilities and learned experience. Decision logic helps AI systems respond quickly and consistently to new situations.
Step 4: Output generation
Output generation occurs when the AI system produces a result, such as an insight, recommendation, or new content. This output is based on the data, patterns, and decisions made in earlier steps. The output can be a simple answer, a report, or even an image or piece of text. Clear output helps people make better decisions or take action quickly.
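The four steps above can be sketched as a toy end-to-end pipeline. Everything here is invented for illustration (the temperature readings, the anomaly threshold, the output messages); real systems replace each hand-written rule with a learned model, but the shape of the pipeline is the same.

```python
# A toy sketch of the four thinking steps: input recognition, pattern
# analysis, decision logic, output generation. All data and rules are
# invented for illustration.

def recognize_input(raw):
    """Step 1: input recognition -- parse raw strings into numbers."""
    return [float(x) for x in raw]

def analyze_patterns(readings):
    """Step 2: pattern analysis -- summarize what 'normal' looks like."""
    mean = sum(readings) / len(readings)
    return {"mean": mean, "latest": readings[-1]}

def decide(pattern, threshold=5.0):
    """Step 3: decision logic -- a predefined rule over the pattern."""
    return abs(pattern["latest"] - pattern["mean"]) > threshold

def generate_output(is_anomaly):
    """Step 4: output generation -- a human-readable result."""
    return "ALERT: unusual reading" if is_anomaly else "All readings normal"

raw_data = ["21.0", "21.5", "20.8", "35.2"]   # last reading spikes
pattern = analyze_patterns(recognize_input(raw_data))
print(generate_output(decide(pattern)))        # ALERT: unusual reading
```

Each stage only consumes the previous stage’s output, which is the point made above: the final answer is grounded entirely in the data, patterns, and decisions of the earlier steps.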
While thinking involves processing information, intelligence enables humans and machines to learn from that information and improve over time.
Intelligence is the ability to learn, understand, and solve problems using knowledge and reasoning.
Understanding your own thinking and intelligence can help you appreciate what AI does. As a human, you take in information, solve problems, and adjust when something changes. You use your past experiences to make decisions and learn from mistakes. This is how human thinking and intelligence work.
AI systems also process information and generate outputs to assist with decision making. But they do not think the way you do. They do not feel or reflect. They follow patterns in data and adjust based on what they are trained to recognize.
So, while AI today cannot originate its own thoughts or understand the world the way humans do, it can simulate intelligent behavior by processing information in highly advanced ways. This is why AI can chat, recognize images, drive cars, or recommend movies without actually “knowing” or “feeling” anything: it is executing learned patterns, not experiencing genuine thought.
Summary
In this second blog, we’ve taken our exploration of Artificial Intelligence (AI) a step further to understand Generative AI. We explored the different types of AI, learned how AI thinks, and discovered how it differs from human intelligence.
In the next blog of this series, we will learn about machine learning (ML), which is considered a subset of Artificial Intelligence (AI).
A message from WhiteScholars
Hey, we are team WhiteScholars. We wanted to take a moment to thank you for reading until the end and for being a part of this blog series.
Did you know that our team runs these publications as a volunteer effort to empower learners, share practical insights in emerging technologies, and create a growing community of knowledge seekers?
If you want to show some love, please take a moment to check us out on Instagram and LinkedIn. You can also explore more learning resources on our website, WhiteScholars.
Read the first part below.

FAQs
1. What are the main types of AI based on capabilities, and which one exists today?
AI by capabilities includes Artificial Narrow Intelligence (ANI/Weak AI, e.g., ChatGPT for text tasks), Artificial General Intelligence (AGI/Strong AI, theoretical human-level versatility), and Artificial Superintelligence (ASI, hypothetical superhuman cognition). Only ANI exists today.
2. How does Limited Memory AI differ from Reactive Machine AI, and what are examples?
Reactive Machine AI has no memory and reacts only to current data (e.g., IBM Deep Blue chess or Netflix recommendations). Limited Memory AI uses recent past data to improve decisions (e.g., ChatGPT for generation, self-driving cars, Siri).
3. Can AI truly think like humans, or does it just simulate intelligence?
No, AI doesn’t think like humans—it lacks consciousness, emotions, or self-awareness. It simulates intelligence via a 4-step process: input recognition, pattern analysis, decision logic, and output generation, relying on data patterns and algorithms.
4. What is Theory of Mind AI, and why is it still theoretical?
Theory of Mind AI (under AGI) would understand human emotions, motives, and social cues for empathetic interactions, like personalizing responses or interpreting art. It’s unrealized today, as current AI can’t infer intentions deeply.
5. What’s the difference between AI classified by capabilities vs. functionality?
Capabilities focus on scope: Narrow (task-specific, real), General (human-like, theoretical), Super (superior, theoretical). Functionality describes operation: Reactive (no memory), Limited Memory (uses recent data), Theory of Mind/Self-Aware (emotional awareness, hypothetical).
