What Is AI? | How It Works, Types & Examples
Artificial intelligence (AI) is any computer program designed to mimic human intelligence, enabling it to learn, solve problems, and understand language.
There are diverse types of AI applied across various industries. Delivery drones, medical detection scanners, QuillBot’s Grammar Checker, and the virtual assistant on your smartphone are all AIs.
What is artificial intelligence (AI)?
Artificial intelligence is the simulation of human intelligence in machines. What this means can vary from one AI to another in terms of capabilities and functions.
For example, both ChatGPT and a robot vacuum are AIs, but how they perceive and interact with the world—and the problems they solve—are different.
While we may think of artificial intelligence as specific models, it’s really a “catchall” for the various technologies that perform tasks in ways that mimic humans.
Types of AI
The four main types of AI differ based on their programming and potential.
Type | How it works | Example |
---|---|---|
Reactive AI | Performs a specific task but has no memory and cannot learn from past experiences | A chess-playing program |
Limited memory AI | Can learn from past experiences and improve performance over time by using memory | ChatGPT, which learns from answering users’ prompts |
Theory of mind AI | Would understand emotions, beliefs, and intentions, enabling it to interact socially like humans | Currently theoretical; no complete examples exist |
Self-aware AI | Would have consciousness, self-awareness, and the ability to understand and reason about the world independently, like a human | Currently hypothetical |
Currently, only reactive AI and limited memory AI exist. Researchers are working on theory of mind AI, though it remains in the early stages of development.
While self-aware AI is a trope in science fiction, in reality it remains speculative. Human consciousness is not fully understood, which makes it difficult to replicate in machines.
Strong AI vs weak AI
Another way to categorize artificial intelligence is to distinguish between strong AI and weak AI.
Weak AI (also called “narrow AI”) is artificial intelligence designed to perform specific tasks or solve particular problems. These systems cannot transfer what they learn to unrelated domains; weak AI covers both the reactive and limited memory types from the table above. Some examples of weak AI include:
- Virtual assistants (Alexa, Siri, or Google Assistant)
- Language translation tools (QuillBot Translator)
- Recommendation systems on streaming platforms (Netflix or Spotify)
- Autonomous vehicles (Teslas and Waymos)
- Face recognition systems (Facebook and Apple Photos)
Strong AI (also called “artificial general intelligence”) is a theorized artificial intelligence that has the cognitive abilities of a human. Strong AI would be able to learn from its experiences and transfer its knowledge from one domain to another to handle diverse tasks.
Strong AI corresponds to self-aware AI. There are no real examples, but a few examples from fiction are Skynet (The Terminator), HAL 9000 (2001: A Space Odyssey), and Ava (Ex Machina).
How does AI work?
To understand how AI works, it’s helpful to examine its subsets, the processes and technologies that drive artificial intelligence.
Subsets are another way to categorize AI, but instead of thinking of them as separate pillars, think of them as nesting within each other.
Machine learning
Machine learning uses algorithms that enable computers to learn from data and improve over time without explicit programming. Humans select the algorithm based on the type of problem they want the machine to solve.
They then feed it large amounts of data, and the AI learns from the patterns in that data. This data can be labeled or unlabeled. Labeled data includes target labels to guide the AI; unlabeled data doesn’t, so the AI has to map the data on its own.
For example, consider an AI that sorts product reviews by sentiment. If it’s working with labeled data, it will receive reviews tagged “positive” or “negative.” If it’s working with unlabeled data, it will have to rely on other contextual clues to decide how to classify each review (see the code sketch after the list below).
A few types of machine learning are:
- Supervised learning, in which AI learns from labeled data
- Semi-supervised learning, in which AI learns from a small amount of labeled data and a large amount of unlabeled data
- Unsupervised learning, in which AI finds patterns in data without labeled examples
- Reinforcement learning, in which AI learns by interacting with its environment and receiving feedback
- Transfer learning, in which AI transfers knowledge gained from one dataset or task to improve performance on another
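To make the supervised learning scenario above concrete, here is a minimal sketch in Python using the scikit-learn library. The reviews and labels are invented for illustration, and a real system would train on far more data.

```python
# A minimal sketch of supervised learning on labeled review data using scikit-learn.
# The reviews and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled data: each review comes with a target label that guides the AI.
reviews = [
    "Great product, works perfectly",
    "Terrible quality, broke after a day",
    "Absolutely love it",
    "Waste of money",
]
labels = ["positive", "negative", "positive", "negative"]

# A human picks the algorithm (here, logistic regression over word counts);
# the model then learns the patterns that separate the two labels.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# The trained model can classify a review it has never seen before.
print(model.predict(["This was a great purchase"]))  # e.g., ['positive']
```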
Deep learning
Deep learning is a subset of machine learning that teaches AI to learn and make decisions by mimicking how the human brain works.
It uses structures called neural networks, which are made up of layers of interconnected nodes (like neurons). Each neural layer processes data, passing refined information to the next layer, until the network produces a result.
Unlike traditional AI or machine learning, deep learning doesn’t need humans to manually define rules or extract features from data. Instead, it automatically learns patterns and relationships from raw data, like photos, audio, or text.
Because it requires much less manual work from humans, deep learning allows for tremendous scalability.
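As a rough illustration of layers passing information forward, here is a minimal sketch of a small neural network using the PyTorch library. The layer sizes are arbitrary, and training (adjusting the network from data) is omitted.

```python
# A minimal sketch of a neural network built from layers of interconnected nodes,
# using PyTorch. Layer sizes are arbitrary, and training is omitted for brevity.
import torch
import torch.nn as nn

# Each layer transforms its input and passes the refined result to the next layer.
network = nn.Sequential(
    nn.Linear(in_features=8, out_features=16),   # input layer -> first hidden layer
    nn.ReLU(),                                   # non-linearity between layers
    nn.Linear(in_features=16, out_features=16),  # second hidden layer
    nn.ReLU(),
    nn.Linear(in_features=16, out_features=1),   # output layer produces the result
)

# Feed one example with 8 raw input features through the network.
example = torch.randn(1, 8)
print(network(example))  # a single output value, e.g., tensor([[0.1234]])
```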
Generative AI
Generative AI is what most people think of as AI nowadays; it generates complex, original content based on prompts. ChatGPT, DALL-E 3, and Gemini are examples of generative AI.
Generative AI is a specific application of deep learning that can produce outputs like text, photos, video, audio, and even code. When dealing with language, generative AI also incorporates natural language processing (NLP).
NLP allows computers to analyze elements of human language, like syntax (grammatical units like nouns, verbs, direct objects, etc.) and semantics (meaning). Once it’s trained, generative AI uses the patterns learned from training data to predict the answer to a user’s prompt.
Prompt: I asked Paraphraser to rewrite the above paragraph about NLP.
Output: Computers can now evaluate aspects of human language, such as syntax (grammatical components like nouns, verbs, direct objects, etc.) and semantics (meaning), thanks to natural language processing (NLP). After training, generative AI predicts the response to a user’s request by using the patterns it discovered from training data.
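For a hands-on sense of how a trained language model predicts a continuation from learned patterns, here is a minimal sketch using the Hugging Face transformers library with the small, freely available GPT-2 model (chosen only as an example; it is far less capable than ChatGPT or Gemini).

```python
# A minimal sketch of next-token prediction with a small pretrained language model.
# GPT-2 is used here only because it is small and freely available; it is far less
# capable than the large models behind tools like ChatGPT or Gemini.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Natural language processing allows computers to"
result = generator(prompt, max_new_tokens=20)

# The model continues the prompt using patterns learned from its training data.
print(result[0]["generated_text"])
```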
Generative AI is rapidly evolving, and its widespread use has sparked discussions about how people integrate AI-generated content into their work. This has also given rise to a slew of detection tools, like QuillBot’s AI Detector, and optimization tools, like QuillBot’s AI Humanizer.
Robotics
Robotics blends hardware (physical components) with software (AI algorithms) to solve real-world problems in fields like industry, transportation, agriculture, exploration, and more.
Robotics doesn’t nest within the other subsets of AI. Instead, machine learning, deep learning, and generative AI can enhance how robots adapt, learn, and take on increasingly complex tasks.
For example, the combination of physical sensors and deep learning models allows robots to “see,” understand their surroundings, identify obstacles, and recognize objects for manipulation.
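As one possible illustration, the sketch below runs a single camera frame through a pretrained object detector from the torchvision library. The file name frame.jpg is a placeholder; a real robot would process live sensor data and act on the detections.

```python
# A rough sketch of how a deep learning model can let a robot "see": a pretrained
# object detector labels objects in a camera frame. "frame.jpg" is a placeholder;
# a real robot would process live sensor data and act on the detections.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("frame.jpg")  # one frame from the robot's camera
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Each detection includes a class label and a confidence score that downstream
# code could use to avoid obstacles or pick up objects.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        print(weights.meta["categories"][int(label)], float(score))
```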
What is AI used for?
AI is used to solve problems across many different industries. Most of us interact with it on a daily basis, even if we don’t realize it.
| Industry | AI example | How it works |
|---|---|---|
| Transportation | Self-driving cars (e.g., Waymo) | Uses sensors and deep learning to perceive the environment, make decisions, and drive autonomously |
| | Software to connect taxis and riders (e.g., Uber) | Matches riders with nearby drivers based on location and traffic data for faster service |
| | GPS systems (e.g., Google Maps) | Adjusts directions in real time to find the fastest and safest route |
| Healthcare | Medical image analysis | Analyzes images (X-rays, MRIs, etc.) to detect issues like tumors or fractures |
| | Virtual health assistants (e.g., Symptoma) | Provide personalized health information, schedule appointments, and offer initial advice based on reported symptoms |
| | Surgical robots | Help guide surgeons by providing real-time data and precision for complex procedures |
| Agriculture | Crop disease detection | Detects diseases or pests by analyzing images of crops for early intervention |
| | Autonomous tractors | Perform planting and harvesting tasks autonomously using GPS and sensors |
| | Precision farming | Looks at past data to help farmers make optimal decisions about planting, harvesting, watering, and fertilizing |
| Data and content | Content recommendation systems (e.g., Netflix) | Learns from user behavior to suggest personalized content |
| | Automated content generation (e.g., ChatGPT) | Generates content (text, photo, audio, video) after training on large datasets |
| | Business analytics (e.g., Google Analytics) | Analyzes business data to uncover trends, make forecasts, and optimize human decision-making |
| Shipping | Automated cargo handling | Loads, unloads, and sorts cargo using robots |
| | Inventory management | Tracks inventory and forecasts demand for efficient restocking and warehouse management |
| | Drone delivery | Uses GPS, sensors, and machine learning to autonomously navigate and deliver packages |
Benefits of AI
Artificial intelligence offers many benefits, like:
- Faster operations and improved productivity. AI automates repetitive tasks, saving time and reducing human error.
- Better business decisions. Companies can make better decisions thanks to AI’s ability to analyze vast amounts of data and present it in a way that’s easy for humans to consume.
- More engaging user experiences. Since AI is able to learn from users’ behavior, it can better personalize the content it shows them.
- Reduced maintenance and operational costs. Predictive maintenance helps prevent expensive equipment breakdowns across industries.
- Safer environments. When AI monitors systems and environments, it helps detect potential issues before they become critical. This can prevent accidents in manufacturing, transportation, healthcare, and more.
- Scalability without a proportional increase in human labor. Because AI can work with minimal human intervention and is available 24/7, it allows for scalability without extra strain on human teams.
Challenges of AI
AI provides benefits, but it poses certain challenges or risks too. A few of its challenges are:
- Data breaches or misuse of data. As in any digital environment, AI systems must protect sensitive data and comply with privacy regulations.
- Bias and fairness. AI models can inherit biases from training data, leading to unfair or discriminatory outcomes.
- Transparency. Many AI models, especially deep learning ones, are like “black boxes,” and it’s hard to understand how they make decisions.
- Job displacement. Automation driven by AI can lead to job losses, raising concerns about the future of work.
- New ethical questions. AI ethics is a developing field that seeks to balance technological advancements with ethical considerations. Should robots be performing surgery? Is technology responsible for misinformation? AI ethics explores questions like these.
History of AI
The idea of intelligent machines goes back millennia. However, it wasn’t until the 20th century that true development of AI began:
- 1950. Alan Turing introduces the Turing Test, used to identify intelligent machines.
- 1956. John McCarthy coins the term artificial intelligence. Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever AI program.
- 1965. Joseph Weizenbaum creates ELIZA, an early natural language processing program that simulates conversation.
- 1974-1980. AI slows down during the “First AI Winter” due to limited computer power, decreases in funding, and other factors.
- 1986. Geoffrey Hinton and others reinvigorate AI with the backpropagation algorithm.
- 1997. IBM’s Deep Blue defeats world chess champion Garry Kasparov.
- 2002. Roomba becomes the first widely adopted AI-driven consumer product.
- 2011. IBM’s Watson beats human champions at Jeopardy!
- 2012. The AlexNet neural network marks a breakthrough for deep learning by winning the ImageNet image recognition competition.
- 2016. DeepMind’s AlphaGo program beats world champion Lee Sedol at Go, a strategy game. Google had acquired DeepMind in 2014, reportedly for more than $400 million.
- 2020. AI experiences a boom thanks to models like GPT-3 and DALL-E, which show off major advancements in NLP and creative uses of AI.
- 2023. Generative AI is widely adopted across many industries.
Frequently asked questions about AI
- What is an AI model?
An AI model is a computer program designed to perform specific tasks while mimicking human intelligence.
AI models use algorithms to process input data and make predictions based on this data, and they work in areas like speech recognition, language interpretation, image analysis, or decision-making.
For example, QuillBot’s Grammar Checker is an AI model designed to understand, find, and correct errors in human writing.
This field is developing each day, so understanding what AI is and how it works is more important than ever.
- What is artificial general intelligence?
Before understanding what artificial general intelligence is, you need to understand what AI is in the general sense.
AI refers to computer programs designed to perform tasks that would otherwise require human intelligence, though each program is usually limited to specific tasks.
For example, QuillBot’s free Paraphraser can rewrite sentences, but it cannot perform calculations or guide a vehicle.
Artificial general intelligence (AGI) is a theorized AI that can perform any human task, with cognitive abilities and adaptability to apply knowledge across diverse areas.
- What is AI governance?
AI governance refers to the frameworks, policies, and practices intended to ensure that AI is developed, deployed, and used in ethical and transparent ways. It is closely related to AI ethics.
AI governance differs by jurisdiction. If you are studying or working in this field, it’s best to consult with experts in your jurisdiction for more detailed information.
Specific institutions and organizations may also have their own form of AI governance. For example, many universities now have published policies about how students can and cannot use generative AI and how teachers should work with AI detectors.
- What are AI hallucinations?
AI hallucinations are errors where AI generates false or nonsensical information. They’re most common in large language models (LLMs), image generators, and other generative AI models.
These hallucinations may appear confident and correct at first, but closer inspection can reveal inaccuracies.
For example, an AI-generated image may look good overall, but details like hands or text may not be accurately generated. Or if you ask AI to provide information and cite sources, it may accurately relay the information, but cite a source where that information doesn’t appear.
Understanding what AI is helps explain why AI hallucinates. AIs are computer programs trained on datasets. Biases, gaps, inaccuracies, and ambiguities in this data can cause hallucinations.
- What are AI agents?
AI agents are software programs that can perceive their environment. They can gather data, analyze it, and take action, working autonomously to achieve specific tasks.
A few examples of AI agents are:
- Self-guided vehicles
- Delivery drones
- Virtual assistants like Siri or Alexa
- Customer service chatbots
- An AI chess opponent
AI agents are one of the many types of AI (generative AI, analytics programs, AI detectors, etc.) and are quite common in modern society given their ability to help humans with everyday tasks.
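To make the perceive-analyze-act loop described above concrete, here is a minimal sketch in Python. The thermostat scenario, simulated sensor reading, and temperature thresholds are invented purely for illustration.

```python
# A minimal sketch of the perceive-analyze-act loop behind AI agents. The
# thermostat scenario, simulated sensor, and thresholds are invented for
# illustration; real agents use far more sophisticated models.
import random
import time


def perceive() -> float:
    """Gather data from the environment (here, a simulated temperature sensor)."""
    return random.uniform(15.0, 30.0)


def decide(temperature: float) -> str:
    """Analyze the data and choose an action."""
    if temperature < 20.0:
        return "turn heating on"
    if temperature > 25.0:
        return "turn cooling on"
    return "do nothing"


def act(action: str) -> None:
    """Carry out the chosen action in the environment."""
    print(f"Agent action: {action}")


# The agent repeats the loop autonomously, without human input.
for _ in range(3):
    act(decide(perceive()))
    time.sleep(1)
```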