AI & IoT Glossary
What is Strong AI?
Strong artificial intelligence (AI) is a theoretical form of AI that describes a particular AI development mindset: machines should be as intelligent as humans. They should thus be self-aware and able to independently solve problems, learn, and plan.
Strong AI is also known as “artificial general intelligence (AGI)” or “general AI.”
What is Adversarial Search?
Adversarial search is a method applied to situations where you are planning while another actor prepares against you; your plans, therefore, could be affected by your opponent’s actions. In artificial intelligence (AI), the “search” in adversarial search most often refers to exploring possible moves and countermoves in games.
What is an AI Accelerator?
An artificial intelligence (AI) accelerator is computer hardware that specifically handles AI requirements. It speeds up processes such as artificial neural network (ANN) tasks, machine learning (ML), and machine vision.
Back in the 1980s, graphics accelerators made PCs faster and more efficient by freeing up the main processor and handling all the graphics requirements. Similarly, AI accelerators free up the main processor from having to deal with complex AI chores that can be resource-intensive.
What is Ambient Intelligence?
Ambient intelligence, often shortened to “AmI,” is an emerging technology that aims to bring pervasive computing, artificial intelligence (AI), sensors, and sensor networks to our everyday lives. This technology is human-centric, as it is highly responsive to the presence of humans within its environment.
Ambient intelligence is a sophisticated AI system that detects and reacts to human presence. Concrete examples of ambient intelligence are voice assistants such as Siri and Alexa, which respond to you once they detect your voice.
What is Applied Machine Learning?
Applied machine learning refers to the application of machine learning (ML) to address different data-related problems. Its connotation is similar to applied mathematics—pure math involves many theories, which are applied and put to practical use in applied mathematics. As a result, applied mathematics helps solve real-world problems in engineering, biology, business, and many other fields.
Similarly, applied machine learning takes one's understanding of ML concepts and theories and uses these to solve problems. For example, a fundamental concept in ML is supervised learning, an ML task that requires labeled data and follows a path toward obtaining the desired output. The concept is pretty straightforward, but how can this address real-world problems?
This ML concept can be applied in computing credit scores and automating credit approval. You feed the machine with credit-related data, such as credit history and limit utilization (i.e., labeled data), and teach it to compute the credit score of a particular person (i.e., desired output). If the credit score reaches the minimum acceptable level, the credit application is approved.
What is Artificial General Intelligence (AGI)?
Artificial general intelligence (AGI) is a machine’s capability to think, understand, reason about, and learn any intellectual task that a human being can, as well as to plan, communicate in natural language, and represent knowledge.
This is what we imagine when we think of artificial intelligence, because of the almost human-like qualities of robots we see in sci-fi movies. But in reality, we are a long way off from attaining true artificial general intelligence.
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is the branch of computer science that deals with the creation of machines or systems capable of performing functions that would normally require human intelligence. These machines interact with the environment and behave according to the information they receive about it without any human intervention.
AI has been romanticized in science fiction literature and Hollywood movies. They have given us images of computers and robots that can talk, think, and act like human beings. The real state of AI is nowhere near this level yet. However, we are now seeing more and more machines that can play chess with human grandmasters, engage in intelligent conversations, and even drive cars.
What is Artificial Narrow Intelligence (ANI)?
Artificial narrow intelligence (ANI) is a type of artificial intelligence (AI) that tackles a specific subset of tasks. ANI is often considered a “weak” form of AI. It pulls information from a particular data set, and its programming is limited to performing a single task, such as playing chess or crawling web pages for raw data. ANIs, like other AI systems, can perform tasks in real time despite not having any functions outside their initial programming.
What is Augmented Intelligence?
Augmented intelligence is a partnership model between people and artificial intelligence (AI) systems that aims to improve cognitive performance and decision-making and to enable new ways of learning.
Augmented intelligence is also known as “intelligence amplification” since its primary goal is to enhance people’s knowledge and skills by working with machines. In essence, it aims to elevate human intelligence so the user can complete tasks better and smarter.
What is Backward Chaining?
Backward chaining is an inference method that works backward from a successful result to infer the chain of events, conditions, or decisions that led to that outcome. It’s like retracing your family tree to explain why you look the way you do or exhibit the characteristics that distinguish you as a person.
Backward chaining uses deductive reasoning, a method of arriving at a conclusion after establishing premises that are assumed to be true. For example, if all persons are created equal and you are a person, then you were created equal.
Backward chaining is used in artificial intelligence applications for logic programming, reasoning, and behavior analysis. It’s part of a system that aims to teach robots how to infer and make logical conclusions.
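The idea can be sketched in a few lines of Python. The fact and rule below are made-up stand-ins echoing the “created equal” example; a real system would hold far more of both:

```python
# Minimal backward-chaining sketch (hypothetical facts and rules).
# A goal is proven if it is a known fact, or if some rule concludes it
# and all of that rule's premises can themselves be proven.

FACTS = {"is_person"}
RULES = [
    # (premises, conclusion)
    (("is_person",), "created_equal"),
]

def prove(goal, facts=FACTS, rules=RULES):
    """Work backward from the goal to the facts that support it."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("created_equal"))  # True: "is_person" is a fact, and a rule links it
```

`prove` first checks whether the goal is already a known fact; if not, it looks for a rule whose conclusion matches the goal and recursively tries to prove that rule’s premises.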
What is a Beacon?
A beacon is a wireless sensor that uses radio signals to communicate with devices such as smartphones, tablets, audio speakers and keyboards. A device with Bluetooth capability can detect the beacon and will automatically attempt to connect with it.
In the real world, it's like a bright light that suddenly appears in the distance while you wander lost in the dark wilderness. As soon as you see the light, you instinctively walk toward it.
What is a Behavior Tree?
A behavior tree is a mathematical model that details the plan of execution of a predefined set of tasks. If you have a finite set of tasks, then a behavior tree is where you specify how the program switches from one task to another.
Behavior trees were initially used to develop video games but are now widely used in robotics, computer science, and artificial intelligence (AI). With behavior trees, complex tasks, such as catching Pac-Man, are made possible through a series of simple tasks. In “Pac-Man,” these tasks include determining the locations of pills and fruits.
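A behavior tree’s switching logic can be sketched with two classic node types: Sequence, which succeeds only if all of its children succeed in order, and Selector, which succeeds as soon as any child does. The ghost-catching task names below are illustrative:

```python
# Toy behavior-tree sketch. Sequence = "do all of these, in order";
# Selector = "try these until one works". Task names are made up.

def sequence(*children):
    return lambda state: all(child(state) for child in children)

def selector(*children):
    return lambda state: any(child(state) for child in children)

def condition(key):
    return lambda state: state.get(key, False)

# A ghost's plan for catching Pac-Man, as a tree of simple checks.
catch_pacman = sequence(
    condition("pacman_located"),
    selector(condition("path_clear"), condition("shortcut_known")),
)

state = {"pacman_located": True, "path_clear": False, "shortcut_known": True}
print(catch_pacman(state))  # True: located, and a shortcut is known
```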
What is the Breadth-First Search Algorithm?
The breadth-first search (BFS) algorithm is a method that allows programs to search a tree or graph data structure level by level. Starting from a given source node, it visits all of that node’s immediate neighbors before moving any deeper, fully exhausting each level before proceeding to the next.
Because of this level-based order, BFS finds the shortest path, measured in number of edges, from the source to every other reachable node.
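The level-by-level search can be sketched in a few lines of Python; the graph below is a made-up example:

```python
from collections import deque

# BFS sketch on a small hand-made graph (adjacency lists are illustrative).
# Because the search expands one level at a time, the first path that
# reaches the goal is a shortest one.

GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def bfs_shortest_path(graph, start, goal):
    queue = deque([[start]])  # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

print(bfs_shortest_path(GRAPH, "A", "F"))  # ['A', 'B', 'D', 'F']
```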
What is a Chatbot?
A chatbot is a software application that uses artificial intelligence (AI) and natural language processing (NLP) to converse with customers, usually through a chat program. Such conversations can appear so convincingly real that you may not realize you are conversing with a machine and not with another person.
Chatbots can detect customers’ concerns and direct them to relevant links or people who can resolve their issues quickly. In fact, when you chat online with customer service representatives for major brands, chances are you are being engaged by a chatbot.
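A toy keyword-matching bot illustrates the routing idea; real chatbots rely on NLP models rather than the hard-coded rules shown here:

```python
# Bare-bones chatbot sketch: match a keyword in the customer's message
# and return a canned response, or escalate to a human. All responses
# are illustrative.

RESPONSES = {
    "refund": "I can help with refunds. Here is a link to our refund form.",
    "password": "To reset your password, click 'Forgot password' on the login page.",
    "hours": "Our support team is available 24/7.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human representative."

print(reply("How do I get a refund?"))
```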
What is Cloud Robotics?
Cloud robotics harnesses the power of the cloud (e.g., cloud computing, cloud storage, and other cloud-based technologies) for robotics. Cloud-connected robots can use the powerful computation, storage, and communication resources of data centers to process and share information with other robots, machines, smart objects, humans, and so on. Humans can also give them tasks remotely.
Apart from faster processing, cloud robotics also allows users to reduce costs. As such, cloud robotics makes it possible to build lightweight, low-cost, smarter robots with an intelligent cloud-hosted “brain.”
What is Cobot?
Not all robots are programmed to perform tasks by themselves. Some of them work alongside humans. They’re called “cobots,” short for “collaborative robots.”
Here’s how they work.
An ideal team consists of members with different but complementary skills. Each member takes care of the task that corresponds to the abilities he or she possesses. A cobot could take care of the repetitive, monotonous chores that people would rather not do. It either follows human instructions or responds to human actions.
What is Cognitive Automation?
Cognitive automation occurs when a piece of software brings intelligence to information-intensive processes. It is closely related to robotic process automation (RPA) and fuses artificial intelligence (AI) and cognitive computing. Using AI, the process extends and improves actions typically associated with RPA, saving users money and satisfying customers while accurately completing complex business processes that use unstructured information.
RPA is a means to automate business processes using AI or digital workers. Cognitive computing, meanwhile, allows these workers to process signals or inputs.
What is Cognitive Robotic Process Automation?
Cognitive robotic process automation or cognitive RPA is a subset of robotic process automation (RPA) that uses artificial intelligence (AI) technologies, including machine learning (ML), optical character recognition (OCR), and text analytics, to automate work processes.
In essence, it is a highly advanced form of RPA wherein robotic or automated processes highly mimic human activities. Most of the functions carried out by cognitive RPA systems focus on learning (gathering information), reasoning (forming contextual conclusions), and self-correction (analyzing successes and failures).
Compared with traditional RPA, which requires structured data, cognitive RPA can automate processes given unstructured information, such as emails, voice recordings, letters, and scanned documents. It can process data without human intervention, taking complex, non-rule-based tasks off human workers’ hands.
What is Computer Vision?
Computer vision is the science of developing software that can understand images in the same way that humans and animals can. Machines equipped with it will be able to categorize shapes, colors, and textures into meaningful groups.
A camera with computer vision will be able to group shapes into meaningful units and recognize what they represent: a butterfly, a cloud, or a house, for example.
What is a Connected Device?
If you link a computer, tablet, smartphone, sensor, TV or any other electronic gadget to the Internet so that it can communicate with other appliances that are also attached to the network, you have what is called a connected device.
Imagine your car automatically transacting with a gasoline pump, or your refrigerator letting the supermarket's online server know that you're running low on soda. These are some of the examples of connected devices.
What is Context Awareness?
Context awareness is the ability of computing devices to sense and react to their environment. A straightforward example would be how your mobile phone changes its screen orientation depending on how you tilt it. It automatically switches from portrait (taller than wide) to landscape (wider than tall) orientation if you turn the device 90 degrees to the left or right.
A more complex example would be how your mobile phone adjusts the current time and date depending on your location. If you travel from Malaysia to the U.S., for example, the time and date shown instantly change when you turn it back on once the plane lands.
What is a Convolutional Neural Network?
A convolutional neural network (CNN) is a deep learning algorithm used for image recognition, processing, and classification tasks, such as identifying objects and detecting faces. It consists of neurons that receive inputs, assign importance to them, and cluster them according to similarities.
A CNN, also called a “ConvNet,” can look at an object’s surroundings to come up with accurate predictions. Instead of looking at the whole image to determine features, it will look at layers or smaller portions.
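The “smaller portions” idea rests on the convolution operation: a small filter slides across the image and responds to local patterns. A minimal plain-Python sketch (no padding, stride 1, made-up values):

```python
# 2-D convolution sketch: the kernel is applied to every local patch of
# the image, producing a feature map that highlights matching patterns.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(total)
        output.append(row)
    return output

# A vertical-edge filter responding to the dark-to-bright boundary
# in a tiny 4x4 "image."
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1], [-1, 1]]
print(convolve2d(image, kernel))  # peaks in the middle column, at the edge
```

In a real CNN, many such filters are learned from data rather than hand-written, and their outputs are stacked into successive layers.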
What is Curiosity Artificial Intelligence?
Curiosity artificial intelligence (AI) mimics the natural curiosity of humans that allows us to learn things on our own. The goal is to develop curiosity through machine learning (ML) algorithms so AI systems can seek solutions to new problems independently.
For instance, robots can be programmed to explore their environment and learn without being told precisely what to do. This AI approach can help machines learn faster. While most ML approaches are too dependent on human intervention and instruction, curiosity AI trains systems to investigate unfamiliar data or events.
Curiosity artificial intelligence is also known as “curiosity AI,” “curious algorithm,” “algorithm curiosity,” and “artificial curiosity.”
What is Decision Intelligence?
Decision intelligence (DI) combines data science with different scientific theories to help people make the best possible decisions. It aims to provide actionable insights by translating raw data into formats that decision-makers can easily understand.
DI aims to identify trends and predict the outcomes of different decision options. A typical example of DI is a recommendation engine. When you go to Amazon, you will likely see a list of suggested products the vendor derived after analyzing your past purchases, viewed products, and search history.
What is Deep Learning?
People are capable of gaining new knowledge from experience. For example, when we face a repetitive task, we may choose to adjust our actions to improve the results from the last time we had to do the same thing.
Deep learning tries to produce machines that can do this. It is a machine learning (ML) technique that teaches computers to mimic what the human brain does. They are given huge amounts of information to analyze and are expected to build on what they learned by repeating tasks over and over. The larger the amount of data and the deeper the neural network, the more functions can be learned.
Deep learning aims to produce true thinking AI, much like the iconic HAL computer in "2001: A Space Odyssey".
What is an Edge Device?
An edge device is a type of hardware located at the periphery or “edge” of a network. It can perform specific tasks, such as data processing, routing, storage, filtering, and communication. These devices are typically near the data source.
Examples of edge devices include routers, gateways, sensors, and cameras. They are used in Internet of Things (IoT) applications, where they collect and transmit data to central servers or cloud-based services for further analysis and processing.
What is Electric Field Sensing?
Electric field sensing refers to using a sensory system that utilizes an electric field to detect nearby objects, provided they are at least slightly conductive. One such sensory system is the People Detector, a device that senses the presence of moving and stationary objects near solid materials. It detects changes at ranges from a few centimeters to 4 meters.
Electric field sensing is one of the technologies used by smart cars to detect roads and nearby cars, allowing them to avoid collisions and straying from the right path.
What is Embedded Vision?
Embedded vision is the practice of integrating computer vision directly into devices to analyze images or videos. In conventional methods, the camera and computer are two separate entities that often take up space and cost more to build. Vision systems often rely on a computer with an interface card to import images from the camera, plus image analysis software. All these systems can be bulky and complicated to manage.
Recent advancements in technology, however, have allowed cameras and processing boards to become smaller and more powerful. As such, they are easier to integrate into systems without considerable cost increases.
Embedded vision essentially combines embedded or microprocessor-based systems that perform dedicated functionalities inside a network and computer vision devices or those that can analyze images. This combination allows machines to carry out intelligent tasks.
What is Few-Shot Learning?
Few-shot learning is a machine learning (ML) technique that enables systems to learn a task or concept through examples. The technique reduces the amount of data needed to train an ML model. Instead of introducing vast volumes of training data, few-shot learning requires only a few examples for the model to learn.
Traditional ML requires you to input thousands of grammar rules when teaching a system to recognize grammatical errors. But with few-shot learning, you could train the model using only a handful of sentences with grammatical errors as examples. It will then apply what it learned from those examples to recognize grammatical errors in sentences it has never seen before.
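One simple flavor of few-shot learning is a nearest-centroid classifier, which generalizes from just a couple of labeled examples per class. The feature vectors and labels below are toy stand-ins for real embeddings:

```python
# Few-shot classification sketch: average the few examples of each class
# into a centroid, then assign a new example to the nearest centroid.

def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def classify(example, support_set):
    """support_set maps each label to a handful of example feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(pts) for label, pts in support_set.items()}
    return min(centroids, key=lambda label: dist(example, centroids[label]))

support = {
    "cat": [[1.0, 0.9], [0.9, 1.1]],      # two examples of "cat"
    "dog": [[-1.0, -0.8], [-1.1, -1.0]],  # two examples of "dog"
}
print(classify([0.8, 1.0], support))  # "cat": closest centroid
```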
What is Forward Chaining?
Forward chaining begins with a set of known facts and applies inference rules, working “forward” until it reaches a goal. As such, it simplifies a complex task by dividing it into several simpler tasks that a computer may carry out either synchronously or sequentially, much like a chain of processes. The data-driven, logical process of forward chaining is thus commonly implemented in production rule and expert systems.
In artificial intelligence (AI), forward chaining helps a program come up with a solution by analyzing known data and aligning it with predetermined parameters. An example would be when an end-user uses an app to determine what kind of insect he/she is looking at. The app begins by determining how many legs the insect has, what its color is, and so on until it gains enough inputs to come up with an answer.
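The insect-identification loop can be sketched as a tiny forward-chaining engine; the rule and fact names below are hypothetical:

```python
# Forward-chaining sketch: starting from observed facts, fire every rule
# whose premises are satisfied until no new facts can be derived.

RULES = [
    # (premises, conclusion) -- illustrative, not a real taxonomy
    ({"legs_6"}, "is_insect"),
    ({"is_insect", "red_with_black_spots"}, "is_ladybug"),
    ({"is_insect", "makes_honey"}, "is_bee"),
]

def forward_chain(facts, rules=RULES):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"legs_6", "red_with_black_spots"})
print("is_ladybug" in derived)  # True
```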
What is a Generative Adversarial Network (GAN)?
A generative adversarial network (GAN) is a computing procedure that uses two neural networks, which are considered “adversaries” of each other. These deep neural networks produce new, fabricated data that can easily mimic real information. In real life, GANs help in image, video, and voice generation. They are the very tools used to produce deepfakes.
You can think of a GAN’s two components, the generator and the discriminator, as a counterfeiter and a cop: the counterfeiter mixes fake money with real money for purchases, while the cop tries to detect which of the notes are counterfeit.
What is Geofencing?
Geofencing is a technology that defines boundaries around a particular area.
In the real world, fences are physical barriers that mark the extent of someone's private property, or that keep people and animals out. The fences referred to here are virtual boundaries established by Global Positioning Systems (GPS), electronic sensors, and the Internet.
Today the main use of geofencing is for companies to target customers with marketing promotions based on where they are located at the moment.
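A minimal sketch of a circular geofence check, using the haversine great-circle distance between two GPS coordinates; the store location and radius below are made up for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two lat/lon points in meters (haversine formula)."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Is the point within radius_m meters of the fence's center?"""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m

# Is the customer within 500 m of the store? (Coordinates are illustrative.)
store = (40.7580, -73.9855)
print(inside_geofence(40.7585, -73.9850, *store, radius_m=500))  # True
```

A marketing system could fire a promotion whenever `inside_geofence` flips from False to True for a customer’s device.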
What is Granular Computing?
Granular computing or GrC is an emerging model of information processing in which data is divided into information granules. An information granule is a collection of entities: results of numerical analysis and data arrangement, grouped according to their functional or physical characteristics, similarities, and differences.
Granular computing is not an algorithm per se but an approach that divides information into smaller pieces to see if they differ on a granular level. The relationships seen are then used to design machine learning (ML) and reasoning systems.
What is a Graph Neural Network?
A graph neural network is a deep learning method that analyzes and makes predictions based on data described by a graph. Deep learning is a machine learning (ML) technique that teaches computers to mimic the workings of a human brain. A graph in computer science, meanwhile, is a data structure that has two parts—vertices and edges. We’ll discuss these components in greater detail in another section.
Graphs in graph neural networks, however, differ from those we see and learn about in math classes. They aren’t limited to pie charts, bar graphs, and others like them. You’ll see why later on.
What is Hyperautomation?
Hyperautomation is the process of using an ecosystem of advanced automation technologies like artificial intelligence (AI), machine learning (ML), natural language processing (NLP), process mining, and robotic process automation (RPA). It aims to augment human knowledge by automating business processes, allowing enterprises to benefit from efficient decision-making and production.
Hyperautomation is not limited to task automation but also involves moving to the next frontier in digital transformation.
What is Independent Component Analysis?
Independent component analysis is a computational method that separates a multivariate signal into its additive components. Sounds complicated, doesn’t it? Let’s break the definition down, then.
In computing, a multivariate signal is simply a signal that contains several distinguishable components. You can think of it as a complete song, with music, lyrics, and even additional sound effects and backup vocals. If you break the piece down into its parts, you are applying independent component analysis, so to speak, by separating each instrument, singer, and sound-making object from one another. In this example, the song is the multivariate signal, while the instruments, singers, and objects are its additive components.
What is Industry 4.0?
Industry 4.0 has been described as the fourth wave of the Industrial Revolution that changed the course of human civilization.
The first wave was the birth of mechanization, which accelerated industrial, economic, and human activity.
The second wave was the advent of electricity, which powered mass production.
The third wave was the era of electronics and information technology (IT) that automated production.
Finally, the fourth wave—Industry 4.0—saw the emergence of the Internet and digitalization, which enabled the interconnection and interaction of technologies.
What is an Intelligent Agent?
An intelligent agent (IA) is a computer software system that’s capable of acting independently to achieve certain goals and responding to people or events that are happening around it. It is programmed using the field of artificial intelligence (AI) called “machine learning (ML)” and equipped with sensors that allow it to observe and adapt to situations.
IAs are utilized in areas that require interacting with people because they are capable of demonstrating basic social skills. Today’s examples of IAs include Siri and Alexa. These can understand a request and act on their own to look for the information that’s being asked.
An IA can be likened to a cab driver that measures his performance based on a passenger’s safety and comfort, ability to reach the desired destination on time, and capacity to earn. He considers his environment, including the quality of the roads he takes and the traffic. He uses his car’s built-in features (e.g., brakes, accelerator, signal lights, etc.) or, in IA terms, actuators and sensors (e.g., camera, speedometer, odometer, etc.) to take the best course of action to reach his goal.
What is Intelligent Character Recognition (ICR)?
Intelligent character recognition (ICR) is an extension of optical character recognition (OCR). While OCR identifies machine-printed characters, ICR lets computers recognize handwriting and a wider range of font styles, improving their text recognition accuracy.
Both technologies (OCR and ICR), in the simplest terms, allow digital devices to read text.
What is the Internet of Behavior?
The Internet of Behavior (IoB) simply refers to the process of collecting all kinds of data (business intelligence [BI], big data, etc.) that shows essential information on clients’ behaviors, interests, and preferences.
The IoB allows users to understand the data from people’s online activities. It aims to interpret data and use the knowledge gained to create and promote new products and services based on human psychology.
What is the Internet of Things (IoT)?
Wouldn’t it be convenient to tell your air-conditioning system to activate at a certain temperature before you head home from work? Or have your front door automatically unlock when you need it to open? The scenarios above are some examples of the Internet of Things (IoT) at work.
The Internet of Things is a system that connects any electronic device, gadget, machine, microchip, sensor, appliance, or building—just about anything, in fact—to the Internet. As a result, all these things can collect information and share it with each other. Such interconnection of devices and machines allows people to monitor, control, and improve their overall environment.
What is Knowledge Engineering?
Knowledge engineering is a field of artificial intelligence (AI) that aims to come up with data rules that would allow machines to replicate the thought processes of human experts. It has to do with developing knowledge-based systems such as computer programs that contain massive amounts of data about rules and solutions applicable to real-life issues.
Knowledge engineering defines how a conclusion or decision was reached to solve a complex problem that usually requires a human expert to accomplish.
What is Knowledge Representation?
How would you describe all the knowledge in the world, and what would you do with it? That, in a nutshell, is the concern of knowledge representation (KR), a subfield of study within artificial intelligence (AI). It’s a process that takes all the concepts in a domain, establishes how these concepts relate to each other, and defines the rules that control how they behave.
To illustrate, think about the real world. It’s large and infinitely complex. So we can reduce it to an abstract model—for instance, a map that captures only aspects of the world that are relevant to us, such as its geography. But the computer can’t understand a map, so we reduce this even further into a set of rules and statements that represents the map. In other words, KR represents information in a way that computers can understand.
What is a Knowledge-Based System?
A knowledge-based system (KBS) is an artificial intelligence (AI)-based system that uses information from various sources to generate new knowledge to help people make decisions. These systems have built-in problem-solving capabilities and rely extensively on data to provide accurate results.
KBSs are made up of two critical components—a knowledge base and an inference engine. The knowledge base contains all necessary data, while the inference engine tells the system how to process data. Most KBSs have user interfaces (UIs) to make it easy for users to send requests and interact with them.
What are the Three Laws of Robotics?
The laws of robotics are the ethical guidelines when it comes to building robots. You see, the process isn’t just about creating machines with cool functions. There are responsibilities that come with it.
The most famous set of robotics principles is by the science fiction writer Isaac Asimov. His “Three Laws of Robotics” declare:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except when such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What is Lemmatization?
Lemmatization is a linguistic term that means grouping together words with the same root, or lemma, but with different inflections or derived forms so they can be analyzed as a single item. The aim is to strip inflectional suffixes and prefixes to bring out the word’s dictionary form.
For example, to lemmatize the words “cats,” “cat’s,” and “cats’” means taking away the suffixes “s,” “’s,” and “s’” to bring out the root word “cat.” Lemmatization is used to train robots to speak and converse, making it important in the field of artificial intelligence (AI) known as “natural language processing (NLP)” or “natural language understanding.”
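A deliberately tiny lemmatizer handling just the “cat” example above; real NLP libraries use vocabularies and part-of-speech context rather than the bare suffix rules shown here:

```python
# Toy lemmatizer: strip possessive and plural suffixes to recover the
# dictionary form. Suffixes are checked longest-first so "cats'" is
# handled before the plain plural "s".

SUFFIXES = ["s'", "'s", "s"]

def lemmatize(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)]
    return word

print([lemmatize(w) for w in ["cats", "cat's", "cats'"]])  # ['cat', 'cat', 'cat']
```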
What is Machine Intelligence?
Machine intelligence is another name for the artificial intelligence demonstrated by machines such as robots. It’s the product of machine learning, deep learning, natural language processing, or natural language understanding.
Machine intelligence is now used in major industries such as customer service and manufacturing. Chatbots, or chatter robots, are helping handle customer queries from online shoppers, while cobots, or collaborative robots, are helping out in major factories. Several applications have also been developed for the healthcare industry, with some even assisting in delicate surgical procedures. Machine intelligence is expected to be used more widely as research into the technology continues.
What is Machine Learning (ML)?
There are no schools for them yet, but apparently, machines can be educated. The branch of artificial intelligence (AI) called “machine learning (ML)” aims to develop computer systems that learn and improve from experience without being explicitly programmed for every task.
During this process, machines are provided with vast amounts of data, which they analyze for patterns and then learn from using examples. Over time, the systems are able to automatically make their own decisions and adjust their actions accordingly.
What is Machine Teaching?
Machine teaching is a subfield of artificial intelligence (AI) that pertains to the process of obtaining knowledge directly from people instead of merely extracting knowledge from data. The idea is to provide contextualized data to AI systems to bring outputs relevant to their users.
Machine teaching can be considered the reverse of machine learning (ML). In machine teaching, the system acts as a teacher who begins with a goal in mind rather than a desired result. The teacher develops an optimal training process that allows the learner to achieve that goal. In short, the teacher makes it easier for the learner to process and overcome problems. Ideally, machine teaching involves deconstructing the problem into smaller parts that are easier to solve for ML algorithms.
Machine teaching provides immense benefits in supervised learning scenarios where ML algorithms have little or no labeled training data to produce specific outcomes.
What is Machine Translation?
Machine translation is a branch of computational linguistics that uses computer software to translate text from one language to another or substitute words in one language with words from another. It helps robots speak and translate by teaching them the complex characteristics of human language. Modern machine translation uses deep learning methods patterned after how the human brain works.
The automated process of machine translation is like feeding a machine with large amounts of language-related information which its mechanical brain digests into knowledge that allows it to speak and translate speech.
What is Machine-To-Machine (M2M)?
Machine-to-machine (M2M) is a process that implies wireless communication between two or more physical assets. This system typically consists of wireless sensors that are installed in each device, allowing them to exchange data with each other automatically or as requested by an application.
It’s like forming a team of computers, machines, and devices. Each one contributes a set of capabilities, and together, the team accomplishes its objectives much more efficiently.
What is Machine-to-Machine Authentication?
Machine-to-machine authentication refers to the process of allowing different remote systems to communicate with each other. Your favorite vending machine, for example, can be set up to automatically send an order to the supplier’s system for items that are running out of stock.
Since machine-to-machine communication can occur over wired, wireless, or virtually any other channel, it is prone to abuse and glitches. Machine-to-machine authentication helps ensure that only authorized services can access information on another system. It wouldn’t be ideal for a threat actor’s or competitor’s application to access the vending machine supplier’s inventory and ordering system.
Most machine-to-machine authentication solution providers use Open Authorization 2.0 (OAuth 2.0) as a way for applications to access systems. OAuth 2.0 is the current standard protocol for online authorization, having replaced the first version, OAuth 1.0.
What is Model-Based Reinforcement Learning?
Model-based reinforcement learning is a means for machines to make decisions: a predictive model determines what would happen if a particular course of action were taken, and the machine uses those predictions to choose the best one. Let’s break the definition down to understand the concept better.
A predictive model is used in statistics to predict outcomes by analyzing patterns in a data set. Think of it as a list of all possible outcomes from which a machine will choose the one that best suits the problem presented to it.
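To make the idea concrete, here is a minimal sketch of value iteration, a classic model-based method, on a hypothetical three-state world; the transition model and rewards below are invented for illustration, and real systems use far richer predictive models.

```python
# model[state][action] = (next_state, reward): the predictive model the
# machine consults before acting. All values here are invented.
model = {
    "start":  {"left": ("start", 0), "right": ("middle", 0)},
    "middle": {"left": ("start", 0), "right": ("goal", 10)},
    "goal":   {"left": ("goal", 0),  "right": ("goal", 0)},
}
gamma = 0.9  # discount factor for future rewards

# Start every state's value at zero, then repeatedly back up values
# through the known model until they converge.
values = {s: 0.0 for s in model}
for _ in range(50):
    for s in model:
        values[s] = max(r + gamma * values[s2]
                        for s2, r in model[s].values())

def best_action(state):
    """Pick the action the model predicts leads to the highest value."""
    return max(model[state],
               key=lambda a: model[state][a][1]
               + gamma * values[model[state][a][0]])
```

After the values converge, `best_action("start")` returns `"right"`, because the model predicts that moving right leads toward the rewarding goal state.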
What is Model-Free Reinforcement Learning?
Model-free reinforcement learning is a reinforcement learning algorithm that does not use the transition probability distribution and the reward function of the Markov decision process that represents the problem that requires solving.
In model-free reinforcement learning, the transition probability distribution (or transition model) and the reward function are collectively called the “model.” Their absence from the process is why the approach is called “model-free.”
Since the reinforcement learning algorithm does not use a model, you can think of it as a trial-and-error algorithm. The machine tries all possible solutions until it gets the best result.
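To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, a classic model-free algorithm, on a hypothetical five-position corridor; the environment details (positions, reward, learning rate) are invented for illustration.

```python
import random

random.seed(0)
N = 5                        # positions 0..4; reaching position 4 pays 1
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N) for a in ("left", "right")}

def step(s, a):
    """The environment: the agent can only sample it, never inspect it."""
    s2 = max(0, s - 1) if a == "left" else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

for _ in range(500):                      # many trial-and-error episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = (random.choice(("left", "right")) if random.random() < eps
             else max(("left", "right"), key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        # update the estimate from the observed sample alone -- no model
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, "left")],
                                              Q[(s2, "right")]) - Q[(s, a)])
        s = s2

policy = {s: max(("left", "right"), key=lambda a: Q[(s, a)])
          for s in range(N - 1)}
```

After enough episodes, the learned policy moves right in every position, even though the agent never saw the transition model or reward function directly.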
What is the Moral Machine?
The Massachusetts Institute of Technology (MIT)’s Moral Machine is a platform that gathers human perspectives on moral decisions made by machine intelligence (e.g., self-driving cars). The system generates moral dilemmas and lets a driverless car choose the lesser of two evils (e.g., Should it kill two passengers or five pedestrians?). Outside observers (i.e., people) judge which outcome they think is more acceptable. They can see how their responses compare with those of others afterward. The platform also lets people create their own scenarios for others to view, share, and discuss.
What is Named Entity Recognition (NER)?
Named Entity Recognition (NER) is the process of identifying and categorizing named entities in a given text. Examples of categories are organizations, locations, time, names, money, and rate. Other terms that are synonymous with NER are:
- Entity Identification
- Entity Extraction
- Entity Chunking
NER is part of information extraction (IE) or the process of automatically getting structured information from an unstructured document. With NER, the entity is the specific piece of information extracted. An example of NER is when the following unannotated text gets annotated:
Bill Gates sold US$35.8 billion worth of Microsoft stock and gave it to the Bill and Melinda Gates Foundation.
NER creates the following annotated text from the sentence above:
[Bill Gates]Person sold [US$35.8 billion]Money worth of Microsoft stock and gave it to the [Bill and Melinda Gates Foundation]Organization.
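A toy sketch of how such an annotation could be produced with hand-written rules; the patterns below are hypothetical and cover only this one sentence, whereas real NER systems learn entity patterns statistically from large corpora.

```python
import re

# Hypothetical rules for this single example: (label, regex) pairs.
patterns = [
    ("Person",       r"Bill Gates"),
    ("Organization", r"Bill and Melinda Gates Foundation"),
    ("Money",        r"US\$[\d.]+ (?:billion|million)"),
]

def annotate(text):
    """Wrap each matched entity as [entity]Label."""
    for label, pat in patterns:
        text = re.sub(pat, lambda m: f"[{m.group(0)}]{label}", text)
    return text

sentence = ("Bill Gates sold US$35.8 billion worth of Microsoft stock "
            "and gave it to the Bill and Melinda Gates Foundation.")
```

Calling `annotate(sentence)` reproduces the annotated sentence shown above.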
What is Natural Language Generation?
Natural language generation (NLG) is a type of artificial intelligence (AI) that generates natural language from structured data. While that sounds complicated, it simply means translating massive amounts of information into something humans can read and understand. But the data needs to be appropriately formatted for NLG to work.
An example of NLG would be translating the numbers in a spreadsheet into narratives or words to create human-readable text. NLG uses machine learning (ML), deep learning, and neural networks to make this process possible.
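As a minimal sketch of the spreadsheet example, the code below turns structured rows into readable sentences using a fixed template; the sales figures are invented, and production NLG systems use learned language models rather than hand-written templates.

```python
# Hypothetical structured data, as it might appear in a spreadsheet.
rows = [
    {"region": "North", "quarter": "Q1", "sales": 120_000},
    {"region": "South", "quarter": "Q1", "sales": 95_000},
]

def narrate(row):
    """Turn one structured row into a human-readable sentence."""
    return (f"In {row['quarter']}, the {row['region']} region "
            f"recorded sales of ${row['sales']:,}.")

report = " ".join(narrate(r) for r in rows)
```

The resulting `report` reads as ordinary prose, e.g. it begins “In Q1, the North region recorded sales of $120,000.”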
What is Natural Language Processing (NLP)?
Natural language processing (NLP) is a branch of artificial intelligence (AI) that analyzes human language and lets people communicate with computers. The NLP system is like a dictionary that translates words into specific instructions that a computer can then carry out.
NLP uses contextual analysis to help machines predict what you intend to say, as with your smartphone’s text suggestions. It also teaches a chatbot to interpret your words logically, so it can understand and even engage you in lively conversation.
What is Near Field Communication (NFC)?
You may have seen or experienced entering a train station by tapping your purse or wallet on the gate, without even having to take your ticket out. You see, the ticket card contains a microchip that allows it to communicate with a device on the gate. That's an example of NFC in action.
NFC stands for Near Field Communication and is a technology that enables an electronic device such as a mobile phone to interact with another one that is close by. It works by sending information over radio waves. NFC is a standard for wireless data transmission.
What is Neural Architecture Search?
Neural Architecture Search (NAS) is the process of discovering the best architecture a neural network should use for a specific need. In the past, programmers had to manually tweak neural networks to learn what works well. NAS automates that process, allowing artificial intelligence (AI) systems to discover more complex architectures.
In essence, NAS uses various tools and methods that test and evaluate huge volumes of architectures, utilizing a search strategy to choose the one that best meets the programmer’s objectives.
What is a Neural Network?
A neural network is a computer system that mimics the way the human brain works. A natural neural network in the human brain is made up of layers of neurons. An artificial one is made up of layered nodes consisting of instructions, called algorithms, that guide the computer on how to recognize patterns in data. Such systems learn to perform tasks by evaluating examples instead of receiving specific instructions from programmers.
Neural networks are used today to solve many problems in business and industry. For instance, they are essential in teaching computers how to identify faces, drive cars, forecast demand, and much more.
What is the Noisy Channel Model?
The noisy channel model is a framework that computers use to check spelling, answer questions, recognize speech, and perform machine translation. It aims to determine the correct word if you type its misspelled version or mispronounce it.
The noisy channel model can correct several typing mistakes, including missing letters (changing “leter” to “letter”), accidental letter additions (replacing “misstake” with “mistake”), swapped letters (changing “recieve” to “receive”), and replacing incorrect letters (replacing “fimite” with “finite”).
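A compact sketch of the idea in the style of a one-edit spelling corrector: generate every string one edit away from the typed word (the assumed channel noise), then pick the candidate with the highest prior in a vocabulary. The tiny word-frequency table below is invented for illustration.

```python
# Hypothetical word-frequency table standing in for a language model.
vocab = {"letter": 50, "mistake": 30, "receive": 40, "finite": 10}
letters = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit away: the four error types named above."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]                 # "misstake"
    inserts = [a + c + b for a, b in splits for c in letters]     # "leter"
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits
             if len(b) > 1]                                       # "recieve"
    replaces = [a + c + b[1:] for a, b in splits if b
                for c in letters]                                 # "fimite"
    return set(deletes + inserts + swaps + replaces)

def correct(word):
    """Pick the in-vocabulary candidate with the highest prior."""
    candidates = [w for w in edits1(word) if w in vocab] or [word]
    return max(candidates, key=lambda w: vocab[w])
```

This corrects all four example errors: `correct("leter")` is `"letter"`, `correct("misstake")` is `"mistake"`, and so on.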
What is a Perceptron?
A perceptron is a single-layer neural network for supervised learning. A neural network is a computer system that mimics the way a human brain works. Supervised learning, meanwhile, is a machine learning (ML) technique that requires a human to train a computer by giving it data paired with the desired results.
A perceptron has four main parts that we will describe in greater detail in the following section. These are input values, weights and bias, the net sum, and an activation function.
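Those four parts can be sketched in a few lines of Python; the AND-gate training data and learning rate below are invented for illustration.

```python
# Labeled examples: input values paired with the desired output (AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1          # weights, bias, learning rate

def predict(x):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b   # the net sum
    return 1 if net > 0 else 0                       # step activation

for _ in range(20):                                  # training epochs
    for x, target in data:
        error = target - predict(x)
        # nudge weights and bias toward the correct answer
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error
```

After training, the perceptron classifies all four AND inputs correctly; the perceptron convergence theorem guarantees this for linearly separable data like AND.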
What is Predictive Modeling?
Predictive modeling is a mathematical technique that uses historical data and results to create a model that can predict future outcomes. For example, banks use predictive modeling in credit scoring to foresee a potential client’s ability to repay a loan. Airlines can also use the technique to predict the volume of passengers for a particular month or season.
Some may argue that predictive modeling rests on the premise that history repeats itself, which is not too far-fetched. After all, financial predictive models can predict if a client is likely to make late payments in the future based on his or her past behavior.
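As a minimal sketch of the airline example, the code below fits a straight line to invented monthly passenger counts with ordinary least squares and uses it to predict the next month; real predictive models use far richer features and algorithms.

```python
# Hypothetical historical data: month number vs. passengers (thousands).
months = [1, 2, 3, 4, 5]
passengers = [110, 120, 130, 140, 150]   # perfectly linear, for clarity

n = len(months)
mean_x = sum(months) / n
mean_y = sum(passengers) / n
# Ordinary least squares for a line y = intercept + slope * x.
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(months, passengers))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

def predict(month):
    """Extrapolate the fitted trend to a future month."""
    return intercept + slope * month
```

Here the model predicts 160 thousand passengers for month 6, continuing the historical trend.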
What is Prompt-Based Learning?
Prompt-based learning is a machine learning (ML) strategy that uses a pretrained large language model (LLM) so that the same model can be applied to different tasks without retraining. It leverages the knowledge the model acquired from a large amount of text data to solve various downstream tasks, such as text classification, machine translation, named entity recognition, text summarization, and others.
Also known as “prompt learning,” prompt-based learning is an emerging strategy for repurposing pretrained artificial intelligence (AI) or foundation models for other uses without additional training. It is a natural language processing (NLP) paradigm that does not require a separate supervised learning process, since it relies directly on the objective function of the pretrained language model.
What is a Rational Agent?
A rational agent is an artificial intelligence (AI) component that applies AI to different real-world problems. It chooses an action from a set of distinct options, has models that allow it to deal with unexpected variables, and always selects the best possible outcome from all the available options.
The term “rational agent,” however, is not only applied to a system. It can also refer to a person, a company, or an application, practically anything or anyone that makes rational decisions.
What is Recursive Bayesian Estimation?
Recursive Bayesian estimation is an approach used in statistics and machine learning (ML) that estimates the current state of a system. The framework is used in robotics and automotive technology, where machines are taught to perform a task that requires estimation.
A self-driving car, for example, can estimate its location using the recursive Bayesian estimation framework. The car obtains its starting position via a Global Positioning System (GPS). Its built-in algorithms then help it estimate its current location after a certain amount of time or distance, typically by applying Bayes’ theorem from statistics.
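A minimal sketch of the recursion, assuming a hypothetical world of five road segments: the belief over positions is pushed forward by a motion model, then updated by each sensor measurement. All probabilities below are invented for illustration.

```python
n = 5
belief = [1.0 / n] * n          # prior: position completely unknown

def predict(belief):
    """Motion update: the car probably moved one segment forward."""
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[(i + 1) % n] += 0.8 * p   # moved as commanded
        new[i] += 0.2 * p             # wheel slip: stayed put
    return new

def update(belief, measured, p_hit=0.9, p_miss=0.1):
    """Measurement update: weight each position by sensor likelihood."""
    new = [p * (p_hit if i == measured else p_miss)
           for i, p in enumerate(belief)]
    total = sum(new)
    return [p / total for p in new]

# The recursion: alternate predict and update as measurements arrive.
for z in [1, 2, 3]:
    belief = update(predict(belief), z)

estimate = belief.index(max(belief))
```

After observing segments 1, 2, and 3 in sequence, the belief concentrates on segment 3, the car's most likely current position.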
What is Reinforcement Learning?
Reinforcement learning is an approach that helps a machine learn by rewarding desirable actions and penalizing undesirable ones. The artificial intelligence does not require any human input to learn; it progresses by trial and error. When its decision or action brings it closer to the agreed goal, it is given positive feedback. This is how it remembers which actions allow it to perform the task optimally.
It is similar to how you would teach a dog new tricks. You give it a treat when it does a command correctly so that it associates the reward with the correct response to a command.
What is Robotics?
Robotics is the interdisciplinary technology that combines artificial intelligence (AI) and engineering to conceive, build, and operate machines with various purposes.
The robots that are familiar to us are products of science fiction and Hollywood. But robotics is still far from producing humanoids like the Terminator. The robots we have at present are more like workhorses designed to perform a limited set of specialist chores, such as assembling products or going into high-risk areas.
What is an RPA Bot?
A robotic process automation (RPA) bot is a software robot or application that automates repetitive and rule-based tasks within business processes. Its design lets it mimic human interactions with various software, such as entering data, manipulating applications, triggering responses, and communicating with other systems.
RPA bots typically operate at the user interface (UI) level, interacting with applications and systems like human users. They can navigate through different screens, extract data from documents, perform calculations, make decisions based on predefined rules, and even communicate with other bots or systems through application programming interfaces (APIs).
What is Self-Supervised Learning?
Self-supervised learning is a means for training computers to do tasks without humans providing labeled data (e.g., a picture of a dog accompanied by the label “dog”). It is a subset of unsupervised learning where outputs or goals are derived by machines that label, categorize, and analyze information on their own, then draw conclusions based on connections and correlations.
Self-supervised learning can also be seen as an autonomous form of supervised learning because it does not require human input in the form of data labeling. Unlike unsupervised learning, however, it does not focus on the clustering and grouping commonly associated with that approach.
What is a Sensor?
A sensor is an electronic device that measures and monitors environmental conditions. The data recorded by these devices is usually collected by a computer, which then uses the information to take action. Sensors measure physical qualities, such as speed or temperature, and are built into many devices that you use regularly.
For example, you can equip a freezer with a sensor linked to its thermostat controller. When the device senses that the temperature in the freezer has increased past the acceptable limit, it may trigger the thermostat to kick in and lower the temperature until the freezer is once again just cold enough.
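The freezer example can be sketched as a simple on/off control loop driven by sensor readings; the temperature limits below are invented for illustration.

```python
LIMIT = -15.0    # acceptable upper limit, in degrees Celsius (invented)
TARGET = -18.0   # temperature the compressor cools back down to

def control(readings):
    """Return the compressor state after each sensor reading."""
    cooling, states = False, []
    for temp in readings:
        if temp > LIMIT:
            cooling = True        # too warm: start the compressor
        elif temp <= TARGET:
            cooling = False       # cold enough again: stop
        states.append(cooling)
    return states
```

Given readings of -18, -14, -16, and -19 degrees, the compressor switches on at -14, stays on at -16 (hysteresis avoids rapid cycling), and switches off once -19 is reached.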
What Is Sentiment Analysis?
Sentiment analysis is an artificial intelligence (AI) technique that helps software determine how people feel based on what they write in social media posts and blogs. By analyzing the words used, AI can identify and record what their sentiments are, as well as why they feel that way, based on the subject matter or context.
Many companies eager to find out how people feel about their products now use sentiment analysis. The technique helps them plan what they need to do to improve how consumers perceive their brand.
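A toy sketch of the lexicon-based approach, one of the simplest forms of sentiment analysis: score a post by counting positive and negative words. The word lists below are hypothetical and far smaller than real sentiment lexicons.

```python
# Hypothetical mini-lexicons; real ones contain thousands of entries.
positive = {"love", "great", "excellent", "happy", "good"}
negative = {"hate", "terrible", "awful", "bad", "broken"}

def sentiment(text):
    """Classify text by its net count of positive vs. negative words."""
    words = text.lower().split()
    score = (sum(w in positive for w in words)
             - sum(w in negative for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

For instance, `sentiment("I love this great product")` returns `"positive"`, while a post full of words like “terrible” and “broken” scores as negative.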
What is a Smart Car?
Wouldn't it be great if you could ride in your car, then sit back and relax while it drove itself to your destination? Smart cars allow you to do that.
A smart car is the product of artificial intelligence (AI), the Internet of Things (IoT), and modern approaches to automobile design. It is equipped with sensors, a global positioning system (GPS), computers, and software that keep it connected to the Web while it navigates through traffic.
Sensors continuously interact with the environment and objects around the vehicle — other vehicles and people — and update the software. The software, in its turn, constantly analyzes the data from the sensors and calls up the set of instructions that best deals with the current situation, controlling the car accordingly.
What is a Smart City?
A smart city is an urban planning concept that uses sensors and information technology to help the city operate more efficiently. Sensors report environmental data to computers that control various city services like traffic management and waste disposal. This type of integration ensures that services are dispensed when they are needed, and lets the city manage its resources more effectively.
In this age, when more and more people are living in urban centers, smart cities can help sustain their residents' quality of life.
What is a Smart Home?
A smart home is a house that automatically adjusts its internal environment to make sure its residents are comfortable all the time. It is equipped with sensors that allow you to control its features from anywhere in the world using a smart device connected to the Internet.
Imagine, for example, telling your air conditioner to cool down your room, using your phone while you are on your way home from the office. That's what a smart home can do for you.
What is Spatial Computing?
Spatial computing refers to the process of using digital technology to make computers interact seamlessly in a three-dimensional world using augmented reality (AR), virtual reality (VR), and mixed reality (MR). Spatial computing uses physical space to send input and receive output from a computer.
Through spatial computing, most people no longer interact with computers as an outsider would. Instead, they get to experience what it’s like to be within the digital realm by interacting with objects that only exist in it. The concept allows the marriage of the real world and the digital landscape.
What is Supervised Learning?
Supervised learning is a machine learning technique. It requires a human to train a machine by giving it some data that pair off with desired results. It is distinguished from unsupervised learning, where the outcome of the process can be unpredictable or uncategorized by humans.
It's like giving a child some photos with the names of people written on the back of each. Later you can ask the child to match the pictures with the correct names. Repeat this until the youngster is able to correctly match all the photos with the corresponding names. This is what supervised learning is about.
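The photo-matching analogy can be sketched as a one-nearest-neighbor classifier, one of the simplest supervised learning methods: the machine answers by finding the most similar labeled example. The fruit measurements below are invented for illustration.

```python
# Labeled training data: features (weight in g, diameter in cm) paired
# with the desired result a human supplied. Values are invented.
training = [
    ((150, 7.0), "apple"), ((170, 7.5), "apple"),
    ((120, 6.0), "orange"), ((130, 6.5), "orange"),
]

def classify(features):
    """Label a new example with its closest labeled neighbor's label."""
    nearest = min(training,
                  key=lambda ex: sum((a - b) ** 2
                                     for a, b in zip(ex[0], features)))
    return nearest[1]
```

A new, unlabeled measurement such as `(160, 7.2)` is classified as `"apple"` because it sits closest to the labeled apple examples.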
What is an SVM?
A support vector machine (SVM) is a machine learning (ML) algorithm that employs supervised learning models to solve complex classification, regression, and outlier detection problems. It performs optimal data transformations that determine the boundaries between data points based on predefined classes, labels, or outputs.
Since it requires supervised learning, teaching it to do complicated tasks needs human input. Think of it as learning with the help of a teacher instead of studying independently.
What is the Theory of Computation?
The Theory of Computation is a branch of computer science and mathematics that focuses on determining which problems can be solved mechanically, using an algorithm or a set of programming rules. It is also concerned with how efficiently an algorithm can perform the solution.
In simple terms, the Theory of Computation answers these questions:
- What problems can the machine solve? What problems can’t it solve?
- How fast can a machine solve a problem?
- How much memory space does a machine need to solve a problem?
To answer these questions, computer scientists use a model of computation, a mathematical abstraction of a computing machine against which the algorithm being developed can be analyzed. The Turing machine is among the most widely used models of computation.
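As a minimal sketch, here is a tiny Turing machine, defined entirely by a table of rules, that adds 1 to a binary number; the state names and rule table are invented for illustration.

```python
# (state, symbol) -> (symbol_to_write, head_move, new_state).
# "_" is the blank symbol on the infinite tape.
rules = {
    ("right", "0"): ("0", +1, "right"),   # scan right to the end
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank: start adding 1
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry onward
    ("carry", "0"): ("1", 0, "halt"),     # 0 + carry = 1, done
    ("carry", "_"): ("1", 0, "halt"),     # carried past the left edge
}

def run(bits):
    """Simulate the machine on a tape holding a binary number."""
    tape = dict(enumerate(bits))
    state, head = "right", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")
```

For example, `run("1011")` (eleven) yields `"1100"` (twelve): the machine solves exactly the problem its rules encode, and nothing more.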
What is the Turing Test?
The Turing test determines whether an artificial intelligence is smart enough to pass as human. It was proposed in 1950 by the British mathematician Alan Turing.
In Disney's animated version of the popular children's tale, "Pinocchio," the puppet sets off to explore the world. Each encounter he has is a test to see if he can transform into a real boy.
The Turing test is essentially a more rigid, sophisticated, and less entertaining version of Pinocchio's exploits. In the end, the AI is judged on whether it can pass convincingly as human and interact with real people without them noticing it is a machine.
What is Unsupervised Learning?
Unsupervised learning is a machine learning technique. It takes place when a machine is able to analyze data patterns previously unspecified by humans and subsequently decide on a particular desirable course of action.
To illustrate, let's say a dog realizes that when his master goes to the garden, she will pick a ball to play. After having some fun, the dog then rolls over to please the master, something she didn't teach the dog to do but which makes her very happy. What the dog did is an example of unsupervised learning.
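The way unsupervised learning finds structure in unlabeled data can be sketched with k-means clustering, a classic unsupervised algorithm; the one-dimensional points and starting centers below are invented for illustration.

```python
# Unlabeled data: no human has said what the groups are or mean.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]                    # rough initial guesses

for _ in range(10):
    # assign each point to its nearest center
    clusters = [[], []]
    for p in points:
        clusters[min((0, 1), key=lambda i: abs(p - centers[i]))].append(p)
    # move each center to the mean of its assigned cluster
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
```

The algorithm discovers on its own that the points fall into two groups, one near 1 and one near 8, without ever being told those groups exist.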
What is Virtual Intelligence?
Virtual intelligence is a type of artificial intelligence (AI) that exists inside a virtual world. It is created and optimized to carry out specific tasks to aid a user within a defined framework. At the outset, a virtual intelligence machine can seem like a smart system based on how it interacts with a user. In reality, though, it isn’t.
Virtual intelligence is code or a program that functions within the controlled environment it was created for. As such, it can’t generate spontaneous solutions and responses; it responds only based on predetermined factors.
You can find virtual intelligence in action in Global Positioning System (GPS) navigation software, chatbots, virtual assistants, interactive maps, and wearables. While it can learn from its interactions to enhance performance, that learning is limited to the original functionality it was designed for.
What is a Vision Transformer?
A Vision Transformer (ViT) is a deep learning model architecture that applies the transformer architecture, initially introduced for natural language processing (NLP), to computer vision tasks. It is designed to process and understand visual data like images in a way comparable to how transformers process and understand sequential data in language processing tasks.
While convolutional neural networks (CNNs) used to be the dominant architecture for image analysis and computer vision tasks like image classification, object detection, and image segmentation, they are inherently limited by their reliance on local spatial relationships and struggle to capture global context and long-range dependencies within an image. ViTs overcome these limitations by adapting the transformer architecture, which has proved superior for language tasks that require long-range dependencies.
What is Wearable Technology?
Ever wanted a dedicated assistant that can remind you of certain things but can also be turned off whenever you want? A wearable device allows you to do that.
Wearable technology, or “wearables,” refers to electronic devices worn on the body. They send and receive data via the Internet and collect information from a person’s surroundings. Some examples of wearables are smartwatches and fitness-tracking bands. These can also be sensors that monitor your health indicators, track your location, or measure your activities.