IT Glossary
What is a Quality Management System (QMS)?
A quality management system, or QMS for short, is a formalized system that documents the processes, procedures, and responsibilities needed to achieve an organization’s quality policies and objectives. It helps coordinate and direct a company’s activities so it can meet customer and regulatory requirements and improve its efficiency over time.
Using a QMS is almost always connected to adhering to ISO 9001:2015, an international standard created by the International Organization for Standardization (ISO). Sometimes, the term “QMS” is used to mean ISO 9001:2015 itself or the documents describing the system. Strictly speaking, though, the term refers to the entire system, documentation included.
What is Backlog Grooming?
Backlog grooming, now more commonly called “backlog refinement,” is a regular meeting where backlog items are discussed, reviewed, and prioritized by product managers, product owners, and the rest of the development team. Its goal is to keep the backlog up to date and ensure its items are ready for upcoming work. Backlog grooming also helps product managers explain what will be done and get the entire organization behind the process.
Think of it as meeting with all the members of your household in preparation for spring cleaning. You’ll discuss every task, decide who will be responsible for it, and prepare the materials you’ll need. Everyone who won’t be part of the cleaning crew will also be told about the plan so they can stay out of the cleaners’ way.
What is an Accelerated Processing Unit (APU)?
An accelerated processing unit (APU) is a microprocessor that combines the central processing unit (CPU) with the graphics processing unit (GPU) on a single computer chip. It makes a computer a bit faster than if the processors were set far apart from each other because, in that case, it would take more time for them to communicate.
You can compare an APU to a pencil with an eraser at the other end. There’s no need to drop the pencil, grab a separate eraser, erase the mistake, and then swap back again. Instead, you can easily and quickly make corrections to your work with a single tool. The APU works like that for today’s powerful gaming computers.
What is Ad Hoc Polymorphism?
Ad hoc polymorphism is a programming concept used to describe functions with the same name that are executed differently, depending on the variable or argument type. It is also referred to as “function overloading” or “method overloading.” The “sum” function is an example of this. When you execute it on “3 + 5” (integers), the result will be 8.
However, if the variables are changed into text strings, the “sum” function will concatenate, or link, the two strings together. For instance, “ad” + “hoc” will become “adhoc.” Both operations used the same function name, but the execution and results differ.
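Python doesn’t support classic overloading by parameter type, but the standard library’s functools.singledispatch offers a comparable form of ad hoc polymorphism, choosing an implementation based on the first argument’s type. Here is a minimal sketch (the combine function is a made-up name for illustration, not a standard one):

```python
from functools import singledispatch

# Ad hoc polymorphism: one function name, different behavior per type.
@singledispatch
def combine(a, b):
    raise TypeError(f"unsupported type: {type(a).__name__}")

@combine.register
def _(a: int, b):
    return a + b  # integers are added

@combine.register
def _(a: str, b):
    return a + b  # strings are concatenated

print(combine(3, 5))         # → 8
print(combine("ad", "hoc"))  # → adhoc
```

In languages such as C++ or Java, the compiler picks among same-named functions by parameter type at compile time; singledispatch achieves a similar effect at runtime.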
What is Ad Hoc Testing?
Ad hoc testing is an informal software testing procedure generally performed during the early stage of software development. The goal is to randomly find possible defects, bugs, or any other issue as early as possible.
Ad hoc testing is not structured, meaning it doesn’t follow a specific test design or adhere to certain documentation requirements. There is also no planning involved in this type of testing. Developers or testers merely test out the code in hopes that they will capture random defects. For this reason, ad hoc testing is also known as “monkey testing” or “random testing.”
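To show the spirit of this “monkey testing,” here is a hedged Python sketch: a made-up clamp function is bombarded with random inputs, and only a loose invariant is checked instead of a planned test design:

```python
import random

# A toy function under test (hypothetical, for illustration only).
def clamp(value, low, high):
    return max(low, min(value, high))

# Ad hoc "monkey" testing: throw random inputs at the function and
# check a basic invariant, hoping to stumble on defects early.
random.seed(0)  # reproducible randomness for the demo
for _ in range(1000):
    v = random.randint(-10**6, 10**6)
    low, high = sorted(random.sample(range(-10**6, 10**6), 2))
    result = clamp(v, low, high)
    assert low <= result <= high, f"bug: clamp({v}, {low}, {high}) = {result}"

print("no defects found in 1000 random trials")
```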
What is Agile Methodology?
Agile methodology is a practice used to develop software quickly and efficiently. In essence, the developers are in a loop that starts with the requirements of the intended end-users. The programmers build the product based on these requirements, then pass it on to the users. They provide feedback, which the developers use to make adjustments to the software. The process then repeats itself for as long as there is a need for the software product.
Agile implies that the working team and client are in constant consultation every step of the way, allowing them to develop new requirements and apply changes to the project based on their current situation.
What is an Algorithm?
An algorithm is a procedural plan for solving mathematical and computer problems, such as sorting data, for example. It’s not a computer program yet. It’s merely a description of how you want the program to perform a task.
You can think of an algorithm as an outline, such as one a writer prepares before writing a piece. Similarly, a developer can plan how to solve a problem before writing an actual program.
In computing, algorithms serve as sets of instructions that define not only what must be done but also how to do it. Using algorithms makes it easier for programmers to understand the processes that will help them solve problems.
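As an illustration, here is a sorting algorithm written first as a plan and then as a short Python program; insertion sort is just one of many algorithms a developer could pick for this problem:

```python
# Plan: for each element, shift larger elements in the already-sorted
# prefix one slot to the right, then drop the element into the gap.
def insertion_sort(items):
    items = list(items)              # work on a copy
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # shift right
            j -= 1
        items[j + 1] = current       # insert into the gap
    return items

print(insertion_sort([5, 2, 9, 1]))  # → [1, 2, 5, 9]
```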
What is the API-First Approach?
The API-first approach refers to a strategy in software development where an application programming interface (API) is created before any code is written. An API, of course, is a type of software that lets an application obtain data from another. It allows various applications to communicate with one another.
Almost all the applications we use in our daily lives employ APIs. And so, it only makes sense to conceptualize, design, and test an API before writing the software code, instead of the other way around.
What is an Application Gateway?
An application gateway is a program that serves as a firewall proxy. It runs between computers in a network to tighten security. It is responsible for filtering incoming traffic that contains network application data.
To illustrate, think of a program that wants to connect with another. Before it can establish a connection, it must first connect to an application gateway, which then accesses the desired system on its behalf. That way, the computer on the receiving end is protected from possible malicious attacks.
So, what is an application gateway in simple terms? It provides an additional layer of protection against unwanted network traffic. It is also sometimes known as an “application-level gateway” or “application proxy.”
What is Application Programming Interface (API)?
An Application Programming Interface (API) is a piece of software that acts like a messenger between two applications. If you are using one application but require information from another one, the API will go fetch it for you. You can also think of it as an intermediary that enables two computer applications to interact with each other and enable a particular function.
When you use an application on your smartphone to send a text message, it’s the API that tells a server what you want to do. The server then performs the action and sends back the data to your phone, which is then able to proceed and send the text message.
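The messenger idea can be sketched in a few lines of Python. Everything here is hypothetical (the class and method names are invented for illustration), and real APIs typically run over HTTP between separate machines, but the shape is the same: the app never touches the server’s data directly; it only talks to the API.

```python
class MessageServer:
    """Pretend backend holding data the app cannot reach directly."""
    def __init__(self):
        self._outbox = []

    def _deliver(self, recipient, text):
        self._outbox.append((recipient, text))
        return {"status": "sent", "to": recipient}

class MessagingAPI:
    """The messenger: receives a request, forwards it, returns the response."""
    def __init__(self, server):
        self._server = server

    def send_text(self, recipient, text):
        return self._server._deliver(recipient, text)

server = MessageServer()
api = MessagingAPI(server)
response = api.send_text("+1-555-0100", "Hello!")
print(response)  # → {'status': 'sent', 'to': '+1-555-0100'}
```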
What is an Automated Information System?
An automated information system (AIS) is any combination of hardware, software, and equipment that processes information with minimal human intervention. The system can include a computer, applications, telecommunications devices, and many more. The type of information processed by an AIS depends on the purpose it serves.
For example, a library may use a Library Management System that enables its staff to track and manage books properly. The system alerts them if a book is not returned on time, along with detailed information, such as the borrower’s name and mobile number. On the other hand, an accounting firm may use a different type of AIS that helps them collect and process financial data, calculate and send invoices, and compute taxes.
What Is Back Office Software?
Back Office Software refers to the digital tools and systems that help businesses manage their internal operations and tasks that are not directly customer-facing. It handles tasks such as maintaining records, overseeing inventory, creating employee schedules, and tracking financial information. Its role is to ensure that all these internal processes run smoothly, allowing the visible parts of the business to operate efficiently.
What is Backend Development?
Backend development is the area of web development that focuses on how a website or web application works. It is what happens behind the scenes, the same way a restaurant’s chef and kitchen staff handle all orders without being seen by the customers sitting at their tables. Instead of cooking food, though, backend developers write code that enables web browsers to communicate with databases and servers.
The primary role of a backend developer is to ensure that end-users get the data or services they requested without a glitch and on time. As such, backend development requires a comprehensive set of programming skills and knowledge.
What is a Backend System?
A backend system refers to any structure or setup that runs and supports corporate back-office applications. Backend systems could take the form of servers, mainframes, and other systems that offer data services. Simply put, they are computers and devices that end-users don’t see since they work in the background. Nevertheless, a backend system plays a critical role in any organization’s operation.
When you think of a backend system doing several things in the background, you might picture the busy kitchen of a fast-food chain. One person is in charge of preparing the drinks while another operates the fry station. Unlike a kitchen full of equipment, though, a backend system could consist of just a single computer.
Separating the frontend from the backend makes everything simpler. After all, you seldom see a waiter prepare the meals they serve.
What is Backporting?
Backporting is the process of taking components of a newer version of a software program and porting them to an older version. It is part of software development maintenance. It is commonly done to fix security issues in older versions of an application and add new features to older program versions.
You can compare backporting to upgrading an older computer’s memory, provided the system has a free slot and its processor can handle the addition. The upgrade makes the older machine faster and more efficient without replacing it, much as a backported fix improves an older program version.
What is a Backup?
Have you ever poured all your energy into preparing a report or an assignment, only to discover that it had been accidentally deleted moments before submission? Creating a backup would have prevented this catastrophe.
A backup is the extra copy or duplicate of your data as a safeguard against the loss or damage of the original. It can be used to recover information in case it gets deleted or corrupted. You can also use it to recover data from an earlier time.
What is Binary Code?
Binary code is the language of computers. It consists of combinations of zeroes and ones, hence the term "binary". Each combination represents a specific set of instructions for the computer to carry out.
Here's an example. The word "HELLO", written in binary code using 8-bit ASCII, looks like this: 01001000 01000101 01001100 01001100 01001111
The computer then converts each binary code back into the corresponding letter of the English alphabet and displays the result on the screen as "HELLO".
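The conversion is easy to try in Python, assuming the same 8-bit ASCII convention:

```python
# Convert text to space-separated 8-bit binary codes and back.
def to_binary(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

def from_binary(bits):
    return "".join(chr(int(b, 2)) for b in bits.split())

encoded = to_binary("HELLO")
print(encoded)               # → 01001000 01000101 01001100 01001100 01001111
print(from_binary(encoded))  # → HELLO
```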
What is Black Box Testing?
Black box testing, also known as “behavioral testing,” is a software testing method where programmers test software functionalities without knowing its internal code structure, implementation details, and internal paths. It focuses entirely on application input and output functions based on software requirements and specifications.
So, what is black box testing for? Black box testing can evaluate operating systems (OSs), websites, databases, and even custom applications.
What is the Booting Process?
The booting process refers to the sequence a computer goes through when it starts up, such as after you push the power button or open a laptop lid. Encrypted or more secure computers typically ask you to provide a password before the booting process can continue.
Once the booting process completes, your computer launches all the programs you indicated in your startup options. These are the applications you typically use every day. They may include your email and chat software.
What is the Bootstrap Protocol?
The Bootstrap Protocol (BOOTP) is a network protocol a device uses to automatically obtain an IP address and other network configuration parameters from a server. It is commonly used in local area networks (LANs) to enable diskless workstations to boot up and connect to the network.
BOOTP is the predecessor of the Dynamic Host Configuration Protocol (DHCP), designed to work in environments with relatively static network configuration parameters. Unlike DHCP, which uses a lease mechanism to assign IP addresses and other network parameters, BOOTP assigns permanent IP addresses to devices.
When a device sends a BOOTP request, it broadcasts a message to all servers on the network requesting an IP address and other configuration parameters. The server that responds with a valid IP address becomes its BOOTP server. The server then sends it an IP address, a subnet mask, a default gateway, and other network configuration parameters to connect to the network.
What is a Computer Bug?
In computing, a bug is an error in the source code that causes a program to produce unexpected results or crash altogether. Computer bugs can affect an application’s performance, so developers need to make sure they are corrected before the software gets sold to customers.
Back when early electromechanical computers were state-of-the-art, operators kept getting wrong results from a machine at Harvard. When they checked under the hood, they discovered that a moth had gotten into one of its relays, causing errors in computations. The incident helped popularize the term, which is why programming errors are called “bugs.”
What is Canary Testing?
Canary testing refers to the incremental deployment of new software code to a few users only. The new code could be a new feature or an additional functionality to an already existing application. This type of software testing helps ensure that any problem can easily be patched since only a limited number of users are affected.
Canary testing aims to prevent any software bug from negatively impacting the whole production or a huge user base. By limiting the number of affected users, developers can immediately detect and address any issue.
What is a Central Processing Unit (CPU)?
The computer's central processing unit (CPU) refers to the electronic circuitry that processes instructions and tells each part of the system what to do. It's arguably the most critical and defining component of a computer. Without it, your PC will not be able to do anything.
If your PC were a restaurant, the CPU would be its head chef. The quicker the chef works, the faster the food can be served to customers. Similarly, a CPU with higher processing power can perform more tasks faster.
What is Chaos Engineering?
Chaos engineering refers to the process of putting a system through a series of tests to build up its resilience to turbulence or unexpected conditions. You can think of it as a stress test to see how much wear and tear your system can take.
It brings to mind how astronauts train in preparation for a trip to space, where a lot of things can go wrong. Astronauts in training are subjected to harsh turbulence simulations to ensure they can withstand the shaking and pressure of liftoff and cope with unexpected problems along the way.
What is Cloud Elasticity?
Cloud elasticity in cloud computing refers to a provider’s ability to increase or decrease its memory and storage capacity on demand and as needed. Organizations need to consider cloud elasticity when choosing a cloud provider, as it could impact their resources and, ultimately, the quality and availability of their services.
A marketing agency that has reached its cloud storage limit will have to store digital content, designs, and other materials offline if its cloud provider doesn’t offer cloud elasticity. Imagine if this happened to a healthcare organization. Medical records, emergency response requests, and other data wouldn’t automatically reach the concerned professionals because the organization had hit its resource limit.
What is a Cloud Enabler?
A cloud enabler is a technology or manufacturer that serves as an organization’s backbone for all of its cloud computing products and services. It is a broad term for technology vendors and solutions that let a company build, deploy, integrate, and deliver cloud computing solutions.
Cloud enablers are information technology (IT) firms that create hardware, software, storage devices, networking equipment, and other related cloud environment components. An example would be an organization that manufactures virtualization hypervisors that enable virtual machines (VMs).
What is Cloud Portability?
Cloud portability is the ability to move an application or data from one cloud service provider to another without needing to rewrite or restructure it. With cloud data portability, information can be moved to another service provider without reentering it. Cloud application portability, meanwhile, refers to transferring an application between cloud providers or from an enterprise’s premises to a cloud provider’s infrastructure.
What is Codeless Programming?
Codeless programming is alternatively known as “no-code software development.” (The closely related “low-code” approach still requires some coding.) It allows practically anyone, even those with no programming or development background, to create their own applications using templates and modules on graphical user interfaces (GUIs).
Simply put, “codeless” in “codeless programming” translates to “no coding.”
What Is Coding?
You’ve probably read somewhere that “coding” is one of the highest-paying jobs these days. Suddenly, everyone seems eager to learn how to do it. But what exactly is it?
Coding refers to writing a computer program. The series of instructions a programmer creates is known as the program’s “source code,” a term soon shortened to just “code.” So, when programmers work on the source code of a software product, an app, or a website, they refer to the act as “coding.”
What is a Cold Site?
A cold site is an office or a data center that does not have any servers installed. It has power, cooling, and space available in case an organization’s main work site or data center suffers a major outage. But since it is empty, a cold site needs engineering and information technology (IT) personnel to migrate all the necessary servers and equipment to it and make them operational. As such, a cold site is the least expensive disaster recovery option for any business.
Think of a cold site as a backup generator that hasn’t been hooked up yet. When the power goes out, the building administrator must first wheel it in, fuel it, and connect it before electricity is restored for the tenants’ use.
What is Commodity Hardware?
Commodity hardware refers to off-the-shelf components you can readily purchase from computer and accessory shops. They are typically affordable and work with a wide range of compatible devices.
Commodity hardware includes servers, cables, and practically everything you need to use or connect computing or IT devices in a network. They are plug-and-play, meaning they should make your gadget work when connected. An example would be the cables you use to connect servers together. You don’t need to activate them to make them work. Just plug them into the correct ports, and you’re good to go.
What is a Computer Network?
Imagine if human beings could share thoughts, plans, emotions and more through their minds alone. This ability would let people understand one another easily and thus accomplish more things because of the mutual connection.
A computer network can be thought of as a form of telepathy. But instead of people, it is made up of computers and other devices that are all interconnected.
Many things can be shared between connected computers, such as access, files, and more. Basically, a computer network lets one computer do many things with the help of others that it wouldn’t be capable of doing by itself.
What is a Computer Server?
Imagine that you live in a mansion, with a butler to assist you with everything you need. Just let him know what you want and he'll take care of it.
A server or computer server is a computer that acts like a butler for you, giving you the information or computer process you need. You send a request to the server, for example, by using a browser to connect to a web page. The server then passes your request to the software you need the data from, in this case a web server. It then serves that information back to you in the form of the web page you wanted to access.
What is Concurrent Computing?
Concurrent computing is the process whereby one or more systems perform several computations simultaneously or within overlapping time frames. The idea is to run various threads or instruction sets on a given schedule, with each task running independently of the parent or main process. Systems or components can work together without one or all of them having to wait for other tasks to complete.
Imagine a car assembly line with 11 machines. Ten of them each manufacture a different car part, and the eleventh assembles the car. All ten part makers can work at the same time, and each passes its completed part to the eleventh machine as soon as it is done. The assembler only has to wait for all ten to finish before completing its own task.
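The assembly-line analogy can be sketched with Python threads; the part names and pool size here are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Ten "part makers" run concurrently; the main thread plays the
# "eleventh machine" that assembles the finished parts.
def make_part(part_id):
    return f"part-{part_id}"  # stands in for real manufacturing work

with ThreadPoolExecutor(max_workers=10) as pool:
    parts = list(pool.map(make_part, range(1, 11)))  # 10 concurrent tasks

car = " + ".join(parts)  # the assembler combines the completed parts
print(len(parts))        # → 10
print(car)
```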
What is Contiguous Memory Allocation?
Contiguous memory allocation is a way of assigning adjacent blocks of memory to a process or file that needs them. Because each allocation occupies a single unbroken region of the memory space, the memory given to a process is not scattered randomly across it.
Think of contiguous memory allocation as a means of forming a group where you don’t get to choose who you will work with. Instead of having the option to work with office mates from different floors in your office, for example, you end up with the people you’re on the same floor with.
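A toy “first-fit” simulation in Python shows the key constraint: a request succeeds only if enough adjacent cells are free. This is a sketch of the idea, not a real memory allocator:

```python
# Memory is a list of cells; a process must occupy one unbroken run.
def first_fit(memory, size, pid):
    run_start, run_len = None, 0
    for i, cell in enumerate(memory):
        if cell is None:                      # free cell
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len == size:               # found a big enough hole
                for j in range(run_start, run_start + size):
                    memory[j] = pid           # claim the adjacent cells
                return run_start
        else:                                 # occupied: run is broken
            run_start, run_len = None, 0
    return -1                                 # no contiguous hole fits

memory = [None] * 8
print(first_fit(memory, 3, "A"))  # → 0 (cells 0-2)
print(first_fit(memory, 2, "B"))  # → 3 (cells 3-4)
```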
What is Cowboy Coding?
In the simplest terms, cowboy coding is coding where the developer has free rein over the process. The cowboy coder has complete control over the project schedule; the languages, algorithms, tools, and frameworks to use; and the coding style to follow.
A cowboy coder can work alone or be part of a group, but they work without following a specified process. If a company employs the coders, they work with little to no management supervision. Business managers only control non-development aspects. They set broad targets, timelines, and scope.
What is Decoding?
Decoding is the process of unlocking the contents of a coded file that has been transmitted. Media files, like movies and music, are normally encoded so that they do not take up much bandwidth during transmission. They must be decoded back to their original form for you to view the video or listen to the music.
It's like receiving a locked gift box. You need to unlock it to find out what's inside.
What is Development Environment?
The development environment is the collection of tools and procedures a software developer uses to create computer programs. It includes any hardware and software (e.g., a debugger, a source code editor, and automation tools) developers need to do their work, along with access to additional resources that may be required during the course of developing the product.
The development environment is comparable to a workshop where a craftsman toils away at his creation.
What is a Device under Test (DUT)?
A device under test (DUT) refers to any product going through testing. This test can occur right after the product is made or later in its life cycle as part of functional testing and calibration checks. In cases where the product needs repairs, another test can be administered to see if it works according to the original specifications.
DUT is also known as “equipment under test (EUT)” or “unit under test (UUT).”
Any manufacturer typically puts all of its products through the DUT stage. That way, it avoids losing income to returns and warranty claims.
What is DevOps?
DevOps. Sounds like a cool name for a spy movie, doesn’t it?
In reality, it’s a set of practices meant to help a company’s software development (Dev) and IT operations (Ops) teams work better together through a culture of close collaboration.
Applied to a company that relies on its ability to deliver products to customers faster than the competition, DevOps motivates the working teams to put out innovative features, updates, and fixes that satisfy their customers’ needs. This is difficult to achieve with a traditional software development approach.
What is DevOps-as-a-Service?
DevOps-as-a-service is a means (either a delivery model or set of tools) for a company’s software development and operations teams to work together efficiently. It aims to track each move the software development team makes to ensure that project delivery goes smoothly.
DevOps-as-a-service vendors typically provide customers with the tools they need to monitor and track the progress of all software development and operations processes so teams can work as one toward achieving a common goal—providing business value.
What is DevRel?
DevRel, short for “developer relations,” is a marketing technique that ensures a company, including its products and developers, establishes good, continuous relationships with external developers through mutual communication.
So, what is DevRel in simple terms? Think of it this way: if public relations means maintaining good relationships with the public, DevRel is the company-to-developer equivalent of that.
What is Dockerization?
Dockerization, also known as “containerization,” refers to the process of packaging an application and its dependencies into a standardized container called a “Docker container.”
Docker is an open-source platform that provides a lightweight and isolated environment for running applications. As such, Docker containers encapsulate an application’s necessary components, including the code, runtime, system tools, system libraries, and other dependencies. They thus enable the application to run consistently across different environments, such as development, testing, and production, without being affected by the underlying infrastructure.
You can compare dockerization to organizing your kitchen. In a box, you can put all your baking needs, including your mixer and its accessories, ingredients, trays, and everything else. You can store your cooking pots, pans, utensils, and other gadgets in another box. No matter which cabinet you put each box into, “dockerizing” them makes it easier to get your hands on them when needed.
What is an Edge Router?
An edge router is a network device that operates at the edge or boundary of a network, connecting an internal network to external networks like the Internet or other wide area networks (WANs). It is a gateway between different networks, directing traffic and facilitating communication.
An edge router’s primary function is to route data packets between networks based on their destination IP addresses. It typically maintains routing tables that contain information about the available network paths and makes forwarding decisions based on this data.
Think of an edge router as the security guard at your community’s gate, directing visitors to their friends’ houses after some form of vetting.
What is an Embedded System?
An embedded system refers to a combination of hardware and software, or a fully functional computing system, that performs a specific task within a larger system. Its capabilities can be fixed, programmable, or customizable. Embedded systems can be found in industrial machines and robots, consumer electronics, agricultural and process industry devices, vehicles, medical equipment, cameras, household appliances, airplanes, vending machines, toys, and mobile devices.
In a car, an embedded system can be the airbag system that performs a specific function—deploy the airbags during a collision. The car is the big system that contains the embedded airbag system, along with many others.
What is Encoding?
Encoding is the process of converting data into a different format. When you convert temperature readings from Celsius to Fahrenheit or money from Japanese yen to U.S. dollars, the original values remain the same. They are just represented in a different form.
In the world of computers, encoding works in the same way. The computer converts data from one form to another. It does this to save on storage space or make transmission more efficient.
One example of encoding is when you convert a huge .WAV audio file to a tiny .MP3 file that you can easily send to a friend via email. The files are encoded in different formats but will play the same song.
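A small Python example makes the point: the same text is encoded as UTF-8 bytes and then as Base64, and decoding recovers it unchanged. The representations differ, but the data stays the same:

```python
import base64

original = "encode me"
utf8_bytes = original.encode("utf-8")                    # text → bytes
b64_text = base64.b64encode(utf8_bytes).decode("ascii")  # bytes → Base64 text

# Decoding reverses both steps, recovering the original value.
round_trip = base64.b64decode(b64_text).decode("utf-8")
print(b64_text)     # a different representation of the same data
print(round_trip)   # → encode me
```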
What is Erasure Coding?
Erasure coding is a means of protecting data. Data is broken into fragments, expanded, encoded with redundant information, and stored across different locations or storage media. If a storage medium fails or data gets corrupted, the data can be reconstructed from the fragments stored elsewhere.
Think of any movie where the main character is framed for a crime and obtains evidence to exonerate himself. When he distributes different parts of the evidence and keeps these in other locations as insurance (e.g., a page of a document in a storage box per bank), he’s employing a form of erasure coding.
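Here is a deliberately tiny sketch of the idea in Python, using XOR parity so that the loss of any one fragment can be repaired. Production systems typically use Reed-Solomon codes, which tolerate multiple simultaneous losses:

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(fragments):
    """Store the data fragments plus one XOR parity fragment (all equal length)."""
    return fragments + [reduce(xor_bytes, fragments)]

def recover(stored, lost_index):
    """Rebuild the fragment at lost_index by XORing the survivors together."""
    survivors = [f for i, f in enumerate(stored) if i != lost_index]
    return reduce(xor_bytes, survivors)

data = [b"frag", b"ment", b"s_ok"]  # equal-length fragments
stored = add_parity(data)           # 3 data fragments + 1 parity fragment
print(recover(stored, 1))           # → b'ment'
```

The trick is that XOR is its own inverse: XORing all surviving fragments (including the parity) cancels everything except the missing piece.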
What is an Executive Information System?
An executive information system (EIS) is a management support system that facilitates and supports the decision-making requirements of an organization’s senior executives. Hence, it is also called an “executive support system (ESS).”
As a decision-making tool, it gives top executives easy access to internal and external information relevant to their organizational goals. As such, it is also considered a specialized decision support system (DSS).
What is Expanded Memory?
Expanded memory is a system that lets programs use more memory than early personal computers (PCs) could normally address. PCs of the early 1980s only had 640 kilobytes (KB) of usable random access memory (RAM) for applications.
The memory limitation was challenging for many applications, pushing Lotus Development Corporation, Intel Corporation, and Microsoft to develop the Expanded Memory Specification (EMS) standard.
What is an External Sorting Algorithm?
An external sorting algorithm is an algorithm that can handle massive amounts of information. It is used when the data that needs sorting doesn’t fit into a computer’s primary memory (usually random access memory [RAM]). In such cases, the information must be placed on an external memory device (usually a hard disk drive [HDD]).
Think of it this way. We should drink at least eight glasses of water a day to stay healthy. Let’s say you use a 1-liter tumbler at work, and your goal is to drink eight glasses in eight hours. Since 1 liter equals about four glasses, you’ll need to refill your tumbler once while in the office. In this scenario, the tumbler represents the computer’s RAM, which can only “process” four glasses of water (or data) per batch, while the water source represents the HDD, which can supply massive amounts of “data,” depending on your goal.
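The refill-the-tumbler idea maps onto external merge sort: sort batches small enough to fit in “RAM,” spill each sorted batch to disk, then merge the batches in one streaming pass. The sketch below shrinks the “memory limit” to three numbers for illustration:

```python
import heapq, os, tempfile

def external_sort(numbers, chunk_size=3):
    # Phase 1: sort chunks that fit in "RAM" and spill each to a temp file.
    chunk_files = []
    for start in range(0, len(numbers), chunk_size):
        chunk = sorted(numbers[start:start + chunk_size])
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        f.write("\n".join(map(str, chunk)))
        f.seek(0)
        chunk_files.append(f)
    # Phase 2: k-way merge of the sorted runs, streaming line by line.
    streams = [(int(line) for line in f) for f in chunk_files]
    result = list(heapq.merge(*streams))
    for f in chunk_files:       # clean up the temporary run files
        f.close()
        os.unlink(f.name)
    return result

print(external_sort([9, 4, 7, 1, 8, 2, 5]))  # → [1, 2, 4, 5, 7, 8, 9]
```

Real external sorts use chunk sizes in the gigabytes and buffered binary I/O, but the two phases, sorted runs followed by a merge, are the same.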
What is a Fat Server?
A fat server provides most of the functionality a client machine within a client/server computing architecture requires. Think of it as a standard core server that hosts and provides critical network-based applications and storage, processing, Internet access, and other services.
In much simpler terms, in a network comprising a fat server and multiple computers, none of the processing is done by any of the connected computers. Instead, all the processing happens on the fat server.
What is Fault Tolerance?
Fault tolerance refers to a computer’s ability to continue working correctly despite a system failure. No matter how many performance tests a computer goes through before it is sold, it can still experience system failures. But computers are designed so that, despite such errors, they can keep working and produce correct results as much as possible.
Computer errors can stem from three major components—hardware, software, and power sources. More often than not, when a failure occurs, an application or your entire computer will shut down and restart. After that, the most recently saved copies of the files you were working on when the fault occurred will be available.
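One simple software-level form of fault tolerance is redundancy: keep more than one copy of a component and fall back when one fails. The sketch below is a hypothetical illustration (the replica names and interfaces are made up), not a production pattern:

```python
def fault_tolerant_read(replicas, key):
    """Try each replica in turn, so a single failure does not take
    the whole read operation down."""
    errors = []
    for replica in replicas:
        try:
            return replica(key)
        except Exception as exc:  # real systems catch specific fault types
            errors.append(exc)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Two hypothetical replicas: the first has "failed," the second is healthy.
def broken_replica(key):
    raise IOError("simulated disk failure")

def healthy_replica(key):
    return {"config": 42}[key]

print(fault_tolerant_read([broken_replica, healthy_replica], "config"))
# -> 42
```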
What is a Field Programmable Gate Array?
A field programmable gate array, more popularly referred to as an “FPGA,” is a semiconductor device built around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. A CLB is simply a single set of interconnected programmable logic devices. A programmable interconnect, meanwhile, is the wiring that connects all the elements of a programmable logic device, such as an FPGA, together.
What is Firmware?
Firmware refers to software that has been permanently installed in a machine, device, or microchip, usually by the manufacturer. Without it, the electronic device will not work. Unlike standard software, firmware is meant to control, operate, or maintain the hardware in the background, and not interact with human users.
It usually requires special equipment to embed firmware into a device, and you normally will not be able to alter or erase it without the manufacturer’s help. Because it is planted into the hardware, firmware is also called “embedded software” or “embedded system.”
What is a Floating-Point Unit (FPU)?
A floating-point unit (FPU) is that part of a computer’s processing unit that allows it to perform floating-point calculations. Floating-point numbers contain fractions or decimal points, such as 8.565 and 0.0158, and operations that include them are called “floating-point calculations.” These calculations could range from simple ones, such as addition and multiplication, to complicated processes, such as trigonometric and exponential calculations.
Early computers often used a separate FPU chip, known as a math coprocessor, to handle these types of calculations. However, starting with chips such as the Intel 80486DX and Motorola 68040 in the early 1990s, computer manufacturers made the FPU part of the microprocessor chip itself. Today, FPUs are a standard part of the central processing unit (CPU).
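The calculations an FPU handles are visible from any high-level language; the hardware simply does the work underneath. The Python snippet below illustrates what floating-point arithmetic looks like, from basic operations to the trigonometric and exponential functions mentioned above:

```python
import math

# Floating-point numbers are stored in binary, so some decimal values
# are only approximated:
print(0.1 + 0.2)                      # 0.30000000000000004, not exactly 0.3
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance

# Floating-point calculations range from simple arithmetic...
print(8.565 * 2)                      # 17.13

# ...to trigonometric and exponential operations:
print(math.sin(math.pi / 6))          # approximately 0.5
print(math.exp(1))                    # approximately 2.71828 (Euler's number)
```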
What is Fog Computing?
Fog computing is a type of network architecture (i.e., how the systems are connected within a network and to the Internet) that links cloud computing (storage of data and programs over the Internet) to the Internet of Things (IoT). It allows data transmitted between IoT devices and cloud services to be processed faster because it brings them closer to one another. At the same time, it also determines which information is stored in the cloud and local hosts (i.e., the computers or servers within a network).
Fog computing strategically places resources like applications and data at the network edge, closer to the devices that generate and consume the data. As such, it limits bandwidth use, lowers latency, and promotes optimal network performance because the data does not have to travel long distances to reach its intended destination.
What is the Front-Side Bus?
The front-side bus (FSB) is a communication pathway in a computer that connects the central processing unit (CPU) to the main memory and other system components. It serves as a means to transfer data and instructions between the CPU and other devices, such as the random-access memory (RAM) and expansion slots. The FSB speed, measured in megahertz (MHz) or gigahertz (GHz), determines the rate at which data can be transferred between the CPU and other system components.
You can compare the FSB to a city highway or road network. Just like a highway connects the different parts of a city, the FSB connects the CPU with other components in a computer. Similar to how traffic flows on a highway, data and instructions flow through the FSB between the CPU, the main memory, and other devices. The speed of the FSB determines how fast you can transfer data the same way a highway’s speed limit affects the rate at which vehicles can travel.
What is Frontend Development?
Frontend development refers to that area of web development that focuses on what the users see on their end. It involves transforming the code built by backend developers into a graphical interface, making sure that the data is presented in an easy-to-read and -understand format.
Without frontend development, all you would see on a website or web application is undecipherable code (unless you’re a developer, too, of course). But because of frontend developers, people with no coding background can easily understand and use web applications and websites. Everything you see when you visit Google Apps, Canva, Facebook, and other web applications is the product of backend and frontend developers working together.
What is Glueware?
Glueware refers to solutions or platforms designed to seamlessly integrate different software and systems that contain related resources. They allow multiple solutions and systems to work together regardless of their developer or vendor, version, or type.
When you think of glueware, you can compare it to glue that can connect various things no matter what each is made of. An example would be gluing a metal figurine to a wood shelf so it won’t fall off even when an earthquake hits.
What is a Golden Image?
A golden image is a preconfigured template for various virtual machines (VMs) (e.g., virtual servers, desktops, or disk drives) used in network virtualization. Some organizations also consider it a master image that users can copy multiple times. A golden image makes it easier for IT managers to develop a consistent environment for all users.
What Is a Graphics Processing Unit (GPU)?
Were you good at drawing or painting as a child? Perhaps you’re that right-brained person in class who’s artistically inclined and excelled at visual arts. If that’s the case, then you’re pretty much like a graphics processing unit (GPU).
A GPU is a computer component that excels in rendering graphical content. It allows a system to display visually intense videos, images, and animations on software or video games.
GPUs can handle the complex calculations a computer needs to show high-quality graphics outputs.
Some of the major GPU manufacturers include Nvidia, AMD, and Intel. At present, Nvidia leads the graphics card market with a share of 73%.
What is Grid Computing?
Grid computing refers to a group of networked computers that work together to achieve a common goal. It allows users to split tasks across different machines, reducing the processing time and increasing efficiency. Grid computing is what enables normal system setups to function like supercomputers. In a sense, the process allows any network to perform a high volume of functions, including analyzing substantial datasets and weather modeling.
Grid computing works by executing specialized software on computers included in the network. The software serves as a system manager responsible for coordinating and assigning various tasks and subtasks to different machines.
Unlike other high-performance computing systems, grid computers dedicate a node to each application or task. A node is a server or group of servers that manages and monitors the resources in a network.
What are Halloween Documents?
Halloween documents are files that were supposedly confidential to Microsoft. They consisted of company memoranda, statements, and internal reports that discussed how Linux and open-source software competed against Microsoft and how the company should respond.
The Halloween documents were initially leaked in October 1998, with their sources unnamed. The documents were sent to Eric Raymond, a software developer and open-source software advocate, who immediately published them. In the years that followed, more confidential Microsoft documents were leaked from various sources, bringing the number to 11 documents to date.
What is Horizontal Software?
Horizontal software refers to an application that can be used across several industries. Classic examples of horizontal software are word processors, spreadsheets, and web browsers. We can hardly think of any sector that has no use for these applications.
Horizontal software is primarily developed with no specific market or industry in mind; it is designed to be used by a wide range of users. In contrast, vertical software refers to applications specifically designed to solve a problem within a particular industry.
What is a Hosting Environment?
A hosting environment generally refers to the infrastructure and architecture a business uses for its website or workload management.
In website management, a hosting environment refers to the kind of network host a business uses—dedicated or shared. A dedicated host, also known as a “dedicated server” or “managed hosting service,” means leasing an entire server not shared with anyone else. The leaser thus maintains complete control of the server, including its operating system (OS), hardware, and other components. A shared host, meanwhile, hosts many websites on one Internet-connected physical web server.
In workload management, a hosting environment refers to how a company’s network is set up to make day-to-day operations seamless. It encompasses the corporate desktop, sandbox, development, data-oriented development, test, and production environments.
What is Hot Standby?
Hot standby is a means to ensure a company’s critical business systems continue to work uninterrupted even if one or more of their hardware components fail. Also known as a “hot spare” or “warm spare,” the method lets a hot standby-equipped server push on with its tasks even if, say, its hard drive ceases to work. There would be no system downtime and thus no interruption to business operations.
Hot standby, therefore, is a failsafe. It ensures business continuity despite hardware-related problems. Think of it as your car’s handbrake. If your brake pedal fails, you can still make it stop with your handbrake.
What is a Human Resource Information System (HRIS)?
A human resource information system (HRIS) is an application that collects, processes, stores, and manages an organization’s employee data. It is widely used by human resource (HR) departments, enabling staff to perform essential functions, such as recruitment and performance management.
Most HRIS solutions are cloud-based, meaning they run on servers outside the company’s premises and the data they contain can be accessed over the Internet. But some HRIS applications are installed directly on an organization’s premises. In this setup, employees have to be on-site to access the system.
What is Humanware?
Humanware is the practice of adding a human facet to the development of computer systems. The main goal of developing humanware is to make hardware and software as functional as possible for the people who use them.
A computer system is made up of three major components—hardware, software, and humanware. While software and hardware make up an actual computer, humanware enhances the user experience (UX) by improving the system’s user interface (UI). Often, developing humanware begins by defining who the computer’s potential users are, what they are interested in, and what they need before designing the infrastructure.
You can think of hardware as cooking utensils in a kitchen. Software, meanwhile, can pertain to a recipe. The humanware component in this scenario is the chef. All the utensils and recipes in the world will be useless if you do not have a chef to bring food to life.
Much like the comparison, a computer cannot work with just hardware and software. It needs humanware to serve its intended purpose.
What is a Hybrid Cloud?
The cloud is a place where data can be stored and accessed over the Internet. More specifically, the term “hybrid cloud” refers to a combination of both private and public cloud services in one package.
It’s used by organizations that want to be as efficient as possible by making use of public cloud services for non-sensitive operations while using the private cloud only when needed.
We can think of the hybrid cloud as public and private transportation services. We usually take the bus (public cloud) when we still have time to spare and because it’s cheap. But if we’re in a rush or have sensitive items that need to be transported soon, we often take a cab (private cloud).
What is Hypertext Preprocessor (PHP)?
Do you remember how websites used to be very static and boring to look at not too long ago? In contrast, today’s websites are more dynamic, interactive and easier to use overall. One of the things that made this possible is a programming language called PHP.
PHP, or Hypertext Preprocessor, is a popular scripting language used to create the attractive, user-friendly, and interactive Web pages we see today. PHP is open source, which means its source code is freely available and it can easily be downloaded for free from the Web.
What is Identity Lifecycle Management?
Identity lifecycle management refers to managing user identities and the changing access privileges of employees and contractors throughout their time with an organization.
It is a critical component of a complete identity security offering. In particular, an identity lifecycle management solution automates and simplifies all processes related to onboarding and offboarding users, assigning and managing access rights, and monitoring and tracking access activities.
What is an Industrial Control System?
The term “industrial control system,” or “ICS” for short, refers to a collection of various types of control systems and associated instruments used to operate and automate industrial processes. It includes all related devices, systems, networks, and controls.
Each ICS has a different function depending on the industry it’s used in. In general, ICSs are built to manage electronic tasks efficiently. Today, ICS devices and protocols are used in nearly every industrial sector and critical infrastructure, such as the manufacturing, transportation, energy, and water treatment industries.
What is the Infinite Monkey Theorem?
The Infinite Monkey Theorem states that, given an infinite amount of time, a monkey hitting random keys on a computer keyboard will almost surely type any given text, such as the entire “Lord of the Rings” book series or any written work for that matter. In this context, “almost surely” is a mathematical term meaning the event happens with probability 1.
Generalizing the theorem, we can say that any sequence of events with a non-zero probability of happening will almost surely occur, given enough time.
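A quick simulation in Python: instead of an infinite timeline, we cap the number of keystrokes so the function can, in principle, give up. For a short target on a 26-key alphabet, a modest budget is virtually always enough; the key set and budget are illustrative choices, not part of the theorem:

```python
import random
import string

def monkey_types(target, max_presses, seed=0):
    """Press random lowercase keys until `target` appears as a
    contiguous run, or give up after max_presses keystrokes."""
    rng = random.Random(seed)  # seeded for reproducibility
    window = ""
    for press in range(1, max_presses + 1):
        window = (window + rng.choice(string.ascii_lowercase))[-len(target):]
        if window == target:
            return press  # number of keystrokes it took
    return None

# A two-letter word appears quickly: each position matches with
# probability (1/26)**2, so 100,000 presses is more than enough.
print(monkey_types("ab", 100_000))
```

For a text the length of “Lord of the Rings,” the expected waiting time dwarfs the age of the universe, which is exactly why the theorem is a statement about probability 1 over infinite time rather than a practical claim.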
What is an Integrated Development Environment (IDE)?
An integrated development environment (IDE) is a software suite that comprises most of the essential tools developers use to write and test programs. It typically includes a source code editor, a compiler, a debugger, and build automation tools.
Some IDEs offer more advanced features and allow for customization, allowing developers to download plug-ins and extensions based on their preferences. Some famous examples of IDEs include Visual Studio, Eclipse, IntelliJ IDEA, and PyCharm.
What is an ISMS Audit?
An ISMS (short for “information security management system”) audit enables the review of an organization’s ISMS by an objective and competent auditor. It tests the components of the ISMS based on standard requirements mandated by the International Organization for Standardization (ISO).
You can compare it to an evaluation of a building’s physical security. An ISMS audit, like the physical security audit, tests how well the system works against all threats.
What is an IT Infrastructure?
An IT infrastructure is like a natural ecosystem. Every part has its own role and contributes to the whole and the accomplishment of its goals.
IT Infrastructure is made up of software, hardware, network components, and services that are needed for an enterprise’s IT environment to exist and operate. It gives an organization the capability to deliver IT products and services either for itself or for other companies.
What is Java Programming Language?
Java is a general-purpose programming language used to create software applications for computers, smartphones, tablets, and even websites. Programmers use it to write instructions that can be understood by computers. It is one of the most widely used languages, and software and apps created in Java are running on many devices today.
What is a Knowledge Management System?
A knowledge management system (KMS) is an IT system that keeps and provides knowledge. It aims to improve understanding, collaboration, and process alignment. Any organization or team that wishes to have a central repository that all members can access to enhance their know-how and skills can use a KMS.
You can compare a KMS to a physical library. You can use all the books in it to learn more about practically anything under the sun. And anyone with a library card can visit it anytime. In this case, the books are the virtual resources stored in a database, and the library card indicates the level of access each user has.
What is a Label Switching Router?
A label switching router is a router that supports and understands Multiprotocol Label Switching (MPLS), a type of networking that routes traffic based on labels. Unlike traditional Internet Protocol (IP) address-based forwarding, MPLS is faster and can handle heavy Internet traffic. The routing method helps balance traffic and optimize network resources.
Label switching routers play a huge role in delivering MPLS packets to their designated routes. To better understand what a label switching router is, pretend you’re on an unfamiliar highway. If you’re not sure where you’re going, road signs are a big help. MPLS packets are like travelers having a hard time finding the correct routes. A label switching router acts like road signs, guiding MPLS packets to their right destination.
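A label switching router’s core data structure is its label forwarding table: look up the incoming label, swap (or pop) it, and send the packet out the chosen interface. The table entries, label numbers, and interface names below are all made up for illustration:

```python
# Toy label forwarding information base (LFIB):
# incoming label -> (outgoing interface, outgoing label)
LFIB = {
    20: ("eth1", 31),    # swap label 20 for 31, forward out eth1
    21: ("eth2", None),  # None models popping the label (e.g., last hop)
}

def forward(packet):
    """Forward an MPLS packet by label lookup, not by IP address."""
    interface, out_label = LFIB[packet["label"]]
    if out_label is None:
        packet.pop("label")          # pop the label stack entry
    else:
        packet["label"] = out_label  # swap the label
    packet["egress"] = interface
    return packet

print(forward({"label": 20, "payload": "data"}))
# -> {'label': 31, 'payload': 'data', 'egress': 'eth1'}
```

Because the lookup is a fixed-length label match rather than a longest-prefix IP match, it can be done very quickly in hardware.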
What is a Legacy System?
A legacy system refers to an old process, piece of technology, system, or application that has become outdated yet remains in use. Examples include factory equipment that runs on MS-DOS, office computers that still use Windows 2000, and servers that continue to run Windows Server 2003.
In general, it is not advisable to use legacy systems, as they are no longer supported by their respective vendors. As such, they no longer receive critical patches, especially for cybersecurity. If they are attacked, retrieving data, or even simply rebooting them, may be impossible.
What is a Lights Out Data Center?
A lights out data center is a set of servers designed to operate without on-site staff and physically isolated from an organization’s headquarters. Its primary purpose? It prevents unauthorized human access and limits the effects of environmental changes (e.g., energy fluctuations, blackouts, etc.) on a company’s productivity.
Interestingly, the term “lights out” comes from the fact that such a facility can, in principle, run with the lights switched off, since no staff need to be physically present.
What is Longevity Testing?
Longevity testing, also known as “durability testing” or “endurance testing,” is a testing technique used to assess the stability and performance of a system over an extended period. It involves subjecting the software or hardware to prolonged usage scenarios or stress conditions to identify potential issues that may arise over time, such as memory leaks, resource exhaustion, performance degradation, or system crashes. Its goal is to validate the system’s ability to maintain its functionality and reliability under continuous operation or heavy usage, simulating real-world conditions.
Think of longevity testing as stress testing a car. You can drive it continuously over rough terrain or in extreme weather conditions to determine how well it holds up.
What is Manchester Encoding?
Manchester encoding is an encoding technique that combines data with a clock signal. It is used in telecommunications and data storage. Computing code, as we know, comprises binary digits, or bits, which are sequences of zeroes and ones.
In Manchester encoding, each bit is represented by a transition between a low and a high signal level in the middle of its time slot; under the IEEE 802.3 convention, a zero is a high-to-low transition and a one is a low-to-high transition. Because every bit period contains a transition, the clock and the data are effectively combined in one bitstream. A bitstream, or “binary sequence,” is simply a sequence of bits.
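The mapping can be sketched in a few lines of Python. This sketch uses the IEEE 802.3 convention (0 as high-then-low, 1 as low-then-high); the rival G. E. Thomas convention simply swaps the two:

```python
def manchester_encode(bits):
    """Each bit becomes two half-bit levels with a mid-bit transition:
    0 -> high then low, 1 -> low then high (IEEE 802.3 convention)."""
    encoded = []
    for bit in bits:
        encoded.extend((0, 1) if bit else (1, 0))
    return encoded

def manchester_decode(levels):
    """Recover the bitstream by reading the half-bit levels in pairs."""
    return [1 if pair == (0, 1) else 0
            for pair in zip(levels[::2], levels[1::2])]

signal = manchester_encode([1, 0, 1, 1])
print(signal)                     # -> [0, 1, 1, 0, 0, 1, 0, 1]
print(manchester_decode(signal))  # -> [1, 0, 1, 1]
```

Because every encoded bit contains a transition, a receiver can recover the sender’s clock directly from the signal.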
The Manchester code derives its name from the University of Manchester, where its developers used it to store data on the magnetic drums of the Manchester Mark 1 computer.
What is a Markup Language?
A markup language is a computer language for annotating the contents of a document. It was designed to process, define, and present computer text in a form that humans can read. It specifies the code used to format text, including the style and layout in which the programmer wants the document to appear, and uses tags to define these elements.
You can think of using markup language like a teacher grading student exams. The teacher “marks” mistakes, so students know why they were given a particular score.
What is Mean Time to Recovery?
Mean time to recovery (MTTR) refers to the average time it takes a system to recover fully from a failure. When this amount of time has passed, the device should be fully operational again. It includes the entire outage time as well as the time spent on testing, repair, restoration, and resolution. The MTTR of every system varies.
Imagine a person who hurt his ankle. In his case, MTTR starts from when he broke his ankle to the time it heals fully, and he can walk again without feeling any pain.
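The underlying arithmetic is simply total downtime divided by the number of incidents. A small Python sketch, with made-up incident timestamps in hours:

```python
def mean_time_to_recovery(outages):
    """MTTR = total downtime / number of incidents.
    `outages` is a list of (failed_at, restored_at) pairs, in hours."""
    total_downtime = sum(restored - failed for failed, restored in outages)
    return total_downtime / len(outages)

# Three hypothetical incidents, down for 2, 1, and 3 hours respectively.
incidents = [(0.0, 2.0), (10.0, 11.0), (24.0, 27.0)]
print(mean_time_to_recovery(incidents))  # -> 2.0
```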
What is Micro Segmentation?
Micro segmentation refers to a network security technique that allows security architects to logically divide a data center into distinct security segments, down to the individual workload level. That way, they can define specific security controls and services for each component.
Think of it as an office building’s lobby security personnel. While each office may have its own security measures (including guards), the lobby security already limits who can go into the building, thereby decreasing any office’s chances of intrusion.
What is Middleware?
Middleware refers to a type of software that connects different programs and databases to ensure that they can communicate, manage data, and work together seamlessly. Like a bridge, it allows an operating system (OS) to communicate with the various applications that run on it.
You can compare a middleware to a translator that helps different individuals who speak various languages to communicate with and understand one another.
What is Multihoming?
Multihoming refers to the practice of simultaneously connecting a network device or host to multiple networks. In other words, it involves having multiple network connections or interfaces on a single device or host. Each network connection may have a unique associated IP address.
Multihoming can be implemented at different levels of the network stack, ranging from individual devices or hosts to entire networks. It is commonly used by enterprises, data centers, and Internet service providers (ISPs) to optimize network performance, enhance reliability, and manage traffic effectively.
What is a Multiplexer?
A multiplexer, often abbreviated as “MUX,” is a digital electronic device that combines multiple input signals into a single output signal. It is commonly used in digital systems and communication networks to transmit multiple data streams over a shared channel. Its primary function is to select one of the input signals and route it to the output based on control signals.
A multiplexer is comparable to a switchboard, where multiple input sources are selectively directed to a single output or destination. Imagine a switchboard operator in a telephone exchange. The operator receives calls from different telephone lines and routes each call to the appropriate recipient based on the caller’s request or the operator’s instructions. Similarly, a multiplexer takes multiple input signals and selects one based on control signals, directing it to the output.
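Functionally, a multiplexer is a selector: the control signal chooses which input line reaches the single output. A minimal Python sketch of a 4-to-1 mux (the source names are invented for the example):

```python
def mux(inputs, select):
    """4-to-1 multiplexer: a 2-bit `select` control signal decides
    which input line is routed to the single output."""
    if not 0 <= select < len(inputs):
        raise ValueError("select line out of range")
    return inputs[select]

# Cycling the select line time-shares one output channel among four
# sources, as in time-division multiplexing.
sources = ["voice", "video", "data", "telemetry"]
shared = [mux(sources, s) for s in (0b00, 0b01, 0b10, 0b11)]
print(shared)  # -> ['voice', 'video', 'data', 'telemetry']
```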
What is a Native App?
A native app is software created to run on a specific device or platform. For example, some apps are written specifically for Android phones. They have access to features that may not be available on other platforms, such as the iPhone.
Native apps are like the locals in a city or town. They know all the ins and outs and can get to places that visitors may not even know about.
What is Network Coding?
Network coding is a technique in which network nodes encode (combine) data packets before transmission and receivers decode them upon receipt. It aims to increase network throughput, reduce delays, and make a network more robust.
Network coding, therefore, is performed to make networks work faster, more efficiently, and hassle-free. Network administrators typically handle it.
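The classic textbook example of network coding XORs two packets together at a relay node (the “butterfly network” example). Because XOR is its own inverse, a receiver that already holds one original packet can recover the other from the single coded transmission. A minimal sketch:

```python
def xor_packets(a, b):
    """Combine two equal-length packets into one coded packet."""
    return bytes(x ^ y for x, y in zip(a, b))

# A relay encodes two packets into a single transmission...
p1, p2 = b"hello", b"world"
coded = xor_packets(p1, p2)

# ...and each receiver decodes the packet it is missing:
assert xor_packets(coded, p1) == p2  # receiver holding p1 recovers p2
assert xor_packets(coded, p2) == p1  # receiver holding p2 recovers p1
print("both packets recovered from one coded transmission")
```

Sending one coded packet instead of two separate ones is where the throughput gain comes from.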
What is Network Congestion?
Network congestion occurs when traffic exceeds a network’s maximum capacity. Networks have a bandwidth allocation that specifies the set volume of data transmissions they can handle. When there’s too much data, networks can get clogged. This networking issue results in poor video quality, inability to play online games, and an overall reduction in service quality.
There can be several causes of network congestion. You may have too many connected devices or one device may be consuming too much bandwidth. Old routers can also slow down traffic, resulting in network congestion. Internet service providers (ISPs) experience network congestion as well. When this happens, they may throttle or restrict your Internet speed to manage traffic and fix the congestion.
What is Network Virtualization?
Network virtualization refers to consolidating hardware and software functionality into a single network controlled via one virtual machine (VM). The VM simulates traditional hardware, albeit only limited to forwarding packets that contain instructions the virtual network carries out.
Network virtualization can either be implemented externally or internally. External virtualization involves combining a host of local networks or parts of them into a single host to improve efficiency. Internal virtualization, meanwhile, uses software containers (software units that house code and related programs) to provide network-like functionality through a single server.
To answer the question “What is network virtualization?” therefore, we can say it gives administrators the ability to run a network even if it is disconnected from the underlying hardware.
What is the Number of Tiers (N-Tier) Architecture?
The number of tiers (n-tier) architecture generally divides an application into three tiers—the presentation, logic, and data tiers. It physically separates the different parts of an application instead of doing so conceptually or logically, as in the so-called “model-view-controller (MVC) framework.” The n-tier layers are connected linearly, meaning all communication goes through the middle layer—the logic tier.
The n-tier architecture also refers to a program distributed among three or more separate computers in a distributed network. The most common form is the 3-tier application that comprises a user interface (UI) programming in the user’s computer, the business logic in a more centralized computer, and the required data in a computer that manages a database. This model allows software developers to create reusable applications with maximum flexibility.
You can liken the n tiers, therefore, to the rooms of a house arranged in a single file, separated by walls, with only one entry door (i.e., the front door and no back door). In this case, the house is the application, and the rooms (i.e., the living room and kitchen, bedroom, and bathroom) are the tiers. Everyone who enters the house has to travel in only one direction to go in or out.
What is Object-Relational Mapping?
Object-Relational Mapping (ORM) is a programming technique for mapping data between a relational database and an object-oriented programming language. It’s a way to connect the two worlds of object-oriented programming and relational databases, which have different data structures and access methods.
ORM tools provide a layer of abstraction that lets developers work with objects in their code rather than directly interacting with the database. They handle the mapping between the database and the object-oriented code, so developers can use familiar object-oriented techniques to interact with the data.
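A toy illustration of the mapping idea using Python’s built-in `sqlite3` module. Real ORMs (SQLAlchemy, Hibernate, and the like) generate this plumbing automatically; the `User` class and the table here are invented for the example:

```python
import sqlite3

class User:
    """A plain object that the mapping layer ties to a database row."""
    def __init__(self, name, email):
        self.name, self.email = name, email

def save(conn, user):
    """Object -> row: map attributes to columns."""
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                 (user.name, user.email))

def find_by_name(conn, name):
    """Row -> object: map a fetched row back into a User."""
    row = conn.execute("SELECT name, email FROM users WHERE name = ?",
                       (name,)).fetchone()
    return User(*row) if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
save(conn, User("Ada", "ada@example.com"))
print(find_by_name(conn, "Ada").email)  # -> ada@example.com
```

The calling code never writes SQL strings against raw rows; it works with `User` objects, which is the abstraction ORMs provide.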
What is On Premises?
“On premises,” also referred to as “on-premise,” “on-premises,” or “on-prem,” is a method of deploying software. With on-prem software, computer programs are installed directly on users’ computers, for example from CDs or USB drives. With off-premises software, in contrast, the installer or application is delivered over the Web.
Many companies opt for on-prem because it doesn't require third-party access, gives owners physical control over the server hardware and software, and does not require them to pay month after month for access.
Think of how you buy your fast food meal. You could buy it and eat it on premises at the restaurant. Or you can call and order your meal, and have it delivered to your home.
What is On-Demand Software?
On-demand software refers to an application delivered via and managed on a vendor’s cloud computing infrastructure to Internet-connected users when needed. The business model allows users and organizations to use the software in a pay-as-you-go manner, typically billed monthly.
On-demand software is also called “software-as-a-service (SaaS),” “online software,” and “cloud-based software.”
You can compare on-demand software to your utilities at home. You get billed each month for the electricity and water you use. So if you took a two-week vacation, you might get bills that are half of your regular consumption. The same goes for using on-demand software. You only pay for your total usage.
What is Open Database Connectivity (ODBC)?
Open Database Connectivity, or ODBC for short, is a standard application programming interface (API) for accessing database management systems (DBMSs). Let’s break this definition down to make it simpler.
First, an API is a software intermediary that lets two applications communicate. Think of it as a middleman, like a real estate agent that brokers a deal between a property owner and a prospective buyer.
Next, a DBMS is a program that stores, retrieves, and runs data queries. It is a link between an end-user and a database that lets the user create, read, update, and delete data from a database.
So, simply put, ODBC serves as a mediator between a program like Microsoft Excel and a data source like a SQL database. It lets an end-user access and query the data from Excel on his or her computer.
What is Open Source Software?
Open source software is any computer program whose source code is made available for anyone to inspect and modify. Open source is basically a philosophy of being able to share and modify something because its design is made accessible to everyone.
Normally, only the original software developers get to see and make changes to a program's source code. Open source software removes this restriction so that anyone can improve that program by adding features to it or fixing parts that don't always work correctly.
Say, the special stew they serve at a nearby restaurant is really good, although it's just a little bit too salty for you. If you had access to the recipe you could prepare your own version of the stew with a little less salt in your own kitchen. That's the essence of open source.
What is an Operating Environment?
An operating environment is the place where users run application software or programs. It is not necessarily a full operating system (OS), but it does act like middleware, that is, the software that makes the OS work with a specific application.
Initially, operating environments helped an OS improve and extend its capabilities to more than just providing a reliable user interface (UI).
What is an Operating System?
The operating system is a piece of software that makes sure all the processes and functions of a computer system perform properly. It controls all of the computer's hardware components, coordinates with other computers on the network, oversees all the software that runs on it and communicates with the human who uses it.
You can compare it to a traffic officer at an intersection who controls the flow of vehicles and lets drivers know when to stop and when to proceed. In short, this person makes sure that everything goes smoothly and that the intersection is trouble-free.
What Is Orchestration?
Orchestration is the practice of automatically configuring, coordinating, and managing applications, computer systems, and services. It primarily helps IT teams manage complicated workflows and perform tasks with ease.
Orchestration often requires identifying and understanding the many processes involved in accomplishing tasks and tracking the steps involved across various environments. These include mobile devices, applications, and databases.
In sum, orchestration refers to automating a series of tasks to work together seamlessly.
What is Paravirtualization?
Paravirtualization is a computer hardware virtualization technique that allows virtual machines (VMs) to have an interface similar to that of the underlying or host hardware. This technique aims to improve the VM’s performance by modifying the guest operating system (OS).
With paravirtualization, the guest OS is modified so it knows that it is running in a virtualized environment on top of a hypervisor (the software layer that runs the VM) rather than directly on the physical hardware.
What is Path Coverage Testing?
Path coverage testing is systematic and sequential software testing in which the tester exercises every possible execution path through the code, not just every individual line.
Path coverage testing is a white-box (technical) testing method. It is labor-intensive, so it’s usually reserved for critical code sections.
Think of it this way: in car manufacturing, instead of just seeing if the vehicle runs, path coverage testing assesses all of its components (e.g., suspension, brakes, lights, etc.) under every combination of conditions. That way, buyers won’t have problems with the car when it goes to market, and the company won’t be liable for accidents or swamped with warranty claims.
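A tiny Python sketch shows why paths multiply faster than lines. The hypothetical function below has only two `if` statements, yet full path coverage requires four test cases, one per combination of branches:

```python
def classify_shipment(weight_kg, express):
    """Toy function with two independent branches, giving four execution paths."""
    if weight_kg > 20:          # branch 1: heavy vs. light
        cost = 50.0
    else:
        cost = 10.0
    if express:                 # branch 2: express doubles the cost
        cost *= 2
    return cost

# Path coverage means exercising every combination of branches,
# not just touching every line once:
assert classify_shipment(25, False) == 50.0   # heavy, standard
assert classify_shipment(25, True) == 100.0   # heavy, express
assert classify_shipment(5, False) == 10.0    # light, standard
assert classify_shipment(5, True) == 20.0     # light, express
```

With ten independent branches the same code would have over a thousand paths, which is why the technique is reserved for critical sections.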
What is Peer-to-Peer (P2P)?
Do you remember back in high school when your teachers divided you and your classmates into groups to work on projects together? Ideally, everyone in the group is equally important, and each member is expected to contribute to the final output.
That is how peer-to-peer (P2P) works. It refers to a group of computers that are interconnected to each other to share the same workload, rights, and duties without a central server to manage them.
Much like your high school group, the main purpose of a P2P network is to share its resources and let everyone involved work together to accomplish a specific task or service.
What is a Personal Area Network?
A personal area network (PAN) is a computer network that connects electronic devices with one another in an individual’s workspace. That network can include a laptop, a mouse, a printer, and other devices a person may need for work or play.
You can use a PAN to make your devices communicate with one another or to connect them to a much more extensive network, such as your home network (which may include your smart refrigerator and other appliances) or the Internet.
What is a Personal Information Manager (PIM)?
A Personal Information Manager (PIM) is a software or tool that helps individuals organize and manage their personal information in digital format. It serves as a centralized hub for storing, retrieving, and managing various types of personal data, such as contacts, calendars, tasks, notes, and more.
PIMs are designed to enhance personal productivity and provide a convenient way to keep track of important information. They often offer features like synchronization across multiple devices, reminders and notifications, search functionality, data backup, and integration with other applications or services.
What is Pervasive Computing?
Pervasive computing is a software engineering concept that espouses the use of computerized technology anytime and anywhere. Also known as “ubiquitous computing,” the idea is that computing can be done using any device and format wherever the user may be.
Pervasive computing can exist in several forms, ranging from laptops to household appliances. Some of the technologies that make it possible are microprocessors, mobile codes, sensors, and the Internet.
In short, pervasive computing happens every time people use digital devices to connect to technological platforms.
What is a Plug-In?
What if you could add another arm to your body so you can carry more, or give yourself another pair of legs so you can move in ways previously not possible? In the world of technology, a plug-in works the same way.
A plug-in is a software component that’s added to another computer program to give it new functions. An example of a plug-in is a virus scanner used in a Web browser in order to keep an eye on malicious software. By itself, the browser is not able to stop malware. But with the plug-in, it is secure and protected.
What is a Private Cloud?
A private cloud is a network of servers owned by a single organization. Functionally, it provides the same services as a public cloud. The software and applications are installed on these servers, but access to them is restricted only to designated people.
Think of a private cloud as an exclusive VIP lounge at the airport. Only those with million-miler status can come in and enjoy the amenities.
What is a Private Key?
A private key is one of a pair of codes used to access files encrypted using a process known as public-key cryptography. This type of encryption system uses two keys — a public key that is available to the general public, and a private key which only the recipient of an encrypted file has. You need both keys to be able to access the encrypted file.
To access a bank safety deposit box, some banks require two keys — one held by the bank manager and the other carried by the customer. This is somewhat how private keys work in the digital world.
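Here is an educational toy version of the idea in Python, using the well-known small-prime RSA example. Real keys use primes hundreds of digits long, so this sketch is for illustration only, not for actual security:

```python
# Toy RSA with tiny primes -- for illustration only, never for real encryption.
p, q = 61, 53
n = p * q                      # 3233, shared by both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, published to everyone
d = pow(e, -1, phi)            # private exponent, kept secret (2753 here)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private key (d, n) can decrypt
print(recovered)  # 65
```

Notice that knowing the public key `(e, n)` is not enough to decrypt; you need `d`, which is exactly the role the private key plays.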
What is a Production Environment?
A production environment is the setting where the latest working version of a computer program is installed and made available to end-users. Therefore it must always be in working condition, bug-free and available when the end-user needs it.
A production environment is different from a testing environment where company developers test the codes and updates but have the luxury of going back to the drawing board if any problem is encountered. In a production environment, everything is assumed to be perfect, and there’s no chance to undo the bad press if anything goes wrong.
What is a Programmable Logic Controller?
A programmable logic controller (PLC) refers to an industrial computer control system that monitors the state of input devices. It then makes decisions based on a custom program to control the state of output devices.
Nearly any production line, machine function, or process can use a PLC. But probably its most significant benefit is its ability to change and replicate an operation or a process while collecting and communicating vital information.
A PLC is also modular, which means you can mix and match various input and output devices to best suit your needs.
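A PLC's program typically runs in a repeating "scan cycle": read the inputs, evaluate the logic, write the outputs. The Python sketch below is a hypothetical, simplified version of that cycle, implementing the classic start/stop motor latch found in many PLC tutorials:

```python
def scan_cycle(inputs, state):
    """One simplified PLC scan: read inputs, evaluate logic, write outputs.
    Implements a classic start/stop motor latch in software."""
    # The motor runs if start is pressed, or stays latched on,
    # unless the stop button is pressed.
    state["motor"] = (inputs["start"] or state["motor"]) and not inputs["stop"]
    return {"motor_contactor": state["motor"]}

state = {"motor": False}
print(scan_cycle({"start": True, "stop": False}, state))   # motor turns on
print(scan_cycle({"start": False, "stop": False}, state))  # stays latched on
print(scan_cycle({"start": False, "stop": True}, state))   # stop turns it off
```

A real PLC repeats this cycle continuously, often thousands of times per second, against physical sensors and actuators instead of Python dictionaries.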
What is a Programming Language?
A programming language is a system for instructing the computer how to solve a computing problem or perform a specific task. It consists of specific commands, basic logic, and a formal way to combine all of these elements into directives for the computer to execute.
Programming languages can be classified in several ways. One common distinction is between high-level languages, which programmers use to write human-readable source code, and low-level languages, which sit closer to the machine's own instructions. Another is between compiled languages, whose code must be translated into machine code before it can run, and interpreted languages, which are executed directly by an interpreter.
A programming language is just like any human language. It has its own vocabulary and syntax, although the scope of what it can communicate is limited to what the computer can do.
What is a Project Management Sprint?
A project management sprint refers to a short time dedicated to a specific set of tasks or goals that are part of a larger project. The timeframe is fixed, usually ranging from one to four weeks, and includes time for planning, execution, and progress reviews.
Think of a project management sprint as a mini-project within a bigger project. For example, teams may dedicate separate sprints for each major feature when developing a mobile app. A project management sprint could be allocated for user interface (UI) development, another for adding remote login functionality, and other sprints for the rest of the requirements. Teams can make steady progress by breaking down the whole project into smaller and more manageable chunks.
What is Propagation Delay?
Propagation delay refers to the time it takes for a signal to reach its intended destination. It is related to networking, electronics, and physics.
Answering the question “What is propagation delay?” means responding to it in relation to at least three fields.
In a computer network, propagation delay is the interval between sending a signal and receiving it. In electronics, meanwhile, it is the time between a change at a circuit’s input and the resulting stable change at its output. Finally, in physics, it is the amount of time a signal takes to travel to its destination.
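In the networking sense, propagation delay is just distance divided by how fast the signal travels through the medium. A short Python sketch, assuming the commonly cited figure of about 2×10⁸ m/s for signals in fiber or copper:

```python
def propagation_delay(distance_m, speed_m_per_s=2e8):
    """Time for a signal to cross a link: distance divided by propagation speed.
    ~2e8 m/s is a typical signal speed in fiber or copper."""
    return distance_m / speed_m_per_s

# A 2,000 km fiber link:
print(propagation_delay(2_000_000))  # 0.01 seconds, i.e. 10 ms one way
```

Note that this delay depends only on distance and medium, unlike transmission delay, which depends on packet size and link speed.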
What is a Public Cloud?
Imagine having a computer that never breaks down, or that never needs to be upgraded. What if you never had to install any software, or had access to all the data storage space you can ever need? All you need to do is plug your computing device into a public cloud and simply pay for what you use.
The public cloud is a facility with massive processing capacity, virtually unlimited data storage, and ample network bandwidth that providers sell to anyone who needs it.
Its main idea is that you do not need to buy, upgrade, or maintain your own server computers. Public clouds also protect users from the dangers of losing their files and help them avoid paying for expensive high-speed Internet access for any software applications they want to make available online.
What is Rapid Application Development?
Rapid application development (RAD) is an app development model where functions are built in parallel, with each section treated like a subproject. The subprojects are then gathered and joined into a working model, also known as a “prototype.” Between these stages, it is easy for application developers to make, adjust, or even change elements of the model quickly.
RAD gives priority to the rapid release and review of prototypes. It also puts more emphasis on working software and user feedback than on requirements recording and strict up-front planning.
What is Real-Time Optimization?
Real-time optimization (RTO) is the process of optimizing a system or process in real-time, meaning it occurs in response to real-time data or inputs. It involves using advanced algorithms and models to analyze the data and make decisions or recommendations in real-time.
RTO is useful in various industries, including manufacturing, energy, transportation, and finance. It is often used to improve efficiency, reduce costs, and enhance overall performance. It typically involves feedback control systems, where data is continuously monitored, and adjustments are made based on the data. That can involve adjusting parameters, such as temperature, pressure, flow rate, or other variables, to achieve optimal performance.
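The feedback loop described above can be sketched in a few lines of Python. This is a hypothetical, highly simplified proportional controller, not a production RTO algorithm: each cycle it measures the gap between the current value and the target, and adjusts by a fraction of that error:

```python
def control_step(current, setpoint, gain=0.5):
    """One proportional-control adjustment: move toward the setpoint
    by a fraction of the remaining error (a simplified illustration)."""
    return current + gain * (setpoint - current)

temperature = 20.0
for _ in range(10):  # each loop models one real-time measurement cycle
    temperature = control_step(temperature, setpoint=80.0)
print(round(temperature, 2))  # close to the 80.0 setpoint
```

Real RTO systems layer optimization models on top of such feedback loops, trading off many variables at once, but the monitor-and-adjust cycle is the same.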
What is Reliability Engineering?
Reliability engineering is a subfield of systems engineering that focuses on making devices carry out intended tasks without fail. Built-in failsafe mechanisms make the systems “reliable” given certain conditions over a specified period.
Reliability engineering involves predicting, preventing, and managing potential uncertainties and failures. As such, reliability engineers have to go through a stringent process of reliability testing for their creations.
Reliability engineering is often applied when a customer buys a product. There is an unwritten acceptance that the product will fail after some time. The manufacturer ensures the product’s reliability with a warranty. So if, for instance, the product bought continually fails before the warranty expires, the manufacturer needs to rethink its reliability engineering process to prevent future failures.
What is Reverse ARP (RARP)?
RARP, short for “Reverse Address Resolution Protocol” or “Reverse ARP,” is a networking protocol employed by a computer to ask for its IP address from a gateway server’s Address Resolution Protocol (ARP) table or cache. Let’s simplify by looking at the tech terms individually.
A gateway server serves as a middleman between a computer and a remote server, providing additional security by hiding the remote server’s address from the computer. The ARP, meanwhile, is the communication protocol used to discover the media access control (MAC) address associated with an IP address. The ARP table lists the MAC addresses and their corresponding IP addresses.
The network administrator creates the ARP table, which gets stored in the gateway server. This table points the user to the server (identified by its MAC address) that provides the computer’s IP address.
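The ARP table itself is just a mapping between MAC addresses and IP addresses, so the two lookup directions are easy to sketch. The addresses below are made up for illustration:

```python
# A hypothetical ARP table as a gateway might hold it: MAC address -> IP address.
arp_table = {
    "00:1a:2b:3c:4d:5e": "192.168.1.10",
    "00:1a:2b:3c:4d:5f": "192.168.1.11",
}

def arp_lookup(ip):
    """Forward ARP: find the MAC address that owns a known IP."""
    for mac, addr in arp_table.items():
        if addr == ip:
            return mac
    return None

def rarp_lookup(mac):
    """Reverse ARP: a machine that knows only its own MAC asks for its IP."""
    return arp_table.get(mac)

print(rarp_lookup("00:1a:2b:3c:4d:5e"))  # 192.168.1.10
```

The real protocols do this over the network with broadcast frames, but the table lookup at the heart of each one is exactly this simple.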
What is Ruby Language?
If you’ve used a Web app that enables managing projects online, real-time collaborations, getting funds for a startup idea, and the like, then you’ve probably already made acquaintance with Ruby.
It’s a programming language that was designed and developed in Japan in the 1990s. Its similarities to other general-purpose programming languages make it easy for programmers to understand, and it supports a wide range of features and applications.
What is a Runtime Error?
A runtime error is an error that occurs when a program you’re using or writing crashes or produces a wrong output. At times, it may prevent you from using the application or even your personal computer. In some cases, users need only refresh their device or the program to resolve the runtime error. However, sometimes, users may have to perform a particular action to fix the error.
Runtime errors are of various types, including logic and encoding errors. Such errors are caused by unpatched bugs in the software build or used up memory. Simple fixes include reinstalling the affected program, updating it with a newer iteration, or operating it in Safe Mode.
Before a runtime error shows up on your computer, you may have noticed its performance slowing down. When runtime errors occur, your computer will typically display a prompt stating the specific type of error you’ve encountered.
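A classic runtime error is division by zero: the code is syntactically fine and runs, but crashes when it hits the bad input. The Python sketch below shows the crash being caught and handled instead:

```python
def safe_divide(a, b):
    """Division guarded against a common runtime error."""
    try:
        return a / b
    except ZeroDivisionError:
        # Without this handler, the program would crash at runtime here.
        return None

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None instead of a crash
```

The error only appears while the program is running with particular data, which is what distinguishes runtime errors from compile-time (syntax) errors.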
What is a Scripting Language (Script)?
When you follow the step-by-step instructions in a recipe, you'll end up cooking a delicious meal.
A scripting language or script is like a recipe: a series of instructions that tells the computer what to do. Like a programming language, a script automates the computer’s tasks. The main difference is that scripts run within other programs, such as browsers, whereas programs go through a more complex process of being compiled into binary files that can run by themselves on a computer without the help of other programs.
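Here is what the "recipe" character of a script looks like in practice. This short Python example runs top to bottom, one instruction at a time, with no separate compile step producing a standalone binary. The file names are made up for illustration:

```python
# A short script: each line is an instruction the interpreter runs in order.
filenames = ["report.txt", "notes.txt", "draft.txt"]

renamed = []
for i, name in enumerate(filenames, start=1):  # step through each file name
    renamed.append(f"{i:02d}_{name}")          # prefix it with a two-digit counter

print(renamed)  # ['01_report.txt', '02_notes.txt', '03_draft.txt']
```

A compiled program performing the same task would first be translated into machine code; the script simply hands each line to the interpreter as it goes.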
What is Scrum Project Management?
Scrum project management is an agile project management approach, meaning projects are completed in small sections, with insights gained from analyzing the results of each step used to determine the next one.
Scrum project management is typically employed in software development. Leadership is taken on by a point person or the scrum master whose main task is to ensure project completion within an allotted time frame.
In scrum project management, each task should be completed within a short and fast cycle called a “sprint.” You can think of scrum project management as a hurdle race that team members aim to finish within a specified time frame.
What is Seat Management?
Seat management is the process of managing an entire set of workstations, including all related hardware and software, in one network. The method also refers to having a single service provider manage all of a company’s information technology (IT) requirements. In such a case, organizations pay on a per-seat basis where each seat represents one terminal or workstation.
Seat management involves installing, operating, and maintaining an organization’s hardware and software to ensure improved overall performance.
What is SecDevOps?
SecDevOps is a software development and deployment process that places security as the first step in the life cycle. Instead of treating security as a tool, it is integrated into every stage of the life cycle and all the software’s components.
SecDevOps pushes developers to consider security principles and standards while creating software. As such, security processes and checks are introduced as early as possible in the life cycle to make the quick DevOps release approach a reality.
With ever-evolving cyber attacks occurring every 44 seconds, SecDevOps allows developers to create available, survivable, defensible, and resilient software.
What is Shadow IT?
Shadow IT refers to all software and hardware that departments in an organization use without the consent and knowledge of its IT department. Also known as “embedded IT,” “fake IT,” “stealth IT,” “rogue IT,” “feral IT,” or “client IT,” it is meant to improve employee productivity but could put a network at risk of cyberthreats.
Shadow IT usage has grown over the past few years due to the spread of cloud computing.
What is Shift-Right Testing?
Shift-right testing refers to performing software tests in the latter part of the development process, usually after deployment or release. It is also applied to application deployment, configuration, operation, and monitoring. It is connected to DevOps as well.
Shift-right testing is comparable to buying a product that you’ve never used before. You test if it works and fulfills its promises after using it for a while.
What is a Single Pane of Glass?
A single pane of glass is an IT management console. It brings data or interfaces from several different sources to let you view them on a single dashboard. This approach helps you get a better perspective on which information is valuable. That way, you can present it to employees or colleagues so they can act on it quickly.
Like a vast window that lets you see as much of the view outside as possible, a single pane of glass in IT gives you a bird’s eye view of possibly all the data you can get from your systems and applications in one go.
What is a Single-Board Computer?
A single-board computer is a completely functional computer built using a single circuit board. All features required in a computer are present, including the microprocessor, memory, and input/output (I/O) processor.
While a single-board computer is fully functional, it doesn’t have expansion slots for peripherals like printers and scanners. All of its functions are built-in. However, most single-board computer providers can configure them to include customized components. Still, upgrading or adding more features is not feasible unless the single-board computer is replaced.
A single-board computer is used in various applications, primarily as an embedded computer controller. You can see it at work in traffic light controllers, medical imaging systems, mobile phones, and many other devices.
What is Small Form-factor Pluggable (SFP)?
An SFP, or small form-factor pluggable, is a compact, hot-swappable hardware module that you plug into a network device to allow it to communicate with another device. It acts as a transceiver (a transmitter and receiver), enabling data transmission between two devices that can sit anywhere from a few meters apart over copper cabling to many kilometers apart over fiber.
As a transceiver, an SFP is like a telephone that transmits and receives data at the same time; you don’t need separate equipment for each direction. SFPs are mostly used in computer networks to carry high-speed connections, and since they work with both copper and fiber optics, they are compatible with a variety of connection options.
What is Smoke Testing?
Smoke testing refers to trying out software to determine its usability and stability. It serves as a confirmatory step that a quality assurance (QA) team performs before doing other software tests. It involves doing several runs to test primary functionalities and identify system stability and conformance issues early.
Smoke testing thus verifies that all of a software’s important features work and are not flawed. It helps avoid wasting time and resources on back-and-forth testing.
The term was inspired by hardware testing, which primarily checks for smoke coming out of a hardware component when it is turned on. In the context of software testing, the system will not undergo further testing until it passes the smoke test.
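A smoke test in code form is just a handful of fast checks that the primary features respond at all before deeper test suites run. The functions below are hypothetical stand-ins for an application's core features:

```python
# Stand-ins for a hypothetical application's core functions.
def create_user(name):
    return {"name": name, "active": True}

def delete_user(user):
    user["active"] = False
    return user

def smoke_test():
    """Quick pass over the primary features -- does anything 'smoke'?"""
    user = create_user("alice")
    assert user["active"], "create_user failed"                    # feature 1 responds
    assert not delete_user(user)["active"], "delete_user failed"   # feature 2 responds
    return "smoke test passed"

print(smoke_test())
```

If any assertion fails, the build is rejected immediately and never reaches the slower, more thorough test suites.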
What is Soft Computing?
Soft computing is a collection of artificial intelligence (AI) computing techniques that gives devices human-like problem-solving capabilities. It includes the basics of neural networks, fuzzy logic, and genetic algorithms.
The soft computing theory and techniques were introduced in the 1980s. The term was coined by Lotfi A. Zadeh, a mathematician, computer scientist, electrical engineer, AI researcher, and professor emeritus of computer science at the University of California, Berkeley.
What is Software Delivery?
Software delivery is the process of deploying an application to the market.
Proper software delivery follows various steps performed by different contributing groups to ensure all goes well.
Software delivery typically involves business and product owners who must deliver a written program based on a client’s specifications. All of the application’s features are described in great detail. After programming is done, the software should undergo several quality assurance tests to ensure it meets the specifications. All of this happens before the completed program is released.
What is a Software Development Process?
A software development process is the process of creating a computer software product. It is a systematic operation that includes designing, preparing the specifications, programming, testing, bug fixing and documentation. These stages are also referred to as the software development lifecycle.
It's very similar to building a house. You need to plan, create the building specs, build the structure and then inspect it to make sure it complies with standards.
What is Software Licensing?
Any license is a document that grants permission and clarifies in detail the terms and conditions for which the permission is given.
A software license states who can use a software, and for what purpose. The software developer or creator is the one who decides on the terms involved in the license. The terms listed can answer questions, such as “Can I use this software to earn money?” or “Can I give this software to others?”
What is a Software Patch?
A software patch is an update that revises the underlying structure of a computer program. It addresses vulnerabilities in the program, such as unfixed bugs or unaddressed security risks. Also called a bugfix, a patch plugs the problem areas and improves the software’s performance.
A software patch does for programs what a cloth patch does for clothing – it covers up holes and can keep you from getting cold.
What is Software Piracy?
Software piracy refers to using, copying, or distributing copyrighted software without the owner’s permission. The practice violates the intellectual property rights of software developers who created their programs with the intention of selling them.
Software piracy occurs when you use a program without paying for a license. That happens when you download a cracked application that has been hacked, so it doesn’t require you to purchase a license from the developer to use it.
Software piracy also refers to copying a program, which happens when a developer obtains someone else’s code and passes it off as something he created himself. Things get worse if he sells the application that he didn’t develop from the get-go.
What is a Software Protection Dongle?
A software protection dongle is a device used to protect content from unauthorized access. It is a hardware key that comes with protection mechanisms, such as the user’s own product key. When attached to a computer or an electronic appliance, it decodes sensitive content or unlocks software functionalities.
Software protection dongles are usually attached to a personal computer (PC) through parallel or USB ports. On older Macs, that meant the Apple Desktop Bus (ADB) port.
Software protection dongles provide security by making computers inaccessible or inoperable when they are not plugged in. In some cases, protected software can operate without the dongle but only in restricted mode.
What is Software Testing?
Software testing is the process of evaluating whether a software application meets high standards of functionality and reliability in actual use. Besides measuring quality, software testing aims to find out if there are any defects or bugs in the application. This is to make sure that consumers get the best product and manufacturers are safe from damage arising from any defect.
You can compare software testing to taking a car on a road test to determine if it will perform according to its design and will be safe to drive.
There are different types of software testing such as interface testing, integration testing, and system testing to name a few. Each of them seeks to find out if the test results match the desired results.
What is Splunk Software?
Splunk software searches, analyzes, and visualizes machine-generated data obtained from websites, applications, sensors, and devices, among others, that comprise IT infrastructures and businesses.
Splunk was explicitly created so users can aggregate and analyze big data on a single platform. No matter how much information users get and regardless of where it came from, data can be mixed for processing and analysis.
What is Storage Tiering?
Storage tiering is simply the process of optimizing the use of available storage resources. It also enables effective ways to back up data, save on costs, and employ the best storage technology for each kind of data.
Organizations apply storage tiering across the devices that hold their data to handle information volume growth while incurring as little additional cost as possible. Various kinds of storage, including cloud-based, object, and distributed storage, can benefit from the strategy.
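A tiering policy is often just a set of rules that sorts data onto faster or cheaper storage based on how recently it was used. The thresholds and tier names below are a hypothetical example, not a standard:

```python
def assign_tier(days_since_last_access):
    """Hypothetical tiering policy: hot data stays on fast storage,
    rarely touched data moves to cheaper tiers."""
    if days_since_last_access <= 7:
        return "hot (SSD)"
    if days_since_last_access <= 90:
        return "warm (HDD)"
    return "cold (archive/cloud)"

print(assign_tier(1))    # hot (SSD)
print(assign_tier(30))   # warm (HDD)
print(assign_tier(365))  # cold (archive/cloud)
```

In practice, storage systems run rules like these automatically, migrating files between tiers as their access patterns change.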
What is a Supercomputer?
A supercomputer is commonly used in high-performance systems because it can operate at the fastest possible rate. It comprises thousands of connected processors to accommodate users’ heavy computational needs.
A supercomputer is used in most scientific studies and engineering applications since these industries often work with high volumes of data and high-speed computational activities. It is an integral part of computation science, including weather forecasting, quantum mechanics, climate research, molecular modeling, and brain simulation projects.
What is SuperSpeed USB?
SuperSpeed USB (SS) is the latest and fastest Universal Serial Bus (USB) specification to date (June 2022). It encompasses all USB 3.2 devices and espouses much higher transfer rates than its predecessor, USB 2.0 and lower.
It is only natural for computing devices to become more advanced over time, which is true for USB. In computing’s case, more advanced can mean more features or faster. As the name suggests, SuperSpeed USB translates to faster.
What is a Terminal Node Controller?
A terminal node controller is a piece of equipment amateur radio operators use to send and receive data on their computers through radio frequency. Radio operators that utilize a terminal node controller can only participate in Amateur X.25 (AX.25) packet radio networks. AX.25 is a communication protocol specifically designed for amateur radio operators that helps with efficient data transfer.
Think of a terminal node controller as a modem someone uses to connect to the Internet. It acts as a translator between your computer or phone and Internet cables. The difference is that a terminal node controller facilitates communication through an assigned radio frequency, not over the Internet.
What is a Traceability Matrix?
A traceability matrix is a document, usually in tabular form, that establishes the relationship between two other documents. It typically captures many-to-many relationships, meaning that multiple records in one document are related to various data points in the other. In other words, there is no exclusivity in the relationship between records in each document.
A traceability matrix is commonly used in software development and testing to ensure that client requirements are met. For this reason, the document is also referred to as the “Requirements Traceability Matrix (RTM).”
What is a Transaction Processing System (TPS)?
A transaction processing system (TPS) helps users process data transactions within a database system that tracks transaction programs. It maintains balance and control of a particular organization’s process of purchasing goods and services. It is responsible for coordinating the inventory and distribution of products, managing transactions from payment accounts, and processing sales and payrolls. As such, it is highly beneficial for monitoring online transactions, since it manages the brief window between when a product is reserved and when its sale is completed.
An example of a transaction processing system at work is when a customer buys a concert ticket. While the customer fills out his/her payment details, the system holds the ticket for him/her so no other customers can buy it. In short, the system is critical in ensuring that each ticket will not have two different owners.
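The ticket example can be sketched in a few lines. This is a hypothetical, simplified model of the hold-then-confirm flow, without the concurrency and database machinery a real TPS would use:

```python
class TicketSystem:
    """Simplified sketch of a TPS holding a ticket during checkout."""
    def __init__(self, seats):
        self.available = set(seats)
        self.held = {}

    def hold(self, seat, customer):
        if seat not in self.available:
            return False              # someone else already holds or bought it
        self.available.remove(seat)   # lock the seat for this customer
        self.held[seat] = customer
        return True

    def confirm(self, seat, customer):
        # Payment completes the transaction for the customer holding the seat.
        return self.held.get(seat) == customer

tps = TicketSystem({"A1", "A2"})
print(tps.hold("A1", "alice"))    # True  -- seat locked for alice
print(tps.hold("A1", "bob"))      # False -- no double booking possible
print(tps.confirm("A1", "alice")) # True  -- alice's purchase completes
```

The key property, as in the concert example, is that once a seat is held, no second customer can claim it.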
What is a Transmission Delay?
Transmission delay refers to the time a computer needs to push a packet’s bits into a wire during network-based packet switching. Also known as “store-and-forward delay” and “packetization delay,” it is the delay that the data rate of the link causes.
Transmission delay depends entirely on a packet’s length, not the distance between two nodes. As such, it is proportional to the packet’s length in bits divided by the link’s data rate.
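The computation is a one-liner: bits to send divided by how many bits per second the link can push. A small Python sketch:

```python
def transmission_delay(packet_bytes, link_bps):
    """Time to push all of a packet's bits onto the wire: length / data rate."""
    return (packet_bytes * 8) / link_bps

# A 1,500-byte packet (a typical Ethernet frame payload) on a 1 Mbps link:
print(transmission_delay(1500, 1_000_000))  # 0.012 seconds, i.e. 12 ms
```

Contrast this with propagation delay, which depends on the distance the signal travels, not on the packet's size.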
What is Troubleshooting?
Troubleshooting is an important component of technical support and refers to searching for the source of the problem and addressing it. It is an approach that is used to deal with problems in computer systems, machines, and electronic devices.
Several stories point to the origin of troubleshooting. One is that during the 1800s California gold rush, mines hired tough, no-nonsense guards to shoot troublemakers. They were referred to as “troubleshooters.”
Another story tells of technicians dispatched by the 19th-century telephone and telegraph companies to hunt for problems in their infrastructure. They were instructed to “shoot” these troublesome problems down.
What is Unit Testing?
Unit testing is granular software testing performed on a single function of an application. It is named after the smallest testable part of the code, “the unit.” Unit testing is usually done independently, without considering other functions that depend on the unit being tested.
For example, developers write a unit or module of source code dedicated to an application’s Delete functionality. They would also write code for other functions, such as Undo Delete, Upload Image, and Add Text. Some of these functions, like Undo Delete, may depend on the Delete function or source code unit. But during unit testing, only the Delete function is tested to make sure there is no problem with its code. Testing the Undo Delete function requires a separate unit test.
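The Delete example above can be sketched as an actual unit test. The function and its behavior are hypothetical stand-ins rather than a real application’s API; the point is that only the Delete unit is exercised, in isolation:

```python
import unittest

def delete_item(items, index):
    """Hypothetical Delete unit: returns a new list without the item at index."""
    if not 0 <= index < len(items):
        raise IndexError("no item at that position")
    return items[:index] + items[index + 1:]

class TestDelete(unittest.TestCase):
    # Only the Delete unit is tested here; Undo Delete would get its own tests.
    def test_removes_selected_item(self):
        self.assertEqual(delete_item(["a", "b", "c"], 1), ["a", "c"])

    def test_rejects_bad_index(self):
        with self.assertRaises(IndexError):
            delete_item(["a"], 5)
```

Saving this as test_delete.py and running python -m unittest test_delete makes the test runner report a pass or failure for the Delete unit alone, regardless of whether Undo Delete even exists yet.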
What is User Experience (UX)?
User experience (UX) describes how using a product or service affects the customer. It includes the person's opinions, feelings, and attitude about the experience. If the UX is pleasant, productive, and memorable for the user, there is a good chance that they will continue patronizing the product.
A UX that anyone can relate to is the dining experience offered by a posh restaurant. If the staff pay attention to every detail and you leave the place with a smile on your face, then you’ve just had a positive UX.
What is a User Interface (UI)?
Imagine walking into a posh hotel. Whatever you see in front of you — the lush carpet, the luxurious lounge chairs, the amazing chandeliers — these are all part of the hotel's user interface. They all work together to set and define what you are about to experience.
For electronic devices and computers, the user interface consists of the elements you can see, touch, and control to let the device know what you want done. Switches, buttons, levers, boxes, screens, and audio speakers are among the many user interface elements that let you issue commands to the computer and receive feedback from it.
What is Vaporware?
Vaporware usually refers to computer hardware or software that is announced to the public but made available much later than promised, or never produced at all. Note that since the advent of smart vehicles, the term has been applied to them as well.
Vaporware products are usually announced during the world’s most significant tech events, like CES. Companies that wish to hype their upcoming offerings often talk about them in events that are bound to catch the media’s attention, especially in time for Christmas. But due to time or budget constraints, many such wares don’t make it to market as promised. Some never do.
What is a Virtual Server?
A virtual server is a server that runs as software rather than on its own dedicated machine, typically hosted in an offsite data center. As such, the underlying hardware can be shared by several users, all of whom have varying levels of control over their own server. Virtualization works by allocating a physical server’s resources to virtual machines (VMs), which are computers that mimic dedicated hardware or software.
Virtualization is often done to tap the processing capabilities of higher-capacity servers at a lower cost than maintaining and running an internal data center. A virtual server can increase an organization’s server capacity by over 80% as well.
What is Virtualization Security?
Virtualization security refers to a set of solutions, procedures, and processes that aim to protect virtualized IT systems. Also called “virtualized security,” it is software-based, meaning the functions of common security systems can be deployed through software instead of dedicated hardware.
Virtualization security ensures that each virtual machine (VM), network, server, application, or any other virtual appliance has security controls. It protects virtualized environments from a broad range of cybersecurity threats, including phishing, malware, and denial-of-service (DoS) attacks.
Virtualization security solutions also help IT teams implement granular access control.
What is a VPN Concentrator?
A virtual private network (VPN) concentrator is a type of networking device. It lets you create secure VPN connections and deliver messages between VPN nodes. As such, it is essentially a router but is built specifically for creating and managing VPN communication infrastructures.
The onset of the COVID-19 pandemic led to a surge in VPN use among businesses. But not all organizations have the same number of users. And while a VPN router may be suitable for a small business, it may not be able to handle the needs of a large enterprise. That’s where a VPN concentrator comes in handy.
What is the Wiegand Interface?
The Wiegand interface is a wiring standard used to connect a card reading device to an access control system. As such, you can see it at work in the badge readers that control doors in offices, hotels, parking garages, and other facilities that rely on card-based access control.
The Wiegand interface traces its roots to the popularity of Wiegand effect card readers in the 1980s. The Wiegand effect is a nonlinear magnetic effect named after John R. Wiegand. It is produced in specially heated and hardened wires called “Wiegand wires.”
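To make the interface less abstract, here is a short sketch that decodes the common 26-bit Wiegand data format carried over that wiring: one even-parity bit, an 8-bit facility code, a 16-bit card number, and a closing odd-parity bit. The layout follows the widely used 26-bit convention; treat this as an illustration, not a reference implementation:

```python
def decode_wiegand26(bits):
    """Decode a standard 26-bit Wiegand frame into (facility_code, card_number).

    Frame layout (first bit to last):
      bit 0      even parity over bits 1-12
      bits 1-8   facility code
      bits 9-24  card number
      bit 25     odd parity over bits 13-24
    """
    if len(bits) != 26 or set(bits) - {"0", "1"}:
        raise ValueError("expected a string of 26 '0'/'1' characters")
    b = [int(c) for c in bits]
    if b[0] != sum(b[1:13]) % 2:
        raise ValueError("even-parity check failed")
    if b[25] != (sum(b[13:25]) + 1) % 2:
        raise ValueError("odd-parity check failed")
    return int(bits[1:9], 2), int(bits[9:25], 2)

# Facility code 1, card number 2, with valid parity bits:
print(decode_wiegand26("10000000100000000000000100"))  # (1, 2)
```

The two parity bits are the frame’s only error check, which is one reason newer access control deployments favor more robust protocols such as OSDP.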
What is Wirth’s Law?
Wirth’s Law is a well-known saying in computer programming that states, “Software is getting slower more rapidly than hardware becomes faster.” That creates a problem as the software slows down despite the hardware’s improved processing power.
The law is attributed to Niklaus Emil Wirth, who expressed it in a 1995 paper titled “A Plea for Lean Software.” Wirth was a Swiss computer scientist considered one of the pioneers of computer science. He helped design major programming languages, including Pascal and Oberon. According to Wirth, the major reason software slows down is its growing complexity.
What Is WYSIWYG?
WYSIWYG stands for “What You See Is What You Get.” It refers to an interface where the text, graphics, and other content displayed during editing appear very similar to the final product. WYSIWYG is often used in word processors, web design tools, and desktop publishing software.
For example, in a WYSIWYG editor, if you bold text, you see it appear in boldface right away rather than having to visualize the effect of a bold tag (<b></b>) as you would when coding by hand. This lets users design and edit documents in a form that closely resembles how they will look when printed or displayed as a finished product. WYSIWYG thus makes it easier for nontechnical users to produce documents or web pages without writing complicated code.