WHAT IS RPA? WHAT IS INTELLIGENT AUTOMATION? A COMPLETE LIST OF AUTOMATION TERMINOLOGY

Not sure what the latest automation acronym means?

You’re not alone.

Shortening terms into abbreviations is meant to make things simpler, but we all know it often doesn’t. For anyone stepping into a room of people from an industry they aren’t part of, it can feel like everyone is speaking an alien language. And, as automation is part of the tech industry — which is probably more guilty than most of creating swathes of acronyms — we have been known to throw one or two into a conversation.

SO, WHAT IS RPA? GETTING TO GRIPS WITH THE DIFFERENT AUTOMATION PROCESS TERMINOLOGY ON YOUR OWN TERMS

Of course, in any purchasing or investigatory situation around automation, the consultant, techie, or account manager will explain the terms. But many of you will want to understand what each acronym means and, more importantly, what each technology does before starting out, so that you know enough to challenge the potential solutions to your problem and, of course, for your own sanity.

As our industry has quite a few acronyms and terms, it may seem a challenge to understand the main ones in a short period of time. But here’s some good news: within the next 20 minutes, you’ll be able to grasp the basic ones. So, when somebody drops CV, DL, or CNN into a conversation — you won’t mistake it for a personal profile, a slang term, or a news channel, but will instead be able to put it into the context of the automation product you are looking at.

COMPUTER VISION (CV) – EMULATION OF HUMAN VISION

The human eye and visual cortex are an amazing evolutionary system. They give us the ability to see patterns and shapes, recognize faces, and much, much more. Computer vision at its most advanced aims to emulate, or even exceed, this ability. To achieve this, computer vision uses a range of algorithms and machine learning principles to recognize, interpret and understand images.

For computer vision to be effective in daily use, it needs to be trained. The training usually takes the form of being fed labelled imagery, for example ‘this is a person’ and ‘this is a car’; the more data and variation provided, the better the reference points the computer vision AI has for future decisions.

In Intelligent Automation, computer vision has a range of use cases, from the simple to the complex. In simple use cases, it works with systems to recognize where a button is on a screen and where a click is needed; in complex use cases, it can recognize when a car is committing a parking violation.
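To make the simple use case concrete, here is a minimal sketch (our own illustration, not a description of any particular product) of finding a button on a screenshot using OpenCV template matching; the file names are placeholders.

import cv2

# Load a screen capture and a small reference image of the button to find.
# Both file names are placeholders for this sketch.
screenshot = cv2.imread("screenshot.png")
button = cv2.imread("search_button.png")

# Slide the button image over the screenshot and score how well each position matches
result = cv2.matchTemplate(screenshot, button, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(result)

if best_score > 0.8:  # confidence threshold
    x, y = best_location
    h, w = button.shape[:2]
    print(f"Button found, click at ({x + w // 2}, {y + h // 2}), score {best_score:.2f}")
else:
    print("Button not found on this screen")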

Ultimately, computer vision opens up a whole new set of possibilities for interaction. It provides digital workers not only with the ability to see but, if trained broadly, the ability to recognize the intent of a UI design when a search button is replaced by a magnifying glass, or, in more complex situations, to mimic the real-life patterns that people usually follow.

DEEP LEARNING (DL)

Deep learning is a subset of machine learning inspired by the structure of the human brain. It differs from classic machine learning in that it works out the important features of the data without the need for human intervention: where machine learning requires parameters based on descriptions of the input, deep learning learns from the data itself what an object or piece of data is and how it differs from something else.

For instance, if you asked everyone to draw a letter, each person would draw it differently. As a human, you can identify the letter regardless of whether a child or an adult drew it — a machine usually would not understand this.

Deep learning gives machines the ability to understand this: it takes the pattern as input, compares it with data on what something should look like and, based on weights and probabilities within the system, gives an output of what the likely letter is.

In short, it takes an unstructured data set and gives it a likely meaning for a decision, based on a probability. Applications range from email triage and chatbots at the simple end to medical condition recognition at the more complex end.
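As a rough illustration of the letter example, here is a minimal sketch of a small neural network learning to read handwritten characters; scikit-learn’s built-in digits dataset stands in for hand-drawn letters.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 images of handwritten digits, each labelled with what it should be
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# The weights inside the hidden layer are adjusted until outputs match the labels
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# For a new drawing, the network outputs a probability for each possible character
probabilities = model.predict_proba(X_test[:1])[0]
print("Most likely character:", probabilities.argmax(),
      "with probability", round(probabilities.max(), 3))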

CONVOLUTIONAL NEURAL NETWORKS (CNN)

Convolutional Neural Networks are generally used as an effective means of recognition within videos or images. They use weightings and biases to work out what something is, based on parameters learned from data. Think of the squares drawn around a car, a cat, or a dog in recent demonstrations of AI on tech programs.

Usually, these boxes have a probability written next to them; this is the network taking the data from within its neurons and outputting how confident it is in the result. A square around a cat, for instance, may show 0.976, meaning that, out of 1, the network is that sure the thing is a cat.

So, given that’s the usual application, what is the basic principle for how they work?

CNNs are a type of feed-forward neural network. Sounds complex, but essentially it means that, within this network, information always moves forward from layer to layer. The network is trained on labelled data, which it then uses to decide what something is based upon the spatial relationships of pixels in an image.

In application, this may mean it learns that a nose and a mouth are usually a set distance apart, which is then combined with other information about a person’s face to decide whether it is a person, with a probability out of 1. By analysing an image bit by bit in this way, a CNN can work out, to a degree of likelihood, how many people are in an image and then feed that information out as an output matrix for a decision, or another use.
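For readers who like to see the shape of these things, below is a minimal sketch of a small CNN that outputs one probability per class; the 64x64 image size and the three classes (say, car, cat, dog) are assumptions purely for illustration.

import tensorflow as tf

# Convolution and pooling layers pick up local patterns in the pixels;
# the final dense layer turns them into one probability per class.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # probabilities that sum to 1
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# After training on labelled images, model.predict(batch) returns values such as
# [0.010, 0.976, 0.014]; the 0.976 is the "how sure it is a cat" number above.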

MACHINE LEARNING (ML)

Until the last decade, machines learnt only by following instructions from a person. This works, but it means machines are always an extension of people rather than autonomous. People recognized this, and they also knew that humans learn from experience rather than simply following instructions.

With that in mind, they asked: what if machines were actually taught by people, so that rather than just following instructions, they could learn to understand and reason about a decision in a similar way to a human?

This idea came to fruition in the concept of machine learning, with three key approaches: supervised, unsupervised and reinforcement learning. All have the end goal of helping machines make decisions either autonomously or semi-autonomously, adapt to the changes they are exposed to, and deliver the best results in the shortest time frame — without needing to constantly refer back to a person for direct instruction.
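As a flavour of the first of those approaches, here is a minimal sketch of supervised learning: the machine is given examples together with the right answers and then generalizes to new cases. The invoice-chasing scenario and its numbers are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [invoice amount, days overdue] -> chase the payment or not
examples = [[100, 0], [250, 5], [900, 40], [1200, 60], [80, 2], [700, 35]]
answers = ["no", "no", "chase", "chase", "no", "chase"]

model = DecisionTreeClassifier().fit(examples, answers)

# The model now makes its own decision on a case it has never seen before
print(model.predict([[500, 45]]))  # likely ['chase']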

NATURAL LANGUAGE PROCESSING (NLP)

The basic meaning of this acronym is easily understood if you separate the phrase into ‘natural language’ and ‘processing’. The ‘natural language’ part, in this context, means human language, how we communicate via speech or writing, and the ‘processing’ part is how a computer works on this information. So, Natural Language Processing is how computers process our language. That is what the acronym means, but how does it achieve this complex feat?

A simple way to understand this is to visualize how a child learns to speak. Firstly, they learn the basic words, then the basic grammar rules, and then they begin to slowly build complexity by learning figures of speech, or other alternative ways to communicate.

Computers learn in much the same way, starting out with simple structures and ending with trying to understand the irony in a sentence. This can either be taught by a person directly giving the machine its understanding, or by feeding large amounts of data through algorithms to give the machine a depth of meaning about human-to-human communication.
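A minimal sketch of that very first stage, learning the basic words, is shown below: sentences are turned into word counts a machine can work with, long before any grammar or irony is involved. The example sentences are our own.

from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "please reset my password",
    "I cannot reset my password",
    "what is the delivery time for my order",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(sentences)  # rows = sentences, columns = words

print(vectorizer.get_feature_names_out())  # the vocabulary it has learned
print(counts.toarray())                    # how often each word appears in each sentence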

In automation at this moment, NLP underpins the conversational capabilities of chatbots and virtual agents, all with the end goal of a machine being able to communicate with the same efficacy as a person.

OPTICAL CHARACTER RECOGNITION / INTELLIGENT OPTICAL CHARACTER RECOGNITION (OCR / IOCR)

Despite living in a digital age, many businesses still work with paper documentation. In order to work with these documents effectively, many will scan the paper documentation and turn it into a PDF. On the surface, it would appear this resolves the problem; however, the PDF is not actually turned into digital text. Instead, it is an image of the document, like a JPEG.

The result is that people still need to manually read the document and rekey the data. The technology used to overcome this problem is OCR and, for more accurate processing, iOCR. So, what does it do? And what can iOCR do that OCR can’t?

Let’s take an example of an invoice. If the invoice has static information such as the invoice number in the top corner and the cost in the bottom right, OCR can be used effectively with few exceptions to read, understand and digitize the information. However, if the information is not static and fluctuates due to variations in invoices, OCR will flag more exceptions and result in a return to people reading the scanned documents.
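Here is a minimal sketch of that fixed-layout case: once OCR has turned a scan into text, fields that always sit in the same place can be pulled out with simple patterns. The invoice text and field names below are made up.

import re

# Pretend this came back from an OCR engine reading a scanned invoice
ocr_text = """
INVOICE NO: INV-2041
Supplier: Acme Ltd
TOTAL DUE: 1,250.00
"""

invoice_number = re.search(r"INVOICE NO:\s*(\S+)", ocr_text)
total_due = re.search(r"TOTAL DUE:\s*([\d,.]+)", ocr_text)

if invoice_number and total_due:
    print(invoice_number.group(1), total_due.group(1))
else:
    # The exception path: a changed layout breaks plain OCR extraction,
    # which is exactly where iOCR's pattern learning earns its keep
    print("Exception: field not where the template expects it")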

Thankfully, iOCR can help in this situation. Because iOCR can learn from people’s actions, or through pattern recognition, the success rate can improve significantly as long as the document doesn’t vary wildly. As it continues to learn by recognizing recurring information patterns, it can cope when the product name or invoice number shifts to a different corner. All of which results in fewer exceptions being flagged, gives people back more time and, in the case of automation, allows digital workers to perform the whole process.

ARTIFICIAL INTELLIGENCE (AI)

Up until the early 1990s, AI was understood as the general intelligence of machines, meaning they are self-aware and have abilities which equal, or exceed, human intelligence.

This was reflected in films of the time, such as The Terminator in the 80s, or HAL from Kubrick’s 2001: A Space Odyssey in the 60s. Today, AI has taken on a wider meaning, often referred to as ‘applied AI’: the AI used in current automation and IT systems is generally there to simulate part of human intelligence in a process.

AI deployed in systems provides the ability for machines to learn, reason and self-correct. The result is a machine which can take in information within a rules-based structure, reason on those rules to reach conclusions based on probabilities, and self-correct its current trajectory if it believes the current action is going to be unsuccessful.

The ability to apply intelligence to parts of machine interactions gives them the ability to recognize speech, recognize faces via computer vision, or make process decisions without needing human intervention.

RECURRENT NEURAL NETWORKS (RNN)

Traditional neural networks have limitations. The major one is that they don’t retain information, so every time they try to think about something they have to start from scratch. Recurrent Neural Networks address this challenge by forming loops within the network that allow information to stay within the architecture. Sounds simple, so what does this mean in terms of processing, and what can be achieved?

Let’s take an example to explain this using a person and a sequence of context. Think about the following — a dachshund is a type of _______. As a person, it is easy for you to fill the gap in the sentence, or sequence, with dog. This is using information in the sentence in relation to your previous knowledge.

Essentially, this is the logic behind how recurrent neural networks use the sequential structure of data to work something out — hence the name recurrent. The network loops information from previous steps together with the current input to analyse every element of a sequence.

This means that RNNs are made specifically for information that arrives sequentially: think of the text or speech example above, and many other sources such as time series, sensors or video. All of which gives computers and automation the potential to achieve more, by using multiple information inputs to work out sequenced data outcomes.
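To show the loop itself, here is a minimal sketch of a single recurrent layer in plain NumPy: a hidden state (the network’s memory) is carried forward and combined with each new input. The weights are random purely for illustration; a real RNN would learn them.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_in = rng.normal(size=(hidden_size, input_size))
W_hidden = rng.normal(size=(hidden_size, hidden_size))

hidden = np.zeros(hidden_size)               # the "memory" carried between steps
sequence = rng.normal(size=(5, input_size))  # e.g. five words encoded as vectors

for step, x in enumerate(sequence):
    # New memory is a function of the previous memory and the current input
    hidden = np.tanh(W_in @ x + W_hidden @ hidden)
    print(f"step {step}: hidden state begins {hidden[:3].round(2)}")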

ORCHESTRATION

Orchestration isn’t exactly an acronym. It’s here because it’s an important term within automation that is used regularly and often misunderstood, so we thought it was key to include it. Orchestration is one of three main approaches to managing automation, the other two being manual triggering and scheduling.

Firstly, the glaringly obvious one: manual. Manual is quite simply a person triggering a job, usually for a specific process or task. The next is scheduling — the most common technique for people managing automation platforms — which works by instructing the digital workers to perform a task, say every 2 minutes, between specified times. Although this is the common approach and is more autonomous than manual, it has its drawbacks.

Namely, that once a digital worker has completed its task it will sit idle until the next one. This was the accepted outcome until the arrival of orchestration. Orchestration leverages data and algorithms to understand when the best time would be to perform tasks, or to assign digital workers to other tasks instead of leaving them sitting on the bench. This approach delivers peak efficiency and means digital workers aren’t slacking off or being ‘part-timers’.
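The following is a deliberately simplified sketch of that idea, with made-up queues, workers and clearance rates: instead of a fixed schedule, each idle digital worker is pointed at whichever queue currently needs it most.

# Outstanding items per work queue and the digital workers available right now
work_queues = {"invoices": 25, "address_changes": 7, "email_triage": 19}
digital_workers = ["worker_1", "worker_2", "worker_3"]

assignments = {}
for worker in digital_workers:
    # Pick the queue with the most outstanding work at this moment
    busiest = max(work_queues, key=work_queues.get)
    assignments[worker] = busiest
    work_queues[busiest] -= 10  # assume this worker will clear roughly 10 items

print(assignments)
# e.g. {'worker_1': 'invoices', 'worker_2': 'email_triage', 'worker_3': 'invoices'}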

NATURAL LANGUAGE GENERATION (NLG)

Natural language generation is simply taking data that a machine understands and a human can’t, and turning it into language that people can understand. We are surrounded by so much data that it becomes overwhelming and can’t be comprehended by the human mind alone. But machines can comprehend this information, and NLG gives them the capability to feed it back to people in terms we can grasp.

The description above sits at the more complex end of the spectrum of NLG’s applications today. A current use would be something like financial advice: the machine scans the market for data and brings together a stock overview.

For example: your stocks in (company name ‘A’) have dropped today by (x number of points), while your other stocks (‘B’) have gone up (x amount).

From the AI’s analysis of the market and data, we advise you to sell (A) and invest in (B) due to a predicted rise of (insert predicted data).
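A minimal sketch of that kind of template-style generation is shown below: structured numbers go in, a readable sentence comes out. The figures and stock names are invented.

def stock_summary(holdings):
    # holdings maps a stock name to today's change in points
    sentences = []
    for name, change in holdings.items():
        direction = "risen" if change > 0 else "dropped"
        sentences.append(f"Your {name} stock has {direction} by {abs(change)} points today.")
    return " ".join(sentences)

print(stock_summary({"A": -12, "B": 8}))
# Your A stock has dropped by 12 points today. Your B stock has risen by 8 points today.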

While this is simple, it demonstrates the capabilities currently being used and how the future is heading towards NLG, giving us an understanding of data which we couldn’t possibly compute in our own minds.

PROOF OF VALUE (POV)

In order to explain Proof of Value you need to understand Proof of Concept, or POC. POC is a common term used across software products with a simple meaning: proving that the concept, or technology, works as claimed. In the automation industry, this usually means showing that a process can be automated, and a simple one at that.

Within automation, a POC is largely a waste of time – attempting to prove a concept that has been proven time and time again, from America to Australia. This is where Proof of Value, or POV, comes in. Proof of Value may sound like something a marketing committee came up with, but it’s much more than that.

A POV is about showing that the business case for automation can be delivered at scale across an organization’s business needs. While a POC will look at simple things such as ‘does the technology work as expected?’ and ‘how has it been deployed?’, a POV will scope the business case and the transformation, and map, measure, design and forecast the potential outcome with leadership sponsorship.

ROBOTIC PROCESS AUTOMATION (RPA)

Robotic Process Automation, or RPA, is a term for a piece of software, or ‘robot’, which carries out tasks and activities within systems or applications in the same way a human would. The software is perceived as a ‘robot’ because it works robotically, completing tasks automatically. This element of the software is a departure from previous automation products.

Previous automation products needed modifications to applications or systems in order to carry out processes and tasks. Robotic Process Automation works differently: it interacts with systems and applications through the same interfaces a person uses, capturing and manipulating the information required for the process.

On top of that, these robots can work with other methods such as scripts or web services. The result is a ‘robot’ which can complete an extensive number of repetitive tasks that once could only easily be completed by people.
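As a rough flavour of what ‘using the same interfaces a person does’ can look like, here is a minimal sketch using the pyautogui library; the screen coordinates and text are placeholders, and a real robot would locate fields on screen rather than hard-code positions.

import pyautogui

pyautogui.click(x=420, y=310)                 # click into the 'customer ID' field
pyautogui.write("CUST-00123", interval=0.05)  # type the value as a person would
pyautogui.press("tab")                        # move to the next field
pyautogui.write("Renewal request")
pyautogui.press("enter")                      # submit the form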

CENTRE OF EXCELLENCE (COE)

The term Centre of Excellence, or CoE, has a slightly different meaning depending on which industry you find yourself in. Generally speaking, a CoE is responsible for providing leadership, best practices, research and support for the rest of the business. In automation, it means all of the above and more.

A Centre of Excellence (CoE) is vital in any automation deployment to deliver scale and instil an ‘automation first’ mindset. What does that mean in real terms? It means creating the go-to place for employees to gain knowledge and resources on how automation can help their department. Rather than merely setting up a team and assuming success, a CoE must be a place to distribute and reuse automations and to enlighten staff to the possibilities of automation. The technology lead and developers within a CoE generally have three main areas of focus. Firstly, they build a pipeline of automations, working out which processes are most suitable and have qualifying potential.

Next, they scope those processes into deployment, taking responsibility for execution from design through to delivery. Finally, they pick up any improvements and support that are needed — which is important for identifying problems and for sharing deployment experiences with the rest of the company. If allowed to entwine and grow within an organization, a CoE can provide lasting automation success.

ENTERPRISE RPA

You don’t use a teaspoon to dig foundations. In the same way, you don’t use simple RPA to automate an entire enterprise. It will be inadequate at dealing with the needs of the organization. Enterprise RPA is built to handle the needs of an organization spanning thousands of employees — with key characteristics to deliver automation at scale.

Unlike simple RPA or desktop automation tools, Enterprise RPA is not a locally installed solution. No more rooms full of PCs, or locally installed versions on your laptop. Instead, it is built on servers, either on-premises or in the cloud, giving it the ability to scale and allowing overall control. In this environment, controls, availability and security can be implemented to allow management of more than one robot at a time, with easy auditability.

After all, organizations need to know what the bots are doing when they turn down the lights at the end of the day. What’s more, Enterprise RPA has the ecosystem and development structure around it to maintain, reuse and develop automations in a simple, repeatable and reliable manner. In this way, the ‘robots’, or in more advanced AI versions the digital workers, can meet every process perfectly.

INTELLIGENT AUTOMATION (IA)

Robotic Process Automation is the mimicking of human actions, Artificial Intelligence is the simulation of human intelligence, and Intelligent Automation is the combination of the two.

It takes the ‘doing’ from RPA and combines it with ‘learning’ from ML and ‘thinking’ from AI to allow the expansion of automation capabilities and possibilities.

IA takes technology such as computer vision, NLP and machine learning and applies it to RPA, allowing the automation of processes that don’t have a rules-based structure. Using IA, digital workers can now handle unstructured data and provide answers based on subjective probability.

The result is the ability to expand the number of processes that can be automated, from the semi-structured, such as processing an invoice, to the unstructured, such as email triage for an organization. But it goes further than that, supercharging the abilities of RPA through orchestration and the ability to think without requesting human instruction. This means Intelligent Automation gives organizations new efficiency and productivity, and ultimately a new digital workforce to rely on.
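Below is a deliberately simple, made-up sketch of that email-triage idea: a machine learning classifier provides the ‘thinking’ about what an email concerns, and an RPA-style step would then do the ‘doing’ of routing the case.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny, invented training set of emails and the queue each one belongs to
training_emails = ["please refund my order", "my delivery has not arrived",
                   "I want to cancel my subscription", "where is my parcel"]
labels = ["refunds", "delivery", "cancellations", "delivery"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(training_emails, labels)

def route_email(text):
    queue = classifier.predict([text])[0]
    # In a real deployment, the RPA layer would now move the case in the workflow tool;
    # here we simply report the decision
    print(f"Routing to the '{queue}' queue: {text!r}")

route_email("my parcel still hasn't turned up")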

NATURAL LANGUAGE CLASSIFICATION (NLC)

Words can have different meanings depending on the context. As humans, we learn how these contexts interlink as we grow up, and we understand how a word can relate to multiple things depending on how it’s placed. Natural Language Classification is a way of teaching a machine a language which is domain-specific, essentially teaching the machine to understand context in the same way a human would. This means it can understand and respond to words depending on their placement, or meaning, in that structure.

An example of this is demonstrated by one of the world’s most recognized brands, Apple. If you take the word ‘apple’ on its own, you would assume it refers to the fruit. But if you work for a mobile phone network, the word has a different meaning. NLC is used to classify this data with labels, so when the machine reads the word ‘apple’ it knows it means a phone brand to a phone network, or it could be labelled to mean the fruit to a supermarket.
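Here is a minimal sketch of that Apple example as domain-specific classification: the same word is labelled differently depending on the training phrases for each domain. The phrases and labels are our own.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled phrases from two domains in which 'apple' means different things
phrases = ["my apple phone screen is cracked", "upgrade my apple handset",
           "apples are on offer this week", "fresh apple and pear juice"]
labels = ["device", "device", "fruit", "fruit"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(phrases, labels)

print(model.predict(["I dropped my apple and the screen broke",
                     "add a fresh apple to the juice"]))
# Likely output: ['device' 'fruit']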

Hopefully, this list of acronyms gives you a real insight into the world of AI and automation. The concepts always seem complex, but most of them can at least be partly understood in plain English. Of course, the above doesn’t delve into the deep areas of complex mathematics – I don’t know about you, but we like to keep our thinking at a business level. After all, automation is about solving a problem and, by democratizing the use of AI, we hope you can apply these definitions in your quest for the right automation solution for you.
