In Humans vs. Machines, do the Humans win?
One of the things humans are poor at is judging the impact of a technology promising automation. We tend to vastly overestimate that impact in the short term while simultaneously underestimating its long-term implications. There are many reasons for such perceptions, but it is at least partially due to a sort of “novelty bias”: the idea that the “new” always appears better than what you currently have, which leads to the false impression that achieving the result was easy.
It is more commonly known as "hype".
Excessive hype and fear surrounding automation can negatively impact both physical and mental health, inducing tension in the body and anxious preoccupation with the possibility of automation replacing human roles, especially one's own role within a company. However, it is important to remember that the constant influx of new technologies results from ongoing progress and innovation, leading to continual change. As such, hype tends to fade away as people become more accustomed to a technology and its implications.
Constant change means recognising that any technology exists only at a point upon a continuum. We all know this to be true: how often is a technology rolled out only to require enormous "patching" and "service packs" before it can be used effectively? The more advanced the technology, the more likely a constant updating cycle will be baked into its core. This is extremely important in some types of automation and Artificial Intelligence ('AI'), where a system may have been trained on one set of data to, say, predict the buying habits of a group; without some form of updating, the group's buying behaviour may change over time, leading to increasingly inaccurate predictions.
The point here is twofold: firstly, if any technology you are contemplating does not have this updating cycle adequately considered, then it is questionable how much value you will derive from the investment long term; and secondly, humans impact technology just as much as the other way around. The human impact on technology is like a river's erosion of a cliff edge; it can take a long time to be felt, but it ultimately changes the entire landscape. And time is usually on the humans' side, because technology can take so long to roll out. A good example is the "computer". Initially seen as a threat to typists, the sheer number of years it took to roll out changed the typist into the office worker and the computer into the "Personal Computer ('PC')", an aide and not a replacement. The hype burned away and gave the erosive nature of humanity a chance to influence the direction and eventual result, changing the course of the computer forever.
Let us consider this through the lens of Compliance. Looking at the third-party risk evaluation process, it is clear that many a mundane and burdensome activity is more suited to machines than humans. But how can one determine which automation technology is suitable? One way is to consider automation technology as a roadmap, where simple systems are built up to a higher level of complexity. Looking at it this way enables your investigation of automation to proceed smoothly from simply replacing the human activity, to replicating the human ability to learn, and eventually to simulating the human ability to think.
We start with Robotic Process Automation ('RPA'), a programmable software technology designed to replace the need for a "human at the keyboard". The advantage of RPA is that it can interface with older technology that would otherwise offer no cost-effective way to integrate the activity; for example, mainframe systems that still use "green screens". In addition, some RPA products are advanced enough to make decisions aggregated across multiple robots, for example, prioritising activity. The key to finding a process where RPA could add value is to look for repeatable, manual and mundane activities performed hundreds or thousands of times.
A good example is data acquisition and transfer. Suppose your business process involves logging into one or many systems and collating information by hand into a compliance system. In that case, you may be able to fully automate that part of the process and make considerable gains in efficiency. So, think of RPA as a replacement for a portion or element of a business activity, not necessarily a complete replacement of the human operator. Finding these processes naturally requires that you have mapped them in the first place. This is where your business analysts can help, producing a process diagram complete with the bottlenecks, the manual burdens and an estimate of the time RPA could save.
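The "log in, collate by hand, retype" pattern above can be sketched in a few lines. This is only an illustration, not any particular RPA product: the two CSV "exports" and the upload function are hypothetical stand-ins for legacy system screens and a compliance system's intake.

```python
import csv
import io

# Hypothetical exports from two legacy systems; a real RPA bot would
# obtain these by screen-scraping or file transfer.
SYSTEM_A = "vendor_id,name\nV001,Acme Ltd\nV002,Globex"
SYSTEM_B = "vendor_id,risk_rating\nV001,low\nV002,high"

def collate(*exports):
    """Merge per-system CSV exports into one record per vendor_id."""
    records = {}
    for export in exports:
        for row in csv.DictReader(io.StringIO(export)):
            records.setdefault(row["vendor_id"], {}).update(row)
    return records

def upload_to_compliance_system(record):
    # Stand-in for the manual "retype it into the compliance tool" step.
    print(f"uploaded {record['vendor_id']}: {record}")

for record in collate(SYSTEM_A, SYSTEM_B).values():
    upload_to_compliance_system(record)
```

Note that the robot replaces only the copy-and-paste element; deciding what to do with a high-risk vendor remains with the human.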
Moving on from RPA comes another well-discussed automation technology: Machine Learning ('ML'). This technology is not new, not at all; it has been around since the 1950s. What is new is the ability of modern computing systems to process data fast enough to make it work effectively. ML combines statistical techniques with the ability to "learn" from experience, finding patterns and inferences in data, which it then uses to make predictions or classifications. Two main types exist: "Supervised", which learns from labelled training data provided to the system, and "Unsupervised", which learns by finding structure in unlabelled input data. ML is perhaps best thought of as a toolkit of mathematical algorithms that can be applied to solve particular problems. It is commonly developed on "free" or "Open Source" technology, which has sped up its worldwide adoption.
Where to use this technology?
Machine Learning systems require data to work on and make predictions from; in Compliance, this can be data pertaining to decisions previously made by humans, such as match-review. If you can find a corpus of this decision data, then it could be possible to build an algorithm that trains on that data and makes future decisions just as well as, or better than, a human. It would be as though you brought in every person who ever worked on your team and had them train the system on every decision they ever made. However, there are some limitations worth considering because, just as the computer cannot really replace a human in the office, neither can AI or ML effectively replace a human's ability to bring context to a decision. A human can always look somewhere new for information, ask someone else for help, or bring to bear all the domain knowledge and faculties they have accrued to aid them. An ML system cannot do this. Similarly, some ML techniques conflict with the GDPR's rules on automated decision-making and must be designed thoughtfully. This will be the job of a Data Science team, who can professionally test a suitably narrow working hypothesis. All of this makes Machine Learning a step up in complexity from RPA.
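To make "training on past decisions" concrete, here is a deliberately tiny sketch: a nearest-neighbour rule that mimics whichever historical match-review decision looks most similar to the new case. The features (name similarity, country match) and the numbers are invented for illustration; a real system would use a proper ML library and a Data Science team would validate it.

```python
# Toy historical match-review decisions:
# (name_similarity, country_match) -> analyst decision.
HISTORY = [
    ((0.95, 1), "escalate"),
    ((0.90, 1), "escalate"),
    ((0.85, 0), "dismiss"),
    ((0.40, 0), "dismiss"),
    ((0.30, 1), "dismiss"),
]

def predict(features):
    """1-nearest-neighbour: copy the most similar past decision."""
    def distance(past_features):
        return sum((a - b) ** 2 for a, b in zip(features, past_features))
    nearest_features, nearest_decision = min(
        HISTORY, key=lambda item: distance(item[0])
    )
    return nearest_decision

# A strong name match in the same country behaves like past escalations:
print(predict((0.92, 1)))  # -> escalate
```

The limitation discussed above is visible even here: the model can only replay patterns present in HISTORY, while a human analyst could go and look for information the features do not capture.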
So far, we have covered RPA, which is very useful when automating a part of a process, and ML, itself useful when computing a decision, a prediction or the grouping of things. Now we come to the highest level of complexity, known as Cognitive.
Cognitive is part hype, part future and part the fundamental concept of larger, more sophisticated automation systems that utilise multiple technologies working together. A cognitive solution would undoubtedly use machine learning as part of its approach, but the context would be much broader than testing a narrow prediction hypothesis. For example, in terms of third-party risk compliance, we could be talking about identifying and verifying data by reading volumes of documents, researching the documents' contents against multiple databases, using machine learning to find correlations with patterns of fraud, and flagging risk accordingly. For now, looking at the broader context remains the domain of humans. Still, with cognitive approaches being developed from the building blocks of successful RPA and ML projects, the potential is there for more significant disruption. However, this is "on the roadmap" rather than ready to be deployed.
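The read-research-flag pipeline described above can be sketched as composed stages. Every stage here is a crude stand-in (a hypothetical watchlist, capitalised words treated as "entities") for what would in reality be a document-understanding model, multi-database research and an ML risk model.

```python
def extract_entities(document_text):
    """Stand-in for document reading: treat capitalised words as entities."""
    return {w.strip(".,") for w in document_text.split() if w[:1].isupper()}

def lookup(entity, watchlist):
    """Stand-in for researching an entity across multiple databases."""
    return entity in watchlist

def assess(document_text, watchlist):
    """Compose the stages: read the document, research it, flag the risk."""
    hits = {e for e in extract_entities(document_text) if lookup(e, watchlist)}
    return {"hits": sorted(hits), "decision": "flag" if hits else "clear"}

print(assess("Invoice issued by Globex for services.", {"Globex"}))
```

The point is the composition, not any single stage: a cognitive solution is several such components chained together, each of which must already work well on its own.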
It is clear from recent innovations in AI that Large Language Models, such as ChatGPT and GPT-4, are a step towards cognitive AI. These models use deep learning techniques to analyse and understand human language, enabling them to perform various natural language processing tasks, such as language translation, question-answering, and even generating coherent and contextually relevant responses to open-ended prompts.
However, it's important to note that while these models have demonstrated impressive capabilities, they still have limitations and do not possess true cognitive abilities. For example, they lack the ability to reason and think abstractly, understand and perceive the world as humans do, and perform tasks that require common sense and creativity.
Furthermore, the development of cognitive AI requires not only advances in language understanding but also progress in areas such as perception, reasoning, and decision-making. So while Large Language Models are certainly a step towards cognitive AI, much work remains before we can achieve truly intelligent machines.
So, since the perfect cognitive level of automation is yet to be built, and machine learning applies only to particular use cases, the automation industry is responding by lowering its sights a little and leaning on the best computational engine we have ever encountered: the fleshy one found between each person's ears. Humans are vital to making the system work, through Humans in the Loop ('HiTL').
Humans in the loop refers to the practice of involving human experts or annotators in the process of training an AI model. This is done to improve the quality of the training data, a crucial factor in the model's performance. Human annotators can provide labels, feedback, and corrections to the model's outputs, helping to improve its accuracy and reduce the risk of bias.
For example, in training a language model, humans in the loop might be used to annotate training data by providing correct answers to questions or feedback on the quality of the generated responses. They may also be used to evaluate the model's performance and provide guidance on improving it.
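A common shape for this loop is confidence-based triage: the model handles what it is sure about, and low-confidence items are routed to a human, whose answer is kept as fresh training data. The sketch below is illustrative only; the 0.8 threshold and the canned model outputs are assumptions, not a real system.

```python
TRAINING_DATA = []  # grows as humans correct the model

def model_predict(item):
    """Stand-in for a trained model returning (label, confidence)."""
    return item["model_label"], item["model_confidence"]

def human_review(item):
    """Stand-in for the human annotator supplying the correct label."""
    return item["true_label"]

def triage(items, threshold=0.8):
    decisions = []
    for item in items:
        label, confidence = model_predict(item)
        if confidence < threshold:
            label = human_review(item)           # human in the loop
            TRAINING_DATA.append((item, label))  # feed back for retraining
        decisions.append(label)
    return decisions
```

The design choice worth noting is that the human's effort is spent only on the uncertain cases, and each correction is captured so the next retraining cycle benefits from it.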
In general, using humans in the loop is an essential aspect of AI development, as it helps ensure that the resulting models are accurate, reliable, and unbiased. By incorporating human expertise and feedback into the training process, AI models can better reflect the nuances and complexities of the real world and provide more effective solutions to the problems they are designed to address.
The rollout of automation worldwide is quickly discovering that the algorithm eventually becomes the assistant and supporting act for many use cases, just like the computer did.
Regards,
James