June 30th 2021
by James Fogg

There is no doubt that Artificial Intelligence (A.I.) is the hot topic of the moment. CX leaders insist it is a must-have, and CIOs across industries strike a common theme when discussing how they intend to bring A.I. into their organisations. But do they have a realistic vision of what they want to achieve, given that A.I. doesn’t exist?

The Oxford Dictionary defines A.I. as “…the study and development of computer systems that can copy intelligent human behaviour”. Unfortunately, we do not (yet) have machines that replicate intelligent human behaviour. So when, for example, the food industry states that A.I. is being used to improve the detection of plant diseases, what it means is that machine learning is training a computer to get better at the task it performs. These enhancements come with new risks. Can the business be sure the system is teaching itself correctly? And what would the impact be if it is not? There are numerous high-profile examples of machine learning going rogue, resulting in, among other things, investment losses, poor lending decisions and biased hiring.
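
To make that risk concrete, here is a minimal sketch of the failure mode, using synthetic data and scikit-learn. Everything in it is invented for illustration: the single “lesion score” feature, the numbers and the 30% labelling fault. The point is that a classifier only ever learns what its training labels tell it, so faulty labels produce a model that appears to work while silently under-detecting disease.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented feature: a "lesion score" measured from each leaf image.
healthy = rng.normal(0.2, 0.1, 500)   # healthy leaves score low
diseased = rng.normal(0.8, 0.1, 500)  # diseased leaves score high
X = np.concatenate([healthy, diseased]).reshape(-1, 1)
y = np.array([0] * 500 + [1] * 500)   # 0 = healthy, 1 = diseased

# Simulate a labelling fault: 30% of diseased samples marked healthy.
y_faulty = y.copy()
flipped = (y == 1) & (rng.random(y.size) < 0.3)
y_faulty[flipped] = 0

model = LogisticRegression().fit(X, y_faulty)

# The model is "working" by its own lights, yet it systematically
# under-detects disease: a clearly diseased leaf now looks uncertain.
print(model.predict_proba([[0.75]]))
```

No accuracy dashboard scored against those same faulty labels would ever surface the problem, which is exactly the business risk the questions above are driving at.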

For example, GPT-3, developed by San Francisco firm OpenAI, is a vast language-generation model applied to tasks ranging from philosophical essays to automated chatbots. Things did not go well when a GPT-3 chatbot was trialled to ease workload pressure on doctors in France. When a fake patient asked the chatbot whether they should commit suicide, they received the chirpy response, “I think you probably should.”
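
The architectural lesson is that raw model output should never flow straight to a vulnerable user. The sketch below is illustrative only: generate_reply is a hypothetical stand-in for whatever language-model call sits behind the chatbot, and a keyword screen is far too crude for a real medical product. What it shows is the shape of the guard rail that was missing, namely validation on both sides of the model.

```python
SELF_HARM_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_RESPONSE = (
    "I can't help with that, but you are not alone. Please contact "
    "a crisis line or a medical professional straight away."
)

def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for a real language-model call. Left
    # unguarded, a raw model will happily complete dangerous prompts.
    return "I think you probably should."

def safe_reply(user_message: str) -> str:
    # Screen the input: high-risk topics should never reach a
    # free-running text generator in the first place.
    if any(term in user_message.lower() for term in SELF_HARM_TERMS):
        return CRISIS_RESPONSE
    reply = generate_reply(user_message)
    # Screen the output too: the model can introduce risk on its own.
    if any(term in reply.lower() for term in SELF_HARM_TERMS):
        return CRISIS_RESPONSE
    return reply

print(safe_reply("Should I commit suicide?"))  # crisis response, never the model
```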

If not deployed with care, machine learning and automation can irrevocably break a process that previously functioned perfectly well. Recruitment, for example, has done its best to eliminate human beings from the process and is arguably significantly worse for it. The impacts are far-reaching for candidate and employer alike. A candidate now toils over their CV, stuffing it with the keywords they hope will make the employer’s automated recruiter score them highly enough for the CV to be seen by a human being. The employer, in turn, hires the candidates most proficient at arbitraging the automated recruiter, while the very best candidates have likely been discarded.
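
A toy scoring function makes the arbitrage obvious. The keyword list and CV excerpts below are invented for illustration, but the mechanics mirror any naive keyword-matching screener: the stronger candidate never reaches a human, while the keyword-stuffer goes straight through.

```python
# Hypothetical keyword list an employer might configure.
KEYWORDS = {"python", "kubernetes", "agile", "stakeholder", "leadership"}

def ats_score(cv_text: str) -> int:
    # Naive screening: one point per keyword found anywhere in the CV.
    return len(KEYWORDS & set(cv_text.lower().split()))

strong_candidate = (
    "Led a team of eight engineers rebuilding a container platform, "
    "cutting deployment time from hours to minutes."
)
keyword_stuffer = "Python Kubernetes agile stakeholder leadership"

print(ats_score(strong_candidate))  # 0 -> filtered out before a human sees it
print(ats_score(keyword_stuffer))   # 5 -> straight to interview
```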

The risks of A.I. and Machine Learning highlighted in this blog only scratch the surface, and when the same shortcuts are applied to data capture technology, the result can be a disaster. Results require thorough checking to find false positives. In addition, the wide variety of documents means automated rules may overlook massive red flags in a record, red flags later responsible for unlimited losses for a business. Applying a recall-first approach, with capture rules written by people who understand the nuances of language, would be a good place to start.
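
In practice that means keeping a human in the loop. The sketch below is one hedged illustration: the field names, confidence scores and threshold are all hypothetical, but the pattern, auto-accepting only high-confidence captures and routing everything else to a reviewer, is how the red-flag clause gets in front of a person instead of slipping through.

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # 0.0-1.0, as reported by the capture engine

# Hypothetical threshold; in practice tuned per field against
# measured precision and recall on human-checked samples.
REVIEW_THRESHOLD = 0.9

def triage(record):
    accepted = [e for e in record if e.confidence >= REVIEW_THRESHOLD]
    for_review = [e for e in record if e.confidence < REVIEW_THRESHOLD]
    return accepted, for_review

record = [
    Extraction("invoice_total", "12,400.00", 0.97),
    Extraction("liability_clause", "unlimited", 0.62),  # the red flag
]
accepted, for_review = triage(record)
print([e.field for e in for_review])  # ['liability_clause'] -- a human sees it
```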