Case study: Using humans and AI to deliver smarter service
Cost and complexity shouldn’t be a barrier to the use of artificial intelligence, with Baker Tilly Digital experts developing multi-stage solutions
Artificial intelligence may have captured the headlines, but for businesses looking to embed AI in their work processes, finding the right fit remains complex and costly.
And if enterprises are only experimenting with generative AI platforms like ChatGPT, they might be missing out on the opportunities other AI and machine learning tools can deliver.
For Vladimir Vasilev, Digital Lead with Baker Tilly Dominican Republic, success lies in combining the natural language processing of large language models such as ChatGPT with other elements, such as computer vision, document extraction and custom machine learning, to solve problems that were previously difficult or prohibitively expensive to address.
“In the past, we were seeing new requests from clients mostly around extracting data from documents using AI,” Mr Vasilev says.
“We are a partner and reseller for different solutions, which allows us to utilise all of the AI web services for Amazon, Google and Microsoft, for example. With a small use case, where we just need to work through document extraction, we can basically utilise all those services quickly.
“But when you start to have very large use cases, where you might have thousands and thousands of documents, you need a different solution.”
ChatGPT’s free tier limits prompts to a few thousand characters, or tokens; GPT-4 supports longer inputs, but heavier use and custom training of a model require paid access.
But given that predicting customer behaviour or identifying trends requires very large volumes of content, it can become necessary to split the task into separate steps.
One approach the firm has developed combines computer vision, custom machine learning, natural language parsing and translation to read and review documents, identify information, extract the results and digitise the output.
“First we are identifying what each document is for the document classification step, then there is a machine learning part where the system will learn to extract key values — perhaps if an insurance document includes a registration number somewhere, the machine will extract it.
“The next step is a verification component; if there is a discrepancy between the expected and actual content, we classify this manually and it will learn.
“The system we use reads printed text at 100% accuracy and handwritten text at 90% accuracy, but we have a process in place to verify and improve.”
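The three steps Mr Vasilev describes can be sketched in code. The snippet below is a minimal illustration, not the firm’s actual system: the keyword classifier, the regular-expression extractor and the field names (`registration_number`, the `DOC_KEYWORDS` table) are all hypothetical stand-ins for the computer-vision and machine-learning components the article mentions.

```python
import re
from dataclasses import dataclass, field


@dataclass
class Extraction:
    doc_type: str
    fields: dict
    needs_review: bool = False


# Hypothetical keyword table standing in for the trained classifier.
DOC_KEYWORDS = {
    "insurance": ["policy", "registration number", "insured"],
    "invoice": ["invoice", "amount due", "vat"],
}


def classify(text: str) -> str:
    """Step 1: identify what each document is."""
    lowered = text.lower()
    scores = {
        doc_type: sum(kw in lowered for kw in kws)
        for doc_type, kws in DOC_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"


def extract_fields(text: str, doc_type: str) -> dict:
    """Step 2: extract key values, e.g. a registration number
    from an insurance document."""
    fields = {}
    if doc_type == "insurance":
        m = re.search(r"registration number[:\s]+(\w+)", text, re.I)
        if m:
            fields["registration_number"] = m.group(1)
    return fields


def verify(extraction: Extraction, expected: set) -> Extraction:
    """Step 3: flag any discrepancy between expected and actual
    content for manual classification and retraining."""
    if expected - extraction.fields.keys():
        extraction.needs_review = True
    return extraction


def process(text: str, expected: set) -> Extraction:
    doc_type = classify(text)
    return verify(Extraction(doc_type, extract_fields(text, doc_type)), expected)
```

In this sketch, a document that is missing an expected field is routed to the manual verification step rather than silently accepted, which mirrors the feedback loop described above.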
The firm uses what Mr Vasilev calls a ‘hyper automation’ process — combining both humans and machines for best effect.
“We really have three actors: AI, people and robotic process automation,” he says.
“So if we are analysing emails, for example, we link through an API to OpenAI to analyse the content of the email and the AI will summarise the information and classify it.
“Since the emails usually have attached documents, those are classified as well, the data is extracted and the information is reconciled.
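The email-analysis step might look like the sketch below. The prompt wording, the category names and the JSON schema are assumptions for illustration; the article only says the content is sent to OpenAI via an API to be summarised and classified. The LLM call is injected as a function so the logic can be shown (and tested) without a live API key.

```python
import json
from typing import Callable

# Hypothetical prompt; the exact wording and categories are illustrative.
PROMPT = (
    "Summarise the email below and classify it as one of "
    "'claim', 'invoice' or 'other'. Reply as JSON with keys "
    "'summary' and 'category'.\n\n{email}"
)


def analyse_email(email: str, ask_llm: Callable[[str], str]) -> dict:
    """Send the email text to an LLM and parse the JSON
    summary/classification it returns."""
    raw = ask_llm(PROMPT.format(email=email))
    result = json.loads(raw)
    return {"summary": result["summary"], "category": result["category"]}


# In production, ask_llm would wrap a real API call, e.g. with the
# OpenAI Python client:
#   client = openai.OpenAI()
#   ask_llm = lambda p: client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": p}],
#   ).choices[0].message.content
```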
“We want really high accuracy here because you want to limit the number of documents that need to be reviewed.”
Looping in a human reviewer at this stage can ensure any documents that don’t fit the expected pattern can be quickly assessed, and the results used to train the model further.
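The human-in-the-loop routing could be sketched as follows. The confidence threshold and field names are hypothetical; the point is that only low-confidence predictions reach a reviewer, and the reviewer’s corrections become new training examples.

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    doc_id: str
    label: str
    confidence: float


# Illustrative threshold; in practice it is tuned to balance accuracy
# against the volume of documents humans must check.
REVIEW_THRESHOLD = 0.9


def route(pred: Prediction) -> str:
    """Accept high-confidence predictions automatically; send the
    rest to a human reviewer."""
    return "auto_accept" if pred.confidence >= REVIEW_THRESHOLD else "human_review"


def collect_training_data(preds, corrections):
    """Reviewed documents (with any human corrections applied) are
    fed back to train the model further."""
    return [
        (p.doc_id, corrections.get(p.doc_id, p.label))
        for p in preds
        if route(p) == "human_review"
    ]
```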
Mr Vasilev says that by combining the different capabilities and specialisations of several systems, clients can often get a faster, more accurate and more cost-effective result than a single tool can deliver.
“There are many models for classification or data extraction but when you need something more customisable such as for predictive analysis, you need to create something custom,” he says.
“The better the model, the better the result.”