
Better safe than sorry when it comes to Generative AI… but there’s another option

2/28/2023

The real power of Generative AI? Knowing its limits (especially on factual accuracy)

For those who, like us at Aptus.AI, have been following technological developments in Artificial Intelligence for years, the recent stumble of Google’s Generative AI model, Bard, is anything but a surprise. Quite the opposite. Some time ago we already dedicated a blog post to the idea that the real power of Natural Language Processing lies hidden in its limits. Nothing could be truer, especially for Generative AI models, which are very good at generating content but just as limited when it comes to ensuring factual precision and accuracy. This does not mean, however, that Generative AI cannot be used properly. As we wrote in a more recent blog post, the game is being played on the field of awareness of what can and, more importantly, what cannot be asked of an AI, especially when it is applied to real business cases.

What do the experts say about Generative AI? Here are some authoritative voices

But let’s turn to the facts. At the origin of the debate is, of course, the incorrect answer given by Google Bard about the James Webb Space Telescope in its presentation demo, not least because it wiped around 100 billion dollars off Google’s market value as the stock plummeted. But even those who have seriously tested Bing AI, the Conversational AI launched by Microsoft and based on ChatGPT, could not help but notice that its factual accuracy is also decidedly low. Anyone in the AI industry can only confirm this, pointing out the dangers of overconfidence toward these tools on the part of users, but also of companies. Just think of MIT Technology Review, published by an absolute technology authority like the Massachusetts Institute of Technology, which notes that this technology “is simply not ready to be used like this at this scale” and that “Generative AI models are notorious bullshitters, often presenting falsehoods as facts”. Indeed, it is evident that Conversational AI models make up information, which makes them dangerous both for the information provided to people via search engines and for their possible use by companies developing cloud computing services and software of various kinds. Many examples along these lines can be found on GitHub, where users have reported a wide variety of ChatGPT failures, some more amusing than others. As reported by Grid magazine, the technology news site CNET, which has used Generative AI to produce articles, is now checking them all “for accuracy” after the online magazine Futurism spotted several errors in explanations of fundamental concepts related to interest calculation and how loans work. Turning to academia, we can cite the recent paper published by researchers at Stanford University, Georgetown University and OpenAI, which notes how Generative AI can be used by malicious parties for propaganda purposes. We stop here, but we could go on with many more examples of the limits of Generative AI models’ factual accuracy.

Human-in-the-loop and machine readable regulations for accurate Generative AI

But let's turn to the good news! Generative AI models can still be used effectively, even for business purposes. How? Precisely by knowing the limits of Generative AI. In fact, at Aptus.AI we are already integrating this technology into our RegTech platform, Daitomic, starting with specific use cases. In this post we focus on our regulatory update abstract generator. For compliance professionals, especially in the banking industry, the chance to receive recaps of only the changes made by regulators, whether through new versions of the same regulation or through amending documents, could revolutionize the entire regulatory analysis process. To ensure that this innovative Daitomic feature provides its users with accurate and valuable content, we take great care in how we manage Generative AI models. The first element to take into account is the quality of the prompts, i.e. the requests made to the AI. These should contain only the regulatory delta, that is, content pre-analyzed by the AI that reports exclusively the regulatory changes, an operation enabled by our machine readable regulatory format, which we present in more detail below. The second element is based on the awareness that the main risk of Generative AI is that it generates texts so human-like that its errors become difficult to recognize. That is why a human-in-the-loop approach is necessary: legally qualified humans verify the quality of the content before the information is used in Daitomic's regulatory alerting service (a minimal sketch of this flow follows the examples below). Our SaaS is thus able to offer abstracts related to every regulatory update issued by banking authorities, such as the two below:

  • Commission Delegated Regulation (EU) 2022/2553 of 21 September 2022 - amendments to the regulatory technical standards set out in Delegated Regulation (EU) 2019/815 regarding the 2022 update of the taxonomy for the single electronic reporting format
  • IVASS Consultation Document N. 1/2023 - amendments and additions to IVASS Regulation No. 52 of 30 August 2022 and ISVAP Regulation No. 38 of 3 June 2011
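
To make the two safeguards described above more concrete (prompts restricted to the regulatory delta, and a human-in-the-loop gate before publication), here is a minimal Python sketch of such a flow. All names and structures (RegulatoryDelta, build_prompt, draft_abstract, the generate_text callable) are illustrative assumptions for this post, not Daitomic’s actual code or API.

# Hypothetical sketch: the prompt contains only the pre-extracted regulatory
# delta, and the generated draft is held for human review before it reaches
# any alerting service. Names are illustrative, not Daitomic's real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class RegulatoryDelta:
    regulation_id: str            # e.g. "Delegated Regulation (EU) 2019/815"
    changed_articles: list[str]   # only the text that actually changed

def build_prompt(delta: RegulatoryDelta) -> str:
    """Restrict the prompt to the regulatory delta, nothing else."""
    changes = "\n".join(f"- {article}" for article in delta.changed_articles)
    return (
        f"Summarise, for a compliance officer, only the following changes "
        f"introduced to {delta.regulation_id}:\n{changes}"
    )

def draft_abstract(delta: RegulatoryDelta,
                   generate_text: Callable[[str], str]) -> dict:
    """Generate a draft abstract and mark it as pending human review."""
    draft = generate_text(build_prompt(delta))
    return {
        "regulation_id": delta.regulation_id,
        "draft": draft,
        "status": "pending_human_review",  # human-in-the-loop gate
    }

if __name__ == "__main__":
    # Dummy model for illustration; a real deployment would plug an actual
    # generative model behind the same callable interface.
    delta = RegulatoryDelta(
        regulation_id="Delegated Regulation (EU) 2019/815",
        changed_articles=["Annex updated to the 2022 ESEF taxonomy."],
    )
    print(draft_abstract(delta, generate_text=lambda p: f"[DRAFT] {p[:80]}..."))

The key design choice in a setup like this is that the model never sees (or summarises) the full regulation, only the delta, and that nothing with status "pending_human_review" is published until a legally qualified reviewer approves it.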

Regulatory change abstracts (and more): here’s Daitomic's RegTech use case

But regulatory abstracts are only one part of the possible applications of Generative AI in RegTech. In fact, our goal is also to use it to create generic impact analyses based on the gap between a financial institution’s internal processes and policies and the obligations introduced by a new regulatory change. In short, this would also give financial compliance professionals a non-compliance risk generator and an internal policy draft generator. It is worth repeating that, although the results of the early tests conducted on the most recent regulatory updates already show the potential of this technology, applying Generative AI to business still has several knots to unravel. These are issues that we at Aptus.AI have been aware of for years and that drive us to improve Daitomic every day, up to the recent launch of Daitomic Chat, the Conversational AI service integrated within Daitomic, which allows users to chat directly with the law and receive real-time answers about the content of regulations. Want to be among the first to try Daitomic Chat?
