
There’s no writing without reading… even for machines: how to make Generative AI effective

12/20/2022

Generating human-like content with Artificial Intelligence: the state of the art

Is there anything more inherently human than writing? Well, maybe not anymore. At Aptus.AI we have followed developments in the field of text generation since the very beginning of our work, especially AI-powered text generation - or rather, content generation, since GPT-3.5 goes far beyond plain text. But let’s take a step back. It’s been more than two years since we published our post about Geppetto, the first Italian-language text generation system (based on the GPT-2 model), developed by us at Aptus.AI in collaboration with the Bruno Kessler Foundation, the University of Groningen and ILC-CNR. This Artificial Intelligence writes in Italian just like a human, starting from a prompt given as input. Yet Geppetto - and GPT-2 - now belongs to the past, as OpenAI has already released GPT-3 and GPT-3.5… while we wait to discover GPT-4.

From GPT-2 to GPT-3.5: the numbers behind the most powerful language model ever

Let’s be clear about what GPT is. First of all, GPT is an acronym that stands for “Generative Pre-trained Transformer”: it designates an AI-powered language generation model based on Transformers, a type of Machine Learning model we presented in a dedicated post. The complexity of these models is measured in parameters: GPT-3 has 175 billion of them (GPT-2 had 1.5 billion) and requires 800 GB of storage. This jump in parameter count helped overcome some limitations of the previous model released by the San Francisco-based company OpenAI, which was already able to generate convincing streams of text in a range of different styles, but nothing compared to GPT-3.5. This latest version of the model brings NLP to a new level, sparking discussion about whether humans could be replaced in content generation. And not just for “simple” sentences or paragraphs in natural language: the new OpenAI model can create content in many different languages, including guitar tabs and computer code, such as HTML.
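To give a concrete feel for prompt-based generation, here is a minimal sketch using the Hugging Face transformers library with the publicly available GPT-2 checkpoint. This is purely illustrative: Geppetto used an Italian GPT-2 variant, and GPT-3.5 is accessed through OpenAI’s API rather than as a downloadable checkpoint.

```python
# Minimal prompt-to-text generation with a GPT-2 checkpoint.
# Illustrative only: Geppetto was based on an Italian GPT-2, and GPT-3.5
# is served through OpenAI's API instead of a local model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI will change regulatory compliance because"
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```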

An overview of the impressive performance of GPT-3.5… while waiting for GPT-4

Besides that, the quality of the text generated by GPT-3.5 is so high that it can be hard to tell whether or not it was written by a human, which brings both benefits and risks. The benefits are quite obvious, while the main risk is failing to recognize when the model makes mistakes: relying entirely on its output can therefore be dangerous. Still, in the field of Natural Language Processing, GPT-3.5 is considered the most powerful language model ever, and its capabilities are undeniably impressive. Nor is GPT-3.5 an end point: it is a starting one. In fact, everyone in the AI sector is waiting for GPT-4, which may be unveiled in the near future. For the moment, we can exploit GPT-3.5, which already powers ChatGPT, a general-purpose chatbot that can engage with many different topics, having been trained on a blend of text and code published before the end of 2021. The content generated by GPT-3.5 is remarkably human-like compared to GPT-2’s, and it could be even more astounding if the model were directly connected to the web, rather than basing its results “only” on its training data. In any case, if the prompts are properly set, GPT-3.5 can produce very meaningful texts - only under certain conditions, as we will explain below - as well as solve programming problems and, as mentioned, generate code.
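As a rough illustration of what “properly set prompts” means in practice, here is a minimal sketch using the OpenAI Python client as it works at the time of writing. The model name, prompt wording and parameters are illustrative assumptions, not a recommendation.

```python
# Minimal sketch of prompting a GPT-3.5-series model through the OpenAI API.
# Assumptions: the openai Python package (v0.x) is installed and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-series completion model at the time of writing
    prompt=(
        "You are a compliance analyst. Summarize the following regulatory "
        "update in three plain-language bullet points:\n\n<update text here>"
    ),
    max_tokens=200,
    temperature=0.2,  # low temperature keeps the output more factual, less creative
)

print(response["choices"][0]["text"].strip())
```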

Effective Generative AI is based on a solid machine-readable format

At Aptus.AI we follow every technological evolution in the Artificial Intelligence sector with the utmost attention, as we have been working for years to improve the accuracy of our NLP systems. As our work on Geppetto shows, we have been able to leverage these text generation models for years, and we are already integrating GPT-3.5 into our RegTech platform, Daitomic. We want our SaaS to exploit this cutting-edge technology within its regulatory alerting service, generating automated abstracts of any regulatory update issued by banking authorities. And more: our goal is to use GPT-3.5 to create a generic impact analysis, based on the gap between the internal processes and policies of a specific financial institution and the obligations introduced by a new regulatory change. To that end, we are working on a specific fine-tuning of this model in order to exploit the power of GPT-3.5 for three different use cases (see the sketch after the list):

  • regulatory change abstracts;
  • non-compliance risk generator;
  • internal policy draft generator.
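Below is a hedged sketch of what the fine-tuning data for these three use cases might look like, in the prompt/completion JSONL format expected by OpenAI’s fine-tuning tooling at the time of writing. The prompts, placeholder texts and file name are purely illustrative assumptions; the real training pairs come from Daitomic’s machine-readable regulatory format.

```python
# Illustrative sketch: prompt/completion pairs for the three use cases,
# in the JSONL format used by OpenAI's fine-tuning tooling (as of late 2022).
# The texts below are placeholders, not real Daitomic data.
import json

delta = "<pre-analyzed regulatory delta extracted by Daitomic>"

examples = [
    {"prompt": f"Regulatory delta:\n{delta}\n\nAbstract:",
     "completion": " <short summary of the regulatory change>"},
    {"prompt": f"Regulatory delta:\n{delta}\n\nNon-compliance risks:",
     "completion": " <risks for the financial institution>"},
    {"prompt": f"Regulatory delta:\n{delta}\n\nInternal policy draft:",
     "completion": " <draft wording for the internal policy update>"},
]

with open("daitomic_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```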

The first results of these experiments already highlight the potential of this approach, even though the application of Generative AI technologies is still very young and raises many issues - such as copyright, safety and costs - that need to be addressed. In any case, one of the main elements to be considered, as we have already pointed out, is the quality of the prompt. Daitomic’s features are built on a proprietary, AI-powered machine-readable regulatory format that allows the system to identify the changes between different versions of the same regulation and use them as input for the text generation model. The model receives only the regulatory delta as its prompt, so it works on pre-analyzed content. As we said, the main risk with GPT-3.5 is that it is very difficult to tell when it makes a mistake; within Daitomic, this risk is mitigated by the pre-analysis enabled by our innovative machine-readable format. Therefore, with the release of such advanced Generative AI models, we can take a step forward in our mission to empower compliance and risk professionals by revolutionizing regulatory change management, turning it from a cost center into a revenue-generating process. Want to learn more about Daitomic?
