Adding Large Language Models’ magic to Traditional Software

By Maria Simon, developer at Ai4Value Oy

Making LLMs compatible with existing software systems

Businesses are keen to tap the power of Large Language Models (LLMs) like OpenAI’s GPT. However, integrating these sophisticated AI models into existing software systems presents unique challenges.

In traditional software systems, consistency is key. Existing components, including databases and APIs, are designed for predictability: the same inputs lead to the same outputs, much like putting the same ingredients into a recipe and expecting the same cupcake every time.

Traditional software (the mature pastry chef) delivers consistent, predictable cupcakes (outputs) for the same ingredients (inputs)

LLMs like GPT are more like creative master pastry chefs with their own flair – they can take the same ingredients (inputs) but might whip up slightly different cupcakes (outputs) each time.

LLMs (the young master chef) deliver different cupcakes (outputs) for the same ingredients (inputs)

This stochastic (or probabilistic) nature of LLMs can be problematic for systems built on consistent data, deterministic queries, and structured algorithms, and it leads to compatibility issues with software designed for predictability.

Below are practical solutions for using LLMs as part of existing systems.

Structured Prompting Techniques

The prompt for the LLM is the ‘recipe’

The key to taming the LLM chef lies in the recipe or, in AI terms, the prompt. By using structured prompting techniques, we can guide the LLM to produce more consistent results.

  • Prompt Templates – Standardized prompts ensure that questions are asked uniformly.
  • Prompt Alternating – Varying prompts to cover different scenarios while keeping a structured approach.
  • Preprocessing User Inputs – Sanitizing inputs reduces the risk of prompt injection, where unexpected user input leads to undesired output.
  • Using Type Hints with Pydantic – This Python validation library ensures that the data fed into the LLM has the expected types, reducing errors and inconsistencies.
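As an illustrative sketch (the template text, field names, and the injection heuristic here are invented for this example, not a complete defense), the templating and preprocessing techniques might look like this in Python:

```python
from string import Template

# Standardized prompt template: every request phrases the task the
# same way, which keeps the model's answers more uniform.
SUMMARY_PROMPT = Template(
    "Summarize the following customer message in one sentence.\n"
    "Message: $message\n"
    "Summary:"
)

def preprocess(user_input: str) -> str:
    """Naive input preprocessing to reduce prompt-injection risk.

    A rough heuristic for illustration only: it strips template
    delimiters and rejects one well-known injection phrase.
    """
    cleaned = user_input.replace("$", "").strip()
    if "ignore previous instructions" in cleaned.lower():
        raise ValueError("possible prompt injection detected")
    return cleaned

def build_prompt(raw_message: str) -> str:
    """Funnel every input through preprocessing, then the template."""
    return SUMMARY_PROMPT.substitute(message=preprocess(raw_message))
```

Prompt alternating would then amount to keeping several such templates and selecting one per scenario, while still routing every input through the same preprocessing step.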

Structured Output Techniques

Structured outputs, ready for consistent processing, are the ‘cupcakes’

The next step is to get consistent output from the LLM, so that it can be passed seamlessly to the subsequent modules of the traditional software system.

  • Data Structuring – Extracting structured data from LLM outputs, making them compatible with subsequent processing stages.
  • Function Calling (OpenAI) – Having the LLM return arguments that conform to a predefined function schema, thereby standardizing the output.
  • Handling Validation Errors – Implementing robust error-checking mechanisms to ensure the reliability of LLM outputs.
  • Using Type Hints with Pydantic – Again, this Python library ensures that output data types match the expectations of downstream systems.

Integrating LLMs into existing software systems is not without challenges, but with the right strategies and coding practices it is achievable. With structured prompting and structured output processing, the probabilistic nature of LLMs need not disrupt the established harmony of traditional software systems.