Orchestration as a Data Management Challenge, Part 3

Tim Coats
January 5, 2024

In blog one of this series, we discussed how orchestration is about data. In blog two, we talked about how effective orchestration, paired with advanced data management techniques, can be used to develop a digital twin. In this blog, we will discuss what we can do with all that data. More specifically, we'll take a look at using data-empowered RAG AI to deliver stateful orchestration.

Introduction

Generative AI is all the buzz. People use it to write code, create chatbots, and draw pictures of dogs playing Texas Hold’em. Companies are in a race to understand how Generative AI can be used to improve their products and services and provide a competitive advantage. I even thought about using it to help write this blog. Then I realized that originality is one thing Generative AI struggles with. That shouldn’t be surprising, as the LLMs behind these tools are built from historical data.

So, while everyone talks about Generative AI as the future, it is essential to understand that it is really just pieces of the past put together in different and novel ways.

There are a multitude of prompting techniques for giving the AI direction, tone, and other guidance. However, predicting the AI’s response to these prompts can be challenging. Moreover, the results will still be based on generalities. And to the unsuspecting, the output can sound more authoritative than it is, even citing references that may not exist.

In a 2020 paper [1], Patrick Lewis and colleagues introduced retrieval-augmented generation. While it carries the unflattering acronym RAG, this technique can dramatically improve the accuracy and reliability of generative AI models. RAG allows general models to be augmented with data from specified sources external to the model. Users don’t have to retrain a model on the additional datasets specific to their application; they can take a general model and augment it with new sources on the fly. This gives the model specificity.
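To make the retrieval-augmented pattern concrete, here is a minimal Python sketch. It uses a naive keyword-overlap retriever purely for illustration; a production system would use vector embeddings and a real LLM API, and the sample documents and function names below are assumptions for this sketch, not anything from the original paper.

def score(query: str, text: str) -> float:
    # Naive keyword-overlap relevance score; real systems use vector embeddings.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Rank the external documents by relevance to the query and keep the top k.
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Augment the user's question with retrieved context before it reaches the LLM.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The augmented prompt, not the bare question, is what gets sent to the general model.
docs = [
    "VLAN 120 is reserved for the finance department.",
    "Core switches trunk VLANs over interface GigabitEthernet0/1.",
]
print(build_prompt("Which VLAN does finance use?", docs))

The key point is that the general model never changes; only the context handed to it does, which is why new sources can be swapped in on the fly.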

Introducing Generative AI to Orchestration

And this is the segue back to our data management and orchestration discussion.

Generative AI is a natural choice to build the code that drives complex orchestration. The AI can be trained using the configuration and management information for the on-premises and cloud infrastructure that powers today’s enterprises. The model would also be developed using the workflows that make up the various tasks and activities, providing a basis for building execution plans.

However, there is the same issue of generality. For example, if you go to ChatGPT and ask for the common commands to add a new VLAN to a Cisco network, it will come back with something like the following:

! Create the VLAN and give it a name
vlan <vlan_number>
 name <vlan_name>
! Assign an access port to the new VLAN
interface <interface_type> <interface_number>
 switchport mode access
 switchport access vlan <vlan_number>
! Allow the VLAN across a trunk port
interface <interface_type> <interface_number>
 switchport mode trunk
 switchport trunk allowed vlan <vlan_number>
! Verify the configuration
show vlan
show interfaces switchport

The code does not contain the specifics for the new VLAN, such as the VLAN number, the interface it is exposed on, or the trunking mode. ChatGPT will even let you know that you still have some work to do filling in those placeholders.

Generative AI is helpful, but by itself, the above lines will not provide autonomous infrastructure.

Augmenting Generative AI with Operational Data

A Generative AI model can be developed using the automation data from the various infrastructure components coupled with the various types of workflow elements. The LLM will generate the required workflows, but those workflows will fail because they lack the specifics of the environment. RAG can use the operational data, user inputs, and constraints to supply the particulars of the environment and enable stateful orchestration. Figure 1 illustrates how this process is carried out.

Figure 1 – Retrieval Augmented Generation (RAG) Assisted Stateful Orchestration
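As a rough illustration of the flow in Figure 1, the sketch below takes the generic VLAN template from earlier and fills its placeholders with a retrieved operational record. The record and its field names are hypothetical stand-ins for whatever the environment’s inventory or telemetry actually returns.

# Hypothetical operational record retrieved for an "add a VLAN" request.
retrieved_state = {
    "vlan_number": 120,
    "vlan_name": "finance",
    "access_interface": "GigabitEthernet0/10",
    "trunk_interface": "GigabitEthernet0/1",
}

# The generic, LLM-produced template with its placeholders made explicit.
TEMPLATE = (
    "vlan {vlan_number}\n"
    " name {vlan_name}\n"
    "interface {access_interface}\n"
    " switchport mode access\n"
    " switchport access vlan {vlan_number}\n"
    "interface {trunk_interface}\n"
    " switchport mode trunk\n"
    " switchport trunk allowed vlan {vlan_number}"
)

def render_config(state: dict) -> str:
    # Combine the generic template with retrieved state to produce an
    # environment-specific execution plan.
    return TEMPLATE.format(**state)

print(render_config(retrieved_state))

The same retrieved state can also be checked against constraints before execution, for example rejecting a VLAN number that is already in use, which is part of what makes the orchestration stateful rather than fire-and-forget.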

This augmented model could be used for any use case, because it is grounded in the environment. As part of post-processing, the model would be refreshed with the operational data gathered from the environment, and it could continue to be updated as technology changes are introduced.
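One way to picture that post-processing step: each newly observed operational fact is appended to the retrieval corpus, so the next request is grounded in the current state of the environment. The function and field names here are illustrative only, not a description of any particular product.

from datetime import datetime, timezone

document_store: list[str] = []

def ingest_operational_data(record: dict) -> None:
    # Flatten new operational facts into a retrievable text document,
    # timestamped so stale state can later be aged out or superseded.
    stamp = datetime.now(timezone.utc).isoformat()
    facts = "; ".join(f"{k}={v}" for k, v in record.items())
    document_store.append(f"[{stamp}] {facts}")

# After a workflow runs, feed the observed result back into the corpus.
ingest_operational_data({"vlan": 120, "status": "active", "trunk_interface": "GigabitEthernet0/1"})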

Bringing RAG Mainstream for Orchestral: Maestro and the Symphony Platform

Orchestral has been building its orchestration platform to be AI-ready, understanding that once tasks have been codified and data contextualized, advanced AI can provide autonomy to the system. Ingesting context-specific data allows the Base LLM to be enhanced into a Context-Augmented LLM. RAG algorithms allow Orchestral to leverage data from the environment to deploy AI’s advanced capabilities. The approach is not unlike those being designed for autonomous driving and flight, where control models use real-time data from sensors to navigate the vehicle.

Orchestral brings these advanced capabilities to enable stateful orchestration and empower autonomous infrastructure.

Conclusion

In this blog series, we have navigated through the intricate relationship between orchestration and data management, culminating in valuable insights into the integration of Generative AI into orchestration frameworks. Our journey from understanding the essence of orchestration as a data-centric process to exploring the potential of Generative AI has been enlightening.

Generative AI, as we have seen, brings a unique blend of creativity and automation to the table, but it’s not without its limitations. Its reliance on historical data and the general nature of its outputs make it less effective for tasks requiring high specificity and contextual understanding. However, the introduction of Retrieval-Augmented Generation (RAG) marks a significant leap forward. By enabling AI models to dynamically incorporate specific, external data sources, RAG addresses the critical challenge of generality in Generative AI, paving the way for more accurate and reliable applications in orchestration.

Orchestral’s incorporation of RAG into the Symphony Platform is a testament to the evolving landscape of data management and orchestration. Just as the Benzite resolution in Star Trek emphasizes problem-solving with ready solutions, Orchestral demonstrates how identifying and addressing the limitations of Generative AI can lead to innovative advancements. Our approach, akin to methodologies in autonomous vehicles, reflects a broader trend where real-time, context-sensitive data transforms the capabilities of AI systems.

As we conclude, it’s clear that the future of orchestration lies in the harmonious integration of advanced AI technologies like RAG with robust data management strategies. This synergy will not only enhance the precision and efficiency of orchestration tasks but also lay the groundwork for increasingly autonomous and intelligent infrastructural systems. The next wave of innovation in orchestration is set to revolutionize how enterprises manage and leverage their data, driving unprecedented levels of automation and efficiency. Join us and let’s take this journey together.

Learn more at: Orchestral.ai

1. Lewis, P., et al. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” 2020. https://arxiv.org/pdf/2005.11401.pdf