Gracenote Explores TV’s AI Future and ‘The Power of Grounding’


The growing presence of artificial intelligence (AI) in our daily lives has amplified consumer desires for personalization and immediacy. In the world of entertainment, people expect instant answers, tailored suggestions and the ability to act on what they see and hear. For some, that means finding the perfect movie for a specific mood; for others, it means finding the matchup between their favorite team and its biggest rival.


To deliver these experiences, publishers, platforms and services will increasingly leverage large language models (LLMs), which will become the default engines for next-generation entertainment environments, Gracenote says in a newly released report.


 

LLMs are probabilistic models, not databases, trained on exhaustive but finite data. Because of this fundamental design, LLMs synthesize rather than retrieve information, making them prone to 'hallucinations': plausible-looking but incorrect responses. To ensure that LLM responses are accurate, relevant and trustworthy, LLMs must be connected to supplemental, real-world knowledge sources. This process, known as 'grounding,' reduces errors, enriches results and provides contextual relevance. There are two primary methods for grounding LLMs: retrieval-augmented generation (RAG) and the Model Context Protocol (MCP).

RAG and MCP each address the limitations of LLMs, but they approach the issue in fundamentally different ways.

At a very high level:

With RAG, the original prompt is enriched with relevant data drawn from an external knowledge base of documents. The LLM then processes this enriched prompt, so the accuracy of its response reflects the quality of the external knowledge augmentation.

MCP is an open-source protocol that standardizes how applications provide context to LLMs. As a universal interface, sometimes described as the 'USB-C for AI,' MCP connects LLMs with external data sources and logic, eliminating the need for custom code and one-off integrations. Here, an LLM is connected to one or more MCP servers that facilitate real-time validation for incoming queries.

RAG: the document retrieval expert

Of the two techniques, RAG has the longer history. With roots in early information retrieval, RAG was first discussed in 2020 as a process for dynamically connecting LLMs to external knowledge repositories.

Simply put, RAG allows an LLM to refer to information outside its training data before responding to a user query. With RAG, LLMs don't need to be constantly retrained to stay current. Instead, periodic updates to the external knowledge base keep an LLM's responses current and relevant.

While RAG dramatically improves the accuracy and relevance of LLM outputs, it does have some limitations:

  1. RAG is best employed with unstructured, document-based information retrieval.
  2. RAG enriches the prompt before the LLM begins processing, so the LLM cannot employ its internal logic to source relevant information.
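The prompt-enrichment step described above can be sketched in a few lines. This is an illustrative toy, not a real pipeline: the keyword-overlap retriever stands in for a production vector search, and no actual LLM is called, since the point is only to show that the context is attached to the prompt *before* the model runs.

```python
# Toy sketch of RAG's "enrich the prompt before the LLM sees it" step.
# The retriever and document store are illustrative assumptions, not a real API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Enrich the original prompt with retrieved context; the LLM processes the result."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The series premiered in 2019 and ran for three seasons.",
    "The lead actor also starred in an unrelated film.",
]
prompt = build_grounded_prompt("When did the series premiere?", docs)
print(prompt)
```

Note that the LLM never chooses what to look up: retrieval happens up front, which is exactly the second limitation listed above.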

MCP: The real-time action enabler

Made available in late 2024 as an open-source protocol, MCP solves the challenge of developing custom integrations between data sources and AI models. As a universal connector, MCP creates an interface between an LLM and external data sources. With a low-latency API and uniform, abstracted interface, MCP allows LLMs to get real-time, current information from one or more domain-specific services, and permits the LLMs to invoke specific tools that allow it to provide the best possible answer.

With MCP, LLMs can be plugged into many data sources seamlessly, enabling real-time data access and actions. Given this architecture, MCP grounds its responses with external knowledge bases after a user query, making it ideal for scenarios requiring the most up-to-date information available, such as sports scores or stock price changes.
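MCP is built on JSON-RPC 2.0, and a tool invocation is just a structured message from the client to an MCP server. The sketch below builds one such `tools/call` request; the tool name `get_live_score` and its arguments are hypothetical examples, not part of any real server.

```python
import json

# Simplified sketch of an MCP-style tool invocation. MCP messages follow
# JSON-RPC 2.0; the tool name and arguments here are hypothetical.

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request an MCP client would send to a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "get_live_score", {"league": "NBA", "team": "Bulls"})
print(msg)
```

Because every server speaks this same message shape, an LLM client can be pointed at new data sources without bespoke integration work, which is the "universal connector" property described above.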

MCP is the right tech for tomorrow’s entertainment

RAG and MCP are designed to do the same thing: enrich LLM responses with external context to reduce hallucinations and provide contextual relevance. However, they excel at different data types, offer different levels of sophistication, and execute this task in fundamentally different ways.

RAG is best for unstructured, document-based data retrieval, whereas MCP is optimized for structured, real-time data access. While LLMs provide dramatically improved search and discovery experiences compared with traditional tech infrastructures, MCP is the ideal grounding solution for content-first experiences that leverage structured data and rely on reasoning best provided outside the LLM itself.

Data currency is key here. Static document files may satisfy a user query about how a specific movie or TV show was developed, but they can't help users identify when or where that show or movie will be broadcast. LLM training data is fixed in time, so grounding provides LLMs not only with correct data but with timely, up-to-date data, giving them access to the real world outside their 'knowledge-locked' models.

But program availability is just the tip of the iceberg. MCP unlocks a new era of entertainment experience because of its ability to create extremely rich and personalized experiences. Here’s an example:

  • A Connected TV provider wants to create a daily rail of recommended movies based on a user's location and current weather.
  • The provider connects its foundational LLM to two MCP servers: one that returns the weather for a location and one that returns the content ID for a movie and year combination.
  • When the user logs on for the first time that day, the provider asks the LLM to return the top 10 films by popularity based on the weather at the user's IP address.
  • The LLM uses the weather MCP server to determine the current weather.
  • The LLM then curates a list of films featuring that weather, based only on the knowledge in the foundational model.
  • The LLM queries the entertainment metadata MCP server, which returns the content ID for each recommended movie.
  • The movie recommendations are passed to the Connected TV provider and displayed to the user.
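The steps above can be sketched end to end. In this toy version the two MCP servers are stubbed as local functions, and the LLM's curation step is hard-coded; all names, return shapes and the single curated title are illustrative assumptions.

```python
# Sketch of the recommendation workflow above. The two "MCP servers" are
# stubbed as local functions; names and return shapes are illustrative.

def weather_server(location: str) -> str:
    """Stub for an MCP server that returns current weather for a location."""
    return "rain"

def metadata_server(title: str, year: int) -> str:
    """Stub for an MCP server that maps a movie/year pair to a content ID."""
    return f"cid-{title.lower().replace(' ', '-')}-{year}"

def recommend(location: str) -> list[dict]:
    # Step 1: resolve the current weather via the weather MCP server.
    weather = weather_server(location)
    # Step 2: the LLM would curate weather-themed titles from its own
    # foundational knowledge; hard-coded here for illustration.
    curated = {"rain": [("Singin' in the Rain", 1952)]}.get(weather, [])
    # Step 3: resolve each pick to a content ID via the metadata MCP server.
    return [{"title": t, "content_id": metadata_server(t, y)} for t, y in curated]

rail = recommend("203.0.113.7")
print(rail)
```

The key design point is the division of labor: the LLM supplies reasoning and curation, while each MCP server supplies one narrow piece of current, structured, real-world data.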

The use of LLMs will become transformative for audiences, but without grounding, LLMs are restricted to unverified, time-locked responses. Grounded LLMs will allow platforms and services to break from the limitations of traditional search infrastructures, while opening the door to an expansive range of sophisticated queries that will generate more topical and relevant responses.

To learn more, download Gracenote’s MCP Server white paper.

