DeepMind’s Retrieval-Enhanced Transformer (Retro) improves auto-regressive language models by conditioning each prediction on document chunks retrieved from a large corpus. Retrieving from a 2-trillion-token database, it achieves performance comparable to GPT-3 on language modelling benchmarks despite using roughly 25× fewer parameters.
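To make the retrieval mechanism concrete, here is a minimal Python sketch of the lookup step. The `embed` function is a hashed random projection standing in for Retro's frozen BERT encoder, and the three-entry `database` stands in for the 2-trillion-token corpus; all names here are illustrative, not Retro's actual API.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a frozen encoder: deterministic random unit vector.
    (A real system would use a pre-trained model; in Retro, a frozen BERT.)
    This toy version shows the pipeline shape, not real semantic similarity."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Toy retrieval database: in Retro this holds ~2 trillion tokens,
# split into fixed-size chunks and indexed for fast nearest-neighbour search.
database = [
    "The Eiffel Tower is located in Paris, France.",
    "Photosynthesis converts sunlight into chemical energy.",
    "GPT-3 is a 175-billion-parameter language model.",
]
db_embeddings = np.stack([embed(chunk) for chunk in database])

def retrieve(query_chunk: str, k: int = 2) -> list[str]:
    """Return the k nearest database chunks by cosine similarity."""
    q = embed(query_chunk)
    scores = db_embeddings @ q  # vectors are unit-norm, so dot = cosine
    top = np.argsort(scores)[::-1][:k]
    return [database[i] for i in top]

# The generating model then attends to the retrieved neighbours
# (in Retro, via chunked cross-attention); only retrieval is shown here.
print(retrieve("Where is the Eiffel Tower?"))
```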
The main benefits to the user include:
- Improved accuracy in text generation, grounded in retrieved evidence
- Access to a retrieval database of 2 trillion tokens
- Results comparable to GPT-3 at a fraction of the parameter count
- Ability to condition generation on retrieved document chunks
The service can be used for tasks that require accurate text generation, such as natural language processing, artificial intelligence development, and data analysis. Because it matches GPT-3-level performance, it is also well suited to tasks such as summarization, question answering, and dialogue generation.