Use Artificial Intelligence (AI) to interact with Official level documents from your own document set

You may summarise documents, ask questions related to one or more documents, and chat directly with the AI to assist you in your work. You can use documents up to, and including, the Official classification.

Upload

Easily upload a wide range of document formats into your personal Redbox.

Chat

You can communicate with your Redbox and ask questions of the documents contained within. Use Redbox to unlock deeper insights in the various documents you have uploaded, pulling together summaries of individual or combined documents.

Secure

Redbox is built in our secure and private cloud, which enables you to upload documents up to, and including, the Official classification into your Redbox.


Frequently Asked Questions

Responses to these FAQs about Redbox were generated using Redbox.

What is Redbox?

Redbox is a new kind of tool for civil servants which uses Large Language Models (LLMs) to process text-based information. Redbox is currently only available to a small number of DBT staff as the department assesses its usefulness.

What makes Redbox particularly useful is that it can look at any documents you upload and work with you on them.

What is Generative AI?

Generative AI is a fairly new kind of algorithmic machine learning based on 'neural architecture'. This architecture allows machine learning models (called large language models in this case) to be trained on large amounts of information, and then converse with a user by 'generating' answers to questions or assisting with text-based processing tasks.

Do I need technical skills to use Redbox?

Not at all: the interface for Redbox comprises a document upload feature and a chat feature. No technical skills are required, but an understanding of what large language models are and what they can do makes an enormous difference to users when interacting with them. The principal skill for using Redbox well is 'prompt engineering', which simply means writing clear natural-language instructions so the tool responds the way you intend. There are some great Gov.uk short courses on generative AI you may consider taking: See the courses here.

What can I and can’t I share with Redbox?

For the moment, you can share text and documents up to the Official classification level. Take care to check before sharing text or documents with Redbox that they do not exceed this designation.

Alongside this, do not share personal data with Redbox. For more information on what constitutes personal data, watch this short video. If you are unsure whether you are processing personal data, you should contact the Data Protection Team to check.

What is a ‘hallucination’?

In the context of LLMs, a hallucination refers to the generation of information that is incorrect, nonsensical, or entirely fabricated, despite being presented in a credible manner. This can happen when the model produces details, facts, or assertions that are not grounded in reality or the data it was trained on, potentially leading to misinformation or misunderstanding.

What are the main things Redbox can do?

Redbox allows you to chat with an LLM, summarise and interact with documents (up to around 140 pages), and even search within documents. Here are some helpful insights into those:

Chat

By using @chat at the beginning of any message you send, you will speak directly with the LLM, gaining access to its 'world model', which is everything it was trained on.

Top tip: when you chat with the LLM, all the contents of your current chat are considered by the LLM, and you can switch between chat and other interaction types (more below) in the same chat.
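For example, a direct chat prompt might look like this (the topic here is purely illustrative):

@chat draft three bullet points I could use to explain the difference between generative AI and a search engine to a non-technical colleague.

Because @chat draws only on the LLM's training, no document needs to be selected for this kind of request.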

Summarise

By using @summarise at the beginning of any message you send, and ensuring that a document is selected in the left-hand panel, you can get the LLM to look at an entire document and summarise it for you.

Top tip: like using @chat, using the @summarise feature is very flexible, and you can instruct the LLM to summarise from a particular point of view, or follow a particular kind of format in its response. Here's an example prompt - imagine you have selected an HR Policy document on taking leave at DBT:

@summarise this HR policy from the perspective of a line manager, as my line report has asked to take extended leave and I want to know what steps I need to take.

In this example, Redbox will take those instructions into account and generate a response that best fits your requirements.

Search

By using @search at the beginning of any message you send, with any number of documents selected in the side panel, Redbox will search within them for results that relate to the question or instruction given.

Top tip: Search is great at finding exact parts of documents which relate to a given theme or topic, and the results will also show you the exact text it found from the documents so you can be sure of accuracy.
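As with @summarise, an example prompt can help. Imagine you have selected several policy documents in the side panel (this scenario is illustrative):

@search find every section that mentions the approvals needed before publishing external communications.

Redbox will return the passages that match, alongside the exact text it found, so you can verify each result against the source documents.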

What can’t Redbox do?

Redbox is limited to the LLM's training data and the documents you upload. It cannot currently search the internet, which makes it unsuitable for retrieving reliable real-time information.

Redbox is not well-suited for tasks that involve precise numerical calculations, counting, or data tabulation, such as those often required in spreadsheets or financial analyses. It may generate incorrect numerical results or misinterpret mathematical expressions, leading to unreliable outputs. Additionally, Redbox struggles with organising and manipulating structured data effectively, as it is primarily designed for language generation rather than data processing.

This can make Redbox inadequate for tasks requiring accurate data comparison, aggregation, and spreadsheet functions, where human oversight and intervention remain essential for ensuring accuracy and reliability.

Why can Redbox get things wrong?

Redbox can get things wrong for several reasons, primarily due to its reliance on patterns found in the training data of the LLM rather than true understanding or reasoning. Redbox may generate inaccurate information because it lacks context awareness, leading to misinterpretations, especially in complex or nuanced scenarios. Additionally, biases inherent in the LLM training data can result in the propagation of errors, stereotypes, or nonsensical outputs, further compromising the accuracy of Redbox's responses. For more information, read the FAQ section on 'hallucinations' above.