Large Language Models and Where to Use Them: Part 1
Over the past few years, large language models (LLMs) have evolved from emerging to mainstream technology. In this blog post, we'll explore some of the most common natural language processing (NLP) use cases that they can address. This is part one of a two-part series.

A large language model is a type of machine learning model that can handle a wide range of NLP use cases. But due to their versatility, LLMs can be a bit overwhelming for newcomers trying to understand when and where to use these models.
In this blog series, we’ll simplify LLMs by mapping out the seven broad categories of use cases where you can apply them, with examples from Cohere's LLM platform. Hopefully, this can serve as a starting point as you begin working with the Cohere API, or even seed some ideas for the next thing you want to build.
The seven use case categories are:
- Generate
- Summarize
- Rewrite
- Extract
- Search/Similarity
- Cluster
- Classify
Because of the general-purpose nature of LLMs, the range of use cases and relevant industries within each category is extremely wide. This post will not attempt to delve too deeply into each, but it will provide you with enough ideas and examples to help you start experimenting.

1. Generate

Probably the first thing that comes to mind when talking about LLMs is their ability to generate original and coherent text. And that’s what this use case category is all about. LLMs are pre-trained using a huge collection of text gathered from a variety of sources. This means that they are able to capture the patterns of how language is used and how humans write.
Getting the best out of these generation models is becoming a field of study in its own right, called prompt engineering. In fact, the first four use case categories on our list all leverage prompt engineering in their own ways.
More on the other three later. Prompt engineering is a vast topic, but at a very high level, the idea is to provide the model with a small amount of contextual information as a cue for generating a specific kind of text.
One way to set up the context is to write a few lines of a passage for the model to continue. Imagine writing an essay or marketing copy where you would begin with the first few sentences about a topic, and then have the model complete the paragraph or even the whole piece.
Another way is to write a few example patterns that indicate the type of text we want the model to generate. This approach is interesting because of the many ways we can shape the model's output and the range of applications this enables.
Let’s take one example. The goal here is to have the model generate the first paragraph of a blog post. First, we prepare a short line of context about what we’d like the model to write. Then, we prepare two examples — each containing the blog’s title, its audience, the tone of voice, and the matching paragraph.
Finally, we feed this prompt, together with the information for the new blog, to the model, which duly generates text matching the context, as seen below.
Completion:
You can test it out by accessing the saved preset.
In fact, the excerpt you read at the beginning of this blog was generated using this preset!
That was just one example, but how we prompt a model is limited only by our creativity. Here are some other examples:
- Writing product descriptions, given the product name and keywords
- Writing chatbot/conversational AI responses
- Developing a question-answering interface
- Writing emails, given the purpose/command
- Writing headlines and paragraphs
2. Summarize

The second use case category, which also leverages prompt engineering, is text summarization. Think about the amount of text that we deal with on a typical day, such as reports, articles, meeting notes, emails, transcripts, and so on. We can have an LLM summarize a piece of text by prompting it with a few examples of a full document and its summary.
The following is an example of article summarization, where we prepare the prompt to contain the full passage of an article and its one-line summary.
Prompt:
Completion:
You can test it out by accessing the saved preset.
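A summarization prompt like the one described above can be assembled the same way. The passages, labels, and separator here are invented for illustration and are not the actual preset; each example pairs a full passage with a one-line summary, and the final passage is left open for the model to complete.

```python
def build_summary_prompt(examples, passage):
    """Few-shot summarization prompt: (passage, summary) pairs, then the new passage."""
    blocks = [f"Passage: {text}\nSummary: {summary}" for text, summary in examples]
    blocks.append(f"Passage: {passage}\nSummary:")  # the model fills in this summary
    return "\n--\n".join(blocks)

prompt = build_summary_prompt(
    [("The quarterly report shows revenue grew 12% while costs held flat...",
      "Revenue grew 12% with flat costs."),
     ("The meeting covered hiring plans for Q3, including two new roles...",
      "Q3 hiring will add two new roles.")],
    "The support team resolved 95% of tickets within one business day...",
)
print(prompt.endswith("Summary:"))  # True
```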
Here are some other example documents where LLM summarization will be useful:
- Customer support chats
- Environmental, Social, and Governance (ESG) reports
- Earnings calls
- Paper abstracts
- Dialogues and transcripts
3. Rewrite

Another flavor of prompt engineering is text rewriting. This is another of those tasks that we do every day and spend a lot of time on; if we could automate it, we would be free to work on more creative tasks.
Rewriting text can mean different things and take different forms, but one common example is text correction. The following is the task of correcting the spelling and grammar in voice-to-text transcriptions. We prepare the prompt with a short bit of context about the task, followed by examples of incorrect and corrected transcriptions.
Prompt:
Completion:
You can test it out by accessing the saved preset.
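A correction prompt following the pattern described above might look like the sketch below. The transcription examples are invented, and the real preset's wording may differ; the idea is simply to pair incorrect and corrected versions, then leave the last one open.

```python
def build_correction_prompt(examples, transcription):
    """Few-shot correction prompt: task context, then incorrect/corrected pairs."""
    blocks = ["Correct the spelling and grammar in these voice-to-text transcriptions."]
    for incorrect, corrected in examples:
        blocks.append(f"Incorrect: {incorrect}\nCorrected: {corrected}")
    blocks.append(f"Incorrect: {transcription}\nCorrected:")  # model completes this
    return "\n--\n".join(blocks)

prompt = build_correction_prompt(
    [("i cant make the meeting on tuseday", "I can't make the meeting on Tuesday."),
     ("please send me teh report asap", "Please send me the report ASAP.")],
    "lets sync up befor lunch tomorow",
)
print(prompt.endswith("Corrected:"))  # True
```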
Here are some other example use cases for using an LLM to rewrite text:
- Paraphrase a piece of text in a different voice
- Build a spell checker that corrects text capitalization
- Rephrase chatbot responses
- Redact personally identifiable information
- Turn a complex piece of text into a digestible form
4. Extract

Text extraction is another use case category that can leverage a generation LLM. The idea is to take a long piece of text and extract only the key information or keywords from it.
The following is the task of extracting relevant information from contracts. We prepare the prompt with a short bit of context about the task, followed by a couple of example contracts and the extracted text.
Prompt:
Completion:
You can test it out by accessing the saved preset.
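An extraction prompt following the pattern above can be sketched as below. The contract snippets and the `Extracted text:` label are illustrative assumptions; a small parser is also shown, since asking the model for a consistent `Key: value; Key: value` format makes its completions easy to consume downstream.

```python
def build_extraction_prompt(examples, contract):
    """Few-shot extraction prompt: task context, then contract/extraction pairs."""
    blocks = ["Extract the key information from the following contracts."]
    for text, extracted in examples:
        blocks.append(f"Contract: {text}\nExtracted text: {extracted}")
    blocks.append(f"Contract: {contract}\nExtracted text:")  # model completes this
    return "\n--\n".join(blocks)

prompt = build_extraction_prompt(
    [("This agreement between Acme Corp and Widget Inc begins on Jan 1, 2023 "
      "and runs for 12 months.",
      "Parties: Acme Corp and Widget Inc; Start: Jan 1, 2023; Term: 12 months"),
     ("Northwind agrees to pay Contoso $5,000 per month for hosting services.",
      "Payer: Northwind; Payee: Contoso; Fee: $5,000/month")],
    "Fabrikam will supply 200 units per quarter to Tailspin starting Q2 2023.",
)

def parse_extraction(completion):
    """Split a 'Key: value; Key: value' completion into a dict."""
    return {k.strip(): v.strip()
            for k, v in (pair.split(":", 1) for pair in completion.split(";"))}

fields = parse_extraction("Supplier: Fabrikam; Buyer: Tailspin; Volume: 200 units/quarter")
print(fields["Supplier"])  # Fabrikam
```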
Some other use cases in this category include:
- Extract named entities from a document
- Extract keywords and keyphrases from articles
- Flag personally identifiable information
- Extract supplier and contract terms
- Create tags for blogs
Conclusion
In part two of this series, we’ll continue our exploration of the remaining three use case categories (Search/Similarity, Cluster, and Classify). We’ll also explore how LLM APIs can help address more complex use cases. The world is complex, and a lot of problems can only be tackled by piecing multiple NLP models together. We’ll look at some examples of how we can quickly snap together a combination of API endpoints in order to build more complete solutions.
