Shared Frameworks

Every team should have a game plan for how to use prompt engineering as part of the design process. The following guidelines help members of all disciplines use an LLM effectively to streamline shared work. If you need help establishing your team’s frameworks, reach out to us.

Develop guiding principles

Principles keep everyone on the same page and can steer important decisions. Focus less on areas of consensus (e.g., “be helpful, trustworthy, and context-aware”) and more on areas of ambiguity:

  • What is your protocol for escalating to a human agent?
  • What guardrails should be put into place to prevent costly hallucinations?
  • How human do you want to sound?
  • How do you disclose content created by generative AI?
  • How does tone shift based on context?

Establish universal boilerplates

Good prompt design starts by providing the LLM with plenty of context (more on that below). Standardize reference content like your brand overview and project context so LLM output is consistent no matter who’s writing a prompt.
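For example, a reusable brand boilerplate (the company and details here are hypothetical) might read:

Our company is Acme Co., a project-management tool for small creative teams. Our brand is practical, encouraging, and plainspoken; we avoid jargon and never overpromise. Treat this as background for everything you write in this thread.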

Establish voice patterns

Similar to context boilerplates, standardized voice patterns (VPs) maintain style and voice consistency across prompt outputs. VPs (sometimes called voice paragraphs) are often a composite of voice guidelines, hypothetical personas, comparisons to public figures, and reference content. (Tip: you can create multiple voice patterns to respond differently to different contexts.)

As a shorthand, you can name a VP for quick use in recurring situations. However, make sure to test and refine your VPs until you get the desired output (see Prompt Evaluation below) before deploying them across your team.
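As an illustration, a named VP (the persona name and rules here are hypothetical) might read:

VP “Support-Friendly”: Write like a patient, knowledgeable support agent. Warm but efficient. Second person, short sentences, no slang, no exclamation points. When in doubt, point the reader to the relevant help-center article.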

Audit and refine inputs

Your LLM is only as good as its data set. Carefully curate what goes in and what doesn’t, and make a plan for maintaining your data. There are a number of platforms that can help ingest and incorporate company content and knowledge bases. Remember, the LLM doesn’t know what it doesn’t know, so audit the knowledge base frequently to identify gaps and document a process for incorporating new content and use cases.

A note on APIs: APIs will dictate how tailored your output can be, so work closely with engineering teams to make the right content available to your LLM and prompt designers.

Prompt Syntax

#Context + #Criteria + #Instructions

#Context

Start prompts and prompt threads by giving the LLM some background. Some things to include are:

## Reference content like project background and universal boilerplates from your framework

## Target audience, cluing the LLM in on how much detail is required and how to frame the response

#Criteria

Specify what you’re looking for, such as:

## Response format if you want the output in a certain structure (e.g., a summary in bullet points)

## Word count to help focus your response and conserve tokens

## Approach when you want the LLM to follow a particular writing style—like the inverted pyramid method—to summarize an article

## Goals that you’re using to measure a successful response

#Instructions

Specify desired output, including:

## Voice patterns and “act as” statements to transform the voice and tone of the response

## Desired action, such as: Summarize, Classify, Recommend, Explain, Analyze, Provide an example, Generate an image, etc.
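Put together, a prompt built on this syntax might look like the sketch below (the project details and VP name are hypothetical):

Context: You’re writing for Acme Co., a project-management tool for small creative teams. The audience is prospective customers who have never used the product.

Criteria: Respond with three bullet points, 60 words max, using the inverted pyramid method. A successful response lets a reader grasp the core benefit in one pass.

Instructions: Act as our “Support-Friendly” voice pattern and summarize the release notes below. ``` [release notes] ```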

Prompt Techniques

Learn to use the language of prompt engineering to get the most out of your responses:

#Role Play

Role play guides the LLM to provide output in a specific style or voice. In most general contexts, this can be initiated with an “Act as” statement (e.g., “Act as a pirate”). For more complex, recurring roles, you can use predefined voice patterns/paragraphs (see Shared Frameworks above) or create custom personas (“Meet Dave. Dave is a customer service agent who…”).
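For instance, a persona-based role-play prompt (the persona is hypothetical) could read:

Meet Dave. Dave is a customer service agent for a home-internet provider. He’s calm, apologizes for outages without assigning blame, and always ends with a concrete next step. Act as Dave and respond to the following message. ``` [customer message] ```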

#Headers

Headers are best used to indicate information hierarchy:

# Heading 1

## Heading 2

### Heading 3

#Bullets

Bullets group related ideas into a list:

- item 1

- item 2

#Separation

  • Triple backticks (```) train the LLM’s attention on a particular block of text. This helps set clear instructions apart from the content they act on (e.g., “Summarize this: ```[text]```”), and can be useful when giving the LLM examples or multiple options to choose from.

Summarize the text into a single sentence.

``` [text] ```

  • Horizontal rules (---) indicate a transition in topics and can be useful in more involved prompts.
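For example, a horizontal rule can separate reference material from the task itself (the task is hypothetical):

[project background and universal boilerplate]

---

Now write three headline options for the landing page described above.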

#Priming

Priming (also known as Scaffolding) engages the LLM on a topic before prompting for the desired output. This may include asking “warm-up” questions to gauge the LLM’s knowledge, or sharing a snippet of reference material that may not have been included in the LLM’s training data.
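A simple priming exchange (the topic is hypothetical) might start with a warm-up question, then supply whatever is missing:

What do you know about accessibility requirements for mobile banking apps?

[review the answer, then paste in your own reference material]

Using the guidelines above, draft onboarding copy for the account-setup screen.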

#Prompt Chaining

Prompt chaining accomplishes more complex tasks by linking multiple prompts together, feeding the output of one prompt into the next.
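A short chain (the steps are hypothetical) might look like:

Prompt 1: Summarize this interview transcript into five key findings. ``` [transcript] ```

Prompt 2: Turn those five findings into user-story statements.

Prompt 3: Rank the user stories by likely impact and explain your reasoning.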

#Few-Shot

Few-shot prompting demonstrates a few instances of the correct behavior before asking the LLM to complete a specified task. It’s particularly helpful when the task is most easily explained by “show,” not tell. This is in contrast to zero-shot instructions, which provide no examples.
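A few-shot sketch for a simple classification task (the categories and examples are hypothetical):

Classify each support message as Billing, Technical, or Other.

Message: “I was charged twice this month.” → Billing

Message: “The app crashes when I upload a photo.” → Technical

Message: “Love the new dashboard!” → Other

Message: ``` [new message] ``` →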

#Prompt Evaluation

Prompt evaluation asks the LLM to evaluate how successfully the output achieved the intended task. You can compare the output to a manually labeled “correct” response or ask the LLM to evaluate the prompt based on specific criteria. The evaluation criteria can be objective (e.g., accuracy) or subjective (e.g., a quality rating).
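An evaluation prompt (the criteria and VP name are hypothetical) could read:

Here is a prompt and the response it produced. ``` [prompt] ``` ``` [response] ``` Rate the response from 1–5 on factual accuracy and on adherence to our “Support-Friendly” voice pattern, and explain each rating in one sentence.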

#Edging

Edging tests the LLM for break points and boundaries around a topic. This helps identify where guardrails need to be put in place to prevent hallucinations.
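For example, you might deliberately probe just outside the knowledge base (the scenario is hypothetical):

Act as our support assistant. A customer asks about the return policy for a product we discontinued last year, which no longer appears in the knowledge base. How do you respond?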