Prompts


Prompts are templates that can be reused across different use cases and filled with additional information as needed (e.g., the topic for which a particular text is to be created).
Prompts can be shared and can contain variables.

When creating a prompt, enter a name and, optionally, a description. The description is also displayed as additional information when the variables are queried.

Variables are written in the prompt with double curly braces {{}}; a single prompt can contain multiple variables.
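To illustrate how such placeholders work, here is a minimal sketch of the substitution step in Python (the tool's actual implementation is not documented here; the function name `fill_prompt` is hypothetical):

```python
import re

def fill_prompt(template: str, values: dict) -> str:
    """Replace each {{variable}} placeholder with its value."""
    def replace(match: re.Match) -> str:
        name = match.group(1).strip()  # variable name between the braces
        return str(values[name])
    return re.sub(r"\{\{(.*?)\}\}", replace, template)

# A template with two variables, filled in before sending:
template = "Write a {{format}} about {{topic}}."
print(fill_prompt(template, {"format": "tweet", "topic": "renewable energy"}))
# → Write a tweet about renewable energy.
```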

You can also use a prompt immediately after creating it by clicking “Save + Use”.

Using Prompts

Prompts can be selected from your templates by typing a slash “/” in the chat bar:

Clicking “Save + Use” opens a popup where the variables can be filled in:

After the variables have been entered, the complete prompt is inserted into the chat bar and can be sent:


What Makes a Good Prompt
A good prompt should be clear, precise, and detailed enough to give the language model a clear direction.
It is important that the prompt provides enough context for the model to generate the desired type of response.
Vague or overly general prompts can lead to unspecific or unexpected responses.
A good prompt should also specify the desired form of the response where relevant. For example, if you want a list, a tweet, a table, or another specific type of text, indicate this in the prompt.
The prompt should end with a question or a concrete call to action so that the model generates a complete and detailed response.
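The guidelines above can be sketched as a small helper that assembles a prompt from context, task, output format, and a closing call to action (an illustrative sketch only; `build_prompt` and its parameters are hypothetical, not part of the product):

```python
def build_prompt(context: str, task: str, output_format: str, call_to_action: str) -> str:
    """Assemble a prompt following the guidelines above: context first,
    then the task and the desired output format, ending with a concrete
    call to action."""
    return "\n".join([
        context,                              # enough context for the model
        task,                                 # the actual task
        f"Desired format: {output_format}",   # template for the response
        call_to_action,                       # concrete closing instruction
    ])

prompt = build_prompt(
    context="You are a social media editor for a tech blog.",
    task="Summarize the article below about battery recycling.",
    output_format="a single tweet of at most 280 characters",
    call_to_action="Write the tweet now.",
)
print(prompt)
```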

Model Selection
Depending on which model has been booked, each model offers different advantages:

GPT-3
– Prompts: For GPT-3, it is important to use clear and precise prompts that provide enough context. It can be helpful to specify the desired output format and to end the prompt with a question or a call to action.
– Context: GPT-3 can sometimes have difficulty maintaining context over long conversations.

GPT-3.5 Turbo
– Prompts: GPT-3.5 Turbo can work with shorter prompts and often responds better to complex or multipart prompts.
– Context: GPT-3.5 Turbo has an improved ability to follow instructions and maintain context.
– Fine-tuning: GPT-3.5 Turbo offers the option of fine-tuning, which allows the model to be controlled more effectively and to maintain consistent response formats.

GPT-4
– Prompts: GPT-4 processes prompts in conversations even more precisely, follows complex instructions better, and retains deeper context.
– Context: GPT-4 maintains context in conversations and prompts more reliably and thus shows a better understanding of it.