Prompts


Prompts are reusable templates that can be used across the use cases and filled with additional information depending on the use case (e.g. the topic for which a particular text is to be created).

Prompts can be shared and can contain variables.

When creating a prompt, enter a name and, optionally, a description. The description is also displayed as additional information when the variables are queried.

Variables are entered in the respective prompt using double curly brackets, e.g. {{topic}}. A single prompt can also contain several variables.
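The product fills these placeholders internally when the prompt is used; as a rough sketch of the same substitution idea, here is a minimal Python version (the function name and regex are illustrative assumptions, not the product's implementation):

```python
import re

def fill_prompt(template, values):
    # Replace each {{name}} placeholder with the matching value from the dict.
    # Assumes placeholder names contain only word characters, with no spaces
    # inside the braces.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

filled = fill_prompt(
    "Write a tweet about {{topic}} in a {{tone}} tone.",
    {"topic": "solar energy", "tone": "friendly"},
)
print(filled)  # Write a tweet about solar energy in a friendly tone.
```

A prompt with several variables, as described above, is filled in one pass: each placeholder is looked up independently.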

It is also possible to use the prompt immediately after creation by clicking on Save + Use.

Using prompts

Prompts are entered using the / (slash) key, and prompts can be called from the templates via the chatbar:

After clicking on “Save + Use prompt”, a popup opens in which the variables can be filled in:

After entering the variables, the entire prompt is inserted into the chatbar and can be sent:

What makes a good prompt

When using language models, a good prompt should be clear, specific and detailed enough to give the model precise direction.

It is important that the prompt provides enough context for the model to generate the desired type of response.

Vague or overly general prompts can lead to unspecific or unexpected answers.

A good prompt should also specify the desired form of the response, if relevant. For example, if you want a list, tweet, table or other specific type of text, you should state this in your prompt from the outset.

The prompt should end with a question or a specific call to action to prompt the model to generate a complete and detailed response.
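The guidelines above (provide context, specify the output format, end with a call to action) can be sketched as a small helper; the function and field names here are hypothetical and only illustrate the structure:

```python
def build_prompt(context, output_format, task):
    # Assemble a prompt following the recommended structure:
    # context first, then the desired format, then a closing call to action.
    return (
        f"Context: {context}\n"
        f"Desired format: {output_format}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    context="Our company sells refurbished laptops.",
    output_format="A bulleted list of five points.",
    task="List the main benefits of buying refurbished hardware.",
)
print(prompt)
```

Keeping these three parts explicit makes prompts easier to reuse as templates, since each part maps naturally onto a variable.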

 

Selecting the model

Depending on which model has been booked, the individual models offer different advantages:

GPT-3

Prompts: For GPT-3, it is important to use clear and concise prompts that provide enough context. It can be helpful to specify the desired output format and end the prompt with a question or call to action.
Context: GPT-3 can sometimes struggle to retain context over long conversations.

GPT-3.5 Turbo

Prompts: GPT-3.5 Turbo can work with shorter prompts and often responds better to complex or multi-part prompts.
Context: GPT-3.5 Turbo has an improved ability to follow instructions and retain context.
Fine-tuning: GPT-3.5 Turbo offers fine-tuning capabilities, which allow you to control the model more effectively and maintain consistent response formats.

GPT-4

Prompts: GPT-4 processes contextualized prompts in conversations even more precisely and is better able to follow complex instructions and retain deeper context.
Context: GPT-4 shows stronger contextualization and thus a better understanding of context in conversations and prompts.