
FAQs and error handling

FAQs

Content generation stops halfway through – why?

The models can generate a maximum number of text tokens per chat. This limit is currently set to 1000 tokens, but will be raised as new models become available. To continue generation, simply type “continue” (or the German “weiter”) in the chat bar; the model will usually resume the generation. The two responses must then be merged manually.

Why are certain prompts not editable?

Shared prompts can only be edited by administrators, so that users working on the same content at the same time do not overwrite each other’s changes. To edit a shared prompt, copy it (or the chat history) into your own private folder, where you can work on it undisturbed.

In which language is the interface available?

The CompanyGPT interface is currently available in German (default) and English.

 

How can I improve the quality of the generated content?

The quality of the generated content can be improved by using clear and precise wording in the prompt. If you add specific details, CompanyGPT can deliver better results. This also depends on the model used. Experiment with different formulations to achieve the desired result. See also the menu item “Prompting techniques”.

How safe is it to use CompanyGPT?

Security is our top priority. Hosting takes place using a European cloud solution.

What data is stored from my chats?

Chat entries and responses are stored in 506’s own database; access is reserved exclusively for the respective customer and 506, purely for administrative purposes. Since users log in with personal accounts, personal data is also stored via the user administration.

What happens if I lose my internet connection during a chat?

If the internet connection is interrupted during a chat, your current interactions will be lost. It is recommended that you regularly save your chat history so that you do not lose your progress.

 

Error handling for GPT-3.5

When using GPT-3.5, an advanced language model from OpenAI, various errors can occur.

Error handling is an important aspect of working with advanced technologies such as GPT-3.5. Making users aware of common errors and how to fix them can ensure a smoother and more efficient experience with the model. Some of the most important errors and their solutions are therefore outlined below:

 

Poor internet connection

Problem: A stable internet connection is required for communication with the GPT-3.5 API. A poor connection can lead to delays or interruptions.

Solution: Check your internet connection and try to establish a more stable one, e.g. by switching to a wired network or moving closer to the Wi-Fi router. If problems recur, it may help to contact your internet provider.
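As a quick way to automate the first step, a minimal connectivity check can be sketched in Python; the DNS host and port below are illustrative defaults, not part of CompanyGPT, and any reliable endpoint works:

```python
import socket

def has_internet(host="8.8.8.8", port=53, timeout=3):
    """Rough connectivity check: try to open a TCP connection to a
    public DNS server. Host and port are illustrative choices."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts and unreachable networks
        return False
```

If this returns False while other devices on the same network are online, the problem is likely local to your machine.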

 

Model does not give any answers

Problem: Sometimes the GPT-3.5 model does not return a response to a request.

Solution: Check that the request is formatted correctly and that all required parameters are set. Press the regenerate button and restart the output.

 

Token count exceeded

Problem: Each GPT-3.5 model has a maximum token limit (e.g. 4096 tokens for the davinci model). If a request exceeds this limit, an error is returned.

Solution: Reduce the length of your input or split it into smaller sections. Note that both the input and output tokens count against the total number of tokens.
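Splitting a long input into smaller sections can be sketched as follows. This is a rough illustration only: it approximates token counts by word count, since exact counts depend on the model’s tokenizer, and the words-per-token ratio is an assumption, not an official value.

```python
def split_into_chunks(text, max_tokens=4096, words_per_token=0.75):
    """Split text into chunks that roughly fit a token budget.

    Approximates tokens via whitespace words; ~0.75 words per token
    is a common rule of thumb for English, not an exact measure.
    """
    max_words = max(1, int(max_tokens * words_per_token))
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

Each chunk can then be sent as a separate request; remember that the model’s answers also consume tokens from the same budget.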

 

Timeout error

Problem: When communicating with the GPT-3.5 API, it can happen that the request takes too long and a timeout error occurs.

Solution: Make sure that your request is not too complex or too long. Restarting the conversation is recommended. If timeouts recur, resend the request at a later time; the cause may also be high server load on the cloud service (Azure).
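Resending after a timeout can be automated with a simple retry wrapper. The sketch below is generic: `request_fn`, the retry count, and the delay are illustrative placeholders, not part of any official CompanyGPT or OpenAI API.

```python
import time

def call_with_retry(request_fn, retries=3, delay_seconds=5):
    """Retry a request a few times before giving up.

    `request_fn` stands in for whatever function sends the chat
    request; retry count and delay are illustrative values.
    """
    for attempt in range(retries):
        try:
            return request_fn()
        except TimeoutError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay_seconds)
```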

 

Rate limiting

Problem: OpenAI sets limits on the number of requests that can be sent in a given time period. If this limit is exceeded, you will receive a rate limiting error.

Solution: Reduce the number of requests or spread them out over a longer period of time. In this case, consultation with colleagues could also help to reduce the number of simultaneous requests.
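Spreading requests over a longer period is commonly done with exponential backoff. The sketch below is a generic illustration in which `send_fn` is a hypothetical stand-in for the request call, and a rate-limit error is modeled as a plain `RuntimeError`; the attempt and delay values are assumptions.

```python
import time
import random

def send_with_backoff(send_fn, max_attempts=5, base_delay=1.0):
    """Retry with exponential backoff and jitter on rate-limit errors.

    `send_fn` is a placeholder for the actual request function; a
    rate-limit error is modeled here as RuntimeError.
    """
    for attempt in range(max_attempts):
        try:
            return send_fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus a small random jitter so that
            # simultaneous clients do not retry in lockstep.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

The jitter is a deliberate design choice: without it, several colleagues hitting the limit at once would all retry at the same moment and trigger it again.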

 

Server-side problems

Problem: Sometimes temporary server problems can occur, e.g. maintenance work or server outages.

Solution: In such cases, there is little you can do except wait. Check the OpenAI status page or support section for information on known issues.