
DesktopGPT uses OpenAI’s services to bring AI models to your desktop. Each model has a different cost weighting applied to the tokens consumed by each prompt. DesktopGPT currently supports three models:

  • GPT-3.5 Turbo

  • GPT-4

  • GPT-4 Turbo

Token usage is weighted against the cost incurred when prompting ChatGPT, using the following multipliers for each model:

  • GPT-3.5 Turbo, 1X

  • GPT-4 Turbo, 2.5X

  • GPT-4, 5X

This means that using GPT-4 costs 5x the tokens of GPT-3.5 Turbo. For example, a prompt that uses 2,000 tokens with GPT-3.5 Turbo would use 10,000 tokens to ask the same question with GPT-4.
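The following Python snippet is an illustrative sketch of this arithmetic only, not DesktopGPT’s actual implementation; the dictionary keys and function name are invented for the example, and the multipliers are taken from the list above.

    # Illustrative sketch only; the names below are invented for this example.
    MULTIPLIERS = {
        "GPT-3.5 Turbo": 1.0,
        "GPT-4 Turbo": 2.5,
        "GPT-4": 5.0,
    }

    def weighted_token_cost(raw_tokens: int, model: str) -> int:
        """Tokens deducted for a prompt that consumes raw_tokens on the API."""
        return int(raw_tokens * MULTIPLIERS[model])

    # A 2,000-token prompt costs 2,000 tokens on GPT-3.5 Turbo
    # but 10,000 tokens on GPT-4.
    assert weighted_token_cost(2000, "GPT-3.5 Turbo") == 2000
    assert weighted_token_cost(2000, "GPT-4") == 10000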

For most users, GPT-3.5 Turbo will provide sufficient results; we offer the other model options so that users can compare and contrast outputs if they are not satisfied with the base model.

To submit a prompt, a user must have a minimum of 2,000 tokens available. This is because the token count of the answer is not known ahead of submission – both the prompt and the answer incur token usage.
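As a minimal sketch of how such a check might look (the 2,000-token threshold is from this page; the function and variable names are hypothetical):

    MIN_TOKENS_TO_SUBMIT = 2000  # minimum balance required before a prompt can be sent

    def can_submit_prompt(available_tokens: int) -> bool:
        # Both the prompt and the answer consume tokens, and the answer's
        # length is unknown until it is generated, so a buffer is required.
        return available_tokens >= MIN_TOKENS_TO_SUBMIT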

Tokens that have been used are non-refundable. Once a prompt has been submitted to any of the available models, it is not possible to recover any tokens for any reason.

Stardock is evaluating token usage during the beta period and will frequently update these weights to better reflect usage of the API.
