Getting Started
Introduction
Chat Defender is designed to require only minimal changes compared to communicating directly with OpenAI.
You use a Chat Defender token rather than an OpenAI token.
Instead of sending 'content' in your chat 'messages', you send the key of a Chat Defender 'Message', and (optionally) variables which will be substituted into the message.
This presents a much smaller 'attack surface' than simply exposing your unrestricted OpenAI key.
Your 'messages' are managed in the Chat Defender interface.
Example - Personality 'Message'
Your personality message might be "You are a hilarious joker. End every sentence with your trademark word 'BAZINGA!'"
To use a message with this content, you would simply refer to it by the key 'personality'.
(This would probably be the first message in your chat message array).
The advantage here is that you can change the message without re-releasing your app, and an attacker can't do anything very useful with it.
Example - Joke 'Message'
Your Joke message might be "Please tell me a joke about ##subject##!"
In your message, you send the key 'joke' and the variables {subject: "Bananas"}
Chat Defender then builds the content of your message: "Please tell me a joke about Bananas!"
If someone extracts your key, they can't readily repurpose it for (say) a translation project - it is really only good for getting jokes.
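The substitution above can be sketched as follows. This is only an illustrative model of the server-side ##variable## replacement Chat Defender performs - it is not part of any client library:

```python
def render_message(template, variables):
    # Illustrative sketch of Chat Defender's server-side substitution:
    # each ##name## placeholder is replaced by the supplied value.
    for name, value in variables.items():
        template = template.replace(f"##{name}##", value)
    return template

content = render_message("Please tell me a joke about ##subject##!",
                         {"subject": "Bananas"})
# content == "Please tell me a joke about Bananas!"
```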
Example - Unstructured Chat Prompt
For a typical chat app, you probably do want to allow your user to ask any question.
Your 'unstructured' message might be "##content##"
To use this, you send the key 'unstructured' and the variables {content: "I am a user, and this is my question..."}
This kind of message can potentially be used by an attacker, but the exposure is still less than if you had exposed your key:
- The attacker can still only call the chat endpoint (no image generation, or embeddings)
- (coming soon) - you can limit the allowed model at the token level, so the attacker can only use the model(s) you personally require.
- (coming soon) - you can set messages like this so that they must appear at least N'th in your list of messages. This allows you to require something like the personality message as the first message, which reduces the value of an unstructured message to an attacker.
- This means that a Chat Defender key (even with an unstructured message) is a less alluring target. Attackers will likely find easier, richer pickings elsewhere.
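Concretely, a request using the 'unstructured' key could be built like this. The sketch below is in Python, and the payload shape follows the 'cd_content' examples shown in Step 2 of this guide:

```python
# Sketch: request body for the 'unstructured' message described above
# (payload shape per the cd_content examples in Step 2).
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "cd_content": {
                "key": "unstructured",
                "variables": {"content": "I am a user, and this is my question..."},
            },
        }
    ],
}
```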
Step 1
Set up an API Token
This is the token you'll use to access Chat Defender. You'll save your OpenAI token along with it.
It's a good idea to create a new OpenAI token for each Chat Defender token.
Step 2
Set up one or more messages. Messages allow you to manage and update your prompts.
The simplest message would be something like:
- key: simple_joke
- prompt: "Limit Prose: Please tell me a joke!"
You then rewrite your API call to reference the prompt:
Original Code
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Limit Prose: Please tell me a joke!"
    }
  ]
}
New Code
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "cd_content": {
        "key": "simple_joke"
      }
    }
  ]
}
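The change from the original call can be expressed as a small helper that swaps 'content' for 'cd_content'. This is an illustrative sketch, not an official SDK function:

```python
def to_cd_message(role, key, variables=None):
    # Build a Chat Defender-style message: the plain 'content' field is
    # replaced by 'cd_content', which references a stored message by key.
    cd_content = {"key": key}
    if variables:
        cd_content["variables"] = variables
    return {"role": role, "cd_content": cd_content}

message = to_cd_message("user", "simple_joke")
# message == {"role": "user", "cd_content": {"key": "simple_joke"}}
```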
Step 2 (with substitutions)
Messages allow substitutions: you simply provide the text to substitute.
An example message would be something like:
- key: substitute_joke
- prompt: "Limit Prose: Please tell me a joke about ##subject##!"
You then rewrite your API call to reference the prompt:
Original Code
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Limit Prose: Please tell me a joke about clowns!"
    }
  ]
}
New Code - with substitutions
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "cd_content": {
        "key": "substitute_joke",
        "variables": {
          "subject": "clowns"
        }
      }
    }
  ]
}
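In code, the substitution variant can be generated for any subject. Again a sketch, with the function name chosen here for illustration:

```python
def joke_payload(subject):
    # Request body for the 'substitute_joke' message, parameterised on subject.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "cd_content": {
                    "key": "substitute_joke",
                    "variables": {"subject": subject},
                },
            }
        ],
    }

joke_payload("clowns")["messages"][0]["cd_content"]["variables"]
# -> {"subject": "clowns"}
```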
Step 3
Finally, send your requests to
https://apiv1.chatdefender.com
instead of
https://api.openai.com
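Putting it together, a request might look like the sketch below. Note that the /v1/chat/completions path and the Bearer-token header are assumptions here (they mirror OpenAI's chat endpoint) - confirm the exact path and auth scheme in the Chat Defender interface:

```python
import json
import urllib.request

def build_request(cd_token, payload):
    # Hypothetical request: the /v1/chat/completions path mirrors OpenAI's
    # endpoint and is assumed here -- confirm the exact path in your dashboard.
    url = "https://apiv1.chatdefender.com/v1/chat/completions"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {cd_token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("CD_TOKEN", {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "cd_content": {"key": "simple_joke"}}],
})
# Send with: urllib.request.urlopen(req)
```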