OpenAI ChatGPT API experiments with temperature

~ 2 min read

OpenAI's Chat Completions API is an amazing tool, and once you get past the basics of prompt engineering, you need to start exploring the system role and temperature to get the best results.

Temperature

A value between zero and two which controls the sampling temperature. Higher values make the output more random, while lower values make it more focused and deterministic.


I’ve found that values between 0.5 and 0.8 work best for me.
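To see the difference, try the same prompt at both ends of the range. This is a minimal sketch, assuming an openai client set up as in the final example below, with an illustrative prompt:

// Low temperature: focused, near-deterministic answers
const focused = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 0.2,
    messages: [{ role: "user", content: "Name a colour" }],
});

// High temperature: more random, creative answers
const creative = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 1.8,
    messages: [{ role: "user", content: "Name a colour" }],
});

Run each a few times: the low-temperature call should repeat itself, while the high-temperature one varies.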

top_p: an alternative to temperature

A value between zero and one which controls nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.25 means only the tokens in the top 25% of probability mass are considered.

It is recommended to use top_p or temperature but not both in the same call.
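A call using top_p instead of temperature looks like this (again a sketch, with an illustrative prompt):

const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    top_p: 0.25, // only sample from tokens in the top 25% probability mass
    messages: [{ role: "user", content: "Suggest a name for a cafe" }],
});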

Roles

Roles, specifically the system role, are a great way to give the API more context about the conversation and help guide the results, e.g.

const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
        // The system message sets the persona before the user's question
        { role: "system", content: "You are a solicitor giving legal advice" },
        { role: "user", content: "Please give advice on the following dispute..." },
    ],
});

Bringing the above together

Here’s an example using the Node.js library that combines temperature and the system role in a single request.

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 0.5, // focused, but not fully deterministic
    messages: [
        { role: "system", content: "You are a solicitor giving legal advice" },
        { role: "user", content: "Please give advice on the following dispute..." },
    ],
});
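The reply itself is on the first choice of the response:

console.log(response.choices[0].message.content);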
