

You can do so by using something like this:

Basic prompt: I'm looking to create a prompt that explains an error message.

Better prompt: I'm looking to create a prompt for error messages. I have a few needs: I need to understand the error, I need the main components of the error broken down, and I need to know what's happened sequentially leading up to the error, its possible root causes, and recommended next steps, and I need all this info formatted in bullet points.

GPT will take those requirements into consideration and return a prompt that you can then use on it: it's the circle of (artificial) life. Sometimes it's just about finding the exact phrase that OpenAI will respond to. Here are a few phrases that folks have found work well with OpenAI to achieve certain outcomes.

Providing examples in the prompt can help the AI understand the type of response you're looking for (and gives it even more context). For example, if you want the AI to reply to a user's question in a chat-based format, you might include a previous example conversation between the user and the agent. You'll want to end your prompt with "Agent:" to indicate where you want the AI to start typing.
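The chat-style prompt described above can be assembled programmatically. Here's a minimal sketch in Python; the example dialogue and the helper function name are illustrative placeholders, not anything from the article:

```python
# Sketch: assembling a few-shot, chat-style prompt that ends with "Agent:"
# so the model knows where to start typing. The example conversation below
# is a made-up placeholder.

def build_chat_prompt(history, user_message):
    """Join example User/Agent exchanges, append the new question,
    and end the prompt with 'Agent:' for the model to complete."""
    lines = []
    for user_turn, agent_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Agent: {agent_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Agent:")  # the AI starts typing here
    return "\n".join(lines)

example_history = [
    ("How do I reset my password?",
     "Click 'Forgot password' on the login page and follow the emailed link."),
]
prompt = build_chat_prompt(example_history, "The link never arrived. What now?")
print(prompt)
```

The example exchange gives the model both the tone and the User/Agent format you expect, and the trailing "Agent:" marks exactly where its response should begin.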

It can be a lot to get the hang of, so to get started, I suggest playing with just two of them.

Temperature allows you to control how creative you want the AI to be (on a scale of 0 to 1). A lower score makes the bot less creative and more likely to say the same thing given the same prompt. A higher score gives the bot more flexibility and will cause it to write different responses each time you try the same prompt. The default of 0.7 is pretty good for most use cases.

Maximum length is a control of how long the combined prompt and response can be. If you notice the AI is stopping its response mid-sentence, it's likely because you've hit your max length, so increase it a bit and test again.

GPT prompt guide: 7 tips for writing the best GPT-3 or GPT-4 prompt

If you do each of the things listed below, and continue to refine your prompt, you should be able to get the output you want.

Just like humans, AI does better with context. Think about exactly what you want the AI to generate, and provide a prompt that's tailored specifically to that. Here are a few examples of ways you can improve a prompt by adding more context:

Summarize the content from the above article with 5 bullet points.

Remember that GPT-3 and GPT-4 only have access to things published prior to 2021, and they have no internet access. This means you shouldn't expect it to be up to date with recent events, and you can't give it a URL to read from. While it might appear to work sometimes, it's actually just using the text within the URL itself (as well as its memory of what's typically on that domain) to generate a response. (The exception here is if you're using ChatGPT Plus and have turned on access to its built-in Bing web browser.)
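The temperature and maximum length controls map directly onto parameters of an API request. Here's a minimal sketch, assuming the legacy `openai` Python library's completions interface; the model name is a placeholder, and the actual network call is commented out since it needs an API key:

```python
# Sketch: the two settings discussed above correspond to the
# `temperature` and `max_tokens` parameters of a completion request.
# The model name and prompt are illustrative placeholders.

request = {
    "model": "text-davinci-003",  # placeholder model name
    "prompt": "Summarize the content from the above article with 5 bullet points.",
    "temperature": 0.7,  # 0 = consistent/repetitive, 1 = most creative
    "max_tokens": 256,   # raise this if responses stop mid-sentence
}

# With an API key configured, the request would be sent like:
#   import openai
#   response = openai.Completion.create(**request)
#   print(response["choices"][0]["text"])

print(request["temperature"], request["max_tokens"])
```

Note that `max_tokens` budgets the response, while the model's context window caps the prompt and response combined, which is why a long prompt leaves less room for the answer.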

GPT-3 and GPT-4 aren't the same as ChatGPT. ChatGPT, the conversation bot that you've been hanging out with on Friday nights, has more instructions built in from OpenAI. GPT-3 and GPT-4, on the other hand, are more raw AI models that can take instructions more openly from users. The tips here are for GPT-3 and GPT-4, but they can apply to your ChatGPT prompts, too.

As you're testing, you'll see a bunch of variables: things like model, temperature, maximum length, stop sequences, and more.
