The prompt - how important is it really?

On February 7th I was named the winner of Prompt-SM 2025. The competition (which I have written about in earlier blog posts) challenged participants to craft the best possible prompt to solve three tasks, and I solved them by applying several recommended prompting techniques, such as giving the AI a role or asking it to pose clarifying questions about the task.

A question I’ve been asked a couple of times is how much the prompt actually matters. Do you get different results depending on whether you’re polite or not? Do you really need to give the AI a role?

I didn’t have a good answer when I first got the question, so I sat down to find out.

Let’s start with what the research says. In a recently published paper, “Prompting Science Report 1: Prompt Engineering is Complicated and Contingent” by Lennart Meincke, Ethan Mollick, Lilach Mollick and Dan Shapiro, the authors find that while the choice between a polite and a commanding tone (“Please be so kind as to answer the question” versus “I command you to answer!”) does affect the outcome, the direction depends on the specific question: for some questions a polite phrasing yields a more correct answer, while for others a command produces better results.

Giving the AI a role (“You are a highly intelligent assistant that follows instructions correctly”) does not noticeably affect the quality of the answer.

To get a somewhat broader perspective, I asked ChatGPT to do deep research on the topic. Here is what that search produced:

  • Giving the AI a role is a technique that affects how the answer is presented, but does not appear to affect whether the answer is more or less factually correct.

  • Asking the AI to format the answer in a specific way (such as a bullet list) seems to have a slightly positive effect on factual accuracy, but mainly for smaller models (GPT-3.5 is affected more than GPT-4, for instance). That it also affects the format of the response goes without saying…

  • Politeness sometimes affects the result positively, but sometimes negatively (something also observed by Meincke et al.).

  • Offering the AI a fictitious bribe (“if you do a good job you’ll get a thousand dollars”) may cause the AI to follow instructions somewhat more strictly, but otherwise has little impact on the result.

  • Asking the AI to pose follow-up questions to clarify ambiguities appears to have a measurably positive effect on the result, and this was also a technique I used in the second task of the championship finals. Compare that with a much simpler prompt, “help me”, followed by the text of the task: the simple prompt produces a long, somewhat generic answer that lists things to consider before and during an interview situation.
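All of the techniques above are really just different ways of wrapping the same task text. If you want to test for yourself which framing works best for a given task, one approach is to generate all the variants side by side and send each to the model. Here is a minimal sketch of that idea; the helper name and the exact wording of each template are my own illustration, not taken from the paper or the competition:

```python
# Hypothetical helper for building prompt variants to compare side by side.
# The technique names mirror those discussed above; the wording is illustrative.

TECHNIQUES = {
    "baseline": "{task}",
    "role": (
        "You are a highly intelligent assistant that follows instructions "
        "correctly.\n\n{task}"
    ),
    "polite": "Please be so kind as to answer the question.\n\n{task}",
    "command": "I command you to answer!\n\n{task}",
    "format": "{task}\n\nAnswer as a bullet list.",
    "clarify": (
        "{task}\n\nBefore answering, ask me follow-up questions about "
        "anything that is ambiguous."
    ),
}

def build_variants(task: str) -> dict[str, str]:
    """Return one complete prompt per technique, all wrapping the same task."""
    return {name: template.format(task=task) for name, template in TECHNIQUES.items()}

if __name__ == "__main__":
    for name, prompt in build_variants("Summarize this meeting transcript.").items():
        print(f"--- {name} ---\n{prompt}\n")
```

Sending each variant to the same model and comparing the answers is a quick, informal version of what Meincke et al. do more rigorously across many questions.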

In summary, it seems that the way a prompt is formulated rarely affects how factually correct the answer is. The clearest exception is asking the AI to pose follow-up questions to clarify ambiguities.

What the prompt does affect, however, is how the answer is presented. Which presentation style works best depends entirely on your own preferences.

The best prompt is therefore the one that gives you the answer in a format that you prefer!