TRADITIONAL SOFTWARE responds predictably to instructions. “Generative” artificial-intelligence (AI) models, such as that used by ChatGPT, are different: they respond to requests written in everyday language, and can produce surprising results. On the face of it, writing effective prompts for AI is much simpler than, for example, mastering a programming language. But as AI models have become more capable, making the most of the algorithms within these black boxes has become harder. “Prompt engineering”, as this skill is known, has been likened to guiding a dance partner or poking a beast to see how it will respond. What does it involve?
For starters, a good prompt should include a clear instruction: compile a given policy proposal’s potential downsides, for example, or write a friendly marketing email. Ideally the prompt should coax the model into complex reasoning: telling it to “think step by step” often sharply improves results. So does breaking instructions down into a logical progression of separate tasks. To prompt a clear explanation of a scientific concept, for example, you might ask an AI to explain it and then to define important terms used in its explanation. This “chain of thought” technique can also reveal a bit about what is going on inside the model.
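In practice, the technique amounts to little more than careful prompt assembly. The sketch below (the function name and wording are illustrative, not drawn from any product) breaks one request into a numbered progression of sub-tasks and asks the model to reason step by step; sending the result to a model is left to whatever API the reader uses.

```python
def chain_of_thought_prompt(concept: str) -> str:
    """Break one request into a logical progression of sub-tasks,
    then ask the model to reason step by step."""
    steps = [
        f"Explain the concept of {concept} in plain language.",
        "Define each important term used in your explanation.",
        "Check your explanation for gaps, and revise it if needed.",
    ]
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Think step by step. Complete these tasks in order:\n{numbered}"

prompt = chain_of_thought_prompt("entropy")
```

The scaffolding (“think step by step”, the numbered list) stays fixed; only the subject changes from one use to the next.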
AI users need to be able to see that detail. Because big models are trained on what one prompt engineer calls “everything from everywhere”, it helps to include authoritative texts in a prompt, to direct a model to give particular sources priority or, at the very least, to tell the model to list its sources. Many models offer settings for “temperature”, which, when raised, increase the randomness of results. That can be good for creative tasks like writing fiction but tends to increase the frequency of factual errors.
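Temperature is usually just a number passed alongside the prompt. The sketch below shows the idea; the payload shape and the particular values are illustrative rather than tied to any vendor’s API.

```python
def build_request(prompt: str, task: str) -> dict:
    """Raise temperature for creative work; lower it for factual work,
    where added randomness tends to mean added errors."""
    temperature = 0.9 if task == "creative" else 0.2
    return {
        "prompt": prompt,
        "temperature": temperature,  # 0 = most deterministic output
    }

request = build_request("Write a short ghost story.", "creative")
```

A factual request would use the same structure with the lower setting.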
Asking an AI to role-play can be helpful, too. To produce advertising copy, Crispy Content, a marketing agency in Berlin, tells a model to rewrite, and then defend, a sample from the points of view of a sales director, a marketing boss and a “creative”. The best spin is then tweaked by staff. This “persona” approach leads to answers that seem more human, says Bilyal Mestanov of Promptly Engineering, an agency in Bulgaria.
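The persona approach described above can be sketched as a handful of rewrite-and-defend prompts, one per point of view. The personas follow the article’s example; the function itself is invented for illustration.

```python
PERSONAS = ["a sales director", "a marketing boss", "a creative"]

def persona_prompts(sample: str) -> list:
    """One prompt per persona: rewrite the sample copy, then defend
    the rewrite from that persona's point of view."""
    return [
        f"Act as {p}. Rewrite the following advertising copy, "
        f"then defend your rewrite from that point of view:\n{sample}"
        for p in PERSONAS
    ]

prompts = persona_prompts("Our widgets save you time.")
```

The best of the resulting rewrites would then be tweaked by human staff, as at Crispy Content.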
Asking models to act like humans raises the issue of AI etiquette. Some argue that a prompt with a “please” can nudge a model towards source materials, and therefore a reply, written in a similarly polite tone. A “thank you” in response to a helpful reply might suggest to the model that it is on the right track. But thanking a model to excess can “muddy” prompts, misdirecting some processing power, says Josh Hewett of Discoverable, a British marketing agency.
Good prompts are valuable. Crispy Content develops templates that tell models to write 1,000-word articles for its clients. Users type in keywords (“red wines of Andalucia, Spain”, say) and a desired tone. Developing one of these templates takes about €25,000-worth ($27,000) of man-hours, says Gerrit Grunert, the firm’s managing director, and output must be checked by a human editor. But where Crispy Content used to spend about €400 for each article by a human, those generated with prompts cost about €4 each.
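A template of this kind fixes the expensive scaffolding and leaves only the keywords and tone for the user to fill in. The wording below is invented for illustration, not Crispy Content’s actual template.

```python
from string import Template

# Fixed scaffolding; users supply only keywords and tone.
ARTICLE_TEMPLATE = Template(
    "Write a 1,000-word article about $keywords. "
    "Use a $tone tone. Structure it with an introduction, "
    "three sections and a conclusion."
)

prompt = ARTICLE_TEMPLATE.substitute(
    keywords="red wines of Andalucia, Spain", tone="friendly"
)
```

The value lies in the fixed part: the instructions that reliably produce usable drafts are what took the man-hours to develop.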
Prompting agencies and online courses that purport to teach the skill are flourishing. Jobs for prompt engineers started popping up in late 2022 and are becoming more common. Graduates with a background in languages or the humanities are popular candidates. AI advances may eventually render such jobs obsolete, as models learn to better anticipate users’ needs. But for now it looks as though AI-whisperers will enjoy an edge.
© 2023, The Economist Newspaper Limited. All rights reserved.
From The Economist, published under licence. The original content can be found on www.economist.com