This lie could stem from hype, a self-created illusion, a desire for how LLMs should be rather than how they are, or simply not keeping up with how the world moves; the reasons vary.
Whatever the reason, “One can use natural language to communicate with LLMs” is a misleading statement, and one that will prevent you from making real, professional use of LLMs, or at the very least place serious limits on what you can achieve.
Beyond the basic prompt refinement phase:
– Your prompts mature and start looking more like templates with placeholders
– Markup and structure become a dominant part
– Careful word choice, and even the sequence of phrases, starts to matter
– Integration with other tools, e.g. code interpreters, online tools, plotters, becomes necessary
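To make the first two points concrete, here is a minimal sketch of what a "matured" prompt can look like once it becomes a template with placeholders and explicit markup. All the tag names and fields are illustrative assumptions, not taken from any particular tool:

```python
from string import Template

# A prompt that has evolved from free-form chat into a reusable template:
# structured markup, placeholders, and explicit output instructions.
# Every field name here is hypothetical.
PROMPT_TEMPLATE = Template("""\
<role>You are a $domain analyst.</role>
<task>
Summarize the document below in at most $max_sentences sentences.
Return the result as JSON with keys "summary" and "key_terms".
</task>
<document>
$document
</document>
""")

def build_prompt(domain: str, max_sentences: int, document: str) -> str:
    """Fill the placeholders to produce the final prompt string."""
    return PROMPT_TEMPLATE.substitute(
        domain=domain,
        max_sentences=max_sentences,
        document=document,
    )

prompt = build_prompt("financial", 3, "Q3 revenue rose 12% year over year.")
print(prompt)
```

Note how the careful choices live in the template, not in the conversation: the role, the output format, and the length constraint are fixed once, and only the placeholders change per use.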
This craft is the engineering part, and it demands deeper knowledge on the part of the prompter than the above-mentioned lie suggests.
“Just talk to it” is not prompt engineering.
PS: IMO, this situation will improve considerably as LLMs advance in context management, i.e. when a single point of reference (the output) can be tweaked through a two-way conversation that serves as the point of action and control over the LLM (and the underlying tech). Today, we try to achieve all of this through our prompts alone, and with deeper context management not being the default, it is hit-and-miss unless we over-complicate our prompting or retain outputs outside the chat.