Prompting: Experimenting with Markdown (with an Example of Multi-Lingual Test Data Generation)

As my experiments with ChatGPT went deeper, my prompts grew more and more complex. Such complexity often has a simple solution, one we already know and have used elsewhere, but fail to recognise in the moment.

The solution to restructuring the prompts was really simple: use Markdown. I picked up this trick from one of Tariq King's presentations, and he later briefly explained it in a discussion with me. As I built on the idea, my prompting experience improved considerably.

I’ll keep this article short. My suggestion is to look at the prompt and the response below, try to identify patterns, and experiment with them in your own work.

An Experiment with Generating Multi-Lingual Data using ChatGPT

Here’s the prompt I used:

I used Markdown to give the prompt a structure, as well as to introduce context and sequencing. Please remember that Markdown itself is not a replacement for thinking through the steps or building the context on your part as the prompter.
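To make the idea concrete, here is a minimal sketch of what a Markdown-structured prompt for this kind of task could look like. This is not the exact prompt from the screenshot above; the headings, languages, and table columns are my own illustrative assumptions.

```markdown
# Role
You are a test data designer helping a software tester.

# Context
We are testing a "full name" input field that must accept names
written in multiple languages and scripts.

# Task
1. Generate one realistic full name per language listed below.
2. For each name, note the script used and the edge case it covers
   (diacritics, right-to-left text, long names, and so on).

# Languages
- Hindi
- Japanese
- Arabic
- German

# Output format
Return exactly two Markdown tables:
1. Language | Test Data | Script
2. Language | Edge Case Covered | Notes
```

The headings carry the context, the numbered steps carry the sequencing, and the output-format section tells the model what shape of answer you expect.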

The following two images contain the response:

Experimenting with Markdown in prompting is quite interesting. Just keep one thing in mind: don’t expect completely deterministic output from LLMs. They are not built for that.

Use the output in light of your own opinions and design. In the response to the above prompt, apart from the two tables I asked for, there is a sentence at the top and a trailing sentence at the end (truncated, not visible in the image). There could also be finer details in the languages that are wrong, who knows. A human better conversant with a given language might, under the constraint of a single piece of test data, have produced a better combination of inputs.
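If you want to lift just the tables out of such a response for your own use, a few lines of scripting help. Below is a minimal Python sketch that keeps only the Markdown table lines from a saved response; the file name and the assumption that table rows are pipe-delimited are mine, not part of the original post.

```python
from pathlib import Path


def extract_markdown_tables(text: str) -> list[str]:
    """Return blocks of consecutive pipe-delimited lines (Markdown tables)."""
    tables, current = [], []
    for line in text.splitlines():
        # A Markdown table row typically starts with a pipe character.
        if line.strip().startswith("|"):
            current.append(line)
        elif current:
            tables.append("\n".join(current))
            current = []
    if current:
        tables.append("\n".join(current))
    return tables


# Hypothetical usage: response.md holds the raw text copied from ChatGPT.
response = Path("response.md").read_text(encoding="utf-8")
for i, table in enumerate(extract_markdown_tables(response), start=1):
    print(f"--- Table {i} ---")
    print(table)
```

This simply discards the leading and trailing chatter around the tables; reviewing the table contents themselves is still your job.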

But these are all ifs and buts. LLMs are tools. Like any other tool, use them with caution, and learn how to use them properly as you go.

Have fun!
