Because LLMs are not meant for precise calculations: they are next-token predictors based on language patterns. The problem discussed here can still be solved with an LLM by constructing a prompt fit for the purpose.
You:
The goal is to do basic arithmetic. To do this calculation:
1. Ask me what is the arithmetic expression to be evaluated.
2. Generate Python code to do the math.
3. Answer with only the calculation. No Intro-Outro.
ChatGPT:
What is the arithmetic expression to be evaluated?
You:
3000 + 80000 + 50000 + 35000 + 2890 + 2863 + 21394 + 10595 + 3136 + 20213 + 3277 + 50000 + 2718
ChatGPT:
285086
This is correct. The following is the same calculation done in the Python interactive shell for reference:
>>> 3000 + 80000 + 50000 + 35000 + 2890 + 2863 + 21394 + 10595 + 3136 + 20213 + 3277 + 50000 + 2718
285086
>>>
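For completeness, here is a minimal sketch (my own illustration, not the code ChatGPT actually produced) of the kind of Python the model might generate in step 2; the numbers and the expected total are taken from the expression above:

numbers = [3000, 80000, 50000, 35000, 2890, 2863,
           21394, 10595, 3136, 20213, 3277, 50000, 2718]
total = sum(numbers)   # add up all the operands of the expression
print(total)           # prints 285086

Running the generated code, rather than letting the model "guess" the arithmetic token by token, is what makes the answer reliable.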
PS: This answer is based on a tip shared by Girish Nuli in a recent course of his that I attended, combined with what I learned from Tariq King about step-by-step instruction. It also has elements of meta-language negotiation with the LLM.