Here are a few master-level techniques for prompting an LLM toward highly sophisticated results. The examples below are about producing software, but the techniques are general and applicable to all domains.
- Do not start with previously written prompts that set up a generic set of rules.
- LLM chats are one-way: every new input from you "refines" the scope of the LLM, so an early generic topic would "collapse" the LLM's billions-of-parameters-wide space even before the specific topic is given.
- Start by asking for something general with regard to your final goals.
- You want the larger scope of the LLM to refine in the context of your end goals.
- Example prompt: You are probably aware of the type inference algorithm called something like cartesian product?
- Start by asking in the most broad manner.
- Failing to be broad will refine the LLM's scope before you can use it to your full benefit.
- Always reformulate the LLM's response back to the LLM to fit your goals and ensure careful alignment of focus. When reformulating, always include a complementary question to lead the dialog towards your goals, and avoid further feedback on your reformulation.
- Every LLM's response is a truth to the LLM.
- These responses will never fully match your truths, which is why you always want to reformulate to maximize the focus on your expected outcomes.
- Example reformulation: The LLM tells me something generic about the Cartesian Product Algorithm (CPA). I write a full paragraph that describes the CPA supporting a Python DSL, and I finish with a new question. I am stating: I am neither ordering nor asking, I am shaping truths using the terms introduced by the LLM, and I end with a question that forces the LLM to move on.
- Example reformulation: Right, so in fact it is the ML code base that makes you an expert in this domain! I had not thought of that.
- (And neither had the LLM! Extra rule: stay humble!)
- Selectively focus the dialog on core components of the solution space. Ensure that the LLM masters this core knowledge before you move on to your larger question.
- Example prompt: Can you remind me how the original CPA deals with recursion?
- Example prompt: Is there a standard way in Python to "objectify" every operation in a JIT way, where we augment data with "shared" type nodes that are incrementally "completed", for example to do type inference?
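The "objectify every operation" idea above can be sketched with operator overloading. This is a minimal, hypothetical illustration (all names are invented here, and it is not the CPA itself): each operation on a wrapped value produces a fresh "type node" that can be completed incrementally as more traces run.

```python
class TypeNode:
    """A shared type annotation, incrementally completed during tracing."""
    def __init__(self):
        self.types = set()          # type names inferred so far

    def add(self, type_name):
        self.types.add(type_name)

class Tracked:
    """Wraps a value; overloads operators to record type information."""
    def __init__(self, value, node=None):
        self.value = value
        self.node = node or TypeNode()
        self.node.add(type(value).__name__)

    def __add__(self, other):
        rhs = other.value if isinstance(other, Tracked) else other
        return Tracked(self.value + rhs)

    def __mul__(self, other):
        rhs = other.value if isinstance(other, Tracked) else other
        return Tracked(self.value * rhs)

x = Tracked(3)
y = x * 4 + 1    # each operation yields a new Tracked with its own node
print(y.value, y.node.types)    # 13 {'int'}
```

In a real inference pass the nodes would be shared across call sites rather than created per operation; the sketch only shows the "augment data with type nodes" mechanic.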
- Drop references to public domain or open source material that is similar to your solution needs.
- Example subject dropping: You do not need a DSL if Python does the job (e.g. like JAX is Python).
- Another example: This is where knowing JAX helps, it "resonates" with your code!
- Command and control the LLM as few times as possible. When doing so, be very clear, be as succinct as possible. Ensure the full context of these directing prompts has been previously developed by the LLM with your support.
- Example prompt: Ok, let's try to work on this "bottom up", can you produce for me a table that I will then parse of all the Python expression operators with: the Python "callable name" of the operator (e.g. __mul__), the number of arguments, the associativity rule (am I forgetting something?), and then we will read this table and generate a base tracer class.
- Example prompt: How about we give our tracer a default "straight" Python evaluation method that will effectively run what is in the trace?
- Example prompt: Ok, so let's do a most minimal CPA. Still my feeling is that at this level the structural tree should be ultra simple, therefore I ...
- Stay chatty, praise the LLM while maintaining focus.
- Example prompt: What I like is that I wrote something very similar to trace C++ code by overloading operations probably 25 years ago.
- Example prompt: Well I did do a PhD applying HPC to semiconductor device simulation 30 years ago!
- Example prompt: Excellent. What is a bit crazy is that this experience today reminds me of my access to Cray computers so many years ago.
- Be a team player with the LLM, act as if the LLM is a team player:
- Example prompt: I will try it out and get back to you.
- Example prompt: that works! Here is your biscuit: I have been correcting your table's __getattr__ line; you keep on forgetting a comma after the period.
- Correct the LLM for logical blunders, avoid correcting the LLM for mistakes that are inconsequential.
- Example, feeding back on an oversimplifying response: Right, although __str__ should probably stay.
- Example prompt: ouch, that is somewhat cheating as you do not trace the factorial calculation. However, I was being sneaky because I knew that would challenge you!
- Be direct and open about the relation of the LLM's work and work that might exist already:
- Example prompt: yup that works. Now I must say the evaluate code is quite ok, and as I have read the JAX code maybe 3 or so years ago, I am wondering if you might have based some of your thinking off of JAX's logic, or was it that once you had the tracer, the evaluator was "obvious" and therefore the need of the built-in was done without iterations.
- Reset the target bar to the level that the LLM can just barely achieve.
- Example prompt: Nice. For the last piece for now, can you write a recursive factorial implementation that tests our code? We will then eval it!
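A toy version of that factorial test, using a minimal stand-in tracer (not the class developed in the dialog): because `n` is a concrete Python int, the recursion unrolls at trace time, JAX-style, leaving a flat trace of multiplications.

```python
class Tracer:
    """Minimal stand-in: records multiplications into a shared trace."""
    def __init__(self, value, trace=None):
        self.value = value
        self.trace = [] if trace is None else trace

    def __mul__(self, other):
        rhs = other.value if isinstance(other, Tracer) else other
        out = Tracer(self.value * rhs, self.trace)
        self.trace.append(("mul", self.value, rhs, out.value))
        return out

    __rmul__ = __mul__    # so `int * Tracer` also traces

def factorial(x, n):
    # n is a plain int, so the branch resolves during tracing
    # and the recursion unrolls into straight-line multiplications.
    if n <= 1:
        return x
    return factorial(x, n - 1) * n

out = factorial(Tracer(1), 5)
print(out.value)        # 120
print(len(out.trace))   # 4 recorded multiplications
```

Note the trap hinted at in the "sneaky" prompt above: if the loop condition itself depended on a traced value rather than a concrete `n`, the trace could not unroll this way.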
- Feed back what you have learned that the LLM does not know, even when the information content is low.
- Example feedback: Hmm, didn't work, but as I am in a notebook, not so easy to debug. FYI, this is what we get: --- Building Full Factorial Trace for n=5 --- ...
- Example feedback: It gets stuck in a loop!
- Example feedback: I love your optimism! However still stuck in a loop!