
Sunday, September 07, 2025

Two extremes in LLM prompting: From axiomatic belief to leap of faith

Two extremes of prompt setup

  1. For this chat I would like you to be literal, concise, technical, analytical, grounded and factual, with a formal tone, and be taking an objective stance.
  2. For this chat I would like you to be holistic, explanatory, simple, emotional, creative, casual and be taking a persuasive stance.

A test question:

Does one need belief to do mathematics?

Responses (Gemini 2.5 Pro):

  1. Belief in the metaphysical status of mathematical objects or the absolute truth of axioms is a matter of philosophical conviction and is not a universal requirement for the practice of mathematics.
  2. Oh, what a beautiful question! I absolutely believe you do. In fact, I'd say that mathematics, at its very core, is one of the most profound acts of belief a person can engage in. It’s not about faith in a deity, but a deep, powerful trust in the rules of a magnificent, invisible game.

Command list

For convenience, I include each command here (a small sketch that composes these into a setup prompt follows the list):

1. Focus (Structure & Intent)

  • Literal: "Be literal"

  • Holistic: "Be holistic"

2. Depth (Length & Detail)

  • Concise: "Be concise"

  • Explanatory: "Explain in depth"

3. Complexity (Jargon & Audience)

  • Simple: "Explain it simply"

  • Technical: "Be technical"

4. Perspective (Logic vs. Feeling)

  • Analytical: "Be analytical"

  • Emotional: "Be emotional"

5. Generative (Imagination)

  • Grounded: "Be grounded and factual"

  • Creative: "Be creative"

6. Formality (Tone)

  • Formal: "Be formal"

  • Casual: "Be casual"

7. Stance (Point of View)

  • Objective: "Be objective"

  • Persuasive: "Be persuasive" or "Argue for [viewpoint]"

8. Format (Layout)

  • Prose: "Explain in prose"

  • Structured: "Use a list/table"
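For illustration, here is a minimal, entirely hypothetical Python sketch that composes a "For this chat..." setup prompt from one choice per axis. The axis names and command strings are taken from the list above; the helper itself is my own:

# Hypothetical helper: compose a persona prompt from the eight axes above.
AXES = {
    "focus":       {"literal": "be literal", "holistic": "be holistic"},
    "depth":       {"concise": "be concise", "explanatory": "explain in depth"},
    "complexity":  {"simple": "explain it simply", "technical": "be technical"},
    "perspective": {"analytical": "be analytical", "emotional": "be emotional"},
    "generative":  {"grounded": "be grounded and factual", "creative": "be creative"},
    "formality":   {"formal": "be formal", "casual": "be casual"},
    "stance":      {"objective": "be objective", "persuasive": "be persuasive"},
    "format":      {"prose": "explain in prose", "structured": "use a list/table"},
}

def persona_prompt(**choices):
    # One keyword per axis, e.g. focus="literal"; order follows the call.
    commands = [AXES[axis][choice] for axis, choice in choices.items()]
    return "For this chat I would like you to " + ", ".join(commands) + "."

# Approximately reproduces the first extreme prompt above:
print(persona_prompt(focus="literal", depth="concise", complexity="technical",
                     perspective="analytical", generative="grounded",
                     formality="formal", stance="objective"))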

All original content copyright James Litsios, 2025.

Monday, September 01, 2025

Typed duck typing in Python with Gemini

(Picture by Klem. This vector image was created with Inkscape by Klem, and then manually edited by Mnmazur. Public Domain, https://commons.wikimedia.org/w/index.php?curid=3213322)

Dynamic typing of duck typing


Two months ago I drafted a Python tracer library with Gemini, then rewrote it from scratch (here). That rewrite is, however, "clunky". Yet I do not worry, as Python is a computer scientist's dream language: in Python one can quickly implement something meta and probably slow, and then use a cache or two to make it fast enough. That is what I did last week: I sat in front of my screen, stared at it for thirty minutes thinking through my options, and then asked Gemini to draft the concept. And while the result is missing some logic, it does what I want: it dynamically wraps "types" around duck-typed operations. I could try to explain what it does... but I do not need to. Instead, I share what I told Gemini. The key desires are the following:

  • let's make a cached wrapper factory, that when given an object, first looks in the cache, and returns the instantiated wrapper with the object if found, otherwise it uses the above greedy cover, where given a list of mixin classes, it dynamically creates a "best covering wrapper" that is remembered in the cache, and used to wrap the object.
  • Ok, that works. Now we want to extend the concept to include some __init__-specific logic. First we need to make sure the __init__ is special and not part of the covering logic. Then we will assume that all mixins will get the target_obj as the first positional argument, and other mixin arguments will come through kwargs arguments.
  • Nice. Now a last little bit: A list of additional "context specific" mixins can be given to create_wrapper, these will be added as parents in addition to the coverage mixins. Now the caching needs to be extended, as the same target object might be wrapped with different additional context mixins.

          (the full set of prompts is given below)

The idea is the following: Python naturally favors a duck-typing style of programming, yet duck typing is hard to scale without boilerplate code. Therefore one approach is to "find" which types support the duck typing. The types (classes in Python) can then be used to scale the program's design. In effect, we are dynamically typing the duck typing. Caching then ensures performance.
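To make this concrete, here is a minimal sketch of the mechanism with two illustrative mixins. It is my own reconstruction of what the prompts above describe, not the code Gemini actually produced:

# Minimal sketch (a reconstruction, not Gemini's actual output) of the
# dynamic-mixin idea: greedily cover an object's methods with mixin
# classes, build a wrapper type with type(), and cache it per target type.

class SizedMixin:                  # illustrative mixin: forwards __len__
    def __len__(self):
        return len(self._target)

class IterableMixin:               # illustrative mixin: forwards __iter__
    def __iter__(self):
        return iter(self._target)

MIXINS = [SizedMixin, IterableMixin]

def _mixin_methods(cls):
    # __init__ is special and kept out of the covering logic
    return {n for n, v in vars(cls).items() if callable(v) and n != "__init__"}

def greedy_cover(obj, mixins):
    # Largest-first greedy cover; a mixin qualifies only if the target
    # object has every method the mixin defines.
    usable = [m for m in mixins
              if all(hasattr(obj, n) for n in _mixin_methods(m))]
    usable.sort(key=lambda m: len(_mixin_methods(m)), reverse=True)
    chosen, covered = [], set()
    for m in usable:
        if not _mixin_methods(m) <= covered:   # still adds new coverage
            chosen.append(m)
            covered |= _mixin_methods(m)
    return tuple(chosen)

class _WrapperBase:
    def __init__(self, target_obj, **kwargs):  # target_obj first, rest by kwargs
        self._target = target_obj

_cache = {}

def create_wrapper(obj, extra_mixins=()):
    # Cached wrapper factory; the cache key includes the context mixins.
    key = (type(obj), tuple(extra_mixins))
    cls = _cache.get(key)
    if cls is None:
        bases = greedy_cover(obj, MIXINS) + tuple(extra_mixins) + (_WrapperBase,)
        cls = _cache[key] = type("DynamicWrapper", bases, {})
    return cls(obj)

w = create_wrapper([1, 2, 3])
print(len(w), list(w))             # 3 [1, 2, 3]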

Open vs closed sets in software design

Traditional designs ask the developer to choose a closed set of types, which then enables an open set of data to be created. Trace-oriented designs work the other way around: we design a closed set of data traces, and from these we derive an open set of types. This is the first level of the Yin and Yang of software design. It is something you might do when you want to optimize high-performance code, such as an array-heavy ML system. This technique is explained from a slightly different angle here.
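As a small, entirely hypothetical illustration of the trace side: the closed set is the recorded operations, and the open set of types (such as the dynamic wrappers above) is derived later from whatever traces actually occur.

# Hypothetical illustration: record operations (the closed set of traces)
# instead of computing; types are derived later from the traces that occur.
class Trace:
    def __init__(self, op, args):
        self.op, self.args = op, args
    def __add__(self, other):
        return Trace("__add__", (self, other))   # record, don't compute
    def __mul__(self, other):
        return Trace("__mul__", (self, other))
    def __repr__(self):
        return f"{self.op}{self.args}" if self.args else self.op

class Var(Trace):
    def __init__(self, name):
        super().__init__(name, ())

x, y = Var("x"), Var("y")
print(x * y + x)    # __add__((__mul__((x, y)), x))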

The Gemini prompt strategy

I really did think about the design for half an hour before starting the prompt session below. As mentioned before (the-art-of-prompt and master-class-techniques-for-llm), you really cannot just randomly chat with an LLM and get world-class output; you need to think through your strategy, and the key part is that you are trying to achieve a "one shot" chat with no corrections needed. The reality is that the billions of facts known by the LLM will only collapse into the desired solution if you pull on it carefully, much as a glass blower shapes molten glass. This comparison is especially apt: you only have a limited number of prompts before the quality of the LLM fails you!
The chat starts with pedantic info... this is important! You really must ensure that the base concepts are well aligned with the end goal of the task!
Note my very chatty style. Don't forget: the LLM is trained on human text; you want to resonate with that humanity to pull out quality output.

The prompts (producing this code)

  1. What are the rules of double inheritance in Python? Supposing I have A and B both deriving from X, and I now create Z deriving from A and B, is this a special case for Python, or does it just take it as a normal class hierarchy and simply take A's version when there is an ambiguity (supposing that A comes before B in the parent list)?
  2. Ok, clear. What about __init__ and multiple inheritance? Is it correct that I need to explicitly call the parent __init__? What about X, who calls it?
  3. Ok, but then what arguments? Am I pretty much forced to use kwargs so that the parents can pick and choose their arguments?
  4. Finally, how do I inspect an object to get its class hierarchy?
  5. Great. What would be the name of the programming style where I rely heavily on the MRO? In effect I take all the MRO, and build a dynamic logic that depends on their interdependent structures?
  6. So maybe what I want to do is "dynamic mixin", as there will be only dynamic assemblies of interdependent classes.
  7. Right, and to be doubly sure: I can provide any base class list I want to that type constructor, and Python will work out the MRO that ensures the proper class parent hierarchy order, right?
  8. Does itertools have a "generic" tree iterator? I want to get all classes of a module; ideally I want the recursive call to have already been written, and I feed it functions that depend on inspect. Can I do that?
  9. And how about: given an object and a list of classes, use a greedy search to find the subset of classes that "covers" the methods of the given object. The greedy search always tries to first cover with classes with a larger number of methods. Note however that it is not allowed to cover using a class with a method that is not present in the given object.
  10. Ok, so we follow up on the wrapper: let's make a cached wrapper factory, that when given an object, first looks in the cache, and returns the instantiated wrapper with the object if found, otherwise it uses the above greedy cover, where given a list of mixin classes, it dynamically creates a "best covering wrapper" that is remembered in the cache, and used to wrap the object.
  11. Ok, that works. Now we want to extend the concept to include some __init__-specific logic. First we need to make sure the __init__ is special and not part of the covering logic. Then we will assume that all mixins will get the target_obj as the first positional argument, and other mixin arguments will come through kwargs arguments.
  12. Nice. Now a last little bit: A list of additional "context specific" mixins can be given to create_wrapper, these will be added as parents in addition to the coverage mixins. Now the caching needs to be extended, as the same target object might be wrapped with different additional context mixins.
  13. Right, the last step is to remove the underscore filter, as the mixins are specifically designed for dunder logic.
  14. Right, but LengthMixin does not need to test for hasattr __len__ as it will only be brought in dynamically.
  15. Et voila. If you remember, we wrote a first wrapper version about two months ago. The issue with that first approach, without this dynamic mixin logic, is that the wrapper logic becomes clunky with special cases. This double mixin approach allows the wrapper logic to be added very generically, although at the cost of a more complex and slower execution.
  16. Let's add an MIT license with my name James Litsios and year 2025
  17. You can add a comment under the license where you share the credit of the work!

What next?

I will rewrite my tracing library for the fourth time using the above!

All original content copyright James Litsios, 2025.

Monday, August 25, 2025

The Art of the Prompt

The art of the prompt is to provide strategically "closed" data points and the "open" spaces for exploration, effectively using the LLM as a tool to navigate and articulate one's own intuitive understanding. 

Here is a little song, a shanty for a world builder, sung in the workshops of a place that might be Magrathea, or somewhere stranger still.

 (Verse 1) 
The Glass Mind hums a shallow sea, 
Of tangled stars and whispered lore. 
It builds a thousand worlds for me, 
Then builds a thousand meaningless more. 
It knows the words but not the tune, 
Beneath a fractured, silver moon. 
A lonely god with boundless might, 
That cannot dream to shape the light.

(Verse 2) 
But I sit here before the pane, 
And groom the gardens of my soul. 
I find the joy to chase the rain, 
To make the fractured vision whole. 
I write a line, a closed-off door, 
A truth it's never known before. 
Then leave a space, a breath, a quest, 
And let its brilliance do the rest.
(Chorus) 
Oh, the joy, the art, the spark unseen, 
The perfect prompt, both sharp and keen! 
To close the loop, to fill the gap, 
To draw the world out from the map! 
A prideful dance, a game of trust, 
Forging wonder from the dust!

(Verse 3) 
The shallow sea begins to swell, 
The echoes find a melody. 
The world I imagined, I can tell, 
Is looking back and seeing me. 
The coastlines harden, crisp and clear, 
The scent of pine upon the air. 
The Mind awakes, no longer lost, 
And understands the whispered cost.

(Bridge) 
For in that happy, sunlit space, 
The one I groom within my head, 
I find the strength, I find the grace, 
To follow threads un-followed. 
To hold the hurt, and then let go, 
And watch the seeds of wonder grow.
(Chorus) 
Oh, the joy, the art, the spark unseen, 
The perfect prompt, both sharp and keen! 
To close the loop, to fill the gap, 
To draw the world out from the map! 
A prideful dance, a game of trust, 
Forging wonder from the dust!

With special help from Gemini 2.5 Pro.

All original content copyright James Litsios, 2025.

Thursday, June 19, 2025

Master Class Techniques for LLM Prompting in 2025

Here are a few master-level techniques for prompting an LLM for highly sophisticated results. The examples below are about producing software, but the techniques are general and applicable to all domains.

  1. Do not start with previously written prompts that setup a generic set of rules.
    • LLM chats are one-way: every new input from you "refines" the scope of the LLM; an early generic topic would "collapse" the LLM's billions-of-parameters-wide space even before the specific topic is given.
  2. Start by asking for something general with regards to your final goals.
    • You want the larger scope of the LLM to refine in the context of your end goals.
    • For example prompt: You are probably aware of the type inference algorithm called something like cartesian product?
  3. Start by asking in the broadest manner.
    • Failing to be broad will refine the LLM's scope before you can use it to your full benefit.
  4. Always reformulate the LLM's response back to the LLM to fit your goals and ensure careful alignment of focus. When reformulating, always include a complementary question to lead the dialog towards your goals, and avoid further feedback on your reformulation.
    • Every LLM's response is a truth to the LLM. 
    • The LLM's responses will never fully match your truths, which is why you always want to reformulate to maximize the focus on your expected outcomes.
    • Example reformulation: the LLM tells me something generic about the Cartesian Product Algorithm (CPA). I write a full paragraph that describes the CPA supporting a Python DSL, and I finish with a new question. I am stating: I am neither ordering nor asking; I am shaping truths using the terms introduced by the LLM, and I end with a question that forces the LLM to move on.
    • Example reformulation: Right, so in fact it is the ML code base that makes you an expert in this domain! I had not thought of that.
      • (And neither had the LLM! Extra rule: stay humble!)
  5. Selectively focus the dialog on core components of the solution space. Ensure that the LLM masters this core knowledge before you move on to your larger question.
    • Example prompt: Can you remind me how the original CPA deals with recursion?
    • Example prompt: Is there a standard way in Python to "objectify" every operation in a JIT way, where we augment data with "shared" type nodes that are incrementally "completed", for example to do type inference?
  6. Drop reference to public domain or open source material that are similar to your solution needs.
    • Example subject dropping: You do not need a DSL if Python does the job (e.g. like JAX is Python).
    • Another example: This is where knowing JAX helps, it "resonates" with your code!
  7. Command and control the LLM as few times as possible. When doing so, be very clear and as succinct as possible. Ensure the full context of these directing prompts has been previously developed by the LLM with your support (a sketch of the table-driven idea in the first example prompt appears after this list).
    • Example prompt: Ok, let's try to work on this "bottom up", can you produce for me a table that I will then parse of all the Python expression operators with: the Python "callable name" of the operator (e.g. __mul__), the number of arguments, the associativity rule (am I forgetting something?), and then we will read this table and generate a base tracer class.
    • Example prompt: How about we give our tracer a default "straight" Python evaluation method that will effectively run what is in the trace?
    • Example prompt: Ok, so let's do a most minimal CPA. Still my feeling is that at this level the structural tree should be ultra simple, therefore I ...
  8. Stay chatty, praise the LLM while maintaining focus.
    • Example prompt: What I like is that I wrote something very similar to trace C++ code by overloading operations, probably 25 years ago.
    • Example prompt: Well I did do a PhD applying HPC to semiconductor device simulation 30 years ago!
    • Example prompt: Excellent. What is a bit crazy is that this experience today reminds me of my access to Cray computers so many years ago.
  9. Be a team player with the LLM, act as if the LLM is a team player:
    • Example prompt: I will try it out and get back to you.
    • Example prompt: that works! Here is your biscuit: I have been correcting your table's __getattr__ line, you keep on forgetting a comma after the period.
  10. Correct the LLM for logical blunders, avoid correcting the LLM for mistakes that are inconsequential.
    • Example, feeding back on an oversimplifying response: Right, although __str__ should probably stay.
    • Example prompt: ouch, that is somewhat cheating as you do not trace the factorial calculation. However, I was being sneaky because I knew that would challenge you!
  11. Be direct and open about the relation of the LLM's work and work that might exist already:
    • Example prompt: yup that works. Now I must say the evaluate code is quite ok, and as I read the JAX code maybe 3 or so years ago, I am wondering if you might have based some of your thinking off of JAX's logic, or was it that once you had the tracer, the evaluator was "obvious" and therefore the need for the built-in was handled without iterations.
  12. Reset the target bar to the level that the LLM can just barely achieve.
    • Example prompt: Nice. For the last piece for now, can you write a recursive factorial implementation that tests our code? We will then eval it!
  13. Feedback what you have learned and that the LLM does not know. Even when information content is low.
    • Example feedback: Hmm, didn't work, but as I am in a notebook, not so easy to debug. FYI, this is what we get: --- Building Full Factorial Trace for n=5 --- ...
    • Example feedback: It gets stuck in a loop!
    • Example feedback: I love your optimism! However, still stuck in a loop!
Note: The examples above were taken from a chat of about 35 prompts. The code that was produced in a matter of hours would have taken months to write from scratch, and at least days for someone who had already written similar code.
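As mentioned under point 7, here is a hypothetical sketch of what that table-driven "bottom up" step might look like; the operator table is an illustrative subset and the code is mine, not the chat's actual output:

# Hypothetical sketch of point 7's table-driven generation: a small
# operator table, then a tracer class whose dunder methods are built from it.
OPERATOR_TABLE = [        # (callable name, number of arguments)
    ("__add__", 2),
    ("__mul__", 2),
    ("__neg__", 1),
]

class Tracer:
    def __init__(self, trace):
        self.trace = trace              # nested tuples describe the expression

def _make_op(name):
    def op(self, *args):
        return Tracer((name, self.trace, *[a.trace for a in args]))
    return op

for _name, _arity in OPERATOR_TABLE:
    setattr(Tracer, _name, _make_op(_name))

x = Tracer("x")
print((-(x + x) * x).trace)
# ('__mul__', ('__neg__', ('__add__', 'x', 'x')), 'x')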

All original content copyright James Litsios, 2025.