Sunday, January 31, 2021

Market participants, structural dysfunctions and the GameStop event

At least eight dimensions can be used to qualify the way financial participants trade:

  1. Transaction speed, from slow to quasi-instantaneous.
  2. Transaction rate, from rare/infrequent to quasi-continuous.
  3. Selection of transactions, from disorganized to very organized (e.g. backed by mathematics and research).
  4. Future obligations of the transaction, from none to significant (e.g. backed by personal wealth).
  5. Time scope of the transaction's obligations, from immediate (e.g. transfer cash) to long term (e.g. pay out a bond coupon after 30 years).
  6. Number of participants on the "same side" of the transaction, from small to large.
  7. Size of a single transaction, from small to large.
  8. Influence of fundamental/"real world" properties of the traded contract, from none to very important.

In the context of the GameStop event we note the following: 

Traditionally, retail investors execute transactions:

  1. Slowly
  2. Infrequently
  3. In a disorganized way
  4. With no future obligation
  5. With only immediate obligation
  6. As part of a very large group of similar participants
  7. On transactions of small size
  8. With more care for the image/brand of the traded product than for its fundamentals

By contrast, one type of hedge fund's transactions might be qualified as:

  1. Quasi-instantaneous
  2. Quasi-continuous
  3. Organized with algorithms and machine learning
  4. Including significant future obligations
  5. With future obligations up to ~1Y
  6. As part of a small group of similar hedge funds
  7. On transactions of small size, together with complementary transactions of larger size
  8. With anything from caring only about short-term machine-learned properties to caring somewhat about longer-term fundamentals

And we could differentiate further with at least thirty other market participant profiles, going from broker to settlement bank to insurer. This last point is important: it is not "just" about the retail investors and certain types of hedge funds; there is a whole "ecosystem" of financial interdependence out there. Also, coming back to the hedge fund example: the strong future obligations matter, because the hedge funds "have promised" something. In this case, to give back the GameStop shares they have borrowed, or to pay out options that depend on the high value of the stock.

Now then, the key change in the GameStop event is retail investors' GameStop transactions becoming:

  1. Slow
  2. More frequent
  3. Very organized: buy only, "buy forever"
  4. With no future obligation
  5. With only immediate obligation
  6. As part of a very large group of similar participants
  7. On transactions of small size (though probably with a larger average size)
  8. Caring about "beating the hedge funds", making a killing with the rising share price, and the charismatic "gaming product" brand, with no care at all for fundamentals.
The "very organized on one side" behavior is the killer ingredient here. All trading strategies are a form of balancing act, and all participants assume that some amount of future market behavior will support their trading strategy. Traditional retail investors assume that someone will be there to buy back what they have purchased. Hedge funds assume they will be able to take advantage of the different needs and random nature of the different market participants, and, more importantly, that they can rebalance their risk "on the fly" within their trading strategy. One can visualize a hedge fund as a bicycle that is pulled to the left or the right as trades are made, and that actively needs to rebalance from time to time by making selected trades, to avoid "falling over". However, if all the trades are "one sided", and, worse, they all run counter to the initial assumptions of the hedge fund, things go bad quickly: the hedge fund is mostly only able to make trades that imbalance it further, until it hits its financial limits and is either acquired by a bigger fish or goes bust.
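The bicycle picture can be made concrete with a toy sketch (purely hypothetical numbers, not a market model): a participant absorbs one-sided flow, rebalances only part of it each step, and either breaches its risk limit or survives indefinitely.

```python
def one_sided_flow(limit: int, buys_per_step: int, rebalance_per_step: int) -> int:
    """Toy model: return the number of steps survived before the short
    position breaches the risk limit (capped at 1000+ steps = 'survives')."""
    short_position = 0
    steps = 0
    while short_position < limit:
        short_position += buys_per_step  # one-sided flow pushes the position out
        short_position -= min(rebalance_per_step, short_position)  # partial rebalancing
        steps += 1
        if steps > 1000:  # flow is absorbable: effectively survives
            break
    return steps

one_sided_flow(limit=100, buys_per_step=10, rebalance_per_step=3)   # -> 15 (breaches quickly)
one_sided_flow(limit=100, buys_per_step=10, rebalance_per_step=10)  # survives past 1000 steps
```

The point of the sketch: when rebalancing capacity matches the flow, the position stays bounded forever; when the flow is persistently one-sided and larger than what can be rebalanced, the limit is hit in a handful of steps.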

The flash crash was another example of "structural dysfunction" in the market: prices plunged because most quotes were pulled. In the GameStop event, prices exploded because a large enough group of participants suddenly decided only to buy and hold.

There are many markets with an imbalance of buyers and sellers. What is new here is that, in a market with future obligations, a disproportionate number of participants suddenly decided to actively participate only on one side of the transaction (here, buying). This is a lesson that hedge funds will integrate, at the cost of limiting their leverage.

All original content copyright James Litsios, 2021.

Thursday, January 21, 2021

From sparse tensor parameter space to orchestrated stream of distributed processes

In 1996, my team adopted a tensor view of its application parameter space.

Credit must be given to one of my partners at QT Optec/Actant, who presented the following view of the world: each client could "tie" each parameter to any point/slice in a "high-dimensional" Cartesian space of primary and secondary keys. It was an advanced concept that took me many years to fully master. The short version: "it ain't easy"!

Higher dimensional data views are "all the rage" now, but not in 1996. I'll try to illustrate the concept as follows:

Imagine we have a parameter P1 used in a formula F that maps X to Y. Imagine that X can be queried with keys kx0 and kx1 (it is two-dimensional), and that Y is three-dimensional with "axes" ky0, ky1, and ky2. P1 is a parameter, which means it is a "given value". The question is then: is P1 just one value? Should we use a different P1 for the different values of X, tied to its 2d mapping over kx0 and kx1? We could even imagine that P1 differs depending on F producing Y at different values of the three dimensions ky0, ky1, ky2. We could do better still: we might say that P1 sometimes depends on X, and sometimes depends on Y; this would especially make sense if F also depended on yet another set of data Z, and the choice of P1 depended on Z.
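One way to picture the "tying" above is a parameter whose value is selected by where we are in the key space, with the most specific binding winning. This is a minimal sketch; the class and key names are illustrative, not from the original system.

```python
from typing import Dict, List, Optional, Tuple

class TiedParameter:
    """A parameter 'tied' to slices of a multi-dimensional key space.

    A binding may fix only some keys (a slice); the most specific matching
    binding wins, falling back to a default when nothing matches.
    """
    def __init__(self, default: float):
        self.default = default
        # Each binding: (partial key assignment, value)
        self.bindings: List[Tuple[Dict[str, object], float]] = []

    def tie(self, keys: Dict[str, object], value: float) -> None:
        self.bindings.append((keys, value))

    def resolve(self, point: Dict[str, object]) -> float:
        best: Optional[Tuple[int, float]] = None
        for keys, value in self.bindings:
            if all(point.get(k) == v for k, v in keys.items()):
                specificity = len(keys)
                if best is None or specificity > best[0]:
                    best = (specificity, value)
        return self.default if best is None else best[1]

p1 = TiedParameter(default=1.0)
p1.tie({"kx0": "A"}, 2.0)                  # P1 differs over a slice of X...
p1.tie({"kx0": "A", "ky2": "fast"}, 3.0)   # ...and more specifically over Y too

p1.resolve({"kx0": "B"})                   # -> 1.0 (default)
p1.resolve({"kx0": "A", "ky2": "slow"})    # -> 2.0
p1.resolve({"kx0": "A", "ky2": "fast"})    # -> 3.0
```

The same lookup could be driven by X's keys, Y's keys, or Z's keys, which is exactly the "sometimes depends on X, sometimes on Y" situation described above.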

With hindsight, the key learning is that no data is static. Therefore P1 is not "a parameter"; P1 is a stream of values. The simple rule is that nothing is "just data": it is always data within a process of updating that data; parameters do not exist. To simplify, we can say that P1 is a stream of data. Yet to get the design model right, we need to say that P1 is a stream of data constrained by a very specific process.

All of the examples above are about P1 being parallel streams of data. The second key learning is that, most often, this means P1 is streamed data within a distributed process, and that this distributed process must itself follow very specific rules.
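A minimal sketch of "P1 is a stream, not a value": consumers of P1 read updates from a constrained update process rather than from a static field. The names are illustrative; a real system would add validation, versioning, and distribution.

```python
from typing import Callable, List

class ParameterStream:
    """A 'parameter' modeled as a stream: a current value plus an
    update process that notifies all subscribed consumers."""
    def __init__(self, initial: float):
        self.value = initial
        self._subscribers: List[Callable[[float], None]] = []

    def subscribe(self, notify: Callable[[float], None]) -> None:
        self._subscribers.append(notify)

    def update(self, value: float) -> None:
        # The "very specific process" would live here: validate the update,
        # version it, and distribute it; this sketch just fans it out.
        self.value = value
        for notify in self._subscribers:
            notify(value)

seen: List[float] = []
p1 = ParameterStream(1.0)
p1.subscribe(seen.append)   # formula F would recompute on each update
p1.update(1.5)
p1.update(2.0)
# seen == [1.5, 2.0]; P1 is never a static "given value"
```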

The original problem/solution formulation was somewhat OO or DB flavored. Put in the form of an orchestrated stream of distributed processes, we actually have most of the tools we need to make this concept scale and work.

All original content copyright James Litsios, 2021.

Saturday, January 02, 2021

What works in higher order Python (Functional Programming, early 2021)

Here is what works for me to write higher order typed Python:

  1. Use a class with __call__ instead of lambdas or functions (i.e. explicitly specify each closure).
  2. Favor constraining types with Callable, ParamSpec and Concatenate (not with class constructions).
  3. Replace *args by an immutable list (Tuple[a,Tuple[b, ...) when ParamSpec fails to scale.
  4. Replace **kwargs with an object using @overload with Literal typed arguments (sadly, TypedDict seems not to support generic typing).
  5. Use Generic to "store" multiple polymorphic types.
  6. Use Protocol to separate semantic tiers of types (e.g. ownership in a smart contract).
  7. No nested functions, lambdas or classes.
  8. Use cast to type "unmanaged" values (e.g. from eval).
  9. Use phantom-typed "dummy" arguments (and type casts) to get around "wrong direction" type dependencies.
  10. Visual Studio Code's Pyright works. As of early 2021, PyCharm fails.
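To illustrate a few of the points above (1, 2, 5 and 7), here is a small sketch of a higher-order value written as an explicit closure: a class with __call__, typed with Callable and Generic, with no lambdas or nested functions. It is only an example of the style, not code from the original project.

```python
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

class Compose(Generic[A, B, C]):
    """Explicit closure for function composition (g after f),
    instead of `lambda a: g(f(a))`."""
    def __init__(self, f: Callable[[A], B], g: Callable[[B], C]) -> None:
        self.f = f
        self.g = g

    def __call__(self, a: A) -> C:
        return self.g(self.f(a))

class Inc:
    """A 'function' as a class with __call__ (point 1)."""
    def __call__(self, x: int) -> int:
        return x + 1

class Show:
    def __call__(self, x: int) -> str:
        return str(x)

# Pyright infers Compose[int, int, str] and checks the whole chain.
inc_then_show: Compose[int, int, str] = Compose(Inc(), Show())
inc_then_show(41)  # -> "42"
```

The payoff of the explicit-closure style is that every captured value is a named, typed attribute, which is what lets the type checker follow higher-order code that defeats it when written with nested lambdas.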

The untyped application scope is a broader Python 3 stack. Note this is Python 3.10 typing (e.g. ParamSpec).

Not all is easy. Deeply typed Python seems to be orders of magnitude more expensive to write than "classical" Python. Also, Union types may be needed to "break out" of "tight" type relations. My current feeling is that one is sometimes forced to introduce Unions to allow complex trace joins and bifurcations, and then needs to add additional layers of type constraints to control the generality of these Unions. All of this needs to be done with careful "locking" of types. Not the best of situations!
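A small sketch of the Union pattern being described, with hypothetical names: two typed branches join into a Union, and isinstance checks act as the extra layer of constraints that narrows the Union back down at each use site.

```python
from typing import Union

class Leaf:
    def __init__(self, value: int) -> None:
        self.value = value

class Node:
    def __init__(self, left: "Tree", right: "Tree") -> None:
        self.left = left
        self.right = right

# The "break out": the two branches are joined in one Union type.
Tree = Union[Leaf, Node]

def total(t: Tree) -> int:
    # The "additional layer of type constraints": isinstance narrowing,
    # which Pyright uses to type each branch precisely.
    if isinstance(t, Leaf):
        return t.value
    return total(t.left) + total(t.right)

total(Node(Leaf(1), Node(Leaf(2), Leaf(3))))  # -> 6
```

The cost mentioned above shows up here in miniature: every consumer of `Tree` must re-narrow the Union, and widening the Union later forces all of those narrowings to be revisited.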

Python is known as a dynamically typed language, which is OK, as static typing is more of a luxury than a necessity. I learned the above over the 2020-2021 Christmas holiday writing lens/optic-like Python code. Initially with no types, fully lambda centric. Then I thought: let's try to make it typed in Python!

All original content copyright James Litsios, 2021.