Showing posts with label algorithmic trading. Show all posts

Monday, October 30, 2023

The personality of going Infinite

 "A bit shallow" is my short review of Michael Lewis Going Infinite book on the rise and fall of Sam Bankman-Fried (SBF). Still, I don't have an issue with the lack of content, as that reflects the reality of the open legal situation. I do however find the psychological presentation of SBF a bit shallow. 

(Background info is that SBF started his career as a trader at Jane Street Capital, a high frequency trading firm).

When hiring a trader, you are looking for someone who balances drive and analytical thinking (personality state 2 below). You also want a trader who is emotionally connected to their work. In high-frequency trading, the software and the math trade, not the person. As a result, the trader must have "empathy" for their models and their algorithms, as these do the trading. These models and algorithms are complicated, and need a clear mind to be felt. For this reason, when hiring an electronic trader, one might look for people who can "shut out normal emotions" and connect to the "made-up emotions" of markets, models and machines. In a previous post this year, I mentioned this "shut out emotions" state as a "preserving a depleted emotional space" state (state 1 below).  

A depletion region in semiconductors is an area where electrons have been "pushed away", resulting in a region that needs electrons to be "normal" again. I use the term depleted emotions here in a similar manner: it is a state of mind that lacks emotions, and only becomes active emotionally with an inflow of emotions from others, or through a focused effort of the person. With no inflow, the amount and the depth of emotions stay minimal.

Let's call a "Goal Oriented-Analytical-Preserving emotional Personality" (GOAPP) a person who seeks to balance their personality state of analytical goal orientation with their need to preserve an almost non-emotional (e.g. intellectual) view of the world.

Most of us react to social moments, and many of us buy things; state 4 below captures the balance of the two. Most of us also resonate with our emotions (state 3 below). 


There is a "Follower-Consumer-Emotional Personality" (FCEP), which sits at the opposite corner from GOAPP. These two personalities can co-exist in multiple manners. From a GOAPP perspective:
  1. FCEP might be embraced as the ever elusive normality.
  2. FCEP might be rejected in a form of self-handicapping denial.  
My feeling is that SBF falls into the second category above: a goal-oriented analytical person who in part denies their own emotions, and the emotions of others, as well as the normality of others being part of society, both as consumers and as followers of social trends. This explains in part the crazy interviews: having denied himself the normality of emotions and social belonging, he lived out his fantasy of rational normality, which really does not end up containing much.

All original content copyright James Litsios, 2023. 


Sunday, January 31, 2021

Market participants, structural dysfunctions and the GameStop event

At least eight dimensions can be used to qualify the way financial market participants trade:

  1. Transaction speed from slow to quasi-instantaneous.
  2. Transaction rate from rare/infrequent to quasi-continuous.
  3. Selection of transactions from disorganized to very organized (e.g. backed by mathematics and research).
  4. Transaction's future obligations from no future obligations to with future obligations (e.g. backed by personal wealth).
  5. Time scope of transaction's obligation from immediate (e.g. transfer cash) to long term (e.g. payout bond coupon after 30y).
  6. Number of participants on the "same side of the transaction" from small to large.
  7. Size of single transaction from small to large.
  8. Influence of fundamental/"real world" properties of traded contract from none to very important.

In the context of the GameStop event we note the following: 

Traditionally,  retail investors execute transactions:

  1. Slowly
  2. Infrequently
  3. In a disorganized way
  4. With no future obligation
  5. With only immediate obligation
  6. As part of a very large group of similar participants
  7. On transactions of small size
  8. With more care about the image/brand of the traded products than the fundamentals

To differentiate, one type of hedge fund's transactions might be qualified as:

  1. Quasi-instantaneous
  2. Quasi-continuous
  3. Organized with algorithms and machine learning
  4. Including much future obligation
  5. With future obligations up to ~1Y
  6. As part of a small group of similar hedge funds
  7. On transactions of small size and of complementary transactions of larger size.
  8. With a combination of caring only about short-term machine-learned properties and some caring about longer-term fundamentals

And these differ again from at least thirty other market participant profiles, going from broker to settlement bank or insurer. This last point is important: it is not "just" about the retail investors and certain types of hedge funds; there is a whole "ecosystem" of financial interdependence out there. Also, coming back to the hedge fund example: the strong future obligations are important, as the hedge funds "have promised" something, in this case to give back the GameStop shares they have borrowed, or to pay out options that depend on the high value of the stock.
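
As a purely illustrative aside (my own encoding, not part of the original analysis), the eight dimensions can be written down as a small data structure, with the two profiles above as example values:

    // Hypothetical C++ encoding of the eight dimensions; the enum names and
    // granularity are deliberate simplifications of the lists above.
    enum class Speed { Slow, QuasiInstantaneous };
    enum class Rate { Infrequent, QuasiContinuous };
    enum class Organization { Disorganized, Organized };
    enum class FutureObligation { None, Significant };
    enum class ObligationScope { Immediate, LongTerm };
    enum class GroupSize { Small, Large };
    enum class TransactionSize { Small, Large };
    enum class FundamentalsInfluence { None, Some, Important };

    struct ParticipantProfile {
        Speed speed;
        Rate rate;
        Organization organization;
        FutureObligation obligation;
        ObligationScope scope;
        GroupSize groupSize;
        TransactionSize size;
        FundamentalsInfluence fundamentals;
    };

    // Traditional retail investor, per the first list above (brand matters
    // more than fundamentals, hence only "Some" fundamentals influence).
    constexpr ParticipantProfile retail{
        Speed::Slow, Rate::Infrequent, Organization::Disorganized,
        FutureObligation::None, ObligationScope::Immediate,
        GroupSize::Large, TransactionSize::Small, FundamentalsInfluence::Some};

    // One type of hedge fund, per the second list above.
    constexpr ParticipantProfile hedgeFund{
        Speed::QuasiInstantaneous, Rate::QuasiContinuous, Organization::Organized,
        FutureObligation::Significant, ObligationScope::LongTerm,
        GroupSize::Small, TransactionSize::Small, FundamentalsInfluence::Some};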

Now then, the key changes in the GameStop event are the retail investors' GameStop transactions becoming:

  1. Slow
  2. More frequent
  3. Very organized: buy only, "buy for ever"
  4. With no future obligation
  5. With only immediate obligation
  6. As part of a very large group of similar participants
  7. On transactions of small size (though probably with a bigger average)
  8. Caring about "beating the hedge funds", making a killing on the rising share price, and the charismatic "gaming product" brand, with absolutely no care for fundamentals.
The "very organized on one side" is the killer ingredient here. All trading strategies are a form of balancing act, and all participants assume some amount of future market behavior will support their trading strategy. The traditional retail investors assume that someone will be there to buy back what they have purchased. Hedge funds assume they will be able to take advantage of the different needs and random nature of the different market participants, and more importantly, they assume that they can rebalance their risk "on the fly" within their trading strategy.  One can visualize a hedge fund as a bicycle that is pulled to the left or the right as trades are made, and that actively needs to rebalance from time to time by making selected trades, to avoid "falling over". However, if all the trades are "one sided", and worse, they are all counter the initial assumptions of the hedge fund, things go bad quickly, as the hedge fund is mostly only able to make trades that imbalance it further, leading to it hitting its financial limits, and either being acquired by a bigger fish, or going bust. 

The flash crash was another example of "structural dysfunction" in the market: prices plunged because most quotes were pulled. With the GameStop event, prices exploded because a large enough group of participants suddenly decided only to buy and hold. 

There are many markets with an imbalance of buyers and sellers. What is new here is that, in a market with future obligations, a disproportionate number of participants suddenly decided to actively participate only on one side of the transaction (here, buying). That is a lesson hedge funds will integrate, at the cost of limiting their leverage.

All original content copyright James Litsios, 2021.

Saturday, December 28, 2019

Beware of the language trap in software development

Consistent design of multiple views of distributed state

The notion of "state" is key to software development. For example, we talk about stateful and stateless designs. Still, there are multiple notions of state: there is the stored state, the communicated state, the received state, the processed state. Therefore, when we make a statement like "stateful", we are always referring to just one "notion" of state and not the others. This "abuse of the stateful view" was historically not a major issue. Yet with the rise of processing speeds and truly distributed systems, these multiple views of state increasingly exist "simultaneously", and this causes problems. For example, we might model a trading system as four different states: the state of trading action messages received by the exchange, the current state of the exchange, the state of confirmations received from the exchange, and the current state of our trading system. A question is then: in our concrete code, from which state views should we build our design, and how do we do it consistently? We now know this is the wrong question. A trading system is a distributed system; all views of state are real and have business significance. Therefore the question is not which state to choose, but how to design all state views/models as one coded design, in a cohesive manner.

We now know we can use higher-order types to do this, to capture the multiple views of state "as one model" within a distributed computation. It took me a long time to understand this. One reason is that only C++14 had the explicit template specialization needed to do this purely in C++. I had experimented with Prolog to generate complementary C++ code models in the mid-1990s (and was not convinced), and in the early 2000s I tried to use generative techniques with dynamically typed functional programming (I wrote about a generative programming conference in Dr. Dobb's at the time). It was only when I picked up statically typed FP (initially F# in 2006) that I understood how it could be done (e.g. with monad-comonad adjunctive joins, as hinted here pre-Elevence).
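
To make this more concrete, here is a minimal C++ sketch of the "one model, multiple views" idea (my own simplified illustration with assumed field names, not the original design, and it does not attempt to capture the monad-comonad structure): a single order model is parameterized by a view tag, and explicit template specialization supplies the view-specific parts, so every state view shares one definition.

    // Illustrative sketch: the view tags and field names are assumptions,
    // not the original Actant/Elevence design.
    #include <cstdint>

    // Tags for the different views of the same distributed order state.
    struct SentView {};       // what we transmitted to the exchange
    struct ExchangeView {};   // what the exchange confirmed back to us
    struct LocalView {};      // what our trading system currently believes

    // View-specific data, declared once and explicitly specialized per view.
    template <typename View> struct OrderExtra;
    template <> struct OrderExtra<SentView>     { std::uint64_t sendTimestampNs; };
    template <> struct OrderExtra<ExchangeView> { std::uint64_t exchangeOrderId; };
    template <> struct OrderExtra<LocalView>    { std::int64_t  expectedPosition; };

    // The single shared order model; every view reuses the same core fields.
    template <typename View>
    struct Order {
        std::int64_t price;
        std::int64_t quantity;
        OrderExtra<View> extra;  // only this part differs per view
    };

    // Reconciling two views is then an ordinary function over two
    // instantiations of the same model.
    bool pricesAgree(const Order<LocalView>& local, const Order<ExchangeView>& exch) {
        return local.price == exch.price;
    }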

Business success

The reason I bring up "managing distributed states" in this posting is that I was reflecting on the challenge of success across my many software developments. This made me think back to my experience in developing a market making system. (I wrote about Actant in a recent posting.)

In the early 2000s, market and regulatory pressures meant that many new derivative exchanges were being created. This resulted in many "easy" new business opportunities to sell an algo trading and market making system, as one "just" needed to connect the product to these new exchanges. However, what we did not foresee was that this "lively" exchange business development would also have the exchanges competing "lively" among themselves, updating their trading and quoting APIs at an unprecedented rate (e.g. every quarter). In parallel, we consistently had mismatches in our different views of distributed state, for the reason mentioned above, but also because we were exposing different state views that were easy on our end-users but broke distributed consistency. The result was that we spent most of our development resources maintaining high-performance exchange connectivity and managing a hard-to-manage "mismatch" of state models, with few resources left over to be strategic.

The language trap

One of my Actant partners once said: "C++! It was C++ that hurt us the most." (Again, Actant is mentioned here.) By that he meant "stick to C". But C is a subset of C++, right? So how can C++ be an issue?

Here is a timeline that will help us:
  1. Pre-C++98 (templates): we used macros, generative and model-driven programming to write consistent models that ran distributed programs as multiple and consistent views of state.
  2. Post-C++98: we used templates, type-unsafe models, and redundant code to write distributed programs with often inconsistent views of state.
  3. Post-C++14 (explicit template specialization): we used templates and explicit template specialization to write consistent models that run distributed programs as multiple and consistent views of state.
Because we chose to adopt the rules of C++, and because we did not understand that by doing so we could no longer code the single consistent model of multiple views of state that we needed for distributed computing, we got "caught" in the 16-year gap of formal inconsistency that C++98 introduced! 

I write "formal inconsistency" because nothing in C++98 says that you couldn't continue to use macros, generative and model driven programming to get around the limitations of templates. The thing is "we do not know what we do not know", so we did not know that it would have been best to ignore the "formal" template solution and stick with our old technics. And that is example of a language trap.

A language trap is when developers choose to adhere to limiting language semantics without understanding that they are "shooting themselves in the foot": by adhering to these limiting language rules, they are no longer able to "get things right". In some sense, a language trap is a technical form of semantic barrier.

Unfortunately, again, "we do not know what we do not know". So we may or may not be in a language trap; we do not usually know if this is the case. In the early 2000s, we did not realise how much of a bad choice the purist C++ approach had been.

My learning from that period was: never let a language nor language purism restrict you, because you do not know how it is harming you until it is too late. The safer and more future-resistant approach is to deconstruct "a single language", and be purist in how you formally compose multiple language constructions. An advantage of this approach is that it may also be applied consistently across multiple languages.

All original content copyright James Litsios, 2019.

Saturday, November 02, 2019

QT Software, the Troika, to insights on co-founding Elevence

Mattias Jansson, Lukas Lüthy, James Litsios

In the summer of 1996, I met Mattias Jansson and Lukas Lüthy (Mattias left and Luki middle above; I am on the right), as well as Adrian Lucas (left below), and in what seemed to be less than 15 minutes they were inviting me to join them in setting up a software company to develop a non-proprietary derivative market making system. Another key partner in this story is Stig Hubertsson (not shown). And so we found ourselves co-founding QT Software, which was later renamed Actant.


I bring up this past in this blog to cheer excellent individuals and a great team, and to give you insights into how QT and Actant led me to co-found Elevence and develop a unique smart contract language (later acquired by Digital Asset).

First the cheering: even now, Actant's software is known to be the fastest non-proprietary derivative trading software. And while I don't have the numbers, I could well believe that it competes well with proprietary derivative trading software on classical hardware (meaning CPUs and not FPGAs).
And we might mention the golden years of the early 2000s, when two of the top three European derivative market making firms were relying on Actant's product to do their business.

Mattias, Luki, and I were the core of the company's engineering. Like-minded, we worked much as one, relying on voting to resolve the occasional disagreement, and along the way shaped a unique company with its specific market niche and customers.

That first picture above was taken in early 2018. Two hours later we were out of the house watching the fireman put out the gas barbecue grill fire. Luckily no one was hurt, the gas tank did not explode, and the house suffered only minimal damage. We then cooked our steaks in the oven and enjoyed an evening of great friendship.

The second picture is a LIFFE Connect advertisement in the Financial Times!

In 1998, QT software was running on LIFFE Connect and Eurex. For the LIFFE Connect opening, which was the first electronic access to the LIFFE exchange, we were asked to help publicise the new exchange offering. Adrian is on the left of that picture, and I am on the right. Funny story: we showed up for the photoshoot at the exchange, and they offered to walk us down to the trading floor, but then proceeded to disallow Adrian's brown shoes, as only black shoes were allowed there!

One insight I gained in market making was how to model distributed systems with "adjoint" relations. The date was October 2008, a difficult period because of the impact of the 2007 crisis on our customers. I was lying in bed at our company's yearly executive get-together at the Palace Hotel Lucerne, and it occurred to me that one could model the market maker as some form of adjoint dual to the market of buyers and sellers. Years later I used that insight to co-found Elevence Digital Finance, where we developed a language of rights and obligations sustained by a dual, adjunction-like relation to a blockchain ledger (see BearingPoint's announcement when Elevence was acquired by Digital Asset).

Anecdote: to whom do you go in 2021 for a cryptocurrency derivative market making product? Actant of course; they are a leader in non-proprietary cryptocurrency derivative market making systems!

All original content copyright James Litsios, 2019.

Sunday, February 08, 2015

Is order spoofing ok?

Spoofing is the act of generating orders to buy or sell something on an electronic market, and then immediately cancelling these orders so as not to trade. Spoofing is used to generate a burst of activity in the hope that it will cause other algorithmic trading systems to react in a sub-optimal manner, allowing the spoofer to follow through with profit-making trading activity. Spoofing happens when you enter orders with no intention of letting those orders trade, which is different from entering orders into a market and then cancelling them because you have changed your mind. There is no change of mind in spoofing, and therefore it might be seen as a form of deception. Spoofing works because other trading systems are waiting to see certain market prices before they change their behavior. Spoofing can also slow down the market, as it may overload the exchange's network or matching engine; such a slowdown may put other market participants at a disadvantage. The question is: is spoofing ok?

Deception is never far from risk taking in business. Taking risks implies building a mental abstraction that works out the pros and cons of future scenarios, and that concludes that certain actions are more profitable than others. Sharing this understanding with others might well negatively affect your future outcomes. A first form of deception is to avoid public debate around subjects that would develop information against your interests.

Is business deception morally wrong?

The simple answer is yes, if only because I am unhappy when something I have purchased breaks because of its "fragile design". This emotional state is a sign that a part of me thinks that these companies are behaving badly. Yet the full answer is less obvious.

All companies that take risks are potentially being deceptive. Again, the issue is that minimizing risk and maximizing profit leads to minimizing information disclosure, and keeping information "quiet" is a form of deception. The thing is that business is not possible without taking risks or making profits. In fact the whole economy is built on embracing risk and profit, and therefore also built on embracing certain forms of deception.

That our way of life leads to deception, and that deception can hurt people, is a reality we would be happier without. And yet, as the issue does not go away, people have found ways to deal with it. When confronted with deception, people will either:
  • Ignore it, and act as if it does not exist. 
  • Try to keep it from happening by introducing rules and legislation. 
  • Accept it as a fact of life. 
  • Work around it.
  • Find flaws in the method of deception, and take advantage of these flaws. 
  • Join in, and deceive others in a likewise manner.
The magic of human beings is that for each “deception situation”, each of us will take one of these approaches, and this choice will be based on personal values. We will either take the escapist approach of ignoring the deception, or take the constraining approach of trying to limit future deception, or take the liberal, somewhat darwinistic, approach of accepting the deception as “a natural part of the system”, or when possible work around or take advantage of the deception, or finally become a deceiver ourselves. Note that the constraining approach has two variants:
  • Rules or laws are set up that disallow a form of deception. 
  • Rules or laws are set up that limit behavior with the goal to limit the ability of deception. 

Given these remarks, what should be done about spoofing in financial markets? Exchanges prefer either to ignore spoofing or to ban it, disallowing spoofing as a form of deception. I think that is wrong: spoofing should be allowed, and throttling mechanisms within the market APIs are the best way to deal with exchange overloading caused by trading behavior. Here is my reasoning.

Market prices are the result of an equilibrium between buyers and sellers. They are the outcome of competing ideas, competing beliefs, competing interests, competing focuses, competing technologies, etc. Someone or some algorithm may want to buy, others and other systems want to sell; they are all taking risks, often thinking differently, and the outcome is market activity.

Spoofing is an ultra-short-term strategy that provides a healthy counterbalance to strategies that are based on speed, and only on speed. The thing is, among all those traders and trading machines are those that base their decision making more on current market prices than on real-world activity. These market participants are taking short-term risks, risks that often lead to profits because of their extreme speed. These are profits taken from others competing on the short term, and also profits taken from those taking a longer-term view. In this jungle of "eat or be eaten", it is perfectly right that spoofing makes profits by disrupting short-term strategies.

Spoofing that makes a profit by physically overloading the exchange's matching engines is wrong. It is wrong because it puts the whole exchange at risk, both in its operational integrity and in its ability to be master of its rules and regulations. It is not right to allow spoofers to purposely affect the processing behavior of other participants' orders, for example by purposely causing the exchange to slow down and order queues to build up. Yet here again we are confronted with the difficulty of distinguishing between behaviors that are purposely trying to kill exchange performance and behaviors that are directly business driven, yet end up impacting exchange performance because of the amount of trading activity they generate. Therefore I agree that exchanges should introduce constraints that protect their networks and execution engines, but care must be taken in how this is done. As mentioned above, two approaches are possible: one can simply "outlaw" spoofing, and hope that it is not done, and if done, hope that it will not harm exchange infrastructure; or one can change things to keep spoofing from impacting the exchange's execution. Throttling dangerously high trading activity is the natural way to approach this problem. It is perfectly reasonable that exchanges limit trading activities that disrupt exchange integrity by implementing throttling within the trading APIs. Therefore, it is right for the exchanges to configure API throttling limits to limit spoofing activity that would impact too strongly on the exchange's network or matching engine. It is wrong for exchanges to simply ban spoofing, as that would give an unfair advantage to participants that apply trading strategies that are hurt by spoofing. 
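
As an illustration of the kind of API-side throttling argued for above, here is a minimal token-bucket sketch in C++ (the rate and burst numbers are illustrative assumptions, not any real exchange's limits): every order or cancel message consumes a token, tokens refill at a fixed rate, and a participant spoofing aggressively simply runs out of tokens before it can load the matching engine.

    // Illustrative token-bucket throttle; the limits below are made-up example numbers.
    #include <algorithm>
    #include <chrono>

    class OrderThrottle {
    public:
        using Clock = std::chrono::steady_clock;

        OrderThrottle(double messagesPerSecond, double burstCapacity)
            : rate_(messagesPerSecond), capacity_(burstCapacity),
              tokens_(burstCapacity), last_(Clock::now()) {}

        // Returns true if the message may be sent now; false means reject or queue it.
        bool tryConsume() {
            const auto now = Clock::now();
            const double elapsed = std::chrono::duration<double>(now - last_).count();
            last_ = now;
            tokens_ = std::min(capacity_, tokens_ + elapsed * rate_);  // refill since last call
            if (tokens_ < 1.0) return false;                           // over the allowed rate
            tokens_ -= 1.0;
            return true;
        }

    private:
        double rate_;      // sustained messages per second
        double capacity_;  // maximum burst size
        double tokens_;
        Clock::time_point last_;
    };

    // Example: a limit of 100 messages per second, with bursts of up to 20.
    //   OrderThrottle throttle(100.0, 20.0);
    //   if (throttle.tryConsume()) { /* forward the order to the matching engine */ }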

Ps: Thanks to Adrian for remarking that deception may lead to deception.

All original content copyright James Litsios, 2015.

Sunday, June 09, 2013

Two algorithmic trading software requirements

This blog post is about two basic requirements when asking developers to write a very fast algorithmic trading system.
Designing a fast system seems initially an easy task: you hire developers who are in touch with the physical nature of computers and networks, and you ask them to write fast code.
Nevertheless, there are difficulties. The first is that allocating and freeing dynamic memory slows you down, so ideally you want to do without dynamic memory. Yet developers usually have a hard time with this requirement. One reason developers are insecure about doing this is that they have been brought up with languages that only work with dynamic memory. The other reason is exactly that language issue: modern languages do not make it easy to program without dynamic memory allocation. Therefore it is best to present this requirement differently, in a form that is much easier to swallow, which could be as follows:
Make sure to preallocate your dynamic memory before activating your algos, and minimize any later reallocation.
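
A hedged sketch of what this can look like in C++ (the Order fields and the fixed capacity are illustrative assumptions): a fixed-capacity order pool where all memory is allocated once at startup, so the hot path only moves indices around and never calls new or delete.

    // Illustrative fixed-capacity pool; the Order fields are placeholder assumptions.
    #include <cstddef>
    #include <vector>

    struct Order {
        long price;
        long quantity;
        bool live;
    };

    class OrderPool {
    public:
        explicit OrderPool(std::size_t capacity) {
            slots_.resize(capacity);           // the one allocation, done at startup
            freeList_.reserve(capacity);
            for (std::size_t i = 0; i < capacity; ++i) freeList_.push_back(i);
        }

        // Hot path: no dynamic allocation, just index bookkeeping.
        Order* acquire() {
            if (freeList_.empty()) return nullptr;   // pool exhausted: handle explicitly
            const std::size_t i = freeList_.back();
            freeList_.pop_back();
            return &slots_[i];
        }

        void release(Order* o) {
            freeList_.push_back(static_cast<std::size_t>(o - slots_.data()));
        }

    private:
        std::vector<Order> slots_;
        std::vector<std::size_t> freeList_;
    };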
Now this first request seems a bit heavy, because it tends to use a lot of memory and brings the code back thirty years. Yet, as you expect your very fast system to make lots of money, you need to start by assuming you have no limits to your infrastructure costs, and are willing to pay for terabytes of memory if needed. In addition, languages like C++, with a wise use of templates, or functional languages with mutability (like OCaml or F#), can actually do a good job with this requirement. More importantly, this requirement leads me to the next requirement, but first I need to repeat my "motto" in trading, which is:
"It is easier to make money than to keep it".
Which leads me to remind you that there are two very distinct activities in trading: making a profit and trying to prevent a loss. This may sound pretty trivial, but it is a key concept, and not all developers understand it. (I have reviewed enough trading code to know that.) What it means for an algo system is that as much effort and thinking must be put into the code that actively tries to make money as into the code that actively tries to prevent losing money. As a requirement, I would state it as:
Given that trading algos can be in three different states (profit making, idle, and preventing loss), put as much effort and optimization into the transitions from each of these states to another as you put into staying within each state.
Said emotionally:
I want no delay when I slam my foot on the brake or on the gas pedal!
If you have written a few complex real-time systems, you may realize that this is not a trivial requirement. In fact, it is even harder to implement than the first requirement. But I can tell you that it is easier to achieve if you also worry about the first "no dynamic allocation" requirement.
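
Here is a minimal C++ sketch of that second requirement (the state names and the no-argument action signature are my own illustrative simplifications): the three algo states and every transition between them are wired up before the algo goes live, so hitting "the brake" is a preallocated table lookup and a single call.

    // Illustrative three-state algo machine; states and actions are simplified assumptions.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    enum class AlgoState : std::uint8_t { ProfitMaking = 0, Idle = 1, PreventingLoss = 2 };

    using TransitionAction = void (*)();  // e.g. cancel quotes, flatten position, re-quote

    class AlgoStateMachine {
    public:
        // All transition actions are registered before the algo is activated.
        void setTransition(AlgoState from, AlgoState to, TransitionAction action) {
            actions_[index(from)][index(to)] = action;
        }

        // Hot path: a table lookup plus one call; nothing allocated, nothing searched.
        void switchTo(AlgoState to) {
            if (TransitionAction a = actions_[index(current_)][index(to)]) a();
            current_ = to;
        }

        AlgoState current() const { return current_; }

    private:
        static std::size_t index(AlgoState s) { return static_cast<std::size_t>(s); }
        AlgoState current_ = AlgoState::Idle;
        std::array<std::array<TransitionAction, 3>, 3> actions_{};  // zero-initialized: no action
    };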

All original content copyright James Litsios, 2013.