Thursday, December 29, 2011

Surfing for news in numerical methods

I spent two hours yesterday evening surfing for "fun things to read". My theme was "what is new in numerical methods?". This is what I found:
From Random Matrices to Stochastic Operators (Alan Edelman and Brian D. Sutton)
PNL is a numerical library for C and C++
  • I really put this here because it has hypermatrices. The truth is I have not looked into new numerical math libraries in ages, but if I were to use one I would like it to have multi-dimensional matrices that I could slice and dice the way I like. I am not sure this library is up to all that, but it is used in the next article.
A Parallel Algorithm for solving BSDEs - Application to the pricing and hedging of American options (Céline Labart)
  • What I liked here is that the extrapolation polynomial is based on a sparse polynomial chaos approximation. Not to mention that the rest of the work was nice, although all put together I really do not know how efficient the implementation is.
Parametric Optimal Design Of Uncertain Dynamical Systems  (Joseph T. Hays)
A Tour of the Jungle of Approximate Dynamic Programming (Warren B. Powell)
Solution of Large Systems of Equations Using Approximate Dynamic Programming Methods (Dimitri P. Bertsekas, Huizhen Yu)
Automatic Differentiation methods in computational Dynamical Systems: invariant manifolds and normal forms (Àlex Haro)
Optimization-based Approximate Dynamic Programming (Marek Petrik)

Wednesday, November 16, 2011

The beauty and the beast: explicit and implicit types

Working with Haskell has brought on an unexpected question:

  • Is it more productive to work with implicit types or explicit types?
By this I mean: “Should major types be mostly inferred, or mostly specified with type signatures?”

The bizarre fact is that you may well use two different parts of your brain to work with inferred types or with “defined” types!  Here is the reasoning:

Programming is an incremental process: changes are made locally and then overall consistency is assured. When programming around types, changes are made either to type specifications or to expressions which affect types. The two kinds of changes lead to two styles of programming: the first is to start by changing a type definition or type annotation; the second is to start by changing an expression, which changes the inferred types and possibly requires a fix to a type signature.

The bare minimum of type annotation is not even to provide arguments to “top level” functions, as in let f = fun a b -> … , where we only know that the function f has two arguments because that is inferred from its body. This style may sound contrived, but it is a way to work around the absence of type classes in F#.
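To make the contrast concrete, here is a minimal Haskell sketch of the two styles (a toy example of mine, with hypothetical names):

-- Explicit style: the signature is the starting point; the body must conform.
addExplicit :: Num a => a -> a -> a
addExplicit a b = a + b

-- Implicit style: no signature at all; the type is inferred from the body,
-- much like the F# `let f = fun a b -> ...` above.
addImplicit a b = a + b

In the first style you edit the signature and make the body conform; in the second you edit the body and let the inference tell you what changed.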

So how does this affect productivity? Mental work is much about having a good inner dialog. Your mind takes a very static view of the program’s design when you worry about type signatures, whereas when you worry about the result of type inference, you are effectively “re-running” the program in your head. It is not the same thing! And in many cases I find the second thought process, the reliance on inferred types, the more productive way to work.

Getting past the Peter Principle

Today I was asked about X. I said, "You know the Peter Principle?"

This answer was more an amusing way to explain a deeper issue, and not really what I thought about X.
Although the Peter Principle says "in a hierarchy every employee tends to rise to his level of incompetence", the real story is that we all grow past some of our competence zones, but we also build new ones. A key competence for managers to acquire is to have enough confidence to pull their ego back and let others shine; it is to understand that management is about helping others to succeed.

Highly skilled employees that are given management positions will fail or succeed on their ability to understand that it is not "about them", but about the project and the people they are asked to manage.

Tuesday, October 18, 2011

Stream oriented and parallel programming

Good software design comes down to applying simple patterns; for many years now my simple pattern has been stream oriented programming, and in functional programming, monads. In this post, I will introduce the concept of stream oriented programming by showing the value it brings to parallel programming.
Parallel programming design has three basic patterns:
  1. Sequential execution
  2. Parallel execution
  3. Hierarchical execution (or “compositional”, “encapsulating”; please help with the name)
The first two are classic: operations happen sequentially along one execution flow, or operations happen in parallel along synchronous or unsynchronized execution flows. One way to present the third pattern is through hierarchical state machines (HSMs), where each state can be yet another HSM, recursively to any depth.
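As a minimal sketch of that recursion (my own illustration, in Haskell), an HSM is naturally a recursive data type:

-- Each state may contain nested machines, recursively to any depth.
data HSM label = State label [HSM label]

-- The example system described below, as one nested machine:
machine :: HSM String
machine = State "machine"
            [State "application"
               [State "socket"
                  [State "packet" [State "message" []]]]]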

Hierarchical patterns of execution are important because they really express higher order properties of the solution space. Sequential and parallel patterns are good: they do work, even if they make things more complex; but they do not “shape” the specific nature of an algorithm; hierarchical patterns do. A good way to bring in the notion of stream oriented programming is to describe the same example hierarchical system in two different ways: state machine oriented and stream oriented:
  • The “hierarchical state machine” oriented way: the system is a machine, with an application, with a socket connection that transports packets of messages.
  • The “stream oriented” way: there is a machine, it starts up, it starts an application, which opens a socket, which sends a packet that contains multiple messages. Later it sends more packets with more messages, the socket closes down, then starts up again, to later close down again with the termination of the application. The application is restarted, the socket starts up, etc. Again later, the machine is rebooted, and the pattern continues.
Note how the hierarchical stream pattern is built of many sequential patterns used at multiple levels: the messages are sent sequentially, and so are the packets; the application is run multiple times, the machine is restarted time after time. Also, the messages, packets, application, etc. can all be seen as happening synchronously in parallel. Yet looking at the system as separate sequential and parallel patterns is not good enough: from a state machine perspective it would be like treating each element of the system as a separate state machine, which they are not. What is really happening is that the system is one big sequence, and we choose to see this sequence as a structure of levels of hierarchies. It is exactly this notion of sequence “over a hierarchy” that the hierarchical execution pattern captures.

Most classical programming languages are built around the state of data, while “execution” state is implicit and ideally hidden. Indeed, one of the first efforts in program development was to “hide” the program counter, first with subroutines and then with structured programming. A side note is that in many ways what separates software from hardware is the notion of “execution state”: hardware makes that execution state explicit.

Execution state is central to the notion of shared access to resources: a locked resource really defines a specific exclusion rule for the execution state. SQL and its variants extend the notion by implicitly managing these exclusion rules for shared resources. Multi-threaded programming is supported by different locks and basic thread operations. Yet still execution state is hidden. And for good reason: anybody who has attempted to use thread suspension as part of their design knows how impossibly difficult it is to reason about certain execution states.

Unfortunately, the “hidden” nature of execution state also makes it very hard to structure and scale the execution of multithreaded and parallel programs. So while a good architect may map certain computations to thread resources, it is hard to do this in a systematic way; and even then, it is hard to foresee how well a parallel design will perform. The solution taken by GPU processors has been to impose very simple execution models on the developer. The execution state is still implicit, but with GPUs it has an imposed structure: we know that the execution states of the many threads all advance “in line”. It has to be said that languages like Erlang have taken a similar approach, but they too impose strong restrictions on the execution model, limiting their applicability.

An attentive developer will have noted the steady but sure growth of stream oriented and functional style programming as a “solution” to the difficulties of multi-threaded and parallel programming. Functional programming has been around for fifty years and yet would previously never have been recommended for efficient parallel execution, so clearly there is something more than “just” functional programming in making it a parallel complexity slayer. The answer lies in explicitly managing the state of execution! This deserves a little explaining.

One way to define a stream is as follows:
  • Repeatedly triggered by an input
  • Performs I/O and computation sequentially
  • Local readable and writable state
  • Within a constant, read only, environment
  • Drives an output
(Except for the first item and the notion of synchronicity, all of the above are optional.)
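As a hedged sketch, this definition translates almost word for word into a Haskell type (all names here are mine): a stream reads a constant environment, owns a local state, and turns each triggering input into an output, possibly doing I/O along the way.

newtype Stream env s i o =
  Stream { onEvent :: env -> s -> i -> IO (o, s) }

-- Driving a stream over a list of inputs threads the local state along.
runStream :: Stream env s i o -> env -> s -> [i] -> IO [o]
runStream _   _   _ []     = return []
runStream str env s (i:is) = do
  (o, s') <- onEvent str env s i
  os      <- runStream str env s' is
  return (o : os)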

Now here is the key insight: a stream has an overall state of execution! To be more precise, the repeatedly triggered executions on stream events have execution states, and so does the overall “bigger” life cycle of each stream. This is just like a socket, which can be opened or closed and has individual packets running over it.

The functional style programmer may have noted that the notion of a stream is not far from the notion of a monad. Indeed, monadic programming shares very much this ability to have two execution states: the state seen as part of each individual monadic expression, and the whole execution state when “running” the monad. It is this property that allows monads to be a basic building block for parallel programming.

Looking again at an overall system: at any given time a stream oriented system can be seen as a hierarchy of stream flows, and each of these streams is associated with an execution state. The key insight is that an optimal use of the parallel resources of a system is found by optimally assigning these execution states to the parallel resources of the system.

You might ask: “Why was it necessary to introduce streams? Can we not write parallel systems with only hierarchical state machines?” As we have seen in the system description above, HSMs and hierarchical stream patterns are really very similar. In fact the stream pattern can be seen as the edges of the transition graph of the state machine (it is a form of data differentiation of the hierarchical state machine pattern). A graph usually has many edges; therefore by assembling many stream oriented patterns we can build equivalent HSMs. We now know that stream patterns have two levels of execution state, that good parallel design is about smartly assigning execution states to parallel resources, and that by combining many stream patterns we can build hierarchical state machines. The result is that stream oriented programming is more modular and flexible than direct HSM-style programming:
  • Each stream can be locally assigned an execution state,
  • The compositional property of streams allows them to adapt both to algorithms and to parallel topologies,
  • The dynamic nature of their assembly allows them to be reconfigured quickly.
 

Monday, July 25, 2011

Haskell and my summer work

My posting is at an all-time low; I am taking a time out this summer. Nevertheless, I have been doing a few things:
  • I am slowly making progress on a posting presenting monads and zippers, building on concepts from continued fractions.
  • I am teaching stream oriented programming to C++ developers as part of my architect job. I'll tell you about that, because I have realized how few people really understand how to do it.
  • My big job on the way to and from work has been to rewrite F# code in Haskell.
About Haskell: it is so easy to use type classes in Haskell that I am coming to think it was good to go deep first in F#, which does not have them. My reasoning is the following: I can see both sides of the solution. Without type classes you need to "feel", in your mind, the type invariants that you create with your recursions. With type classes, you specify the invariants and then you program against them. Clearly the second method is more efficient, but it does not make you smarter!

Anyways, the low posting rate will continue this summer. It is better to enjoy time with your family than to post blogs. Happy summer times!

Wednesday, June 15, 2011

From F# to Haskell

I am learning Haskell by converting F# code: I take it piece by piece from Visual Studio and copy it into the Leksah IDE. Then I re-edit it until it compiles, and then refactor it for style. These are the basic style changes:
  • Equational style: A few years ago I prototyped some decently complex code in Q, so I am unfazed by the equational style of Haskell. (Equational style is when a function is built from multiple declarations, each declaration covering specific variants of the function’s arguments; see the sketch after this list.) The style is more or less efficient depending on the ratio of context arguments to “local arguments”, so I would not argue whether the style is better or worse than single-definition declarations.
  • “where” declarations are great. Their top-down style often reads better, and they also result in less deep indentation.
  • The “let indentation” rule is a drag: needing to align right after the beginning of the left-hand side implies 5 or more indentation spaces. You can get around that by dangling the “let” alone on its line, but that is no real gain. Luckily, in many cases the “where” clause saves the day.
  • Non-ASCII characters like λ are a bit strange: I tried them but found them hard to read on my little laptop. I have now reverted back to ASCII characters.
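Here is the sketch promised above: a small toy example (my own, not from the converted code) of both the equational style and a “where” clause.

-- Equational style: one declaration per variant of the arguments.
take' :: Int -> [a] -> [a]
take' _ []     = []
take' 0 _      = []
take' n (x:xs) = x : take' (n - 1) xs

-- "where" reads top-down: the algorithm first, the details after.
hypotenuse :: Double -> Double -> Double
hypotenuse a b = sqrt sumOfSquares
  where sumOfSquares = a * a + b * b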

Tuesday, May 31, 2011

Moving to Haskell (from F#)

I am a bit busy these days helping out with architecture and agile development for a large ticker plant and calculation farm. Therefore when I sit down to program (using my slack time!), it needs to deliver bang for the buck. I have my sturdy little Dell X1 laptop, Visual Studio 2010 and F#; and although I have suffered from F#'s limitations, I have not been blocked. Yet now I am stuck: my needs are getting seriously heavy and the F# type system cannot cope. One issue is that the monadic programming style in F# needs to be defined with the help of a class, and this class needs explicit types; the other issue is that there is no explicit way to define a type class, or types with localized polymorphic definitions.

In a non-monadic framework I have been bootstrapping my polymorphic types on start-up. The trick is to wrap all the functions of a module into one big function whose arguments are the functions that constrain the types, as the parameters to a type class would; the big function returns one big tuple or record with all the functions of the module, now “bound” to a type structure (you can still leave some types unbound to cover the localized polymorphic needs). The application first binds all modules as functions, and “main” then proceeds with the bound functions. Yet now I am stuck, as the monadic programming style needs explicit type variables, and this does not work with the trick above. I could write all my monadic code “raw”, as functions, but that defeats the purpose of the abstraction. Therefore…
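For contrast, the whole bootstrapping dance above is exactly what a type class hands you for free. A minimal Haskell sketch with hypothetical names (a toy class, not my real code):

-- The functions that constrain the types become class members...
class Monoidish m where
  unit    :: m
  combine :: m -> m -> m

-- ...and generic code is written against the constraint, with no
-- start-up binding step: the compiler picks the instance.
combineAll :: Monoidish m => [m] -> m
combineAll = foldr combine unit

instance Monoidish [a] where
  unit    = []
  combine = (++)

The instance plays the role of the start-up binding: the compiler selects it wherever the constraint appears, including under explicit type variables in monadic code.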

I am ditching F# as a core language; I’ll generate it when I want .NET compatibility, with the advantage that it is a functional language, so I can hope that the compiler will do a good job with it. In search of a replacement, I see only three candidates: OCaml, Scala, and Haskell. I have always liked OCaml, but I feel the support for it diminishing. I see Scala as F#: better in some ways, worse in others; it is only a candidate for those who are forced to stay in a Java and JVM environment. Therefore Haskell it will be, by elimination.

Haskell is easy to install on Windows (XP on my old laptop):
  • Install a minimum Cygwin to have the grep function.
  • Install Haskell (HaskellPlatform)
  • Install Leksah (a Haskell IDE)
I’ll tell you what I like and dislike about Haskell in a future post.

Thursday, May 19, 2011

delivery focus versus creative slack

I gave a presentation on scrum today in which I first presented a generic agile framework, and then presented scrum within that framework. Although I had not planned it, I found myself emphasizing the need to focus on delivery in order to make time available, what I call slack time, to be creative and bring your development up to a new level. I made the mistake of comparing this to how you run a hedge fund, and although there are mathematical links between the management process, the product development process and how to maximize a portfolio, I think it was lost on my audience of software developers.

Still, the reality is that you are squeezed by your boss, or by your investors, or even by the market, and you cannot afford to waste time; therefore you must maximize the effort to sustain your current growth model. That is the notion of focusing on delivery and on your client. And yet you cannot fully keep that focus: you need space to tackle the hard problems, you need time to reinvent yourself. That is the notion of making space for slack time. Make sure you accumulate enough of it, and then invest it wisely. Your slack-time focus needs to be breakthroughs, nothing else; keep your "normal" efforts on delivery. In your slack time you need to try to be extraordinary, because it will change you or your company, and may well give you a chance at new futures that would not exist without that special effort.

Friday, May 13, 2011

Bad scrum, good scrum and trust

Years ago I switched my development team to scrum. Or at least what I thought was scrum: we had teams and product owners, we carefully followed the sprint and stand-up process. Then we carefully sized our stories, and we carefully worked to produce average work of decent quality. It was scrum, but now I would call it "bad scrum".
It was bad because we had built ourselves a straitjacket, and none of us dared step out of line at the risk of blowing the sprint. Arguably, there were attenuating circumstances: the teams were walking on eggshells, working on a large C++ program that few of us understood. Still, I have seen this behavior elsewhere. It is a behavior of risk aversion, sustained by the worries of the whole team.

So you might ask, "What is then good scrum"? Good scrum is when you follow the process but stop worrying about it.But most importantly, good scrum is when you have managed to build a liberating trust among all members of the team. All too often developers are suspicious of other people's work, or worse they have the "not invented here syndrome" where they need to make their mark in the code or the design for it to be acceptable. Strangely enough this type of behavior is more destructive in an agile process than a waterfall process! In an agile process like scrum, a lack of trust will amplify the risk aversion of the team, and this will kill your productivity. Remarkably, the notion of trust is rarely mentioned and yet it is fundamental. Take the size of a scrum team; Some people say 5 members, some say 7 or 9, or even 11 in certain domains; And yet you could even have a scrum team size of 21... if you manage to get the whole team to trust each other! A team of 5 often works better than a team of 9, simply because it is easier to build trust among 5 than among 9!

So you might ask me, how do you build trust? If you are a crazy person like me, who has spent years building mathematical models of processes and people, you see trust as a process. But if you are not, then the simple rule is to show others that you trust them. You need to show it in your words but also in your actions. Then, if you are the scrum master, be a trust policeman! Any time you see someone behaving in a manner that goes against trust, you stop them gently and bring the conversation back to a trusting relation. Finally, you eliminate people who really cannot play the game, or, if they are "the chosen ones", you ring-fence them, which usually means you give them a babysitter to keep them from damaging trust relations (a painful job).

Having said all this, I am sure you have heard of catastrophe theory, which can visually be seen as moving from one continuous regime to another in a single event. Here is the insight: what do you think happens when you over-trust your scrum team, and they over-trust themselves? They can hit a wall, fall completely apart, and lose their trust (and yours too!). The magic of a good manager is to understand how complex the projects you give to your teams can be; and that, my friend, is a very difficult task!

Arduino and the joys of electrical engineering

I have purposely stayed away from my kids' Arduino kit in order to let it be “their” toy. But the other day I was forced to give in when I found my daughter controlling a servo motor with the Arduino. Searching the web, I see that I can order a 2D gyroscope and 3D accelerometer sensor board for about $40 that would be easy to interface with the Arduino. Now I am dreaming of using this sensor and the Arduino as a fly-by-wire layer between a remote control receiver and servos, to control a car or airplane.

(Note: I lost this post and the last one. So if they appear twice, it is because both I and Blogger have brought them back.)

Immutable, mutable, higher order designs and monads


One of the great pleasures of programming functionally is being able to rely on immutable data. With immutability, side effects need to be explicitly defined, either as return values from functions or as encapsulations in monadic states. Immutability brings two important properties: first, it increases productivity by diminishing what needs to be cared about (freeing the mind to care about the right things!); second, almost more importantly, it provides the foundation for higher order programming concepts by preserving an acyclic relationship between functions and data. By higher order, I mean that data is replaced by functions or that functions are encapsulated in data; for example, in a context like data differentiation or in the use of monads. By acyclic, I mean that all trace trees are acyclic (a trace tree can be seen as a dynamic call graph where recursion is not a loop). Let me explain…
It is OK to have cycles in the static call graph. That is because we know how to infer invariants (e.g. types) from inter-dependent and recursive functions. Likewise, recursive data types are also OK. Nevertheless, everything is a bit trickier if we look at things from a data flow dependency point of view. This is the view we need to take if we really want to understand how the functions are working on data and producing new data. And the sad reality is that it is generally hard to work out invariants from dependency graphs that have cycles caused by mutable data. Nevertheless, some cyclic dependency graphs are easier to work with than others. More specifically, the more local, or of limited life span, a part of the dependency graph is, the easier it is to understand its scope and properties.
I mentioned trace trees above because they give me the bridge I need to wrap up this posting. The insight is the following: mutability brings cycles into dependencies; cycles in dependencies kill the invariants that allow higher order code constructions; trace trees are a down-to-earth way to work with dependencies; monads can be seen as redefining call points in the trace tree (nodes); and as a consequence, one of the safest ways to introduce mutability is within the scope of a monad. Again, we know the local trace tree will end up with cycles when we introduce mutability, but if this trace tree is “protected” in a monad, then its impact on the rest of the trace tree is easier to understand, because of the intimate relation between monads and trace tree nodes.
You could argue that we do not need mutability. Yet real-world I/O is mutable, and therefore you cannot escape working with mutable data. Not to mention that to get performance you may need to go mutable too.
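Haskell's ST monad is a concrete instance of exactly this idea: the mutation below is confined to its own local trace, and runST seals it off so that, from the outside, the function stays pure. A minimal sketch:

import Control.Monad.ST
import Data.STRef

-- The mutable reference never escapes runST: callers see a pure function.
sumMutable :: [Int] -> Int
sumMutable xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc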

Monday, May 02, 2011

What to recommend to a young quant

I share with you my recommendation to a "young quant" that wrote me asking for leads:

...

A tricky question. As with any industry, there is a certain “commoditization” of derivative technology, including quant work; add to that the uncertain future of US regulation of derivatives, plus a bleak economic outlook in Europe and the US. The grass is not necessarily greener on the other side, and many parts of the market are in a squeeze.

Having said that, there are still smart people out there making money, and there is still a lot of new development in Asia. My recommendation is not to look for a change of activity, but to get closer to successful firms. I know that is easier said than done. Still, one idea is that you are better off accepting low pay to be closer to success than higher pay that keeps you out of the fast lane. Again, I know this is not easy. And worse than that, you are in competition with many others.

The best I can do is to give you the following advice: I have looked at your profile and it is too much like “another quant”. Furthermore, you admit you do not have “...” experience by saying that you have a strong interest in it. Therefore my recommendation is to try to stretch your resume in new, “original” directions that will make it stand out. Ideally you should back this up with personal work. My gut feeling would be to add more software development experience: if I hire a quant, I want him also to be a good programmer, to get quick turnaround times. You say nothing about your development skills. Spend time working on these skills if you do not have enough of them. The second idea would be to be more exotic in your choice of quant projects: think of the senior quant who reads your resume, and give him something to think about. So go buy a book on approximate dynamic programming or spectral methods in stochastic computation, do a little development at home, and add it to your resume.

...

Saturday, April 23, 2011

OO versus functional programming versus agile

Have you ever asked yourself why database tables do not have two primary keys? Or maybe you have started a spreadsheet with a choice of columns and rows, only to change them later?

One of the little understood properties of programming is that at the lowest level things tend to be single dimensional. So, for example, the balanced tree has just one key, and so on. There is a theoretical reason for this, but the general fact is that it is much easier to design a structure or algorithm to do one thing than to do two or more.

Object orientation starts with the fundamental idea that objects can do many things. And this naturally leads to the expectation that they can do them all equally well. We know from the previous paragraph that this is not true, and yet many do not know this, or forget it. You would think that this mismatch between expectation and reality is not really a big problem, yet sadly it is. OO programmers are constantly trying to fit multiple dimensions of abstraction into single objects, without understanding that they will never be able to do it well without giving more weight to some dimensions than to others. Not prioritizing things, not having a clear idea of the hierarchy of concepts, leads to a random choice of hierarchy at the deeper implementation level, and to redundancies that do not support each other well. The end result is that OO software tends both to grow and to age badly.

Functional programming approaches things differently. A function has only one entry point: from the top. Primitive data structures are all simple (one dimensional) and must be composed to build anything real. The consequence is that with functional programming you need to decide "on construction" how you will interleave the different dimensions of your problem. Unlike OO, you are immediately forced to care about how you will implement things. This is both a good thing and a bad thing. The good thing is that, in the end, there is no escaping that the deep-down implementation of your structures needs a hierarchy, so you might as well take responsibility and choose how you will build your solution. By doing so you can keep things consistent, and consistent does not always mean the same: it may mean that your choice of order of composition is different for different parts of your program, but that the unrolling of one part matches the "rolling-up" of another part of your program. The bad thing about needing to choose early how you split and hierarchize things is that you need a lot of experience to do it right (think monads, for example). This is why, although OO programs tend not to grow well, they are still cheaper to produce, as you need senior people to work in a functional style.
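A concrete, if hypothetical, illustration of that "decide on construction" point in Haskell: storing prices by date and ticker forces you to pick one nesting up front, where an OO design would happily pretend both keys are equal.

import qualified Data.Map as Map

type Date   = String  -- placeholder types for the sketch
type Ticker = String

-- One interleaving: date first, then ticker...
pricesByDate :: Map.Map Date (Map.Map Ticker Double)
pricesByDate = Map.fromList [("2011-04-23", Map.fromList [("IBM", 167.0)])]

-- ...the other is a different structure, built and queried differently.
pricesByTicker :: Map.Map Ticker (Map.Map Date Double)
pricesByTicker = Map.fromList [("IBM", Map.fromList [("2011-04-23", 167.0)])]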

But now I would like to bring in something different and yet very relevant. Agile development is a lot about prioritizing your requirements and implementing them in iterations. That means that the first level of your design, what might be seen as the first draft of the primary keys of your database, is directly tied to the most important business concepts. If you think about it, that is not a bad choice! So we can observe that agile development helps structure your design when you do not know better.

Sunday, April 17, 2011

Should you use F# in your projects?

A few years back I read that a guy killed a child with his Ferrari: he missed a turn after a strong acceleration (as he left a Ferrari gathering!). And then more recently I read the same type of story, again with a Ferrari and an uncontrolled turn at an intersection.

Well, F# is like a Ferrari (maybe that is what the F stands for ;-) ). It is incredibly more productive than other .NET languages; see it as a language with incredible acceleration. But the caveat is that programming in a functional language is much more of a balancing act, so beware of the slippery turns and “intellectual crashes”. This is especially a management and team issue: a language like F# allows you to blend, in the same code, concepts that can be handled by your “standard” developer and concepts that take years to master. (It can be even worse than C++ template metaprogramming.)

If you were going to set up a taxi company, would you buy a fleet of Ferraris if they were available at the same price? That is a little bit the same type of question you need to ask yourself when you choose a language like F# or Scala for your team. It is a tricky question, and I would argue that Microsoft is wise not to over-market F#. You choose it because you know what it will bring you and you are "old enough" to understand the risks you are taking!

Friday, April 15, 2011

The big picture and the touchy feely side of technology

I have a big directory in which I have been accumulating articles and downloads from the web. I think I have mentioned that I do not believe you can be good at what you do and stick to the nine-to-five focus of your job; you need to add a lot of private time, to write code if you are a developer, and to read up on research and development in the different fields that support your business activities.

I'll admit that I rarely revisit any of these documents. Yet by giving them an independent place that I own, I am making them part of my life, and that allows me to grow with them. I believe the brain can be seen as a big optimization process that grows like a plant. In that metaphor, my download stash can be seen as a little part of the soil I grow on.

I can't share the documents with you, mostly for copyright reasons. So my second best is to tell you what I am interested in. I include here a raw dump of the top level of my directory structure. Arguably it needs clean-up work, but it is good fun already in this form.

Update: do ask me to blog about any of these subjects. I'd love to do it, I just need an audience!

Here is the list:

3dLibraries
AbstractInterpretation
AbstractStateMachines
ACE
ActiveDB
ADD
ADT
Agents
AHEAD
AI
AlgebraicSystems
AlgorithmicTrading
AlgoTrade
AnalogCircuits
Antler
ANTLR
APIGen
apowell
approximateSorting
approximation
architecture
ArtOfAssembly
ASN1
aspect
AssemblyLanguage
Assert
auction
AutomatedReasoning
AutomatedSolver
AutomaticDerivative
AutomaticDifferentiation
berglas
BernsteinBase
BinaryMagic
blackice
CacheOblivious
CAS
CASE
CellProcessor
CGAL
ChangeManagement
CLI
cluster
cminusminus
CMP
Coach
codetransformation
Combinatorial optimization
Communication
compiler
Component
constraint
Constraint propagation
ConstraintSolver
ContinuedFraction
ContinuousQueries
Contract
Control
ConvexHull
CPP
CPPPreProcessor
CTO
CUDA
Databases
Dataflow
DataGrid
DataModel
DecisionTree
Declarative
Delanauye
DeltaVolatility
DerivativePricing
Design
diophantine
dll
dotnet
DSL
duality
ElectronicMarkets
EmbeddedLanguages
Emulators
Erlang
EventDriven
excel
Exchange
ExpertSystems
failsafe
FaultTolerant
Financial
FiniteAutomata
FiniteStateMachine
FIX
FlashCrash
fog
Formal
FPGA
FPGL
FpML
fsa
fsharp
Functional
functionalCompiler
Fuzy
GADT
GameTheory
GarbageCollection
Generative
GeneticLanguage
google
GPU
Graph
graphics
Greeks
Grid
GTTSE
hamiltonian
Hardware
Haskell
Hedging
HighLevelLanguages
Hilbert
html
indexing
InMemory
InMemoryDB
Interpolation
IntersectionTypes
Interval search
Interviewing
java
jira
KnowledgeAndDataEngineering
KnowledgeModeling
LambdaCalculus
lean
Library
lisp
LLVM
Locking
LogicProgramming
malign
Management
MarketMakers
MarketMaking
Markets
MemoryAllocation
Mesh
metacpp
MetaInterpretation
MetaProgramming
Military
modeling
Movie
mp3
MPI
NonBlocking
NonLinearProgramming
NSDE
NumericalMethods
ocaml
ODBC
OfficeAutomation
Oleg
OnLineAPI
Optical
OS
Parallelization
ParityGame
Parser
particleFilter
particleSimulation
Patents
PDE
physics
pictures
Polyhedra
PolynomialChaos
PolynomialContinuation
Porting
pqueue
prescheme
Process
ProductManagement
Products
ProgramAnalysis
ProgrammingLanguages
ProgramTransfomation
projectmanagement
Prolog
psycho
pure
python
QueryCompilation
QueryOptmization
quotientApproximation
Randomized
RandomizedAlgorithm
randomNumbers
Reactive
RealTime
recursive
Reengineering
Reflective
regexp
RegRel
RemoteCommunication
RendezVous
Rewriting
RuleCompilation
rules
Sampling
SAT
scala
Scheduler
scheme
scilab
scrum
SDE
SDI
SearchEngines
Security
sforce
shedskin-0.0.4
shmem
SkipList
SkipListJava
SoftEng
software
SoftwarePatterns
SoftwareProcess
SoundLib
SourceControl
Spanish
sparsegrid
SpatialDatabases
speed
SplayTree
srcml
StateMachines
StoryGeneration
strategic
string
SVD
SwissCompanies
SymbolicSimplification
TCAD
TemporalData
TheoremProver
TheoreticalComp
Threads
tomography
trace
Trading
TransactionalMemory
TransactionalModel
TransactionCost
translation
Transport
Travel
treap
tree
TreeMatching
TreeRewriting
Trees
Tropical
typeinference
TypeSystems
UML
UnscentedKalmanFilter
Valuation
ValueAtRisk
Values61
VaR
Velara
Volatility
Voronoi
vpn
Wavelet
WordTemplate
wpf
www
Zurich

Tuesday, April 05, 2011

Google recognizes the "F#" keyword!

Finally! Hopefully not because they fear being broken up for monopolistic behavior, Google search finally recognizes the F# keyword and no longer just searches for "F".

Monday, March 28, 2011

Teaching and agile management

Today a friend's son asked me to help him with his math. The magic of the internet meant that I could read up on the topic on Wikipedia while listening to the request, and accept with confidence.

Here are a few hints on how to teach:
  • Allow learning from mistakes: the young man's first request was to erase what he had done, as it was wrong. I stopped him and led him through an understanding of what he had done, building knowledge of what "not to try again". This happened a few more times: I let him make his mistakes so that he could better understand how not to make them.
  • Do not provide solutions, but support the learning of decision making along the way: learning is about associating thoughts and methods, and by doing so building a new way to do or understand something. In this case, I helped him identify which of the mathematical tools he already possessed would help him with this type of problem.
  • Help maintain a notion of the goal at all times: the problem he needed to solve involved a good amount of tedious algebraic work. It was easy to get lost at this level of the task, so I helped him maintain a link with his goal during the whole process. In this way, even when he made a mistake and needed to backtrack, he still knew where he was going.
Now I ask you to go and read up on Scrum or another agile process. You will see that what I have said above is how you should lead your agile teams. It does not matter what type of leader you are (people, technical or business), the art of helping people to be productive with quality is to get them to learn how to always do better.

There is one caveat to this process. Just as the "student" needs to know enough and be skilled enough to be "a good student", the agile team member needs to be knowledgeable and skilled enough to be a "good employee". Special attention must be paid to the hiring process, and the simple rule that remains the best is to hire the bright ones fresh out of school and have them work with your senior, experienced people.

C++ and functional programming

What should we say to C++ programmers about functional programming, especially now that many compilers have implemented C++0x features like lambdas and auto?

I am pretty good at C++, having started using it with the Zortech compiler V1 and the first versions of Cfront (the original C++-to-C translator). I have also written much fancy template code. Therefore thinking about this post leaves me in a dilemma: there is one part of me that likes the features of C++0x that "wrap up" loose ends, and another part of me, the functional programming part, that makes me want to rant! I will not rant today, but part of the reason for being upset needs to be said, as it is relevant.

The "magic" of mature functional programming are:
  • Statements, expressions, functions and types support each other
  • Type inference and polymorphism automate the relation between statements, expressions, functions and types
  • Monadic and other "advanced" types are available to define new execution models such as parallelism.
C++ templates can be seen as a primitive functional language. But I would argue against going into production with a focus on C++ templates used that way: I have been burnt so many times by C++ compilers failing me with complex templates that I really recommend not going there. So template functional style is not where the emphasis should be.

The real issue is that although C++ expressions have a C++ type counterpart (think operators like + and -), there is no C++ type support for statements. As a result, many of the core concepts of functional programming cannot be mapped over into C++, even with the new C++0x features of lambdas, garbage collection, auto, etc.

My recommendation is that some concepts should be understood by C++ developers, but not necessarily with a functional emphasis. My favorite topic is, as always, monads. In C++ terms, monadic concepts define properties on the "traces" of the code. If we say:
f(...);
g(...);

then the execution trace of the program will first have gone through f, and therefore when g is called, properties set by f may still be valid. A good C++ architect should understand enough about monads to define invariants that can be used in this type of trace oriented context. For the moment, I would forget about the other "functional programming concepts" in C++, even with C++0x!
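To make that less abstract, here is a sketch in Haskell (not C++, and only an illustration of the idea) of one way a type can pin down the "f ran before g" trace property; the names are mine.

-- The only way to obtain an FDone token is to run f, so any trace
-- that reaches g must have gone through f first.
newtype FDone = FDone ()

f :: IO FDone
f = do
  putStrLn "f: establishing the invariant"
  return (FDone ())

g :: FDone -> IO ()
g _ = putStrLn "g: free to rely on f's invariant"

main :: IO ()
main = f >>= g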

Monday, March 21, 2011

Personalize your containers

When I was playing around with Scala I was disappointed by the limited operations available on the map container. It is nice to have the power of the for comprehension, but if it only performs selects and maps, the functionality has its limits.

Many years ago I implemented an augmented tree structure in C++. So while I was at the Functional Programming eXchange last week, I decided to implement the structure in F#. FYI, by augmentation I mean storing extra data at each node of the tree to accelerate later operations on the tree. In DB terms, augmentations are typically used to include secondary key conditions in your tree query.

This is what I did:
  1. Take a balanced tree implementation off the web. I was thinking finger tree, but then thought it better to start with a simpler implementation. I chose an AVL tree. (I may revert the code to a red-black tree to simplify.)
  2. Add a node field to store the augmentation (a generic type)
  3. Refactor the code by replacing each call to the node constructor with a call to a function that first builds an augmentation and then calls the node constructor
  4. Pass the generic augmentation "maker" function as an argument to the function calls
On this last point I wasted much time. I mistakenly gave F# another chance and tried to encapsulate the "augmentation" API with abstract methods on one of the types. As all this code is polymorphic, the compiler barfed with "would leave scope" and that type of error. After much wasted time I gave up AGAIN on using F# in any other way than as a glorified ML language: API signatures are only based on functions.
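Sketched in Haskell rather than F# (hypothetical names, not my actual code), steps 2 to 4 of the list above amount to this: the node carries a generic augmentation, and a "maker" function computes it whenever a node is built.

data Tree k a = Leaf
              | Node (Tree k a) k a (Tree k a)

-- mkNode replaces every direct use of the Node constructor: the maker
-- builds the augmentation from the key and the children's augmentations.
mkNode :: (k -> Maybe a -> Maybe a -> a)
       -> Tree k a -> k -> Tree k a -> Tree k a
mkNode maker l k r = Node l k (maker k (augOf l) (augOf r)) r
  where augOf Leaf           = Nothing
        augOf (Node _ _ a _) = Just a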

Then I wrote a generic select-like operation that uses augmentations. And finally a "general" map operation: it uses the augmentations to optimize its search, it only transforms nodes that have the selected property, AND it allows nodes to be deleted.

There are still a few more things to do to wrap this code up, but again I can only admire the speed at which one can write in a functional style.

I'd love to tell you why you want to have these types of data structures, but not everything can be free.

Saturday, March 19, 2011

Functional Programming eXchange 2011

I am in London, and one reason was to go to the Functional Programming eXchange organized by Skills Matter and Robert Pickering. Here are a few highlights and comments:
  • Simon Peyton Jones introduced the day by presenting four common ways to bring parallelism into (functional) programming: transactional memory, "synchronous domains" (though he did not use that term) with messaging over sharable channels, data parallelism (as in vectorized code), and "combinatorial" parallelism (as with a parmap operator). It was very good; Simon is both an expert on the subject and a master presenter.
  • Simon Cousins presented a talk on "F# in the Enterprise" in the context of energy trading. It was straightforward but very relevant, and it emphasized working "in partnership" with C#.
  • Adam Granicz presented the latest goodies of WebSharper: an F# to JavaScript translator packaged with tons of JavaScript library bridges. I have said it before, but this is really nice technology that allows you to do "programmatic" web development in record time.
  • Antonio Cisternino presented his VSLab plugin for Visual Studio. It allows your F# scripts to create Windows Forms graphics inside Visual Studio. Really very nice and handy; I'll give it a try.
  • Tomas Petricek's "F# on the Server Side" introduced a number of F# web server examples. It was introductory but well done and entertaining.
I talked with many people and had a good time. One remark I'll make is that the M word is still a major issue for many of these developers; functional developers truly need to be able to think in monadic terms and swivel around monadic representations in order to get the best out of functional designs. I'll give you an example: twice the notions of agent and actor were presented, but not with a monadic approach. The result is that the designs presented were stunted.

A nice remark is that Simon Peyton Jones did share my misgivings about implicit laziness in the context of inhomogeneous parallelism. He said he was not sure it really worked, and I agree.

Thursday, March 10, 2011

"Cache oblivious" agile management follow-up

I previously said: "Your overall software development process needs to degrade gracefully. This implies that no impediment should block the development machine. The consequence then is that architecture, scrum master and HR related work need special attention!"

Let's see how we can do that!

First, think about keeping productivity as high as possible.
Then, and this is a key concept: you need to set limits! Or, put differently, it is not possible to degrade gracefully if you do not protect your organization from moving out of its competence area. Therefore the real process you want to support is "be efficient and degrade gracefully" AND "understand your limits and act on this understanding". And that brings us to the core concept of this posting: focus your inter-team collaboration on the limits of your product development ability. This may all sound painfully inefficient, but do not worry, it is not.

The limits of an agile software process are:
  • The work culture and general agility of the team
  • The business understanding and ability to convert business concepts into a solution
  • The underlying architecture and technology choices
  • The granularity of the product
  • The knowledge and skills of each team
Within these limits, the teams have their independence. Company wide management and inter-team work focus on these limits.

A first comment to make is that these overall limits to your development process are a good thing! They are good because they define the borders of the team, which shield the team "from the outside"! Without these limits the teams would be in a state of confusion. Thus much of the job of "upper" management is to manage these limits, and ideally to expand them. But they cannot be changed too much, as this would destabilize the teams and degrade overall productivity. The magic of any good manager is to understand how to achieve change, and even more importantly, when change cannot be achieved!

With these insights, the overall software development process can be approached as follows:
  • Have a team oriented agile process in place. (I assume scrum for the details below).
  • Each team needs a local work-culture and general-agility evangelist. The overall HR effort is to help these evangelists improve the culture of their team. Inter-team work is about sharing experience and growing together. In a scrum process, I would give this role to the scrum masters; although their ownership is formally limited to the scrum process, it keeps things simple to give them this ownership too (but it is not necessary).
  • Each team needs a business evangelist. The overall product management effort is not only to support the continuous flow of getting business requirements understood by the teams, but also to improve this process. Teams need education and clarity in the business, but at the "level" of the team. The product owner takes this responsibility in a scrum process. Again, the job of the head product owner is to challenge each product owner to improve their team's business knowledge and their efficiency in picking up and working with business concepts, not just to focus on getting the backlog ticked off.
  • Each team needs to know the rules for architecture and technology. The underlying architecture and technology choices need to have their limits, and therefore I strongly recommend that you have a chief architect and a chief technologist (if relevant). The hard part is not only to find people with the deep architecture/technology experience to pick up these roles, but also people with the right work culture to take them on. These roles need to understand that their first job is not to choose solutions but to limit solutions (the team chooses solutions!). Choices need to be made, but they must hold over long periods, with the benefit of providing clear limits to the teams.
  • Marketing, sales, product owners, and relevant team members must get together from time to time to discuss the granularity of the product. How you split your product range can strongly limit how teams can pick up their work, as it can limit your market growth and competitiveness; it often also has strong ties to architectural and technology issues. Product granularity creates strong limits which need extra management care to tackle!
  • The knowledge and skills of each team and its members strongly limit the software development process. Tactical HR is about hiring enough knowledgeable people, mixed with staff who are quick learners. Care also needs to be taken to keep up a culture of education. I have always been shocked by how many developers prefer to get drunk rather than improve their skills through personal study.
Now you may ask what this has to do with "cache oblivious" management?

Absolutely nothing! Or even worse. What I have described is pure cache optimization: keep it packed and use it wisely.

Having now spent two months trying to convince myself that there was something behind my first post, I now need to concede that the granularity of people means that "all or nothing" is more competitive than a work process that is only about graceful degradation.

Tuesday, March 08, 2011

Scala experience report: part 3

Continuing on my DPLL rewrite, I introduced an extra level of mapping, moving from a plain Set to a Map[Boolean,Set[...]]. This rewrite is interesting because it introduces patterns similar to those of array oriented languages (i.e. APL). As I continue to use only immutable structures, I would like to have a "Map.map(key, data => data):Map" function in order to update the two-level storage in one functional statement. But this functionality is unavailable on the maps, and I was forced to use a get-and-set-like pattern. Generally there is a strong "mutable" feeling to the Map and Set methods. Another thing that bugged me was that there is no union of maps, nor a double map iterator. Thus there is this feeling that Scala provides a "higher level of abstraction" with constructions like the for comprehension, but does not "pull it through" by offering a smooth experience.
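For what it is worth, the two operations missed here do exist in Haskell's Data.Map; a sketch against the same Map-of-Sets shape (toy functions, my own names):

import qualified Data.Map as Map
import qualified Data.Set as Set

-- The "Map.map(key, data => data)" of the post: adjust rewrites the
-- value stored at one key, purely, in one statement.
addLiteral :: Int -> Map.Map Bool (Set.Set Int) -> Map.Map Bool (Set.Set Int)
addLiteral lit = Map.adjust (Set.insert lit) True

-- The missing union of maps: merge buckets with a combining function.
unionBuckets :: Map.Map Bool (Set.Set Int)
             -> Map.Map Bool (Set.Set Int)
             -> Map.Map Bool (Set.Set Int)
unionBuckets = Map.unionWith Set.union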

I am sure more needs to be said, but that is what learning is about.

(On a side note, I am playing with WPF programming in F# these days. I'll keep you updated on that too.)

Teaching kids to program

I have received an Arduino kit in the mail today.
My daughter (who is ten) made a market analysis and convinced me that she absolutely needed an Arduino micro-controller to "build things". Having looked at what others have done on YouTube, and knowing that the Arduino can be programmed in Scratch (which she has already played with), I ordered a breadboard kit (this one).

I'll tell you about it.

Update:
  • The Arduino and kit are great. So is programming it in a C-like language.
  • The Modkit/Scratch option does not seem to be available.

Flash crash thinking

An old friend asked me if the flash crash had been caused by "spoofing" the exchanges with very large flows of order entry/order cancel commands. My feeling was that this was not the case, and a quick search on Google picked up a confirmation in this article: "The Microstructure of the 'Flash Crash': Flow Toxicity, Liquidity Crashes and the Probability of Informed Trading". The article supports the explanation that the main cause of the flash crash was simply a lack of liquidity, because the market makers had become too edgy due to too much new information coming into the market. It is an excellent article, and it brings back the notion that some systems are only sustainable with enough "friction", or, said differently, by being under-optimal with regard to certain ways of looking at them.

When a group of people needs to work together, they nominate a chief, or at least they give specific roles and ownerships to members of the group. These people get an "unfair" advantage, as their ownership will most often provide benefits, even if they are not trying to use them for their own gain. Ideally for the group, processes are in place that limit personal advantages. Yet these advantages cannot be completely removed, as that would take away the leverage that allows the overall process to happen AND BE SUSTAINABLE!

The current North African revolutions are a good example of the trickiness of this type of "social process". If people remove all their leadership they end up in chaos. And yet they are upset with the unfair advantages that the leaders have had, and therefore they want change.

Market makers that cannot make a profit on the bid-ask spread must make a profit on "another spread". This can be a volatility spread, or more generally an "information spread". An information spread is, for example, "I don't know and you don't know" going to "Now I know, and you don't"; or it could be "I expect parameter P3 to change in my model, you don't". When a market maker can no longer find a profit-making spread, he will limit his exposure, especially when that exposure leads to continuous losses caused by others having their own "better" information spread (often at the limit of insider trading).

The sad conclusion is that market makers become less of a sure thing when rules that diminish their "advantage" are imposed by the exchanges, the SEC or ESMA. Mathematically you can see this as moving from a continuous to a non-continuous regime; call it a "percolation" of the market-making process past a critical level. The big problem is that the non-continuous regime, one where parts of the market simply stop because no prices are quoted, is "a nightmare", and many people simply ignore it as "impossible". The truth is that it is much more of a possibility now than it ever was, and that seems to be what the flash crash was all about.