Monday, March 28, 2011

Teaching and agile management

Today a friend's son asked me to help him with his math. The magic of the internet meant that I could read up on the topic on Wikipedia while listening to the request, and accept with confidence.

Here are a few hints on how to teach:
  • Allow learning from mistakes: the young man's first request was to erase what he had done, as it was wrong. I stopped him and led him through an understanding of what he had done, building a knowledge of what "not to try again". This happened a few more times: I let him make his mistakes so that he could better understand how to avoid them.
  • Do not provide solutions, but support the learning of decision making along the way: learning is about associating thoughts and methods and, by doing so, building a new way to do or understand something. In this case, I helped him identify which of the mathematical tools he already possessed would help him with this type of problem.
  • Help maintain a notion of the goal at all times: the problem he needed to solve involved a good amount of tedious algebraic work. It was easy to get lost at this level of the task, so I helped him maintain a link with his goal during the whole process. That way, even when he made a mistake and needed to backtrack, he still knew where he was going.
Now I ask you to go and read up on Scrum or another agile process. You will see that what I have said above is how you should lead your agile teams. It does not matter what type of leader you are (people, technical or business): the art of helping people to be productive with quality is to get them to learn how to always do better.

There is one caveat to this process. Just as the "student" needs to know enough and be skilled enough to be "a good student", the agile team member needs to be knowledgeable and skilled enough to be a "good employee". Special attention must be paid to the hiring process, and the simple rule that remains the best is to hire the bright ones fresh out of school and have them work with your senior, experienced people.

C++ and functional programming

What should one say to C++ programmers about functional programming, especially now that many compilers have implemented C++0X features like lambda and auto?

I am pretty good at C++, having started using it with the Zortech compiler V1 and the first versions of CFront (the original C++ to C translator). I have also written much fancy template code. Therefore thinking about this post leaves me in a dilemma: one part of me likes the features of C++0X that "wrap up" loose ends, and another part of me, the functional programming part, wants to rant! I will not rant today, but part of the reason for being upset needs to be said, as it is relevant.

The "magic" of mature functional programming are:
  • Statements, expressions, functions and types support each other
  • Type inference and polymorphism automate the relation between statements, expressions, functions and types
  • Monadic and other "advanced" types are available to define new execution models such as parallelism.
C++ templates can be seen as a primitive functional language, but I would argue against going into production with that style: I have been burnt so many times by C++ compilers failing me with complex templates that I really recommend not going there. So a template-based functional style is not where the emphasis should be.

The real issue is that although C++ expressions have a C++ type counterpart (think of operators like + and -), there is no C++ type support for statements. As a result, many of the core concepts of functional programming cannot be mapped over into C++, even with the new C++0X features of lambda, garbage collection, auto, etc.

My recommendation is that some concepts should be understood by C++ developers, but not necessarily with a functional emphasis. My favorite topic is, as always, monads. In C++ terms, monadic concepts define properties on the "traces" of the code. If we write:
f(...);
g(...);

then the execution trace of the program will first have gone through f, and therefore when g is called, properties set by f may still be valid. A good C++ architect should understand enough about monads to define invariants that can be used in this type of trace-oriented context. For the moment, I would forget about the other "functional programming concepts" in C++, even if it is C++0X!
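To make the flavor of that idea concrete, here is a minimal sketch, written in Scala for brevity (all names are mine): the fact that the trace has gone through f is encoded in a phantom type, so g can only be called on a trace that f has already processed.

object TraceDemo {
  final class Raw                        // trace state before f has run
  final class AfterF                     // trace state once f has run
  case class Trace[S](log: List[String])

  def f(t: Trace[Raw]): Trace[AfterF] = Trace(t.log :+ "f")
  def g(t: Trace[AfterF]): Trace[AfterF] = Trace(t.log :+ "g")

  val ok = g(f(Trace[Raw](Nil)))  // compiles: the trace went through f first
  // g(Trace[Raw](Nil))           // rejected: f's invariant is not established
}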

Monday, March 21, 2011

Personalize your containers

When I was playing around with Scala I was disappointed with the limited operations available on the map container. It is nice to have the power of the for comprehension, but if it only performs selects and maps, the functionality has its limits.

Many years ago I implemented an augmented tree structure in C++. So while I was at the Functional Programming eXchange last week I decided to implement the structure in F#. FYI, by augmentation I mean storing extra data at each node of the tree to accelerate later operations on the tree. In DB terms, augmentations are typically used to include secondary key conditions in your tree query.

This is what I did:
  1. Take a balanced tree implementation off the web. I was thinking finger tree, but then thought better of it and started with a simpler implementation: I chose an AVL tree. (I may revert the code to a red-black tree to simplify.)
  2. Add a node field to store the augmentation (a generic type)
  3. Refactor the code by replacing the calls to the node constructor with calls to a function that first builds an augmentation and then calls the node constructor
  4. Pass the generic augmentation "maker" function as an argument to the function calls (a sketch of steps 2 to 4 follows the next paragraph)
On this last point I wasted much time. I mistakenly gave F# another chance and tried to encapsulate the "augmentation" API with abstract methods on one of the types. As all this code is polymorphic, the compiler barfed with "would leave scope" and that type of error. After much wasted time I gave up AGAIN on using F# in any way other than as a glorified ML language: API signatures are only based on functions.
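For concreteness, here is a rough sketch of the augmentation machinery (in Scala syntax, which I use elsewhere on this blog; the names are my own invention, not the actual code):

object AugTree {
  // Step 2: each node stores an extra augmentation value of generic type A.
  sealed trait Tree[K, A]
  case class Leaf[K, A]() extends Tree[K, A]
  case class Node[K, A](key: K, aug: A,
                        left: Tree[K, A], right: Tree[K, A]) extends Tree[K, A]

  // Steps 3 and 4: every former call to the Node constructor becomes a call
  // to mkNode, which first builds the augmentation from the key and the
  // children's augmentations, using the "maker" function passed as argument.
  def mkNode[K, A](key: K, l: Tree[K, A], r: Tree[K, A])
                  (mkAug: (K, Option[A], Option[A]) => A): Node[K, A] = {
    def augOf(t: Tree[K, A]): Option[A] = t match {
      case Node(_, a, _, _) => Some(a)
      case _                => None
    }
    Node(key, mkAug(key, augOf(l), augOf(r)), l, r)
  }
}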

Then I wrote a generic select-like operation that uses augmentations. And finally a "general" map operation: it uses the augmentations to optimize its search, it only transforms nodes that have the selected property, AND it allows nodes to be deleted.
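Continuing the sketch above, such a select might look as follows (canMatch and pred are hypothetical: canMatch inspects the augmentation and says whether anything in the subtree could satisfy the query):

  // Inside AugTree: prune whole subtrees using the augmentation.
  def select[K, A](t: Tree[K, A])(canMatch: A => Boolean)
                  (pred: K => Boolean): List[K] = t match {
    case Node(k, a, l, r) if canMatch(a) =>
      select(l)(canMatch)(pred) :::
        (if (pred(k)) List(k) else Nil) ::: select(r)(canMatch)(pred)
    case _ => Nil  // pruned: the augmentation rules this subtree out
  }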

There are still a few more things to do to wrap this code up but I again can only admire the speed at which one can write functional style.

I'd love to tell you why you would want these types of data structures, but not everything can be for free.

Saturday, March 19, 2011

Functional Programming eXchange 2011

I am in London, and one reason was to go to the Functional Programming eXchange organized by Skills Matter and Robert Pickering. Here are a few highlights and comments:
  • Simon Peyton Jones introduced the day by presenting four common ways to bring parallelism into (functional) programming: transactional memory, "synchronous domains" (he did not use that term) with messaging over sharable channels, data parallelism (as in vectorized code), and "combinatorial" parallelism (as with a parmap operator; a one-line flavor of this appears after this list). It was very good; Simon is both an expert on the subject and a master presenter.
  • Simon Cousins presented a talk on "F# in the Enterprise" in the context of energy trading. It was straightforward but very relevant, and emphasized working "in partnership" with C#.
  • Adam Granicz presented the latest goodies of WebSharper: an F# to JavaScript translator packaged with tons of JavaScript library bridges. I have said it before, but this is really nice technology that allows you to do "programmatic" web development in record time.
  • Antonio Cisternino presented his VSLab plugin for Visual Studio. It allows your F# scripts to create Windows Forms graphics inside Visual Studio. Really very nice and handy; I'll give it a try.
  • Tomas Petricek's "F# on the Server Side" walked through a number of F# web server examples. It was introductory but well done and entertaining.
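As promised above, a one-line flavor of "combinatorial" parallelism (my sketch, assuming Scala's parallel collections rather than anything shown in the talk):

// parmap: apply f to the elements of a list in parallel, keeping the order.
def parmap[A, B](xs: List[A])(f: A => B): List[B] = xs.par.map(f).toList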
I talked with many people and had a good time. One remark I'll make is that the M word is still a major issue for many of these developers; functional developers truly need to be able to think in monadic terms and swivel around monadic representations in order to get the best out of functional designs. I'll give you an example: twice the notions of agents and actors were presented, but not with a monadic approach. The result is that the designs presented were stunted.

A nice remark to close on: Simon Peyton Jones did share my misgivings about implicit laziness in the context of inhomogeneous parallelism. He said he was not sure it really worked, and I agree.

Thursday, March 10, 2011

"Cache oblivious" agile management follow-up

I previously said: "Your overall software development process needs to degrade gracefully. This implies that no impediment should block the development machine. The consequence then is that architecture, scrum master and HR related work need special attention!"

Let's see how we can do that!

First, think about keeping productivity as high as possible.
Then, and this is a key concept: you need to set limits! Or, put differently, it is not possible to degrade gracefully if you do not protect your organization from moving out of its competence area. Therefore the real process you want to support is "be efficient and degrade gracefully" AND "understand your limits and act on this understanding". And that brings us to the core concept of this posting: focus your inter-team collaboration on the limits of your product development ability. This may all sound painfully inefficient, but do not worry, it is not.

The limits of an agile software process are:
  • The work culture and general agility of the team
  • The business understanding and ability to convert business concepts into a solution
  • The underlying architecture and technology choices
  • The granularity of the product
  • The knowledge and skills of each team
Within these limits, the teams have their independence. Company-wide management and inter-team work focus on these limits.

A first comment to make is that these overall limits to your development process are a good thing! They are good because they define the borders of the team, which shield the team "from the outside"! Without these limits the teams would be in a state of confusion. Thus much of the job of "upper" management is to manage these limits, and ideally to expand them. But they cannot be changed too much, as this would destabilize the teams and degrade overall productivity. The magic of any good manager is to understand how to achieve change, and, even more important, when it cannot be achieved!

With these insights, the overall software development process can be approached as follows:
  • Have a team oriented agile process in place. (I assume scrum for the details below).
  • Each team needs a local evangelist for work culture and general agility. The overall HR effort is to help these evangelists improve the culture of their teams. Inter-team work is about sharing experience and growing together. In a scrum process, I would give this role to the scrum masters; although their ownership is formally limited to the scrum process, it keeps things simple to give them this ownership too (but it is not necessary).
  • Each team needs a business evangelist. The overall product management effort is not only to support the continuous flow of getting business requirements understood by the teams, but to improve this process. Teams need education and clarity in business, but at the "level" of the team. The product owner takes this responsibility in a scrum process. Again, the job of the head product owner is to challenge each product owner to improve their team's business knowledge and their efficiency at picking up and working with business concepts, not just to focus on getting the backlog ticked off.
  • Each team needs to know the rules for architecture and technology. The underlying architecture and technology choices need to have their limits, and therefore I strongly recommend that you have a chief architect and a chief technologist (if relevant). The hard part is not only to find people with the deep architecture/technology experience to pick up these roles, but also people with the right work culture to take them on. They need to understand that their first job is not to choose solutions but to limit solutions (the team chooses solutions!). Choices need to be made, but these must hold over long periods, with the benefit of providing clear limits to the teams.
  • Marketing, sales, product owners, and the relevant team members must get together from time to time to discuss the granularity of the product. How you split your product range can strongly limit how teams can pick up their work, just as it can limit your market growth and competitiveness; it often also has strong ties to architectural and technology issues. Product granularity creates strong limits which need extra management care to tackle!
  • The knowledge and skills of each team and its members strongly limit the software development process. Tactical HR is about hiring enough knowledgeable people, mixed with staff who are quick learners. Care also needs to be taken to keep up a culture of education. I have always been shocked by how many developers prefer to get drunk rather than improve their skills through personal study.
Now you may ask: what has this to do with "cache oblivious" management?

Absolutely nothing! Or even worse. What I have described is pure cache optimization: keep it packed and use it wisely.

Having now spent two months trying to convince myself that there was something behind my first post, I must now concede that the granularity of people means that "all or nothing" is more competitive than a work process that is only about graceful degradation.

Tuesday, March 08, 2011

Scala experience report: part 3

Continuing my DPLL rewrite, I introduced an extra level of mapping, moving from a plain Set to a Map[Boolean,Set[...]]. This rewrite is interesting because it introduces patterns similar to those of array-oriented languages (i.e. APL). As I continue to use only immutable structures, I would like to have a "Map.map(key, data => data):Map" function in order to be able to update the two-level storage in one functional statement. But this functionality is unavailable on the maps, and I was forced to use a get-and-set-like pattern. Generally there is a strong "mutable" feeling to the Map and Set methods. Another thing that bugged me was that there is no union of maps, nor a double map iterator. Thus there is this feeling that Scala provides a "higher level of abstraction" with constructions like the for comprehension but does not "pull it through" by offering a smooth experience.
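A minimal sketch of the helper I was missing (the name mapAt is mine, not a library method): functionally rewrite the entry at one key and rebuild the outer map.

def mapAt[K, V](m: Map[K, V], k: K)(f: V => V): Map[K, V] =
  m.get(k) match {
    case Some(v) => m.updated(k, f(v))  // rebuild with the transformed entry
    case None    => m                   // key absent: leave the map unchanged
  }

// e.g. add literal 42 to the "true" bucket of a Map[Boolean, Set[Int]]:
// val m2 = mapAt(m, true)(_ + 42)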

I am sure more needs to be said, but that is what learning is about.

(On a side note, I am playing with WPF programming in F# these days. I'll keep you updated on that too.)

Teaching kids to program

I received an Arduino kit in the mail today.
My daughter (who is ten) made a market analysis and convinced me that she absolutely needed an Arduino micro-controller to "build things". Having looked at what others have done on YouTube, and knowing that the Arduino can be programmed in Scratch (which she has already played with), I ordered a breadboard kit (this one).

I'll tell you about it.

Update:
  • The Arduino and kit are great. So is programming it in a C-like language.
  • The Modkit/Scratch option does not seem to be available.

Flash crash thinking

An old friend asked me if the flash crash had been caused by "spoofing" of the exchanges with very large flows of order entry/order cancel commands. My feeling was that this was not the case, and a quick search on Google picked up a confirmation in this article: "The Microstructure of the 'Flash Crash': Flow Toxicity, Liquidity Crashes and the Probability of Informed Trading". This article supports the explanation that the main cause of the flash crash was simply a lack of liquidity: the market makers had become too edgy because of too much new information coming into the market. It is an excellent article and brings back the notion that some systems are only sustainable with enough "friction", or, said differently, by being sub-optimal with regard to certain ways of looking at them.

When a group of people need to work together, they nominate a chief, or at least they give specific roles and ownerships to members of the group. These people get an "unfair" advantage, as their ownership will most often provide benefits, even if they are not trying to use them for their own gain. Ideally for the group, processes are in place that limit personal advantages. Yet these advantages cannot be completely removed, as that would take away the leverage that allows the overall process to happen AND BE SUSTAINABLE!

The current North African revolutions are a good example of the trickiness of this type of "social process". If people remove all their leadership they end up in chaos. And yet they are upset with the unfair advantages that the leaders have had, and therefore want change.

Market makers that cannot make a profit on the bid-ask spread must make a profit on "another spread". This can be a volatility spread or, more generally, an "information spread". An information spread is, for example, going from "I don't know and you don't know" to "Now I know, and you don't"; or it could be "I expect parameter P3 to change in my model, you don't". When a market maker can no longer find a profit-making spread, he will limit his exposure, especially when this exposure leads to continuous losses caused by others having their own "better" information spread (often at the limit of insider trading).

The sad conclusion is that market makers become less of a sure thing when rules that diminish their "advantage" are imposed by the exchanges, the SEC or the ESMA. Mathematically you can see this as moving from a continuous to a non-continuous regime; call it a "percolation" of the market-making process past a critical level. The big problem is that the non-continuous regime, one where parts of the market simply stop because no prices are quoted, is "a nightmare", and many people simply dismiss it as "impossible". The truth is that it is much more of a possibility now than it ever was, and that seems to be what the flash crash was all about.

Saturday, March 05, 2011

Scala experience report: part 2

Picking up on my DPLL refactoring work, I found myself trying to find the first "Some" mapping of the elements of a set. I can do that with the "find" method, giving it my mapping function followed by an "isDefined", and then reapplying the mapping to the result of the find; the question is then how not to repeat the mapping function. There is no "findSome" method on sets.

After 2h of searching and learning on the web, I conclude that I want something like:

def findSome[A, B](mySet: Set[A])(myMapping: A => Option[B]): Option[B] =
  mySet.view.flatMap(x => myMapping(x)).headOption  // lazy: stops at the first Some
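For example, with a made-up mapping:

findSome(Set(1, 2, 3))(x => if (x % 2 == 0) Some(x * x) else None)
// evaluates to Some(4); the mapping runs only until the first defined result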

This is where my "I want to know what is happening" sense is in misery. How do I know that this will really be implemented well? With Scala's high-level types comes a need for some fancy compiler work, and many times I have been burnt by compilers not being good enough.

This evening's conclusion is that Scala pushes you towards high-level concepts which may lead you to poor performance. But I do not know if this is true.

Friday, March 04, 2011

Scala experience report: part 1

I want to learn Scala quickly. The only way to do that is to get your hands dirty.
So I searched the web for some "fun" but "reasonable" code to get my hands around.
Googling around, I fell upon a simple DPLL solver written in Scala (by Hossein Hojjat).
I have always liked theorem provers, and having a DPLL solver around can come in handy, who knows ;-) (A straightforward but decent book on the subject is "Automated Theorem Proving" by Monty Newborn. (Note: I took a look yesterday and it does not cover DPLL, so maybe not the best recommendation.))

A little guesswork and, sure enough, I found the format used by the program (called CNF). It therefore took me under ten minutes to copy the source into my permanently open Eclipse environment, copy an example CNF off the web, and get the algorithm to run. It gave me the correct result.

Now for the Scala exercise:
  • The original code used some mutable sets. I refactored the code to only use immutable sets.
  • The original code used "if ... return" patterns, which I refactored to "if ... else"
  • The original code returned maps which it merged. I refactored the algorithm into a continuation-passing style with a polymorphic return type.
I needed to stop there, although there are a few more changes I will make. I can tell you about one: define the clauses as a set of true literals and a separate set of false literals (and not one set with both kinds of literals). A rough sketch combining this with the continuation-passing style follows.
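This is my own sketch of where the code is heading, not Hossein's code: clauses hold separate positive and negative literal sets, and the solver is a naive backtracking skeleton (no unit propagation yet) in continuation-passing style with a polymorphic result type R.

object CpsSketch {
  case class Clause(pos: Set[Int], neg: Set[Int])

  def solve[R](clauses: List[Clause], model: Map[Int, Boolean])
              (sat: Map[Int, Boolean] => R, unsat: () => R): R =
    clauses match {
      case Nil => sat(model)                    // every clause checked: success
      case c :: rest =>
        (c.pos ++ c.neg).find(v => !model.contains(v)) match {
          case None =>                          // clause fully assigned: verify it
            val ok = c.pos.exists(model(_)) || c.neg.exists(v => !model(v))
            if (ok) solve(rest, model)(sat, unsat) else unsat()
          case Some(v) =>                       // branch on v; backtrack through
            solve(clauses, model + (v -> true))(sat,   // the failure continuation
              () => solve(clauses, model + (v -> false))(sat, unsat))
        }
    }
  // e.g. solve(cs, Map())(m => Some(m), () => None)
}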

My takeaways from a first Scala project, a two-hour effort:
  • I want Eclipse to color-code immutability. Initially I was going to say: "Scala is way too mutable". Having immutabilized a piece of code, I am happy that the language can keep away the mutables that are not needed, BUT you need to trust that no one has made a mistake; that is why I would like some help from the editor.
  • Scala is a killer. I was very productive. The web provided all my answers, like "what methods are on sets" or "what does reduceLeft do".
  • Eclipse is great. Last time I used it was on a single core laptop. Now it was responsive and very productive.
  • The object/functional integration of Scala is really elegant. Now I can think about creating functional closures to provide an immutable basis while having these closures interpreted as objects with their own mutable extensions. A powerful concept (see the sketch after this list).
  • I cringe when I think of the code that might be produced by some of these abstractions.
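Here is a rough sketch of that third bullet (my own example): a closure is an object in Scala, so it can be wrapped in a class that adds a mutable extension while the underlying function stays immutable.

class CountingFn[A, B](f: A => B) extends (A => B) {
  var calls = 0                               // the mutable extension
  def apply(a: A): B = { calls += 1; f(a) }   // delegate to the immutable closure
}

val square = new CountingFn((x: Int) => x * x)
square(3); square(4)   // square.calls is now 2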
Conclusion: Think Scala if you need to run on a JVM!

Thursday, March 03, 2011

Looks like I lost most of my February postings!

Only one is visible to me!
I'll delete it and see what happens.

Ha! That does the trick, now they are back.

Wednesday, March 02, 2011

Scala take one: Scala the beast?

Scala is really very nice, but ...

I have read the tutorial and am halfway through the overview document (see the www.scala-lang.org page). But even before finishing this reading I can say that Scala is one hell of a language, with its seamless integration of Java API compatibility, objects, classes, functions as first-class types, generic types with rich sub-typing constraint management, etc. But... having managed developers for many years, I can see it is very easy to end up in a mess with Scala! I would argue that Scala is a language where your chief architect needs to be very good and very experienced in leveraging the value of others. It is too easy to kill productivity with your developers getting lost in abstractions that they do not understand.

I'll see what more I can learn and get back to you.

Tuesday, March 01, 2011

Scala here I come

I am installing Scala on my machine today. I will tell you about that.

10pm: Not yet a success, as the Eclipse integration is buggy. I'll read carefully which versions work and try again!

Next day: The install now works. I went with Eclipse Classic 3.6.2 and http://download.scala-ide.org/nightly-update-helios-2.8.1.final

Next: to read the scala docs.