Saturday, December 01, 2012

Introduction to state monads

I went to my first Zurich FSharp meeting the other day and made a presentation on how to program with state monads in F#.

The presentation is designed to pull you off the ground, towards a higher-order programming style, using the maybe and state monads as stepping stones. It finishes with stateful lifting.

Here is a link to a slightly improved version: youtube .

Links to the sources are below, but if you really do not know this style of programming and want to learn it, I would recommend that you type it back in yourself and allow yourself some creative liberty. That is usually a much better way to learn than just copying code!
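To give you a taste of the style, here is a minimal state monad in F# (a simplified sketch of my own, not the presentation code): a stateful computation is just a function that takes a state and returns a value paired with the next state, and a computation expression builder hides the threading.

```fsharp
// A stateful computation: given a state, produce a value and the next state.
type State<'s, 'a> = 's -> 'a * 's

let ret x : State<'s, 'a> = fun s -> (x, s)

let bind (m : State<'s, 'a>) (f : 'a -> State<'s, 'b>) : State<'s, 'b> =
    fun s ->
        let a, s' = m s
        f a s'

type StateBuilder() =
    member _.Return x = ret x
    member _.Bind(m, f) = bind m f

let state = StateBuilder()

// Primitive operations: read the current state, replace the state.
let getState : State<'s, 's> = fun s -> (s, s)
let putState s : State<'s, unit> = fun _ -> ((), s)

// Example: a counter that threads its state implicitly.
let tick =
    state {
        let! n = getState
        do! putState (n + 1)
        return n
    }

let n0, s1 = tick 0      // n0 = 0, s1 = 1
let n1, s2 = tick s1     // n1 = 1, s2 = 2
```

Notice that `tick` mentions no state at all in its body beyond `getState` and `putState`; the builder does the plumbing.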


Wednesday, November 07, 2012

From Chinese famines to agile

A recent issue of the Economist reviews a book on the Great Chinese Famine of 1958-1962 and writes the following:

"The Great Leap Forward was the high point of ignorant Maoist folly. Chairman Mao said in 1957 that China could well overtake the industrial output of Britain within 15 years. People left the fields to build backyard furnaces in which pots and pans were melted down to produce steel. The end product was unusable. As farmers abandoned the land, their commune leaders reported hugely exaggerated grain output to show their ideological fervour. The state took its share on the basis of these inflated figures and villagers were left with little or nothing to eat. When they complained, they were labelled counter-revolutionary and punished severely. As the cadres feasted, the people starved. Mr Yang calculates that about 36m died as a result."
When I read this, I could not help thinking that too many software development teams are starved of the ability to do the right thing because they have promised, or someone has promised for them, something they cannot deliver. The result and its dynamics are not very different from the story above. Let me paraphrase it as follows:
Someone with authority decides that X will be developed in Y amount of time. No one in development dares to contradict the order, and work is started. Given the predefined schedule, no solid analysis can be made (aka no harvest is sown), but more importantly, little effort is made to make the project "self-consistent" across all its requirements, for example to define it to be usable from day one (aka making sure the village stays sustainable). As there are not enough resources "to go around" (aka not enough food for all), there is no point thinking about "fixing" the problems as they accumulate (aka farms are deserted, crops rot, and nothing is done to fix this), and so developers will "just write code" with the foremost goal of fulfilling the initial request (aka pots and pans will be melted). But as design and cohesive goals are lacking, as is the development team's understanding of what they are really trying to deliver (aka the farmers do not know how to work steel), the effort to fit things together to cover the scope of the project just grows and grows... Upon failing to deliver, and there will be a few failed deliveries, morale will plummet, as will productivity, and key people may leave (aka farmers and their families starve to death). In the end, much pain and suffering will happen before something is done. One option is to pull the plug on the project; the other is to go agile with it: throw away the promised timeline, build a backlog, see what you can do, and at that point you can still decide to pull the plug.

This admittedly bleak example of failure does lead us to an observation: capitalism does put pressure on management to avoid the scenario above. The problem is that many managers, even under pressure to make a profit, do not know how to do so! They do not understand how projects differ; they only know how to do what they did before. Therefore, from time to time, you will see them kicking off projects that will ultimately be disastrous.

Agile does not keep you from making mistakes, but applied well, it will alert you to your follies (or other people's follies) before they destroy you. Understanding how different types of projects need different styles of leadership is helpful (see the blog entries on greenfield projects and on legacy projects).

Tuesday, October 09, 2012

Functional meets Fortran

I decided my life needed a little bit more math, and so I decided to add a few differential equations to some F# code. Math in code means applied math and lots of arrays, but arrays are a pain in a functional style. So here is my approach.

I have written many numerical applications, mostly in C++, but also some in Fortran. I mention Fortran because it has the notion of equivalence, which lets you map arrays onto each other. Equivalence is like a C union for arrays, and it is there to remind us that arrays do not need to be managed as dynamic memory.

A functional style of programming is a lot about preserving your invariants with immutability, yet immutable vectors are not a solution (yet). Still, numerical code has its own invariants, and therefore some immutability is possible and should be exploited to write robust code. In my case, I have vectors and matrices; their sizes are invariant, and so is the structure of the matrices: they have lots of zeros, they are sparse. The approach is then to specialize a state monad to maintain size and structure in an immutable manner, while allocating mutable vectors only within the core of the numerical engine.

Long ago I wrote a matrix assembly algorithm. You can see it as sparse matrix structure inference code. It was really pretty simple: the final matrix is an assembly of many small sparse matrices, so finding the final structure can be seen as solving a set of matrix structure equations. The algorithm defines an order of traversal for all the matrix points and then proceeds to "crawl" across all matrices at once, working out all the local structures. In a functional style of programming, I can use the same newVar-like concept you would find in a constraint or inference solver, extend it with an equivalent newEq, and build a monadic environment to support my numerical code. The result is that "application code" just uses variables and defines equations; these are then assembled, and only later does the code allocate just a few very large arrays to store the real data associated with the problem.
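To make the idea concrete, here is a tiny F# sketch of what I mean (the names newVar and newEq are kept from above, but the types are illustrative, not my real code): the builder state records how many variables exist and which (row, col) pairs are structurally nonzero, and only after assembly is a real mutable array allocated.

```fsharp
// The builder state: an immutable record of structure, no numerical data yet.
type Var = Var of int

type Builder =
    { NextVar : int
      NonZeros : Set<int * int> }

// Allocate a fresh variable index.
let newVar (b : Builder) : Var * Builder =
    Var b.NextVar, { b with NextVar = b.NextVar + 1 }

// Declare that an equation couples the given variables: every pair of
// coupled variables contributes a structural nonzero to the matrix.
let newEq (vars : Var list) (b : Builder) : unit * Builder =
    let entries =
        [ for Var i in vars do
            for Var j in vars -> (i, j) ]
    (), { b with NonZeros = Set.union b.NonZeros (Set.ofList entries) }

// "Application code" just declares variables and equations...
let empty = { NextVar = 0; NonZeros = Set.empty }
let u, b1 = newVar empty
let v, b2 = newVar b1
let w, b3 = newVar b2
let (), b4 = newEq [u; v] b3
let (), b5 = newEq [v; w] b4

// ...and only at the end does the code allocate one mutable array, sized by
// the inferred sparse structure, for use inside the numerical core.
let values : float[] = Array.zeroCreate (Set.count b5.NonZeros)
```

The state-threading here is explicit; in the real code the same plumbing is hidden behind a state monad, exactly as in the December post on state monads.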

This approach is "easy" because all invariants are built first and then used. Much modern applied math depends on hierarchical and adaptive grids. Hierarchy is probably easy to bring into a functional style; adaptive grids are maybe less obvious.

Wednesday, October 03, 2012

Greenfield project development, or "How I learned to love scrum and good product owners"

There is nothing special about greenfield projects, yet teams that can tackle greenfield projects have something special! It is not the projects that matter as much as the approach taken to resolve unknowns when they appear in the projects. My goal here is to give you a few insights on how to do this.

Greenfield projects, or greenfield areas of non-greenfield projects, are really about hitting the limits of what you know; they are about meeting unknowns. When that happens, and it does happen decently often, you should not apply a different process, but continue with the one you have, one that already has those “off road tires” that allow you to move in greenfield pastures. This concept of “one process fits them all” is really the hardest thing to get across to teams that have not really managed to deepen their agility. Yet a process like Scrum works equally well on greenfield projects as on non-greenfield projects.
This is best explained in action, so here is how it works. We know that software development is a combination of different tasks. I’ll simplify it to the following:
  • Requirements
  • Design
  • Code
  • Test
  • Deliverable
The most efficient way for a team to “develop” is to follow the flow of dependencies between these tasks. With this approach, you start by writing requirements, then you think about the design, then you code, you test, and finally you deliver a software package. You can do this the agile way or the waterfall way. You can start one phase before ending the previous one, but you still follow the dependency order. This works well because although there are cyclic constraints between the different development phases, they are manageable. For example, requirements (stories) are analyzed by the team before the sprint; then during the sprint, team members have the independence to pick up work as suits them best. Another example: the team gets together and debates design, and when all are aligned, each can continue with their code development.

A “non-greenfield” project has "manageable" interdependencies between the different development tasks. By manageable I mean predictable, and by predictable I mean that the team has enough common knowledge that it can organize a process around its unknowns and resolve them together (e.g. as in preparing your next scrum). To the contrary, greenfield projects and greenfield areas of a project are, before all else, situations where it is simply harder for the team to efficiently pick up the development tasks. The number one reason for this breakdown in team productivity is that the tasks are no longer supported by sufficiently clear dependencies; this results in the team no longer having clear enough objectives when working on the development tasks, and finally too many tasks "just do not go somewhere fast enough"!

You may say: “Ok, so yes, sometimes we do not know; that is when we can take the time to learn!” But if you do think that, do read on, because you are wrong: projects are not the right place to "take time"! Here is the reasoning:
  • It is hard to notice what you do not know.
  • It is hard to learn what you do not know.
  • Having extra time to “observe” and learn will only help you so much
Let’s start with the first one, which leads me to greenfield project rule #1: 
Do not expect to recognize what you do not understand
What you do not understand is what is missing, what is unknown, and unknowns are mostly invisible and therefore hard to notice. That is because they are unknown! (Duh… )

Many of us feel anxious when confronted with unknowns and things we do not understand, yet in a team these feelings are hard to express and hard to use as drivers that can be shared. In fact, in a team, negative feelings may distance people, which is the last thing you want. As anxious feelings do not support a team well, we need a surer and easier way to detect the presence of “unknowns” that are affecting development; we want an “unknown detector”. And we want some automatic way of using this detection tool, so as not to “miss” reacting to the team's lack of understanding. We are in luck, because nothing serves better as your unknown detector than an agile iterative process! When you fail your development iteration, when your sprint does not deliver, that is when you did not know what you thought you did! It could be that you quickly understand what you did not know before, for example that a sick team member is the cause of your troubles, but more often, failed deliverables are an indication that you have unknowns and that your team has entered “the greenfield zone”… (add appropriate music here).

Rule #2, for any development, is therefore very important (in scrum language):
Track velocity, review your sprints truthfully 
Velocity tracking and sprint reviews are your team’s unknown detector. The more newness and unknowns you have, the more precious they are in alerting the whole team of unknown dangers. The next question is: “what should you do when the knowledge alarm bell rings?” As I forewarned, the last thing to do is to “go back to school”. It is not the learning that is wrong; it is what you learn, how quickly you learn, and how productively you learn that is most often the problem. To put it another way, with every development task you can be expected to learn something, and the more you learn the better, as long as you get your task done! In a non-greenfield project, there are still lots of unknowns, but these get resolved because your team has a process to conquer them. In a greenfield project, unknowns accumulate, and when the team fails a sprint, it is in fact the team’s unknown resolution process that is failing. Trying to fix a process by reading up on things rarely does the job: first because reading is a personal thing, not a group process, and then because learning from reading is simply not going to happen fast enough. To keep your team going, you need your learning to be part of the team process, not external to it. This brings us to rule #3:
Drive learning within your process 
Here is the idea: you are ending your sprint, things have not happened the way you wanted them to, and you have delivered half of what you hoped you could. Obviously something is wrong! The easy action is to do nothing about it, just push the failed tasks into the next sprint and move on. Yet that is absolutely the wrong thing to do! Each failure needs to bring a learning. The next level of easy action is to talk about the failure in a sprint retrospective and add a future action to the improvement list. Beep… Failed again. That list will not be consumed fast enough, and no learning will happen. Now we are ready for rule #4:

Realign your backlog to avoid your unknowns but still touch them
I’ll start by saying: “yes, I realize this sounds a little bit crazy”.
Here is the reasoning:
One or more unknowns are keeping your team from achieving its highest productivity; specific aspects of the sprint’s tasks have failed. Your mission, as a team, is to understand what changes can be made to the backlog that will bring your productivity back up AND move your project forward as much as possible in the right direction. The whole team’s goal is to move around the unknowns that have been blocking you, and yet to get as close as possible to them, because in many cases that is where the project's added value will be. The added value will be learning, but also new deliverables that will make you think differently.

How then to extend your process to tackle these unknowns? The situation is that you have a scrum process, and you are normally able to tackle the deeper unknowns in the pre-sprint story grooming and design discussion phases, but now you find yourself starting your sprints neither with good stories, nor good design, nor a clear idea of how you want to develop. Again, you are in greenfield territory. This is when you have team members that make suggestions like "we should use domain specific languages", or "can we use the GPU for hardware acceleration?" These types of suggestions are signs that your team is “uneasy”, that they are out of their comfort zone and are looking for ways to feel better. As I mentioned above, detecting that subtlety of emotions in a team is not a process, but noting your sprint failure and drop in velocity is. The team then needs to take action, but given its current state of "failure", something different needs to be done. Yet I have already told you what you should do: you should revisit your backlog and rework it. The trick is how to make that systematic! This leads to rule #5:
In case of doubt, the product owner bridges the gaps, not the team
Again, you do not change your scrum process, neither do you change your product owner and his role. Yet the product owner's tactics are now somewhat reversed. Normally, it is the team that tries to meet the expectations of the product owner. In greenfield territory, the tables are turned: it is the product owner's task to meet the team's capabilities. The process is actually pretty simple: the product owner asks the team "can you deliver this?" over and over, with different variants, until he has rebuilt the best of possible backlogs. Again, it is not the team that decides that it needs to learn, yet in its answer to the product owner the team may say: "yes, we can deliver that; we will need to do some learning, yet we are pretty sure we can deliver what you are asking". This is scrum, so the double-checking process is known to all: if the team's independent estimations (sizing) of a task are similar enough (and small enough!), then all have good confidence that the team can deliver that task.

In non-greenfield projects, the team is in charge of delivering what it masters. In greenfield projects, it is the product owner's responsibility to maintain a backlog that the team can master. Often this means that the product owner needs to be creative in formulating his desires to meet up with the team's capabilities. Here are a few examples of how such a process works. To make it obvious, I'll imagine the product owner wants to build a car, but for software development the process is no different:
  • Basic features
    • Example: Deliver a motor connected to wheels
  • Limits, boundaries
    • Example: Work out what is the minimum, maximum, and average weight, power, length, height, etc. of the 50 most popular cars on the market
  • Out of order
    • Example: Design a test track
  • Disconnected, but related
    • Example: Deliver the driver's seat design, the interior sizing, and the design of the windows
In each of these examples, the product owner needed to pull back from his initial goal of getting a full car; each time he/she chooses a new team goal that moves the team in the right direction, delivering something of real value, while staying in the comfort zone of the team. You may note that the out of order example takes the Test Driven Development approach, but out of order with a good product owner can be much more flexible than TDD, especially when the team has no idea of how to hook the tests into a nonexistent design. For example, let's say the team needs to develop a cloud app. They will need to use some new APIs, and projects often have given technical constraints; in such a case, the product owner can focus on asking for the shell of an app that uses those APIs, in order to at least tick off the boxes "code calls API" and "deliver code that works in the cloud". Nevertheless, the product owner should always ask that the app does something which is in line with his business. This is an implicit rule: always deliver some form of business value! Also note that the product owner, and the scrum process, continue to strive to deliver deliverables of enough size that much of the whole team contributes. Fragmentation of knowledge, and new learning, is a high risk in new domains, and all should work together to keep the knowledge flowing.

Monday, September 17, 2012

From old to new: dealing with legacy software

A tricky problem is dealing with software legacy.
I think I have never been anywhere where I did not need to deal with legacy in some way or another. In fact, my first task on my first job was to improve the performance of an "old" design rule checker (DRC) by upgrading its geometric store with a KD-tree. That was during a summer job, and later, as an employee, I was asked to write a new DRC to replace the old one. (A DRC is used to check geometric rules in layouts of integrated circuits.) The new DRC was better than the old one, and part of an automated layout tool for analog circuits, which... ultimately failed. Commercially, the project did break even, but other systems worked better. Looking back, with the hindsight of knowing which systems succeeded, this CAD system failed to approach the problem the right way, which was incrementally. For example, the requirements for the DRC were: do what the old software does, but better and faster; yet they really should have been "make it incremental": a small change in the input should generate only a moderate amount of work to revalidate the rules.

Legacy is about dealing with code that exists already and that causes you trouble. Typically the trouble is that the legacy software is equivalent to a massive traffic jam where it is no longer economic to work on it. Changes to the software are simply too expensive: they take too long and they are risky. Unlike new software, legacy software is already in use, and this simple fact is the root of all problems: it makes you stupid! Take the example of that DRC project. My boss had said “do what the old software does but better and faster”, and I wrote a great package for boolean operations on large geometric figures; it was fast, it was better than the previous package, but… it was not what was needed! This is what I call the “dumbifying” effect of legacy software, and the subject of this post.

When a magician performs a trick, you may know the magician is tricking you, but you are mostly unable to do anything other than “fall for it” and think it looks real. Legacy software is a little bit like a nasty magician: it makes things look more real than they deserve. So, for example, if your team tells you they can rewrite a legacy system, your tendency will be to believe them, just because the legacy system exists already and this somehow makes you more confident. If you are given an analysis of the legacy system and it is incomplete, you will judge it less harshly than an analysis for a new, yet to be written system. If your team tells you they can only test 10% of the features of your legacy system, you might say that is better than nothing, while a proposal to test “only” 10% coverage of a new system would be treated as a joke. Legacy software is a little bit like a con artist! It lowers your guard and corrupts your sense of proportion. Worse than that, legacy software makes you feel bad when you think of some of its “legacy properties”; the result is like a Jedi mind shield: legacy software actually blocks you from thinking clearly by making you suffer when you look at it “truthfully”!

In a former job, as a development manager, I was continuously saying “no”, “no”, “no”… to the same question, which was “can we rewrite it?” The thing is that rewriting legacy software when you have not yet accepted to look at it “truthfully” is often not going to work. So here are my recommendations on how to approach your legacy software:
  • Never believe you can rewrite legacy software from scratch without keeping something. Maybe you are keeping a protocol, maybe you are keeping the business functionalities, maybe you are keeping the algorithm, but you need to keep something.
  • Keep what is most precious in your legacy software. Again, it might be the protocol, the business functionalities… but as you need to keep something (see above), you want to make sure that you keep what is most valuable. The more the better!
  • Make sure to deeply analyze what you have identified as valuable and want to keep in the legacy software. Analyze it to death, until it hurts! That is because, by default, you will shy away from doing this analysis, and that will cost you! This is really the number one rule with legacy software: do not let “it” convince you that you do not need to analyze what you will keep! (See the dumbifying and con artist remarks above.)
  • Also, bring in new resources to provide fresh views on your legacy analysis. Again, by default the legacy will bias you; bring in someone who can look at your problem and solution without prior experience with the legacy software.
Sometimes what you want to keep is deeply embedded in your legacy software, for example as an implied data model or API. Do not hesitate to use “tricks” to extract it at the lowest cost. For example, you can extend your types to produce trace info with each operation, and then analyze the trace files to extract the implied structure.
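As an illustration, here is a tiny F# sketch of the tracing trick (the types and names are made up for the example): wrap your values so that every operation also appends a trace record, then mine the accumulated traces offline for the implied structure.

```fsharp
// A value together with the trace of operations that produced it.
type Traced<'a> = { Value : 'a; Trace : string list }

let wrap v = { Value = v; Trace = [] }

// Run an operation and record its name; the trace grows with each step.
let apply (opName : string) (f : 'a -> 'b) (t : Traced<'a>) : Traced<'b> =
    { Value = f t.Value
      Trace = sprintf "%s applied" opName :: t.Trace }

// Usage: run the legacy-like workflow through the wrapped operations, then
// inspect the trace to recover the implied call structure.
let result =
    wrap 10
    |> apply "double" (fun x -> x * 2)
    |> apply "show" string

// result.Value = "20"
// result.Trace = ["show applied"; "double applied"]
```

In a real legacy codebase you would write the trace records to a file and analyze them separately, but the principle is the same: the types do the spying for you.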
Once I wanted to extract an implied model from C++ code, so I wrote a “hand crafted” C++ parser. It took about three days: I started with an equivalent of yacc and lex and the first C++ file to parse, and proceeded to write the parser to parse exactly what I needed in the first file, extending the parser by running it over and over, line after line. When I got to the second file, I was already moving a few lines at a time with each iteration, and by the end of the effort I had parsed the 50 or so files that had the data I needed.
Finally, do not forget the business: if you are in it to make a profit, never write code unless it is supporting your business strategy. This is probably the hardest part with legacy software: your business and marketing people usually have no idea how to approach your legacy rework. If you are not careful, your development will effectively hijack your legacy strategy and possibly lead you into business oblivion.

Thursday, September 06, 2012

Where is my state?

People often associate functional programming with statelessness. The reality is that there is always a notion of state; functional programming is stateful programming too. The ambiguity is probably the result of state appearing in different ways than in a procedural style.

In a procedural style, state is associated with memory, and you change it by using the writable property of memory.

In a functional style, state is still in memory, but the “classical” approach is to disallow yourself from changing it by writing over memory again. This is the immutable programming style. It goes with functional programming because it is very difficult to build on immutability without some functional programming concepts (I have tried!). So if you have state and you cannot write over it, the only thing you can do is read it, and if you want to change the state, you need to write a new memory location with your new state value (a non-functional, near immutable approach is to use arrays and grow them). The problem for beginners is: “where should that new state be?” It needs to be accessible, and while you may have known how to access the old state, where should you put the new state so as to be able to work with it instead of the old state?
Recursion is then the first way to manage state. The idea is that if you have a function F that works with state, then when you create a new version of the state you can simply call F recursively with the new state, and again and again with each new version of the state.

Recursion is always somewhere nearby if you are using immutability, yet you do not need to expose it to manage state. Again, suppose a function F uses state: have it return its state after use; if the state has not changed, then return it unchanged, and if the state changed, then return its new version. The trick is then to have a loop that calls F with the state, takes the returned value to call F again, and so on. As this is immutable code, the loop is a small tail recursive function.
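A small F# sketch of this pattern (the names are mine): a step function takes a state and returns a value plus the next state, and a tail recursive loop threads the state through repeated calls.

```fsharp
// F in the text: takes a state, returns a (value, next state) pair.
let step (counter : int) : string * int =
    sprintf "tick %d" counter, counter + 1

// The small tail recursive loop: call step, feed the returned state back in.
let rec loop n state acc =
    if n = 0 then List.rev acc, state
    else
        let value, state' = step state
        loop (n - 1) state' (value :: acc)

let values, finalState = loop 3 0 []
// values = ["tick 0"; "tick 1"; "tick 2"], finalState = 3
```

Nothing is ever mutated: each call to `step` produces a fresh state, and the loop is the only place that knows how the states are chained.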

State can be big and heavy, and not for the wrong reasons: your application may be a big thing. The last thing you want to do is to copy that whole structure just to make a small change. The simple approach is to reuse parts of your old state to make a new state, changing only what needs to be changed. Yet as the state is usually a tree-like structure, if what is changing is deep in the original state, then it will cost you to create a new state, because you will need to rebuild much of the hierarchy of the state. One way around this is to take the old state, encapsulate it in a “change of state” structure, and use this as the new state. This approach is what I call a differential approach: the state change is the difference. By stacking differences you build a complete state. Usually the differential approach is combined with the “normal” partial copy approach. The reason is that if pursued forever, the “stack” of state changes becomes too big and starts both to consume memory and to take time to access.
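Here is a sketch of the differential approach in F# (the types are illustrative): instead of rebuilding a deep state, stack a small “change of state” on top of the old one, and resolve lookups through the stack.

```fsharp
// A state is either a full base snapshot or a small delta over a previous state.
type State =
    | Base of Map<string, int>
    | Delta of key : string * value : int * previous : State

// Lookups walk down the stack of differences until they hit the base.
let rec lookup key state =
    match state with
    | Base m -> Map.tryFind key m
    | Delta (k, v, prev) -> if k = key then Some v else lookup key prev

// A "change of state" is just one more layer; the old state is untouched.
let update key value state = Delta (key, value, state)

let s0 = Base (Map.ofList [ "x", 1; "y", 2 ])
let s1 = s0 |> update "x" 10 |> update "z" 3

// lookup "x" s1 = Some 10 (shadowed by the delta)
// lookup "y" s1 = Some 2  (found in the base)
// lookup "w" s1 = None
```

When the delta stack grows too tall, you would flatten it back into a new `Base`, which is exactly the combination with the partial copy approach described above.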

In the end your new state will either leave your function “onwards” into recursion or backwards by return, and often the same algorithm may use both methods.
When beginning with the functional style, state seems to be forever a pain to drag around. But with time, patterns appear. First you may notice that not all states have the same lifetime, and they do not all have the same read/write properties. It is a good idea to try to separate states that have different access and update properties. You may then note that the hierarchy of functions aligns itself with the access patterns of your state. Yet if you tie state to functions in the layered fashion that is classical in procedural programming, you realize that what you have is not so much state as parameters. That is, while you are working in an inner function, you have no ability to modify the outer “state”, so it is more parameter-like than state-like. Nevertheless, in situations where states have a strict hierarchy, this approach is ok.

At some point, carrying around state becomes such a standardized routine that it makes sense to look for help to go to the next level. And the next level is… monads. Here is the deal: I said before that state will either exit your function through a further recursion or by return. That means that there is a set of compatible patterns that can be used to assemble little bits of code that work on state, so that they nicely fit and the state zips around as an argument or as a return value. This pattern appears naturally when you work with state: the state is passed as the last argument, and stateful functions always return a state. As you want the function to do something useful, you systematically return a tuple with a state and a function-specific return value. Before understanding monads, I wrote whole applications like this. What monads allow you to do is to standardize this calling pattern so much that it becomes possible to strip the exposed code of the “peripheral” state going in and state going out. The result is code that looks stateless but isn’t. The value of monadic state management is that you make far fewer mistakes, and you can specialize your monad further to give it special properties. Still, from a stateful perspective, the main gain of monadic states is productivity: hiding the state means you can focus on more important things.

Saturday, August 11, 2012

The case of the mysterious distracting type class

I had a magical insight recently about something that had been bugging me for almost two years. The question was: why the hell was I so unproductive creating type classes?

So here is the deal: when you develop, you combine invariants and transformation properties of your different pieces to make something new. If you are someone like me, you live your code, and that means that the relations between the different concepts throughout the code drive me "emotionally". Yesterday, it occurred to me that what is special about type classes is that they are "open". They are open because their properties are not always obvious from their definitions. Worse, new type classes often have ambiguous properties.

I say open because, by contrast, if I avoid type classes by explicitly defining type-specific functions, or if I create encapsulating types to replace the functional polymorphism with data variants, then I am in effect "closing" the design. With a "closed" design, I have the whole design in my head and I "just know" what is best to do next. With an open design, I find myself worrying about peripheral design issues which are mostly not so important, yet I still worry about them... and that really hurts my productivity.
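For illustration, here is what "closing" a design with data variants looks like in F# (my example, not actual project code): a discriminated union fixes the set of cases, so every function over it is checked for completeness and the whole design stays in your head.

```fsharp
// A closed set of cases: the compiler knows there is nothing else.
type Shape =
    | Circle of radius : float
    | Rect of width : float * height : float

// Every match is checked for exhaustiveness; adding a new shape forces every
// function over Shape to be revisited. That is the "closed design" effect.
let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rect (w, h) -> w * h

let total = [ Circle 1.0; Rect (2.0, 3.0) ] |> List.sumBy area
// total = pi * 1 + 6 ≈ 9.14159
```

The open alternative, an interface or type class that anyone can implement later, leaves the set of cases forever unfinished, which is exactly the peripheral worry described above.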

I will therefore adopt a much stricter approach to creating new type classes. The basic rule is NOT to create new type classes. Yes, I love to think about them, but I can't afford to. Then, when I have mentally built all the invariants with the explicit function calls or encapsulating types, and more importantly when I have finished what I was originally trying to do, I can imagine refactoring and bringing in new type classes.

Friday, August 03, 2012

Easy to make money, hard to keep money, asymmetric systems

A market making and brokerage firm, Knight Capital Group, lost all their money (~$400M) when their automated trading system went crazy and, it seems, repeatedly bought high and sold low, as fast as it could, across many instruments.
My guess is that they turned on a test app on the live market. This would explain why they were not feeding the trades into their portfolios.
An algo that buys the spread makes sense if it is used as a counterparty to test other algos, such as a market making application.
Then again, a test app normally has hard coded detections of the live market that turn it off. So who knows?!

This event is interesting in a few different ways:
  • What fail-safe code do you want to include in your algo system?
  • What fail-safe development management process do you implement to try to guarantee that those fail-safe coding standards are met?
  • Trading is really about keeping your profits, not about making them!
  • Organic systems are asymmetric: their behaviors change with the direction of flows!
My fail-safe coding rules are the following:
  • Do hard-code constants! For example, a test application includes production IP addresses and will refuse to run on these.
  • Too much of a good thing is a bad thing: automatically disable strategies that have triggered "too many times".
  • Four eyes: have important code reviewed by someone else. 
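The first two rules can be sketched in a few lines of Scala. The IP addresses and limits below are made-up examples, not real configuration:

```scala
object FailSafe {
  // Rule 1: hard-code the production constants into the TEST build,
  // and refuse to start against them.
  val productionHosts = Set("10.1.1.10", "10.1.1.11")
  def checkEnvironment(targetHost: String): Unit =
    require(!productionHosts.contains(targetHost),
      s"Test application refuses to run against production host $targetHost")

  // Rule 2: automatically disable a strategy that fires too often.
  class Strategy(maxTriggers: Int) {
    private var triggers = 0
    var enabled = true
    def onSignal(): Unit = if (enabled) {
      triggers += 1
      if (triggers > maxTriggers) enabled = false // too much of a good thing
    }
  }
}
```

The point is not the specific thresholds but that both checks are dumb, local and impossible to forget once written.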
Fail-safe development rules are much harder to implement:
  • Keep developers who worry about overall consistency. Developers who forget things are expensive.
  • Encourage your developers to expose and maintain a clean stateful design for your algo system.
(to be continued)

Wednesday, July 18, 2012

Managing software development

This blog entry is more of a little essay than a blog posting; the question I am trying to answer is: what are my key learnings in managing software development?
Software development management is about achieving business success, as well as team and individual success, by leveraging skills and process while focusing on productivity and learning. I know this because I have experienced it and applied it as developer, architect, product owner, project manager, scrum master, development manager and CTO.
Software development is a self-sustained process: much of what needs to be done depends on what has been done, and much of future productivity depends on existing team culture and past investments. It is when starting up greenfield software projects that this "chicken and egg" nature of development is especially noticeable; developers starting in completely new domains are often acutely aware of what they do not know. They may not know, for example, who their users are, what their requirements are, what architecture to use, how they will test their deliverable, or which third party technology to use. The team itself may not know how to work together, nor how to best invest their efforts to keep productivity high. The sad story is that many development projects fail because they go too far into these "uncharted territories of knowledge". The good story is that applying a good process with enough experienced people will help you stay away from taking unmanageable risks.
Good developers are naturally at the top of my list of what makes a good development team. Developers bring the dexterity of "thinking in code" and also bring broader application and work-culture experience with them. You would be surprised how much bad experience a developer can acquire, so beware how you build your team. My experience is that observing a developer working out a solution gives you the best insight into the technical skills they have acquired, as well as their cultural skills. Therefore, to hire developers, I prefer case interviews with real coding tasks, while making sure that there are enough people present and enough open items to push the candidates to interact and show their soft skills.
Most developers are grown up enough to manage themselves… most of the time. Therefore, although I have run waterfall processes, I really recommend agile development because it gives developers their independence while regularly confronting them with the realities of "the real world". A key role to help them in this process of "confronting the facts" is the requirement manager (e.g. product owner). He/she brings analytical domain expertise but also the product evangelism that aligns the development team around the product features. Getting people to agree is much easier when they already agree about something; therefore, having your teams agree with your requirement managers makes it much easier for them to agree "together" on how they will design, build and test software. My experience, as product owner in a high frequency derivatives trading environment, is that the biggest challenge in product management is aligning commercial expectations and development throughput. That brings up the subject of a development team's productivity, and good places to look for it are architecture, QA and build management.
Architecture, quality assurance and build management are magical ingredients of software development: get them right and you rarely need to revisit them, but get them wrong and they will drag you through hell, especially the hell of lost productivity, productivity that often makes the difference between success and failure. One thing these three development processes have in common is that they are not needed to start development. Another is that they are best managed as a "coordinated" solution across development, and not as an assembly of different solutions: you want one build management, one architecture, one notion of QA. That does not necessarily mean one "uniform" solution, good solutions often have a granularity of diversity, but there is an optimal balance of overall cohesion versus local choices. And that is what makes these particularly tricky to manage, both as a team effort and as part of development management.
There are two key understandings in team management, independently of whether it is a top-down style of management or a self-managed team: the first is the very tight relation between what can be changed and what should not be changed; the second is the importance of sharing knowledge and beliefs among all team members. I will not dig deeper into the importance of sharing, there is no notion of team without it, yet the first is crucial to understanding how to manage cross-team activities, especially with regard to productivity. The reasoning is as follows:
  • To improve productivity it makes sense to change things: to look for a better architecture (e.g. refactor designs), a better build system (e.g. bring in a new tool), better test coverage (e.g. new tests).
  • Yet making changes makes less sense when you are already near optimum productivity.
The consequence of this is that your number one mission as a development manager is to help your teams track their productivity, stay aligned around their commitments (e.g. encourage discipline and hard work!), and maintain a common and realistic notion of "optimality". Achieve this, and your development team will know what to change and when to make changes. They will decide whether it is worth investing to improve the architecture, the build system or the QA.
The pain point in all of this is achieving that REALISTIC notion of optimality, and getting agreement around a "discount factor" for sustainability; not everything should be 100% sustainable. Give your requirement managers the mission to evangelize everyone with their product-oriented view of realism. Getting the right balance in what is sustainable really depends on your budget!

Friday, June 29, 2012

Inquiry driven process to complement your agile teams

Helping agile teams with "out of team" work is always a challenge; they naturally tend to rely only on their own resources. It is really a tricky management problem, and I have had my fair share of failures around teams that do not know that they would be better off with help. Yet my current thinking is that it is possible to set up parallel inter-team processes, but for these to work they need to include a large amount of inquiry activity from the teams. By inquiry I mean that people have questions for which they want answers. So when team members have questions, and these do not find answers within the team, that is when supporting them with inter-team processes works.

In "out of the book" scrum, the product owner and the scrum of scrums are meant to be the "only" inter-team processes. But the bigger the domain, the more both the product owner and the scrum of scrums become bottlenecks. Now the insight is that you can add more interfaces into your teams, but they must be PULL oriented: the team members must be asking to learn.

There is a catch: like with any form of "leveraging", any misalignment between what is learned and what you are trying to achieve is going to cause harm. With scrum, it is then the job of the product owner to work with this external process and keep it aligned.

Wednesday, April 25, 2012

Watching TV and programming

Have you ever noticed that some development efforts seem to go nowhere? Maybe because they are just too complicated? Well, what I have noticed is that code I can program while watching TV tends to age much better than code that is complex enough to need my full attention!

Friday, April 13, 2012

Distributed Algorithms and Convexity

I like this paper from CIDR 2011, which shows how to use the convex properties of functions to implement distributed algorithms with provable properties. The idea is that if functions are logically monotonic then they have nice properties. This paper fills in the details of what we already knew, spelling out exactly what these good properties are. FYI: logically monotonic implies that inputs only add to outputs. Said differently, incremental implementations of functions based on selections, joins and projections all have the logical monotonicity property. And this brings us to convexity, because that is exactly why these properties exist: selections, joins and projections are all about preserving the convexity property of the function. And finally, this is why I like monadic and stream-oriented programming: it is about implementing incremental algorithms, and with enough care you can preserve nice provable properties!
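A tiny Scala sketch of what "inputs only add to outputs" buys you. Set union is associative, commutative and idempotent, so partial results from different nodes can be merged in any order, any number of times, and still converge (this is my own toy illustration, not the paper's formalism):

```scala
// A logically monotonic computation: outputs only ever grow.
def merge(a: Set[Int], b: Set[Int]): Set[Int] = a union b

// Three "nodes", each observing part of the input stream.
val node1 = Set(1, 2)
val node2 = Set(2, 3)
val node3 = Set(3, 4)
```

Any merge order, and even re-delivering the same partial result twice, yields the same answer, which is precisely why no coordination is needed.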

On a side note: type inference and non-linear optimization both thrive on convexity. Many concepts are built on a basis of convex properties, so you really want to pay attention to this!

Tuesday, March 13, 2012

What future for futures?

I recently read Twitter's Scala style recommendations and could not help being somewhat unhappy about their recommendation to use futures. They basically say: "Use Futures to manage concurrency."

Fifteen-plus years ago I wrote a futures library which I used in a derivative trading system for a long, long time. All the basic functionalities were there (trigger on future, future timeouts, merged futures) and some more advanced ones, like boolean trigger conditions (e.g. trigger if futureA or futureB), as well as futures across processes and networks. It was a nice library!

Yet ten years later, we removed the use of futures!
Here is the reasoning...

When a future is used, its state can be seen as part of a higher order concurrent logic. The comfort of using futures is that we do not need to model nor design this higher logic; it is implicitly defined and managed for us by the future library. There are situations where this lack of "bigger picture" is a good thing; one of these is using futures at the periphery of your system's design. This makes sense because your design stops at the border of your system, so it makes less economic sense to invest in building a model of how the rest of the world will interact with you. Yet as much as it makes sense to use futures in boundary interfaces, the deeper you go into your system the less it makes sense to use them. What happens is that the implicit higher order model created by the futures has no relation to your system's higher order design. And this leads to bad growth.

Developers typically will start to notice this higher order mismatch in two ways: the first is shared resource management, the second is when using futures in streams.

When a future value is set, the associated triggers may be executed synchronously, by having the "setter" call the trigger directly, or asynchronously, at a later time or within another thread. If you want the triggers to have a "DB"-like transactional property, then you want to stay within the synchronous trigger model. The tricky part with the synchronous trigger model is that it interferes with your shared resource model: if you use locks, you can easily have unexpected deadlocks; if you use a transactional memory model, your transactions are of unknown size because you do not always know what has been set as a trigger, causing large transactions to retry at a performance cost. Granted, enough detective work can work around these issues, but these types of problems can happen at the worst of times, such as in the error handling of the future, possibly in a branch of code which is rarely executed, and often in difficult-to-reproduce scenarios. The solution of going "asynchronous" is often not a solution, because the asynchronously triggered code is severely handicapped: it happens out of context, later.
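To see where the deadlock risk comes from, here is a minimal sketch of a future with synchronous triggers (my own toy class, not any real library's API). Note how `set` ends up running arbitrary callback code on the setter's thread, while the setter may still hold its own application locks:

```scala
import scala.collection.mutable.ListBuffer

// A minimal future whose triggers run SYNCHRONOUSLY in the setter.
class SyncFuture[A] {
  private val triggers = ListBuffer[A => Unit]()
  private var value: Option[A] = None

  def onSet(f: A => Unit): Unit = synchronized {
    value match {
      case Some(a) => f(a)          // already set: run immediately
      case None    => triggers += f // otherwise remember for later
    }
  }

  def set(a: A): Unit = {
    val fs = synchronized { value = Some(a); triggers.toList }
    // The setter now executes code it never wrote, on its own thread,
    // possibly while holding its own locks -- the deadlock scenario
    // described above.
    fs.foreach(f => f(a))
  }
}
```

The transactional comfort and the hazard are the same line of code: `fs.foreach(f => f(a))`.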

Another area where futures meet their limits is streams. Imagine you use a future to hold the return of a computation: you create the future, set up a trigger, launch the computation. Now if you need to do that again (create, set up, launch), and again, and you try to do that a few million times per second, you find that you meet a performance limit with futures. Futures do not scale well within real-time streams. You could push the future concept to achieve high performance streaming, but this could go against creating a true design identity for the stream and would limit the growth of your design.

I am not sure that my examples here are convincing. The reality is that getting your higher order designs right is hard (and expensive in time). So even if futures meet their limits at some level of abstraction, they are definitely mighty comfortable to use before you reach those limits. So maybe my recommendation is "use futures" and then "take them out" when you understand how to do better!

Thursday, February 16, 2012

Double-checked locks to support lockless collaboration

Double-checked locks are normally seen as a form of optimization: they reduce the overhead of lock acquisition by first testing the locking criterion. I recently used double-checked locking to support non-locking collaboration between real-time processes. Here is how it works: all processes agree upon a future collaboration point; each then sets up a lock to protect itself from going past that point without the right information; they then proceed to exchange enough information, in a finite number of transactions, to complete their knowledge so that when they reach the collaboration point the lock is never activated. Well-tuned and repeated, the processes exchange information in a semi-synchronous manner without ever locking!
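A sketch of the scheme in Scala (the names and structure are my own, not from the system described above). Producers publish a "safe up to" watermark ahead of time; a consumer first checks the watermark without any lock, and only falls back to the lock when it would otherwise run past the agreed point:

```scala
import java.util.concurrent.atomic.AtomicLong

class Collaboration {
  private val safeUpTo = new AtomicLong(0L) // published collaboration point
  private val lock = new Object
  var lockedWaits = 0                       // counts slow-path activations

  // Producer side: publish information BEFORE anyone needs it.
  def publish(point: Long): Unit = {
    safeUpTo.set(point)
    lock.synchronized { lock.notifyAll() }
  }

  // Consumer side: classic double-checked pattern.
  def reach(point: Long): Unit = {
    if (safeUpTo.get < point) {        // first check: lock-free fast path
      lock.synchronized {
        while (safeUpTo.get < point) { // second check: under the lock
          lockedWaits += 1
          lock.wait(10)
        }
      }
    }
  }
}
```

When the exchange is well-tuned, `publish` always runs ahead of `reach`, so the fast path always succeeds and `lockedWaits` stays at zero: the lock exists purely as a safety net.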

Monday, January 02, 2012

Convexity as the one concept to bind them

A little bit more math to help us into the new year.
This article is really very nice:

What Shape is your Conjugate? A Survey of Computational Convex Analysis and its Applications (by Yves Lucet)

This article is beautiful because it brings together many, and I mean really many, algorithms and unifies them all through applied convex analysis. For example, there is something in common among:
  • Computing flows through networks
  • Properties of alloys
  • Image processing for MRI, ultrasound, ...
  • Dynamic programming
The magic behind all of these concepts is convexity and how certain mappings preserve certain properties. The article is probably a bit fast for many readers, but truly insightful because of the scope of its coverage. It really provides a common way to approach convexity in its many different forms.