Thursday, December 30, 2010

Differential programming and sandwich structures

I recently commented (to the Swiss F# User Group):
  • The zipper structure and the notion of data differentiation are key to functional programming. You need to use this concept over and over until you can think of algorithms both in terms of the original type and in terms of its "differential" zipper form.
Yesterday evening I had a great example of this truth: I wanted to add types to an AST (Abstract Syntax Tree), and I did this by layering the type information on the tree's "edges", not on the nodes. Had I tried to add the types to the nodes, I would have needed to change my code or use active pattern matching (which I do not like). Instead, I upgraded the AST type to refer to its sub-trees through a generic type parameter. I then instantiate the AST type with sub-trees that are tuples of a type and an AST. The resulting AST is layered, alternating between AST nodes and type nodes: it looks like layers of ham and cheese. If the algorithms on the AST have been implemented "in a differential form" (which is really about traversals and mappings), then you can compose the traversal of the AST and the traversal of the type nodes into one traversal without too much difficulty.
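As a minimal sketch of this layering (with hypothetical names, not the actual prototype's types): the node shape is made generic in its sub-tree type, so the same shape can be instantiated untyped, or with a type layer between every pair of AST layers.

```fsharp
// Hypothetical sketch: the node shape is generic in its sub-tree type 'S.
type Expr<'S> =
    | Num of int
    | Add of 'S * 'S

// Untyped tree: the knot is tied directly.
type Ast = Ast of Expr<Ast>

// Typed "sandwich": a type layer between every pair of AST layers.
type Ty = TInt
type TypedAst = TypedAst of Ty * Expr<TypedAst>

// A traversal written against the generic shape composes with the
// type layer in a single pass.
let rec eval (TypedAst (_, e)) =
    match e with
    | Num n -> n
    | Add (a, b) -> eval a + eval b
```

For instance, `eval (TypedAst (TInt, Add (TypedAst (TInt, Num 1), TypedAst (TInt, Num 2))))` walks both layers at once, using the AST layer and skipping over the type layer.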

The result is that my latest functional language prototype can now be type-inferred incrementally, because the types are now associated with the AST.

(Having written this, I searched Google and found this blog, which is somewhat Haskell-specific as it uses functors, a feature F# does not have.)

Tuesday, December 21, 2010

Functional abstraction through data differentiation

The zipper and the notion of data differentiation are a key to the "higher level" architecture of functional programs.

Here is an example. Suppose we want to design a very simple graphical editor. Take a container that holds a recording of the user's editing actions; these could be as simple as sequences of "mouse down", "mouse move", and "mouse up" commands. Now a selection on this container returns two containers: the original one with "holes" where the selected material was taken out, and the selection itself, containing the content of the "holes". Reassembling the two brings us back to the original data. If instead we transform the selected subset and then reassemble the two, we get something new. Try it out; it is a simple concept that goes very far in its usefulness.
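A minimal sketch of this idea, assuming a list-based recording and hypothetical command and slot types: a selection splits the recording into the original with holes plus the selected content, and reassembly (possibly after transforming the selection) is its inverse.

```fsharp
type Cmd =
    | MouseDown of int * int
    | MouseMove of int * int
    | MouseUp of int * int

// The original container with "holes", and the selection taken out of it.
type Slot = Kept of Cmd | Hole

let select pred (cmds: Cmd list) =
    let slots = cmds |> List.map (fun c -> if pred c then Hole else Kept c)
    let selection = cmds |> List.filter pred
    slots, selection

// Put the (possibly transformed) selection back into the holes.
let rec reassemble slots selection =
    match slots, selection with
    | [], [] -> []
    | Kept c :: rest, _ -> c :: reassemble rest selection
    | Hole :: rest, c :: sel -> c :: reassemble rest sel
    | _ -> failwith "selection does not match the holes"
```

Reassembling an untouched selection restores the original recording; mapping a transformation (say, a translation of coordinates) over the selection first yields an edited recording.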

Friday, December 17, 2010

Monad composition with state and seq in F#

I've become master of the generic state monad: it's a state monad with multiple independent inner states. Pursuing my late-night education sessions, I tried out something I had been thinking about for some time: that some of these inner states could themselves be monads. In my case, I added one new state of type "sequence" (F#'s enumerative monad), which I use to trace execution for debugging purposes. And it works very well.
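The actual monad composition is not shown here, but the underlying idea of a state carrying multiple independent payloads, one of them a debugging trace, can be sketched like this (hypothetical names throughout):

```fsharp
// Minimal state monad; the state is a pair: the "real" state plus a trace.
type State<'s, 'a> = State of ('s -> 'a * 's)
let runState (State f) s = f s

type StateBuilder() =
    member _.Return x = State (fun s -> x, s)
    member _.Bind (m, f) =
        State (fun s ->
            let a, s' = runState m s
            runState (f a) s')
let state = StateBuilder()

// One inner state is a counter, the other a trace of messages.
let tick = State (fun (n, trace) -> n, (n + 1, trace))
let log msg = State (fun (n, trace) -> (), (n, msg :: trace))

let prog =
    state {
        let! a = tick
        do! log (sprintf "ticked to %d" a)
        let! b = tick
        return a + b
    }
```

Running `runState prog (0, [])` returns the sum together with the advanced counter and the accumulated trace; the two payloads never interfere with each other.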

Sunday, December 12, 2010

Modeling software systems: data centric, and then what?

For many years I favored a declarative, data-centric view of software architecture: graphs with a compositional hierarchy were my mantra. I once explained that these designs were like the One Ring in The Lord of the Rings: “One pattern to rule them all and in the design simplify them”.

Yet in the end I have changed my beliefs. I now favor a more “equation solving” view of modeling. I find more and more, especially in the context of real-time automation, that models are inherently “suspended” between a data-centric model and what I call a dynamic model, which is easier to describe differently. I find it easiest to describe dynamic properties as domain-specific type theories in a functional framework. These are expressed as DSLs with ad hoc types and type-inference solvers.

Tuesday, December 07, 2010

Generic types and type inference (F#, ...)

Do not expect:
match x with | Fail _ as z -> z | ...
to be the same as
match x with | Fail d -> Fail d | ...

This is because the first variant outputs the same type as its input, while the second variant creates a new type variable which may or may not end up being the same type as the input's.
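A sketch of where the difference bites, using a hypothetical result type: with `Fail _ as z` the output type is forced to unify with the input type, so a success-mapping function is constrained to `'a -> 'a`, whereas re-wrapping with `Fail d -> Fail d` leaves the output success type free.

```fsharp
type Res<'a, 'e> =
    | Success of 'a
    | Fail of 'e

// Re-wrapping: the output success type is a fresh variable, so this gets
// the general type ('a -> 'b) -> Res<'a,'e> -> Res<'b,'e>.
let mapRes f = function
    | Fail d -> Fail d
    | Success v -> Success (f v)

// Reusing the input value: input and output types must unify, so this is
// constrained to ('a -> 'a) -> Res<'a,'e> -> Res<'a,'e>.
let mapRes' f = function
    | Fail _ as z -> z
    | Success v -> Success (f v)
```

For example, `mapRes string (Success 1)` type-checks (it maps `int` to `string`), while the same call with `mapRes'` is rejected by the compiler.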

Monadic fun: real life state monads

I am deepening my functional programming skills these days and becoming master of the state monad (computation expression) in F#. I had read much about monads but have to conclude that 95% of what is written is not about "real programs". The state monad is always presented within the scope of a single state and rarely with error management. But real life is about having many state-dependent algorithms that intertwine, and having an error management scheme that keeps you within your main paradigms.

Here are a few things to care about:
1) Assume that the state carries multiple payloads, usually one per algorithm. Use accessors to convert that "global" monad state to each algorithm-specific state. You'll need a get and a map accessor for each "sub-state". At the top level, when assembling your different modules, you instantiate the algorithm-specific monads with their accessors; internally they work on their specific state, but in reality they are always accessing the single global common state of the monad. A good example: design a parser and a type-inference algorithm separately, each based on the state monad, then bring the two together to catch type errors during parsing.
2) Bring all errors into a single application-wide error type. As with the global state concept, you have algorithm-specific error types; use an error-specific accessor to bring each local error type into the broader global error management scheme.
3) If an algorithm's state is not persistent, but only local, then you do not need to keep it in the global state. In such cases, you locally augment the monad's state with the algorithm's local content, then revert back to the global state at the end of the algorithm. For example, keep the "normal state" in one element of a tuple and use the other to store the local state. The following monad "transformers" bridge between the global state monads and the locally augmented form:

let rec mapErrState f = function
    | ErrMsg (et, s)  -> ErrMsg (et, f s)
    | ErrOr (e1, e2)  -> ErrOr (mapErrState f e1, mapErrState f e2)
    | ErrAnd (e1, e2) -> ErrAnd (mapErrState f e1, mapErrState f e2)
    | ErrTag (et, e)  -> ErrTag (et, mapErrState f e)

let map1M m : _ -> StateR<_,_,_> = fun a -> toState (fun (s, x) ->
    match runState (m a) s with
    | Success (v, s') -> Success (v, (s', x))
    | Fail e -> Fail (mapErrState (fun s' -> (s', x)) e))

let map2M m : _ -> StateR<_,_,_> = fun a -> toState (fun (x, s) ->
    match runState (m a) s with
    | Success (v, s') -> Success (v, (x, s'))
    | Fail e -> Fail (mapErrState (fun s' -> (x, s')) e))

let pairM t (m: _ -> StateR<_,_,_>) : _ -> StateR<_,_,_> = fun a -> toState (fun s ->
    match runState (m a) (t, s) with
    | Success (v, (t', s')) -> Success ((v, t'), s')
    | Fail e -> Fail (mapErrState (fun (t', s') -> s') e))
You'll note that my failure type stores states (it makes debugging easier), which is why I need to map over the errors.
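Point 1 above can be sketched as follows, with hypothetical record types standing in for the parser and inference states: each algorithm is written against its own sub-state, and a get/map accessor pair lifts it to the shared global state.

```fsharp
type ParserState = { pos: int }
type InferState = { nextVar: int }
type GlobalState = { parser: ParserState; infer: InferState }

// A get / map accessor pair for each sub-state.
let getParser (g: GlobalState) = g.parser
let mapParser f (g: GlobalState) = { g with parser = f g.parser }
let getInfer (g: GlobalState) = g.infer
let mapInfer f (g: GlobalState) = { g with infer = f g.infer }

// Lift an algorithm written on its own sub-state to the global state.
let liftParser (step: ParserState -> 'a * ParserState) =
    fun (g: GlobalState) ->
        let a, p' = step (getParser g)
        a, mapParser (fun _ -> p') g

let liftInfer (step: InferState -> 'a * InferState) =
    fun (g: GlobalState) ->
        let a, i' = step (getInfer g)
        a, mapInfer (fun _ -> i') g
```

With this in place, an `advance` step on `ParserState` and a `freshVar` step on `InferState`, once lifted, compose over the same `GlobalState` without knowing about each other.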

Update: an earlier visitor may have noticed that I have updated this code to work with a "function that returns state monads" rather than directly with the state monad. There is a reason for this, but it will need to wait for a later post.