Thursday, December 30, 2010

Differential programming and sandwich structures

I recently commented (to the Swiss F# User Group):
  • The zipper structure and the notion of data differentiation are key to functional programming. You need to use this concept over and over until you can think of algorithms both in terms of the original type and in terms of its "differential" zipper form (see http://en.wikipedia.org/wiki/Zipper_(data_structure) ).
Yesterday evening I came across a great example of this truth: I wanted to add types to an AST (Abstract Syntax Tree), and I did so by layering the type information on the tree's "edges", not on the nodes. Had I tried to add the types on the nodes I would have needed to change my code or use active pattern matching (which I do not like). Instead I upgraded the AST type to refer to its sub-trees through a generic type parameter. I then instantiate the AST type with sub-trees that are a tuple of a type and an AST. The AST is then layered, alternating between AST nodes and type nodes: it looks like layers of ham and cheese. If the algorithms on the AST have been implemented "in a differential form" (which is really about traversals and mappings), then you can compose the traversal of the AST and the traversal of the type nodes into one traversal without too much difficulty.
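
Here is a minimal sketch of the layering trick; the names (Expr, Ast, TypedAst, Ty) are hypothetical and not my actual prototype's types:

// The AST refers to its sub-trees through a generic parameter instead of itself.
type Expr<'sub> =
    | Num   of int
    | Var   of string
    | Add   of 'sub * 'sub
    | Apply of 'sub * 'sub

// The plain AST ties the knot with itself.
type Ast = Ast of Expr<Ast>

// The typed AST alternates layers: each sub-tree is paired with its type,
// giving the "ham and cheese" sandwich structure.
type Ty = TInt | TVar of int | TFun of Ty * Ty
type TypedAst = TypedAst of Expr<Ty * TypedAst>

// One generic map over the sub-tree slot serves both instantiations.
let mapExpr (f : 'a -> 'b) (e : Expr<'a>) : Expr<'b> =
    match e with
    | Num n        -> Num n
    | Var v        -> Var v
    | Add (a, b)   -> Add (f a, f b)
    | Apply (a, b) -> Apply (f a, f b)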

The result is that my latest functional language prototype can now be type inferred incrementally, because the types are now associated with the AST.

(Having written this, I searched Google and found this blog, which is somewhat Haskell specific as it uses functors, which F# does not have.)

Tuesday, December 21, 2010

Functional abstraction through data differentiation

The zipper and the notion of data differentiation are key to the "higher level" architecture of functional programs.

Here is an example. Suppose we want to design a very simple graphical editor. We have a container that holds the recordings of the user's editing; this could be as simple as sequences of "mouse down", "mouse move" and "mouse up" commands. Now a selection on this container returns two containers: the original one with "holes" where the selected material was taken out, and the selection containing the content of the "holes". Reassembling the two brings us back to the original data. If instead we transform the selected subset and then reassemble the two, we get something new. Try it out; it is a simple concept that goes very far in its usefulness.
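
A minimal sketch of the concept; Command, Slot, select and reassemble are hypothetical names, just enough to show the split and the reassembly:

// A recording is a list of editor commands; a selection splits it into a
// context with holes plus the content taken out of those holes.
type Command =
    | MouseDown of int * int
    | MouseMove of int * int
    | MouseUp

type Slot = Kept of Command | Hole

// Split the recording: the context keeps a Hole wherever a command was selected.
let select isSelected commands =
    let context   = commands |> List.map (fun c -> if isSelected c then Hole else Kept c)
    let selection = commands |> List.filter isSelected
    context, selection

// Plug the (possibly transformed) selection back into the holes.
let reassemble context selection =
    let rec go ctx sel =
        match ctx, sel with
        | [], _                 -> []
        | Kept c :: rest, _     -> c :: go rest sel
        | Hole :: rest, s :: ss -> s :: go rest ss
        | Hole :: rest, []      -> go rest []
    go context selection

// reassemble ctx sel rebuilds the original; transforming sel first yields an edit.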

Friday, December 17, 2010

Monad composition with state and seq in F#

I've become a master of the generic state monad: it's a state monad with multiple independent inner states. Pursuing my late night education sessions, I tried out something I had been thinking about for some time: letting some of these inner states be monads themselves. In my case, I added one new state of type "sequence" (F#'s enumerative monad) and use it to trace execution for debugging purposes. It works very well.
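
Here is a minimal sketch of the idea with hypothetical names (my real monad carries more inner states plus error management):

// The monad's state is a pair whose second component is itself a sequence,
// used purely as an execution trace.
type State<'s, 'a> = State of ('s -> 'a * 's)

let runState (State f) s = f s
let returnS a = State (fun s -> a, s)
let bindS (State m) f =
    State (fun s ->
        let a, s' = m s
        runState (f a) s')

// Append a message to the inner trace sequence without touching the domain state.
let trace msg : State<'st * seq<string>, unit> =
    State (fun (st, log) -> (), (st, Seq.append log (Seq.singleton msg)))

// Run a function over the domain half of the state, leaving the trace alone.
let mapDomain f =
    State (fun (st, log) -> (), (f st, log))

// Example: a step that logs and then bumps an integer domain state.
// let step = bindS (trace "start") (fun () -> mapDomain ((+) 1))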

Sunday, December 12, 2010

Modeling software systems: data centric, and then what?

For many years I favored a declarative, data centric view of software architecture: graphs with a compositional hierarchy were my mantra. I once explained that these designs were like the One Ring in The Lord of the Rings: “One pattern to rule them all and in the design simplify them”.

Yet in the end I have changed my beliefs. I now favor a more “equation solving” view of modeling. I find more and more, especially in the context of real time automation, that the models are inherently “sustained” between a data centric model and what I call a dynamic model, which is easier to describe differently. I find it easiest to describe dynamic properties as domain specific type theories in a functional framework, expressed as DSLs with ad hoc types and type inference solvers.

Tuesday, December 07, 2010

Generic types and type inference (F#, ...)

Do not expect:
match x with | Fail _ as z -> z | ...
to be the same as
match x with | Fail d -> Fail d | ...

This is because the first variant outputs the same type as its input, while the second variant creates a new type variable which may or may not end up being the same as the input's type.
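
A minimal sketch of the difference, using a hypothetical Res type:

type Res<'ok, 'err> =
    | Success of 'ok
    | Fail    of 'err

// "Fail _ as z -> z" returns the input value itself, so the result type is
// pinned to the input type: Res<'a,'e> -> Res<'a,'e>.
let keep x =
    match x with
    | Fail _ as z -> z
    | Success v   -> Success v

// "Fail d -> Fail d" rebuilds the value; only the error parameter is tied to
// the input, so the success parameter is a fresh type variable, here forced
// to string by the other branch: Res<'a,'e> -> Res<string,'e>.
let rebuild x =
    match x with
    | Fail d    -> Fail d
    | Success v -> Success (sprintf "%A" v)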

Monadic fun: real life state monads

I am deepening my functional programming skills these days and becoming a master of the state monad (computation expression) in F#. I had read much about monads, but I have to conclude that 95% of what is written is not about "real programs". The state monad is always presented within the scope of a single state and rarely with error management. But real life is about having many state dependent algorithms that intertwine, and having error management that keeps you within your main paradigms.

Here are a few things to care about:
1) Assume that the state carries multiple payloads, usually one per algorithm. Use accessors to convert that "global" monad state to each algorithm-specific state; you'll need a get and a map accessor for each "sub-state" (see the sketch at the end of this post). At the top level, when assembling your different modules, you instantiate the algorithm-specific monads with their accessors: internally they work on their specific state, but in reality they are always accessing the single global common state of the monad. A good example: design a parser and a type inference algorithm separately, both based on the state monad, then bring the two together to catch type errors during parsing.
2) Bring all errors into a single application-wide error type. As with the global state concept, you have algorithm-specific error types; use an error-specific accessor to bring each local error type into the broader global error management scheme.
3) If an algorithm's state is not persistent, but only local, then you do not need to keep it in the global state. In this case, you locally augment the monad's state with the algorithm's local content, and revert back to the global state at the end of the algorithm. For example, keep the "normal state" in one element of a tuple and use the other to store the local state. The following monad "transformers" are used to bridge the compatibility between the global state monads and the locally augmented form:

// Map a function over every state stored inside an error value.
let rec mapErrState f = function
    | ErrMsg(et, s)  -> ErrMsg(et, f s)
    | ErrOr(e1, e2)  -> ErrOr(mapErrState f e1, mapErrState f e2)
    | ErrAnd(e1, e2) -> ErrAnd(mapErrState f e1, mapErrState f e2)
    | ErrTag(et, e)  -> ErrTag(et, mapErrState f e)

// Run a sub-state monad on the first component of a paired state (s, x),
// leaving the second component untouched (and mapping it into any stored error states).
let map1M m : _ -> StateR<_,_,_> =
    fun a -> toState (fun (s, x) ->
        match runState (m a) s with
        | Success (v, s') -> Success (v, (s', x))
        | Fail e          -> Fail (mapErrState (fun s' -> (s', x)) e))

// Same as map1M, but for the second component of the paired state (x, s).
let map2M m : _ -> StateR<_,_,_> =
    fun a -> toState (fun (x, s) ->
        match runState (m a) s with
        | Success (v, s') -> Success (v, (x, s'))
        | Fail e          -> Fail (mapErrState (fun s' -> (x, s')) e))

// Locally augment the state with an extra payload t: run m against (t, s),
// return the final t' alongside the result, and keep only the global state s'.
let pairM t (m : _ -> StateR<_,_,_>) : _ -> StateR<_,_,_> =
    fun a -> toState (fun s ->
        match runState (m a) (t, s) with
        | Success (v, (t', s')) -> Success ((v, t'), s')
        | Fail e                -> Fail (mapErrState (fun (t', s') -> s') e))

You'll note that my failure type stores states (which makes debugging easier); this is why I need to map through the errors.

Update: an earlier visitor may have noticed that I have updated this code to work with a "function that returns a state monad" rather than directly with the state monad. There is a reason for this, but it will have to wait for a later post.
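
To complement the transformers above, here is a minimal sketch of the get/map accessor idea from point 1. The names and record types are hypothetical, not my actual code:

// The global state carries one payload per algorithm, and each algorithm
// sees only its own sub-state through a pair of accessors.
type ParserState = { pos : int }
type InferState  = { nextVar : int }
type Global      = { parser : ParserState; infer : InferState }

// get / map accessors for each sub-state.
let getParser (g : Global) = g.parser
let mapParser f (g : Global) = { g with parser = f g.parser }

let getInfer (g : Global) = g.infer
let mapInfer f (g : Global) = { g with infer = f g.infer }

// Lift an algorithm written against its own sub-state into the global state
// (stateFn : 'sub -> 'a * 'sub).
let liftSub get map stateFn (g : Global) =
    let a, sub' = stateFn (get g)
    a, map (fun _ -> sub') g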

Tuesday, November 02, 2010

my Dell Latitude X1

I need to praise my sturdy little Dell Latitude X1 laptop, which I've had for the last five years and which has always lived up to expectations. It's a Pentium M 1.1 GHz with only a 60 GB hard disk and 1 GB of RAM. Yet I happily run Visual Studio 2010 on it, even in cramped airplane seats, and I can carry it in a small bag. I recently scanned the market for a new small laptop. I looked at all the brands, and Dell's M 101z came up as a potential favorite. But even with its dual core and its 4 GB, there was not enough new value to make the change.

Dragon NaturallySpeaking professional

I just installed Dragon NaturallySpeaking Professional. It takes getting used to speaking without making mistakes, but so far so good. The pros: so far in this blog, what I have spoken is exactly what has been typed, and that after only a minimal amount of training. Another high point was that I fed it an MP3 recording I made on a windy day and it came out 90% correct. The negatives: even on my six core AMD machine the software lags in reactivity. I purchased the Professional version, which lets me hear my voice played back over the transcribed text, and switching between correcting the text and listening to my voice takes half a second, which is too long. Still, I'm very happy with the product.

This blog entry for example needed only four minor corrections from its original transcription.

Thursday, August 05, 2010

Do not copy paste, do not use autocompletion

I do not believe in auto-completion help in an IDE. Nor do I believe that you should copy-paste code snippets to "be faster".

I believe that your brain must hold the purest form of the source code in its memory: the sequence of statements, or a visual image of the functions. The last thing you want to do is "corrupt" this mental image with "grey areas" that your brain has not learned in a compatible manner. Every time you use auto-complete you push aside your brain and take a shortcut. The same goes for copy-paste. Copy-paste may come to mind because there exists a pattern that maps the original understanding of the source to the copied version. But in such a case you are introducing redundancy, so better not to do that.

Wednesday, August 04, 2010

Brilliant web development

I totally believe in tools like WebSharper (http://www.intellifactory.com/Home.aspx) and, for many years now, haXe (http://haxe.org/). Software development is an economic process where the optimum is a smart assembly of concepts.

Tuesday, August 03, 2010

I'm back

Since my last blog:

  • I have been using F# for my functional programming
  • My type inference algorithms have led me into zippers and data differentiation
  • My math needs have led me to integrated optimization methods and approximate dynamic programming
  • I've done the CUDA work but found it expensive
  • I have purchased a home computer because I thought it would be fun
  • I still use my Dell X1 as my laptop because it has all you need (except a super fast CPU).

I hope to pick up on my blogging activity.