Thursday, February 16, 2012

Double-checked locks to support lockless collaboration

Double-checked locks are normally seen as a form of optimization: they reduce the overhead of lock acquisition by first testing the locking criterion without taking the lock. I recently used double-checked locking to support non-locking collaboration between real-time processes. Here is how it works: all processes agree upon a future collaboration point; each then sets up a lock to protect itself from going past that point without the right information; they then exchange enough information, in a finite number of transactions, that by the time they reach the collaboration point the lock's first check passes and the lock is never activated. Well tuned and repeated, the processes exchange information in a semi-synchronous manner without ever waiting on the locks!
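Here is a minimal sketch of the pattern in OCaml 5 (where Atomic, Mutex, and Condition are in the standard library); the slot type and the publish/read functions are illustrative names, not the actual real-time code:

    (* A one-shot slot that a peer must fill before the agreed
       collaboration point. *)
    type 'a slot = {
      ready : bool Atomic.t;      (* fast-path flag, tested without the lock *)
      lock  : Mutex.t;            (* slow-path protection *)
      cond  : Condition.t;        (* signaled if a reader arrives too early *)
      mutable value : 'a option;  (* the information exchanged ahead of time *)
    }

    let make () =
      { ready = Atomic.make false;
        lock = Mutex.create ();
        cond = Condition.create ();
        value = None }

    (* Writer: publish the information before the collaboration point. *)
    let publish s v =
      Mutex.lock s.lock;
      s.value <- Some v;
      Atomic.set s.ready true;
      Condition.broadcast s.cond;
      Mutex.unlock s.lock

    (* Reader, at the collaboration point: the first check is lock-free.
       In the well-tuned case the information has already arrived, so the
       mutex is never taken; the locked second check is only a safety net
       against going past the point without the data. *)
    let read s =
      if Atomic.get s.ready then s.value
      else begin
        Mutex.lock s.lock;
        while not (Atomic.get s.ready) do
          Condition.wait s.cond s.lock
        done;
        let v = s.value in
        Mutex.unlock s.lock;
        v
      end

Well tuned, publish always wins the race, so read never blocks: that is the lockless collaboration described above.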

Monday, January 02, 2012

Convexity as the one concept to bind them

A little bit more math to help us into the new year.
This article is really very nice:

What Shape is your Conjugate? A Survey of Computational Convex Analysis and its Applications (by Yves Lucet)

This article is beautiful because it brings together many, and I mean really many, algorithms and unifies them all through applied convex analysis. For example, there is something in common among:
  • Computing flows through networks
  • Properties of alloys
  • Image processing for MRI, ultrasound, ...
  • Dynamic programming
The magic behind all of these is convexity and how certain mappings preserve certain properties. The article is probably a bit fast for many readers, but truly insightful because of the scope of its coverage. It really provides a common way to approach convexity in its many different forms.
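As a reminder of the central object (the standard definition, not quoted from the article): the “conjugate” of the title is the Legendre–Fenchel transform,

\[ f^{*}(y) = \sup_{x} \left( \langle x, y \rangle - f(x) \right) \]

and the biconjugate f** is the closed convex envelope of f; it is this conjugacy machinery that lets such different problems be treated uniformly.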

Wednesday, November 16, 2011

The beauty and the beast: explicit and implicit types

Working with Haskell has brought up an unexpected question:

  • Is it more productive to work with implicit types or explicit types?
By this I mean: “Should major types be mostly inferred, or mostly specified with type signatures?”

The bizarre fact is that you may well use two different parts of your brain depending on whether you work with inferred types or with “defined” types! Here is the reasoning:

Programming is an incremental process: changes are made locally and then overall consistency is restored. When programming around types, changes are made either to type specifications or to expressions that affect types. The two kinds of changes lead to two styles of programming: the first starts by changing a type definition or type annotation; the second starts by changing an expression, which changes the inferred types and possibly requires a fix to a type signature.

The bare minimum of type annotation is to not even name the arguments of “top level” functions, as in let f = fun a b -> …, where we only know that f takes two arguments because that is inferred from its body. This style may sound contrived, but it is a way to work around the value restriction in F#.
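For instance (in OCaml, whose syntax here matches the post’s F# notation; average and f are illustrative names):

    (* Explicit style: the signature is written first; edits start from it. *)
    let average : float list -> float =
      fun xs -> List.fold_left (+.) 0.0 xs /. float_of_int (List.length xs)

    (* Implicit style: no annotation at all; even the fact that f takes two
       arguments is only known from its body. Change the body, and the
       inferred type changes with it. *)
    let f = fun a b -> (a +. b) /. 2.0
    (* inferred: val f : float -> float -> float *)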

So how does this affect productivity? Mental work is much about having a good inner dialog. Your mind takes a very static view of the program’s design when you worry about type signatures, whereas when you worry about the result of type inference, you are effectively “re-running” the program in your head. It is not the same thing! And I find that in many cases the second thought process, the reliance on inferred types, is the more productive way to work.

Addendum:
This study goes very much in this direction: https://news.mit.edu/2022/your-brain-your-brain-code-1221