Saturday, November 3, 2012

Asymptotic Productivity of a Language Feature

Good news: our big data visualization DSL has a new name (Superconductor!), runs in a browser, and, even more unusual, has a per-feature asymptotic analysis.

We built a synthesizer for Superconductor that automatically maps widget code down into parallel algorithms, but synthesis isn't magic: the more you ask it to do, the longer it has to run. Synthesis is essentially a search that automates manual reasoning a programmer would otherwise have performed. This suggests a new way of thinking about the benefit of language features. Features still have a cost, but instead of showing up as runtime complexity, it now shows up as compilation time, which is arguably a formal proxy for developer time. The new language design question is how to tame these costs by isolating them, and, when pricey features are still desirable, how to provide levers.

Consider two features of our language:


  • Automatic full tree traversals. For a particular top-down parallel traversal over a document, which parts of the (declarative) computation can be safely mapped into it? Not forcing the programmer to figure this out ends up being O(A^2) in the number A of static object fields being computed. It took a bit of work (iterative refinement and topological sorting), but it was well worth it!
  • Automatic subtree traversals. Sometimes, different parts of a tree have different kinds of data dependencies. For example, text in CSS is represented as a subtree of the document, and several computations over it, such as line wrapping, are inherently sequential. The tree traversal that encounters text does not need to be sequential everywhere, however: within each traversal, we can run different kinds of subtrees with different kinds of sub-traversals, exploiting parallelism within some subtrees and even running disjoint subtrees concurrently. Automatically inferring subtree traversals, though, blows up combinatorially in the number of node types (classes): each partitioning of the classes is a different way of statically characterizing a subtree (and, matched with a traversal type, a prospective parallelization scheme).
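To make the first feature's O(A^2) concrete, here is a toy sketch of the iterative refinement: admit into a top-down traversal every field whose dependencies are already satisfied, and repeat until nothing changes. The field names and the dependency relation are hypothetical, not Superconductor's actual attribute grammar.

```python
# Toy sketch: mapping declarative field computations into one top-down pass.
# deps[f] = the fields f reads; "parent.x" means the parent node's field x.
deps = {
    "width":  {"parent.width"},    # inherited from the parent: top-down is fine
    "x":      {"parent.x"},        # likewise
    "height": {"child.height"},    # synthesized from children: needs another pass
}

def fits_top_down(field, already):
    """A field fits the top-down pass if it reads only parent fields or
    fields already scheduled earlier in the same node visit."""
    return all(d.startswith("parent.") or d in already for d in deps[field])

# Iterative refinement: repeatedly admit fields whose dependencies are met.
# Up to O(A) rounds, each scanning O(A) fields -- hence the O(A^2) bound.
scheduled, remaining = [], set(deps)
changed = True
while changed:
    changed = False
    for f in sorted(remaining):
        if fits_top_down(f, set(scheduled)):
            scheduled.append(f)
            remaining.discard(f)
            changed = True

print(scheduled)  # fields mapped into the top-down traversal: ['width', 'x']
print(remaining)  # left for a later (bottom-up) traversal: {'height'}
```

The fixpoint ordering that falls out is exactly a topological sort of the satisfiable fields, which is why the dependency analysis stays quadratic rather than exponential.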


Synthesis is fast enough that automatic parallelization of full tree traversals is effectively free. However, specializing per-subtree traversal schedules is only fast for programs with few types of nodes. Subtree traversals are useful in practice (such as if you enjoy stylized text on a webpage!), so we still need to support them. What is a language designer to do?
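How bad is "only fast for few types of nodes"? If every partitioning of the classes is a candidate subtree characterization, the candidate count is the Bell number of the class count, which grows faster than exponentially. A quick sketch (the Bell recurrence, not Superconductor's actual search):

```python
# Sketch: counting set partitions of the node classes via the Bell numbers,
# B(m+1) = sum_k C(m, k) * B(k). Each partition is one candidate way of
# statically characterizing subtrees, so the search space grows like this.
from math import comb

def bell(n):
    b = [1]  # B(0) = 1
    for m in range(n):
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))
    return b[n]

for classes in (3, 5, 10, 15):
    print(classes, bell(classes))
# 3 classes  -> 5 candidate partitions
# 5 classes  -> 52
# 10 classes -> 115,975
# 15 classes -> 1,382,958,545
```

Even a modest class hierarchy pushes the unaided search into the billions, which is the blow-up the two levers below are designed around.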


  1. Deprioritization. If parallelization can occur without using a slow feature, it will. In this case, the search looks for a full tree traversal before trying subtree ones. The O(A^2) asymptotics are preserved even when a full tree traversal is impossible: if there is no full tree traversal, we guarantee there is no subtree traversal either, and can therefore stop compilation and terminate with an error. Enumerating all parallelizations would still reveal a subtree traversal if one exists, but that slow path is not exercised during most development.
  2. Sketching. We use sketching for targeted, opt-in prioritization of slow features. If you know the third tree traversal of the algorithm should use subtree traversals, say so, and, even better, give a hint as to how to group the classes. Synthesis will still handle mapping individual computations into the traversals, including the subtree ones, and compilation stays O(A^2)!
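The point of a grouping hint is that it collapses the Bell-number-sized partition space down to a single candidate, leaving only the cheap mapping work. A toy illustration (the class names and hint format are invented for the example):

```python
# Sketch: a grouping hint prunes the partition search to one candidate.
def partitions(items):
    """Enumerate all set partitions of items (a Bell-number-sized space)."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # put `first` into each existing block, or into a block of its own
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield smaller + [[first]]

classes = ["Block", "Inline", "TextRun", "LineBox"]   # hypothetical node types

unhinted = list(partitions(classes))                  # all the search must try
hint = [["Block"], ["Inline", "TextRun", "LineBox"]]  # programmer's grouping
canon = lambda p: sorted(sorted(block) for block in p)
hinted = [p for p in unhinted if canon(p) == canon(hint)]

print(len(unhinted))  # 15 candidates without the hint (Bell(4))
print(len(hinted))    # 1 candidate with it
```

With the partitioning fixed, what remains is mapping individual computations into the chosen traversals, the same O(A^2) refinement as the full-tree case.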


Asymptotics of synthesis is an intriguing way to characterize language features, and perhaps a step towards quantifying developer effort. It defines, in a precise way, how much is 'unsaid' -- how hard the 'exercise for the reader' is. There is still more to be desired -- the asymptotics depend on the synthesis algorithm and on how features are translated into it -- but I found it a neat and insightful indicator of productivity.





