Actually, it is not a knife; it goes by the name of Finger Tree.
Created by Ralf Hinze and Ross Paterson in 2004, and based to a large extent on the work of Chris Okasaki on Implicit Recursive Slowdown and Catenable Double-Ended Queues, this data structure, to quote the abstract of the paper introducing Finger Trees, is:
"a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece. Representations achieving these bounds have appeared previously, but 2-3 finger trees are much simpler, as are the operations on them. Further, by defining the split operation in a general form, we obtain a general purpose data structure that can serve as a sequence, priority queue, search tree, priority search queue and more."
Why the finger tree deserves to be called the Swiss knife of data structures can best be explained by again quoting the introduction of the paper:
"The operations one might expect from a sequence abstraction include adding and removing elements at both ends (the deque operations), concatenation, insertion and deletion at arbitrary points, finding an element satisfying some criterion, and splitting the sequence into subsequences based on some property. Many efficient functional implementations of subsets of these operations are known, but supporting more operations efficiently is difficult. The best known general implementations are very complex, and little used.
This paper introduces functional 2-3 finger trees, a general implementation that performs well, but is much simpler than previous implementations with similar bounds. The data structure and its many variations are simple enough that we are able to give a concise yet complete executable description using the functional programming language Haskell (Peyton Jones, 2003). The paper should be accessible to anyone with a basic knowledge of Haskell and its widely used extension to multiple-parameter type classes (Peyton Jones et al., 1997). Although the structure makes essential use of laziness, it is also suitable for strict languages that provide a lazy evaluation primitive."
Efficiency and universality are the two most attractive features of finger trees. No less important is simplicity, which allows easy understanding, straightforward implementation and uneventful maintenance.
Stacks support efficient access only to the first item of a sequence; queues and deques support efficient access to both ends, but not to a randomly accessed item. Arrays allow extremely efficient O(1) access to any of their items, but are poor at insertion, removal, splitting and concatenation. Lists are poor (O(N)) at locating an item by index.
Remarkably, the finger tree is efficient with all these operations. One can use this single data structure for all these types of operations as opposed to having to use several types of data structures, each most efficient with only some operations.
Note also the words functional and persistent, which mean that the finger tree is an immutable data structure.
In .NET the IList<T> interface specifies a number of void methods, which change the list in place (so the instance object is mutable). To implement an immutable operation, one first needs to make a copy of the original structure (List<T>, LinkedList<T>, …, etc.). An achievement of .NET 3.5 and LINQ is that the set of new extension methods (of the Enumerable class) implement immutable operations.
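To make the contrast concrete, here is a minimal standalone sketch showing that the LINQ operators Where and Concat leave their source list untouched and yield new sequences instead of mutating in place:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LinqImmutabilityDemo
{
    static void Main()
    {
        var original = new List<int> { 1, 2, 3 };

        // Where and Concat do not modify 'original'; each produces a new sequence.
        var evens = original.Where(n => n % 2 == 0).ToList();
        var extended = original.Concat(new[] { 4 }).ToList();

        Console.WriteLine(original.Count);  // still 3: the source list is unchanged
        Console.WriteLine(extended.Count);  // 4
    }
}
```

This is exactly the persistent style of programming that the finger tree supports natively, rather than by copying.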
What about a C# implementation? In February Eric Lippert had a post in his blog about finger trees. The C# code he provided does not implement all operations of a Finger Tree, and probably this is the reason why this post is referred to by Wikipedia only as "Example of 2-3 trees in C#", but not as an implementation of the Finger Tree data structure. Actually, he did have a complete implementation at that time (see the Update at the start of this post), but decided not to publish it.
My modest contribution is what I believe to be the first published complete C# implementation of the Finger Tree data structure as originally defined in the paper by Hinze and Paterson (only a few exercises have not been implemented).
Programming a Finger Tree in C# was as much fun as it was a challenge. The finger tree structure is defined in an extremely generic way. At first I was even concerned that C# might not be expressive enough to implement such rich genericity. It turned out that C# lived up to the challenge perfectly. Here is a small example of how the code uses multiple type parameters and nested type constraints:
// U — the type of Containers that can be split
// T — the type of elements in a container of type U
// V — the type of the Measure-value when an element is measured
public class Split<U, T, V>
    where U : ISplittable<T, V>
    where T : IMeasured<V>
{
    ...
}
Another challenge was to implement lazy evaluation (the .NET term for this is "deferred execution") for some of the methods. Again, C# was up to the challenge with its IEnumerable interface and the ease and finesse of using the "yield return" statement.
The net result: it was possible to write code like this:
public override IEnumerable<T> ToSequence()
{
    ViewL<T, M> lView = LeftView();
    yield return lView.head;
    foreach (T t in lView.ftTail.ToSequence())
        yield return t;
}
Another challenge, of course, was that one definitely needs to understand Hinze and Paterson's article before even trying to start the design of an implementation. While the text should be straightforward to anyone with some Haskell and functional programming experience, it requires a bit of concentration and some very basic understanding of fundamental algebraic concepts. In the text of the article one will find a precise and simple definition of a Monoid. My first thought was that such academic knowledge would not really be necessary for a real-world programming task. Little did I know… It turned out that the Monoid plays a central role in the generic specification of objects that have a Measure.
I was thrilled to code my own version of a monoid in C#:
public class Monoid<T>
{
    public delegate T monOp(T t1, T t2);

    private T theZero;
    public monOp theOp;

    public Monoid(T tZero, monOp aMonOp)
    {
        theZero = tZero;
        theOp = aMonOp;
    }

    public T zero
    {
        get { return theZero; }
    }
}
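As a quick sanity check of what "monoid" demands, here is a standalone sketch (using a plain Func rather than the class above) verifying the monoid laws for addition on uint: the zero is a left and right identity, and the operation is associative:

```csharp
using System;

class MonoidLawsDemo
{
    static void Main()
    {
        // The (uint, +, 0) monoid: 0 is the identity, + is associative.
        Func<uint, uint, uint> op = (a, b) => a + b;
        uint zero = 0;

        uint x = 5, y = 7, z = 11;
        Console.WriteLine(op(zero, x) == x);                    // left identity: True
        Console.WriteLine(op(x, zero) == x);                    // right identity: True
        Console.WriteLine(op(op(x, y), z) == op(x, op(y, z)));  // associativity: True
    }
}
```

These are precisely the properties the finger tree relies on when it caches measures in internal nodes: associativity lets it combine cached values in any grouping.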
Without going into too much detail, here is how the appropriate Monoids are defined in suitable auxiliary classes, to be used in defining a Random-Access Sequence, a Priority Queue and an Ordered Sequence:
public static class Size
{
    public static Monoid<uint> theMonoid =
        new Monoid<uint>(0, new Monoid<uint>.monOp(anAddOp));

    public static uint anAddOp(uint s1, uint s2)
    {
        return s1 + s2;
    }
}
public static class Prio
{
    // the zero of the max monoid is negative infinity
    public static Monoid<double> theMonoid =
        new Monoid<double>(double.NegativeInfinity, new Monoid<double>.monOp(aMaxOp));

    public static double aMaxOp(double d1, double d2)
    {
        return (d1 > d2) ? d1 : d2;
    }
}
public class Key<T, V> where V : IComparable
{
    public delegate V getKey(T t);

    // maybe we shouldn’t care for NoKey, as this is too theoretic
    public V NoKey;
    public getKey KeyAssign;

    public Key(V noKey, getKey KeyAssign)
    {
        this.NoKey = noKey;
        this.KeyAssign = KeyAssign;
    }
}
public class KeyMonoid<T, V> where V : IComparable
{
    public Key<T, V> KeyObj;
    public Monoid<V> theMonoid;

    // keep the second key, unless it is NoKey
    public V aNextKeyOp(V v1, V v2)
    {
        return (v2.CompareTo(KeyObj.NoKey) == 0) ? v1 : v2;
    }

    public KeyMonoid(Key<T, V> KeyObj)
    {
        this.KeyObj = KeyObj;
        theMonoid = new Monoid<V>(KeyObj.NoKey, new Monoid<V>.monOp(aNextKeyOp));
    }
}
Yet another challenge was to be able to create methods dynamically, as currying was essentially used in the specification of finger trees with measures. Once again it was great to make use of the existing .NET 3.5 infrastructure. Below is my simple FP static class, which essentially uses the .NET 3.5 Func object and a lambda expression in order to implement currying:
public static class FP
{
    public static Func<Y, Z> Curry<X, Y, Z>
        (this Func<X, Y, Z> func, X x)
    {
        return (y) => func(x, y);
    }
}
And here is a typical usage of the currying implemented above:
public T ElemAt(uint ind)
{
    ...
    FP.Curry<uint, uint, bool>(theLessThanIMethod2, ind)
    ...
}
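To show the mechanics in isolation, here is a self-contained sketch of the same currying idea (the lessThan predicate below is invented for illustration; it stands in for theLessThanIMethod2): fixing the first argument of a two-argument function yields a one-argument predicate.

```csharp
using System;

static class FPDemo
{
    // A standalone copy of the Curry extension method sketched above.
    public static Func<Y, Z> Curry<X, Y, Z>(this Func<X, Y, Z> func, X x)
    {
        return (y) => func(x, y);
    }

    static void Main()
    {
        // Hypothetical two-argument predicate: is i below the bound?
        Func<uint, uint, bool> lessThan = (bound, i) => i < bound;

        // Fix the first argument; the result is a one-argument predicate.
        Func<uint, bool> lessThanTen = lessThan.Curry(10);

        Console.WriteLine(lessThanTen(3));   // True
        Console.WriteLine(lessThanTen(42));  // False
    }
}
```

This is all ElemAt needs: a partially applied comparison it can hand to the generic split machinery.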
Now, for everyone who has reached this point of my post, here is the link to the complete implementation.
Be reminded once again that .NET 3.5 is needed for a successful build.
In my next posts I will analyze the performance of this Finger Tree implementation and see how it fares compared to existing implementations of sequential data structures in different programming languages and environments.