May 04, 2013
Teaching an introductory course on “compilation” this semester (it was actually called Virtual Machines, but it was really about compiling expressions to stack machines), I realized something I hadn’t heard before, and wish I had been told when I first learned OCaml many years ago. Here it is: as soon as you are programming in a functional language with physical equality (i.e. pointer equality, the (==) operator in OCaml), you are actually working in a “weakly impure” language, and you can for example implement a limited form of gensym.
gensym is the classic “innocuously effectful” function returning a different symbol—usually a string—each time it is called. It is used pervasively to generate fresh variable names, notably in compilers. How? Well, you actually don’t have much to do, except let the runtime call malloc: it will return a “fresh” pointer where to store your data. malloc and the garbage collector together ensure this freshness condition, and you can then compare two pointers with (==). As a bonus, you can even store data along with your fresh symbol.
In this post, I’ll exploit that simple idea to develop an assembler for a little stack machine close to that of OCaml.
In OCaml, something as simple as this is a gensym:
type 'a sym = C of 'a
let gensym x = C x
Each call to, say, gensym () will allocate one new data block in memory; you can then compare two symbols with the physical equality (==). What we care about here is not the content of that memory span, but its address, which is unique.
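To make this concrete, here is a small self-contained sketch (the symbol's content is irrelevant; only the address of the allocated block matters):

```ocaml
type 'a sym = C of 'a
let gensym x = C x

let () =
  let a = gensym () in
  let b = gensym () in
  (* a and b are structurally equal... *)
  assert (a = b);
  (* ...but physically distinct: two separate allocations *)
  assert (not (a == b));
  (* a symbol is always physically equal to itself *)
  assert (a == a)
```

Note that the allocation happens inside gensym at each call, since the argument x is only known at runtime.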
A few warnings first: in OCaml, the constructor must have arguments, otherwise the compiler optimizes the representation to a simple integer and nothing is allocated. Also, don’t replace the argument
C by a constant, say
(), in the function code: if you do so, the compiler will place value
C () in the data segment of the program, and calling
gensym will not trigger an allocation either. There is an excellent and already classic series of blog posts about OCaml’s value representation here.
Another way of saying the same thing is that (non-cyclic) values in OCaml are not trees, as one might think considering only the purely functional fragment, but DAGs, that is, trees with sharing.
I think that not many beginner/intermediate OCaml programmers realize the power of this, so I’d like to show a cool application of this remark. We will code a small compiler from an arithmetic language to a stack machine. Bear with me, it’s going to be fun!
An application: compiling expressions to a stack machine
The input language of expressions is:
type expr =
  | Int of int
  | Plus of expr * expr
  | If of expr * expr * expr
Its semantics should be clear, except for the fact that If behaves like in C: if its condition is different from 0, then the first branch is taken; if it is 0, then the second one is. Because we have these conditionals, the stack machine will need instructions to jump around in the code. The instructions of this stack machine are:
Push i pushes the integer i on the stack;
Add pops two values off the stack and pushes their sum;
Halt stops the machine, returning the (supposedly unique) stack value;
Branch o skips the next o instructions in the code;
Branchif o pops the top of the stack and skips the next o instructions if that value is not 0; otherwise it only pops.
For instance, the expression 1 + (if 0 then 2 else (3+3)) is compiled into:
[Push 1; Push 0; Branchif 4; Push 3; Push 3; Add; Branch 1; Push 2; Add; Halt]
and evaluates of course to 7. Notice how the two branches of the If are turned around in the code? First comes the code of 3+3 (the else branch), then the code of expression 2 (the then branch). In general, the expression if e1 then e2 else e3 is compiled to [c1; Branchif (|c3|+1); c3; Branch |c2|; c2; ...], where ci is the compiled code of ei, and |l| is the size of code l. But I’m getting ahead of myself.
Now, compiling an expr to a list of instructions in one pass would be a little bit messy, because we would have to compute these integer offsets for jumps. Let’s instead follow common practice and first compile expressions to an assembly language where some suffixes of the code have labels, which are the names referred to by instructions like Branchif. This assembly language asm will then be, well… assembled into actual code, where jumps are translated to integer offsets. But instead of generating label names by side effect, as is customary, let’s use our trick: we will refer to a label by a unique pointer to the code attached to it. In other words, the arguments to Branchif will actually be pointers to asm programs, comparable by (==).
To represent the
asm data structures, we generalize over the notion of label:
type 'label instr =
  | Push of int
  | Add
  | Branchif of 'label
  | Branch of 'label
  | Halt
An assembly program is a list of instruction where labels are themselves assembly programs (the
-rectypes option of OCaml is required here):
type asm = asm instr list
For instance, taking our previous example,
Plus (Int 1, If (Int 0, Int 2, Plus (Int 3, Int 3)))
is compiled to the (shared) value:
Push 1 :: Push 0 :: let k = [Add; Halt] in Branchif (Push 2 :: k) :: Push 3 :: Push 3 :: Add :: k
See how the suffix
k (the continuation of the
If) is shared among the
Branchif and the main branch? In call-by-value, this is a value: if you reduce it any further by inlining
k, you will get a different value, that can be told apart from the first by using
(==). So don’t let OCaml’s pretty-printing of values fool you: this is not a tree, the sharing of
k is important! What you get is the DAG of all possible execution traces of your program; they eventually all merge in one point, the code suffix
k = [Add; Halt].
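The same phenomenon can be observed on ordinary lists. A small sketch, illustrating that (==) distinguishes a shared suffix from a structurally equal copy (we build the lists through a function call so the compiler cannot statically allocate them as constants, per the warning above):

```ocaml
let () =
  let fresh x = [x; x + 1] in          (* allocates a new list at each call *)
  let k = fresh 3 in                   (* k = [3; 4], a single shared block *)
  let shared_a = 1 :: k and shared_b = 2 :: k in
  let copy_a = 1 :: fresh 3 and copy_b = 2 :: fresh 3 in
  (* Structural equality cannot tell the two versions apart... *)
  assert (shared_a = copy_a && shared_b = copy_b);
  (* ...but physical equality can: the shared tails are the same block *)
  assert (List.tl shared_a == List.tl shared_b);
  assert (not (List.tl copy_a == List.tl copy_b))
```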
The compilation function is relatively straightforward; it’s an accumulator-based function:
let rec compile e k = match e with
  | Int i -> Push i :: k
  | Plus (e1, e2) -> compile e1 (compile e2 (Add :: k))
  | If (e1, e2, e3) -> compile e1 (Branchif (compile e2 k) :: compile e3 k)

let compile e = compile e [Halt]
The sharing discussed above is realized here in the
If case, by compiling its two branches using the accumulator (continuation)
k twice. Again, many people think of this erroneously as duplicating a piece of value. Actually, this is only mentioning twice a pointer to an already-allocated unique piece of value; and since we can compare pointers, we have a way to know that they are the same. Note also that this compilation function is purely compositional: to each subexpression corresponds a contiguous span of assembly code.
Now, real code for our machine is simply a list of instructions where labels are represented by (positive) integers:
type code = int instr list
Why positive? Well, since we have no way to make a loop, code can be arranged such that all jumps are made forward in the code.
The assembly function took me a while to figure out. It “linearizes” the assembly, a DAG, into a list by traversing it depth-first. The tricky part is that we don’t want to repeat the common suffixes of all branches; that’s where we use the fact that they are at the same memory address, which we can check with
(==). If a piece of input code has already been compiled, n instructions ahead in the output code, instead of repeating it we just emit a Branch n.
So practically, we must keep as an argument an association list k mapping already-compiled suffixes of the input to the corresponding output instruction; think of it as a kind of “cache” for the function. It also doubles as the result of the process: it is what’s eventually returned by assemble. For each input is, we first traverse that list k looking for the pointer is; if we find it, then we have our Branch instruction; otherwise, we assemble the next instruction. This first part of the job corresponds to the main function assemble:
let rec assemble is k =
  try (is, Branch (List.index (fun (is', _) -> is == is') k)) :: k
  with Not_found -> assem is k
List.index p xs returns the index of the first element x of xs such that p x is true.
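List.index is not part of OCaml's standard library (it comes from extended libraries); in case you want the code above to be self-contained, a minimal version could be:

```ocaml
(* index p xs: position of the first element of xs satisfying p,
   raising Not_found if there is none *)
let index p xs =
  let rec go i = function
    | [] -> raise Not_found
    | x :: _ when p x -> i
    | _ :: rest -> go (i + 1) rest
  in
  go 0 xs

let () =
  assert (index (fun x -> x > 2) [1; 2; 3; 4] = 2);
  assert (try ignore (index (fun _ -> false) [1]); false
          with Not_found -> true)
```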
Now the auxiliary function assem actually assembles instructions into a list of pairs of source programs and target instructions:
and assem asm k = match asm with
  | (Push _ | Add | Halt as i) :: is -> (asm, i) :: assemble is k
  | Branchif is :: js ->
    let k = assemble is k in
    let k' = assemble js k in
    (asm, Branchif (List.length k' - List.length k)) :: k'
  | Branch _ :: _ -> assert false
  | [] -> k
Think of the arguments asm and k as one unique list asm @ k that is “open” for insertion in two places: at top level, as usual, and in the middle, between asm and k. The k part is the already-processed suffix, and asm is what remains to be processed. The first case inserts the non-branching instructions Push, Add, Halt at top level in the output (together with their corresponding assembly suffix, of course). The second one, Branchif, begins by inserting the branch is at top level, and then inserts the remainder js in front of it. Note that when assembling this remainder, we can discover sharing that was recorded in k when assembling the branch. Note also that there can’t be any Branch in the assembly, since it would not make much sense (everything after a Branch instruction would be dead code), hence the assert false case.
Finally, we can strip off the “cached” information in the returned list, keeping only the target instructions:
let assemble is = snd (List.split (assemble is []))
That’s it, we have a complete compilation chain for our expression language! We can execute the target code on this machine:
let rec exec = function
  | s, Push i :: c -> exec (i :: s, c)
  | i :: j :: s, Add :: c -> exec (i + j :: s, c)
  | s, Branch n :: c -> exec (s, List.drop n c)
  | i :: s, Branchif n :: c -> exec (s, List.drop (if i <> 0 then n else 0) c)
  | [i], Halt :: _ -> i
  | _ -> failwith "error"

let exec c = exec ([], c)
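Putting the whole chain together — a sketch assuming the definitions above (expr, compile, assemble, exec), plus a List.drop (drop the first n elements of a list) taken from an extended library, since the standard library of the time lacks it:

```ocaml
(* End-to-end run of the article's example, reusing the definitions above. *)
let e = Plus (Int 1, If (Int 0, Int 2, Plus (Int 3, Int 3)))

let () =
  let code = assemble (compile e) in
  (* code = [Push 1; Push 0; Branchif 4; Push 3; Push 3; Add;
             Branch 1; Push 2; Add; Halt] *)
  assert (exec code = 7)
```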
The idea of using labels that are actual pointers to the code seems quite natural, and it scales well (I implemented a compiler from a mini-ML to a virtual machine close to OCaml’s bytecode). In terms of performance, however, assemble is quadratic: before assembling each instruction, we check whether we already assembled it. With real (string) labels, we could represent the “cache” as a data structure with faster lookup; unfortunately, if labels are pointers, we can’t really do this, because we don’t have a total order on pointers, only equality.
This is only one example of how we can exploit pointer equality in OCaml to mimic a name generator. I’m sure there are lots of other applications to be discovered, or that I don’t know of (off the top of my head: to represent variables in the lambda-calculus). The big unknown for me is the nature of the language we’ve been working in, functional OCaml + pointer equality. Can we still consider it a functional language? How do we reason about its programs? The comment section is right below!
With Pierre Boutillier, we have been working lately on a new Coq tactic called invert. From my point of view, it started as a quest to build a replacement for the inversion tactic: inversion is a pain to use, as it generates sub-goals with many (dependent) equalities that must be substituted, which forces the use of subst, which in turn also has its quirks, making the mantra inversion H; clear H; subst quite fragile. Furthermore, inversion has efficiency problems, being quite slow and generating big proof terms. From Pierre's point of view, this work was a good way to implement a better destruct tactic, based on what he did during an internship (report in French (PDF)).
In a nutshell, the idea behind a destruction and an inversion is quite similar: it boils down to a case analysis over a given hypothesis. And quite a few tactics follow this scheme: destruct, inversion, injection, discriminate (it is true that the last two tactics are quite specialized, but they fit the bill nevertheless). Why on Earth would we need to add a new element to this list?
Well, it turns out that, building on ideas by Jean-Francois Monin to make so-called "small inversions", one can unify the inner workings of most of the aforementioned tactics: it suffices to build the right return clause for the case analysis.
Let's take an example.
Variable A : Type.

Inductive vector : nat -> Type :=
| nil : vector 0
| cons : forall n (h : A) (v : vector n), vector (S n).

Inductive P : forall n, vector n -> Prop :=
| Pnil : P 0 nil
| Pcons : forall n h v, P n v -> P (S n) (cons n h v).

Lemma test n h v (H : P (S n) (cons n h v)) : P n v.
Proof.
At this point, doing
inversion H generates 4 new hypotheses:
H2 : P n v0
H0 : n0 = n
H1 : h0 = h
H3 : existT (fun n : nat => vector n) n v0 =
     existT (fun n : nat => vector n) n v
============================
P n v
Hypotheses H0 and H1 are just cruft. Then, the goal isn't very palatable, because the equality between v and v0 is expressed as a dependent equality: in order to go further, one needs to assume axioms about dependent equality1, equivalent to Streicher's axiom K. (Just to keep tabs, note that running the Show Proof command in Coq outputs a partial proof term that is already 73 lines long at this point.)
If we use dependent destruction H instead of inversion, we get the expected hypothesis H : P n v (which is far better from a usability point of view). Yet, there is no magic here: dependent destruction simply uses a dependent-equality axiom internally to get rid of the dependent equality, and generates a 64-line proof term that is not very pretty.
At this point, one may wonder: what should the proof term look like? and, is it necessary to use the K axiom here?
A black belt Coq user versed in dependent types could write the following one.
let diag :=
  fun n0 : nat =>
    match n0 as n' return (forall v0 : vector n', P n' v0 -> Prop) with
    | 0 => fun (v0 : vector 0) (_ : P 0 v0) => True
    | S m =>
      fun v0 : vector (S m) =>
        match v0 as v1 in (vector m0) return (P m0 v1 -> Prop) with
        | nil => fun _ : P 0 nil => True
        | cons p x v1 => fun _ : P (S p) (cons p x v1) => P p v1
        end
    end
in
match H as H' in (P x y) return (diag x y H') with
| Pnil => I
| Pcons n0 h0 v0 Pv => Pv
end.
Wow, 15 lines long. Let's demystify it a bit.
First, recall that the return type of a match is dictated by its return clause (the as ... in ... return ... part). This is basically a function that binds the arguments of the inductive (S n as x and cons n h v as y in our case) and the matched hypothesis H' of type P x y, and whose body is the return part. Usually, the return part is a constant (e.g., nat for the match in List.length), but it is not mandatory. Here, the diag term packs some computations, such that diag (S n) (cons n h v) H reduces to P n v, the conclusion of the goal. (In general, this kind of return clause makes it possible to eliminate impossible branches in a match, as done here by marking them with the trivial return type True; we direct the interested reader to the online CPDT book by Adam Chlipala for more information on this, especially this chapter.)
Then, what is
diag? Well, it is a function that follows the structure of the arguments of
P to single out impossible cases, and to refine the context in the other ones using dependent pattern matching, in order to reduce to the right type (the type of the initial conclusion of the goal). The idea behind such "small-scale inversions" was described by Monin in 2010 and is outside the scope of this blog post. What is new here is that we have mechanized the construction of the
diag functions as a Coq tactic, making this whole approach practical.
All in all, using our new tactic, we can just use the following proof script:
invert H; tauto.
At this point, Show Proof. outputs the following complete proof term (where invert_subgoal has the type of the subgoal solved by tauto):
let diag :=
  fun n0 : nat =>
    match n0 as n1 return (forall v0 : vector n1, P n1 v0 -> Prop) with
    | 0 => fun (v0 : vector 0) (_ : P 0 v0) => False -> True
    | S x =>
      fun v0 : vector (S x) =>
        match
          v0 as v1 in (vector n1)
          return
            (match n1 as n2 return (vector n2 -> Type) with
             | 0 => fun _ : vector 0 => False -> True
             | S x0 => fun v2 : vector (S x0) => P (S x0) v2 -> Prop
             end v1)
        with
        | nil => fun H0 : False => False_rect True H0
        | cons n1 h0 v1 => fun _ : P (S n1) (cons n1 h0 v1) => P n1 v1
        end
    end
in
(fun invert_subgoal
   : forall (n0 : nat) (h0 : A) (v0 : vector n0) (H0 : P n0 v0),
     diag (S n0) (cons n0 h0 v0) (Pcons n0 h0 v0 H0) =>
 match H as p in (P n0 v0) return (diag n0 v0 p) with
 | Pnil => fun H0 : False => False_rect True H0
 | Pcons x x0 x1 x2 => invert_subgoal x x0 x1 x2
 end)
(fun (n0 : nat) (_ : A) (v0 : vector n0) (H0 : P n0 v0) => H0)
Some of the differences with the proof term above come from the fact that we generate it interactively, rather than writing it in one go.
A legitimate question: how do we compare to destruct, inversion and dependent destruction? First, we aim at producing a "better" destruct: that is, we might resolve situations in which destruct fails (in order to avoid producing ill-typed terms). Then, the situation with respect to inversion and dependent destruction is less clear. Right now, we would rather not assume the K axiom (the right thing to do if homotopy is the future). In that case, we would fail on inversion problems that require K, and inversion and dependent destruction would be more powerful than our tactic. For problems that do not require K, invert would be equivalent to dependent destruction, with better-looking proof terms2.
We are still working on our prototype, but we are quite confident that we got the main thing right: mechanizing the construction of the return clause. We will come back to this blog when we need beta-testers!
See the following FAQ question (Can I prove that the second components of equal dependent pairs are equal?). You may also be interested in this other question (What is Streicher's axiom K?), and in the EqdepFacts standard library module, which has the equivalence proofs between all those subtle notions. Finally, if you want to finish this proof using these axioms, you can use Require Import Eqdep. and then the inj_pair2 lemma. Once you're done, Print Assumptions test. will let you check that you relied on an additional axiom -- or Print Assumptions inj_pair2.↩
Moreover, since our proof terms are less cluttered, it seems less likely that recursive definitions made in "proof mode" with invert will fail to pass the termination check once Coq's guard condition deals properly with such commutative cuts, another part of Pierre's thesis work.↩
May 03, 2013
I've been thinking about code review a lot recently.
Code review is a key part of our dev process, and has been from the beginning. From our perspective, code is more than a way of getting things done, it's a way of expressing your intent. Code that's easy to read and understand is likely to be more robust, more flexible, and critically, safer. And we care about safety a lot, for obvious reasons.
But the importance of code review doesn't mean that we've always done a good job of organizing it. I'll talk a bit more about how we used to do code review, how we do it now, and the impact that those changes have had.
The bad old world
Our old code review process was what you might call batch-oriented. We'd prepare a set of changes for a release, and then, after someone gave it a quick look-over, combine these changes together in a branch. We'd then read over these changes very carefully, with multiple people reading each file, making comments, requesting changes, and fixing those changes, until the code was in a releasable state.
This was a big and expensive process, involving many people, and quite a lot of work and coordination. Given the time it took, we focused our code review on our so-called critical path systems, i.e., the ones that are involved in sending orders to the market.
The management task was complex enough that we wrote a tool called cr for managing and tracking the reading of these diffs, parceling out responsibility for different files to different people. We've actually blogged about this before.
Batch-oriented review worked well when we and our codebase were smaller, but it did not scale. By combining multiple changes into a single branch, you were stuck reading a collection of unrelated changes, and the breadth of the changes made fitting it all into your head harder. Even worse, when you throw a bunch of changes together, some are going to take longer than others, so the release is blocked until the slowest change gets beaten into shape.
The end result is that, while we found code review to be indispensable in creating high quality code that we felt comfortable connecting to the markets, the overhead of that review kept on getting higher and higher as we grew.
We needed a better solution.
Another approach to code review, and a more common one, is patch-based review. In patch review, someone proposes a change to the current release in the form of a patch, and it is the patch itself that is reviewed. Once it passes review, it can be applied to the tree. Patch-based review is great in that it gives you independence: one patch taking a while doesn't block other patches from getting out.
We avoided patch-based review initially because we were worried about the complexities of dealing with conflicting patches. Indeed, one issue with patch-based review is that the state of the tree when the patch is reviewed is likely not the state of the tree when the patch is applied. Even when this doesn't lead to textual conflicts, this should leave you a little nervous, since a patch that is correct against one version of the tree is not necessarily correct against a changed tree.
And then, what do you do when there's a conflict and the patch no longer applies cleanly? You can rebase the patch against the new state of the tree, and then re-review the patch from scratch. But humans are really bad at investing mental energy in boring work, and carefully re-reviewing a patch you've already mostly read is deadly dull.
Moreover, when do you decide that there's a conflict? When dealing with patches that involve file moves and renames, even deciding what it means for a patch written previously to still apply cleanly is a tricky question.
Also, squaring patch-based review with a git or hg-based workflow can be tricky. There's something quite nice about the github-style pull-request workflow; but the semantics of merging are pretty tricky, and you need to be careful that what you read corresponds with the actual changes that are made by the merge.
For all the problems, the virtues of patch-based review are clear, and so about six months ago we started a project to revamp our cr tool to make it suitable for doing patch-like review. The new version of cr is now organized around what we call features, which are essentially hg bookmarks (similar to git branches) augmented with some associated metadata. This metadata includes:
- An English description of the change
- A base-revision that the changes should be read against
- An owner
- A collection (usually just one other than the owner) of full-feature reviewers.
The workflow for a developer goes something like this:
- create a new feature by running
cr feature create. You'll select a name for the feature and write the initial description. The base-revision will automatically be chosen as the most recent release.
- Write the code, using hg in the ordinary way, making commits as you go and pushing the result to a shared, multi-headed repo that has all of the features people are working on.
- When you think the code is in a good state, get the feature enabled for review. At this point, you'll need to get a full-feature reviewer selected. It's this person's job to read every change that goes into this feature.
- The full feature reviewer then reads the diff from the base-revision to the tip, adding comments, requesting fixes, and reading diffs forward until they're happy, at which point, it's seconded.
- Once it's seconded, the feature is enabled for review by anyone who is signed up for review for the specific files you touched. How many file reviewers there are depends on the nature of the project. In our most safety-critical systems, every file has three reviewers. In some other systems, there are no file reviewers at all.
The remaining work needs to be done by the release manager. A release manager can create a new release based on a set of features that:
- are fully reviewed, and have no outstanding reviewer complaints to be resolved.
- compile cleanly on their own and pass their automated tests
- have as their base revision the previous release
- can be merged together cleanly
Checking that things "can be merged together cleanly" is actually tricky, since you can't just trust hg's notion of a merge. cr has its own merge logic that is more conservative than what hg would do. The biggest worry with hg is that it tries to guess at a reasonable base-point for the 3-way merge (usually the greatest common ancestor of the two heads to be merged). Usually this works well, but it's easy to construct crazy cases where changes made on one branch are just silently dropped in the merge. There is also some rather surprising behavior that can come into play when files are moved, copied or deleted as part of the mix.
cr, on the other hand, will always choose the base-point of the
features to be merged as the base-point for the 3-way merge. This
way, the diffs that are reviewed are also the diffs that are used for
constructing the merged node. Also,
cr has some extra sanity
conditions on what merges it's willing to try. This all greatly
reduces the scope for surprising results popping out the other side of a merge.
If the base-revision of a given feature is against an older release, then you need to rebase the review before it can be released, i.e., update the base-revision to the most recent release. Among other things, this requires you to merge your changes with the tip. If there are conflicts, you then either need to review the resolution of the conflicts, or you simply need to reread the diffs from scratch. The last bit is pretty rare, but it's an important escape hatch.
How'd it go?
The new code-review process has had a dramatic effect on our world. The review process for our main repository used to take anywhere from a couple of weeks to three months to complete a cycle. Today, those releases go out every week, like clockwork. Everyone knows that if they can get their feature cleaned up and ready, they can always get it out that week. Indeed, if you're following our open-source releases on github, you'll see that new packages have shown up once a week for the last 16 weeks.
Feature-based review has led to a significant increase in the rate of change of our critical-path systems. Code review is now considerably less painful, and most importantly, it's easier than ever to say no to a feature that isn't ready to go. In old-style batch review, there was a lot of pressure not to hold up a release polishing some small bit, which sometimes led us to release code that wasn't really ready. Now, that problem has largely vanished.
The barrier to entry for people who want to contribute to critical path systems has also been lowered. This has also contributed to us being able to get projects out the door faster.
But the most striking result I've seen is from our post-trade group, which operates outside of the review process used for the critical-path systems. The post-trade team is responsible for our infrastructure that handles everything after a transaction is done, like tracking the clearing and settlement of our trades or managing our trading books and records.
Post-trade has historically had a more relaxed approach to code review --- they do it, but not on all parts of the system, and not in a particularly strict way. In the last few months, however, they switched over to using the new feature-based workflow, and even though they're doing a lot more code review (which takes serious time), their overall process has become faster and more efficient. We think that's largely due to having a well-managed workflow for managing and merging independent features, quite apart from the benefits of the review itself.
I'm now pushing to get feature-based review adopted throughout the firm. Obviously, not all code needs to be scrutinized to the same level --- having three reviewers for every file is sometimes sensible, sometimes overkill --- but ensuring that no change can get in unless one other person reads it and thinks it's reasonable is a good baseline rule. Review has a lot of benefits: it improves the quality of the code, gives you a great opportunity for training, and helps spread knowledge. Those benefits make sense everywhere we have people programming.
Maybe the biggest lesson in this for me is the importance of thinking through your processes, focusing on the biggest bottlenecks, and doing what you can to fix them.
April 22, 2013
This post aims at summarizing the activities of OCamlPro for the past month. As usual, we worked in three main areas: the OCaml toolchain, development tools for OCaml and R&D projects.
Our multi-runtime implementation of OCaml has gained stability. Luca fixed a lot of low-level bugs in the "master" branch of his OCaml repository, mainly related to the handling of signals. There are still some issues, which seem to be related to thread-switching (i.e. when using OS-level multi-threading).
We made great progress on an improved inlining strategy. In the current OCaml compiler, inlining, closure conversion and constant propagation are done in a single pass interleaved with analysis. This has served well until now, but improving it in a way that remains easily extensible requires a complete rewrite. After a few prototypes, Pierre is now converging on an intermediate representation (IR) better suited to the job, using a dedicated value analysis to guide the simplification and inlining passes. This IR will stand between the lambda code and the C-lambda, and is designed so that future specialized optimizations can easily be added. There are two good reasons for this IR. First, it is not too intrusive and reduces the extent of the modifications to the compiler, as it can be plugged between two existing passes and turned on or off using a command-line flag. Second, it can be tweaked to make the abstract interpretation more precise and efficient. For instance, we want inlining to work with higher-order functions as well as modules and functors, performing basic defunctorization. It is still in an experimentation phase, but we are quickly converging on the API and hope to have something we can demo in the coming months.
Our frame-pointer patch has also been accepted. Note that, as this patch changes the calling convention of OCaml programs, you cannot link together libraries compiled with and without the patch. Hence, this option will be provided as a configuration switch (
Regarding memory profiling, we released a preliminary prototype of the memory profiler for native code. It is available in Çagdas's repository. We are still in the process of testing and evaluating the prototype before making it widely available through OPAM. As with the previous bytecode prototype, you need to compile the libraries and the program you want to profile as usual, in order to build a table associating unique identifiers to program locations (the .prof file). We patched the OCaml runtime so that, for each allocated block, the header encodes the identifier of the line that allocated it. To dump the heap, you can either instrument your program, send a signal, or set the appropriate environment variable (OCAMLRUNPARAM=m). Finally, you can use the profiler, which reads the .prof and .cmt files and generates a pdf file showing the amount of memory used, by type. More details on this will come soon; you can already read the README file available on github.
Finally, we organized a new meeting with the core-team to discuss some of the bugs in the OCaml bug tracker. It was the first of the year, but we are now going to have one every month, as it has a very positive impact on the involvement of everybody in fixing bugs and helps focus work on the most important issues.
Development Tools for OCaml
Since the latest release of ocp-indent, Louis has continued to improve the tool. We plan to release version 1.2.0 in the next couple of days, with some bug fixes (esp. related to the handling of records) and the following new features: operators are now aligned by default (as well as open parentheses not finishing a line), and indentation can be bounded using the max_indent parameter. We are also using the great cmdliner, which means ocp-indent now has nice manual pages.
We are also preparing a new minor release of OPAM, with a few bug fixes, an improved solver heuristic and improved performance. OPAM statistics seem to converge towards round numbers: the OCamlPro/opam repository has recently reached 100 "stars" on Github, OCamlPro/opam-repository is not far from being forked 100 times, and the number of unique packages on opam.ocamlpro.com is almost 400. We are also preparing the platform release, with a cleaner and simpler client API to be used by the upcoming "Ocamlot", the automated infrastructure that will test and improve the quality and consistency of OPAM packages.
Last, we released a very small - but already incredibly useful tool: ocp-index. This new tool provides completion based on what is installed on your system, with type and documentation when available. Similarly to
ocp-indent, the main goal of this tool is to make it easy to integrate in your editor of choice. As a proof of concept, we also distribute a small curses-based tool, called
ocp-browser, which lets you browse interactively the libraries installed on your system, as well as an emacs binding for
auto-complete.el. Interestingly enough, behind the scenes ocp-index uses a lazy immutable prefix tree with merge operations to efficiently store and load cmi and cmt files.
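For the curious, here is a rough sketch of what such a structure could look like — entirely hypothetical code, not ocp-index's actual implementation: an immutable trie whose children are computed on demand, with a merge that stays lazy too.

```ocaml
module CharMap = Map.Make (Char)

(* A lazy, immutable prefix tree: optional values at nodes,
   children forced only when traversed *)
type 'a trie = {
  value : 'a option;
  children : 'a trie CharMap.t Lazy.t;
}

let empty = { value = None; children = lazy CharMap.empty }

let rec add key i v t =
  if i = String.length key then { t with value = Some v }
  else
    let c = key.[i] in
    { t with
      children =
        lazy
          (let m = Lazy.force t.children in
           let child = try CharMap.find c m with Not_found -> empty in
           CharMap.add c (add key (i + 1) v child) m) }

let add key v t = add key 0 v t

let rec find key i t =
  if i = String.length key then t.value
  else
    (try
       let child = CharMap.find key.[i] (Lazy.force t.children) in
       find key (i + 1) child
     with Not_found -> None)

let find key t = find key 0 t

(* Merge two tries without eagerly forcing their subtrees;
   on conflicts, values from the second trie win *)
let rec merge a b = {
  value = (match b.value with Some _ -> b.value | None -> a.value);
  children =
    lazy
      (CharMap.union
         (fun _ ca cb -> Some (merge ca cb))
         (Lazy.force a.children) (Lazy.force b.children));
}

let () =
  let t = add "List.map" 1 (add "List.mem" 2 empty) in
  assert (find "List.map" t = Some 1);
  assert (find "List.fold" t = None);
  let t2 = merge t (add "List.map" 3 empty) in
  assert (find "List.map" t2 = Some 3)
```

The laziness means that loading a large installed-library index only pays for the subtrees you actually explore, which fits the completion use case.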
Other R&D Projects
We continued to work on the Richelieu project. We are currently adding basic type inference for Scilab programs to our tool scilint, to be able to print warnings about likely programmer mistakes. A first part of the work was to understand how to automatically get rid of some of the eval constructs, especially the evalstr primitives that are often used. After this, Michael manually analyzed some real-world Scilab programs to understand how typing should be done, and he is now implementing the type checker and a table of types for primitive functions.
We are also submitting a new project, called SOCaml, for funding by the French government. In 2010, ANSSI, the French agency for the security of computer systems, commissioned a study, called LAFOSEC, to understand the advantages of using functional languages in the domain of security. Early results of the study were presented at JFLA'2013, with in particular recommendations on how to improve OCaml for use in security applications. The goal of the SOCaml project would be to implement these recommendations: to improve OCaml, to provide additional tools to detect potential errors, and to implement libraries to verify marshaled values and bytecode. We hope the project will be accepted, as it opens a new application domain for OCaml and would allow us to work on this topic with our partners in the project, such as LexiFi and Michel Mauny's team at ENSTA Paristech (the project would also contribute to their ocamlcc bytecode-to-C compiler).