04 June 2010

Loop-Unrolling Macros in Clojure

I've been fooling around with Clojure a bit lately. Something fun: a dotimes-like macro that completely unrolls your loop at compile time:
(defmacro dotimes-unroll
  "Just like dotimes, but completely unrolls the given loop.  The
number of iterations must read as a number."
  [[i n] & body]
  (assert (number? n))
  (if (= n 0)
    nil
    `(let [~i ~(- n 1)]
       (dotimes-unroll [~i ~(- n 1)] ~@body)
       ~@body)))
Or, unroll by a given number of repetitions:
(defmacro dotimes-partial-unroll
  "Like dotimes, but partially unrolls the loop.  The number of
  repetitions attempted is the first argument to the macro, which must
  read as a literal number."
  [nstep [i nexpr] & body]
  (assert (number? nstep))
  `(let [n# (int ~nexpr)
         nmain# (quot n# ~nstep)
         nextra# (rem n# ~nstep)
         extrainc# (* nmain# ~nstep)]
     (loop [iouter# (int 0)]
       (if (>= iouter# nmain#)
         (dotimes [j# nextra#]
           (let [~i (int (+ j# extrainc#))]
             ~@body))
         (let [inc# (int (* iouter# ~nstep))]
           (dotimes-unroll [j# ~nstep]
             (let [~i (int (+ j# inc#))]
               ~@body))
           (recur (int (+ iouter# 1))))))))
I have no idea whether (or to what extent) this sort of loop unrolling is done by the JVM. So, these macros may be completely useless and redundant, but they were fun to write.

04 March 2010

Distribution for the Ratio of Two Uniform Deviates

Weird: the distribution of the ratio of two uniform deviates is constant for ratios smaller than 1, and falls off like 1/r^2 for larger ratios. That is, for r = a/b where a, b ~ U(0, A), p(r) = 1/2 if r < 1 and p(r) = 1/(2r^2) if r >= 1 (independent of A, since the scale cancels in the ratio). What's up with that?
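A quick Monte Carlo check agrees with both pieces of the density. This is a hypothetical Python sketch (the function name and sample count are mine): the density implies P(r < 1) = 1/2, and integrating 1/(2r^2) from 1 to 2 gives P(1 <= r < 2) = 1/4.

```python
import random

def ratio_samples(n, a_max=1.0, seed=42):
    """Draw n samples of r = a/b with a, b ~ U(0, a_max)."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, a_max) / rng.uniform(0.0, a_max)
            for _ in range(n)]

samples = ratio_samples(200_000)

# p(r) = 1/2 for r < 1 implies P(r < 1) = 1/2 ...
frac_below_1 = sum(r < 1.0 for r in samples) / len(samples)

# ... and integrating 1/(2 r^2) from 1 to 2 gives P(1 <= r < 2) = 1/4.
frac_1_to_2 = sum(1.0 <= r < 2.0 for r in samples) / len(samples)
```

Both fractions come out within a percent of 1/2 and 1/4, and changing a_max leaves them unchanged, as the scale-cancellation argument predicts.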

09 January 2010

Another Library: 1-D Root Finding in OCaml

...and here's another library for 1-D root finding (non-linear equation solving) in OCaml. Get it with
git clone git://github.com/farr/ocaml-solve.git
I'm particularly proud of this one for two reasons:
  1. All the algorithms, even the ones requiring derivatives, use bracketing. If a step in the algorithm would fall outside the brackets for the root, the algorithms perform a bisection step instead, and then try again. In this way, all algorithms are guaranteed to converge on a root. (It's always annoyed me that the GSL doesn't do this with its "algorithms that require derivatives".)
  2. One of the algorithms in the library is a higher-order Newton's method. Given an arbitrary number of derivatives, this algorithm takes advantage of the additional information to implement higher-order convergence. I learned this trick from Danby's book, where he is trying to solve Kepler's equation using a fourth-order method.
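The library itself is in OCaml, but the bracketing safeguard from point 1 is easy to sketch. Here is a hypothetical Python version (my own names and structure, not the library's actual code or API): take a Newton step when it stays inside the current bracket, and fall back to bisection otherwise.

```python
import math

def bracketed_newton(f, fprime, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs.

    Each iteration first shrinks the bracket around the current estimate,
    then tries a Newton step; if that step would leave the bracket, a
    bisection step is taken instead, so convergence is guaranteed.
    """
    fa, fb = f(a), f(b)
    assert fa * fb <= 0.0, "root must be bracketed"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        # Keep the half-bracket that still contains a sign change.
        if fa * fx <= 0.0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # Newton step, with bisection as the safeguarded fallback.
        dfx = fprime(x)
        x_new = x - fx / dfx if dfx != 0.0 else None
        if x_new is None or not (a < x_new < b):
            x_new = 0.5 * (a + b)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve cos(x) = x on [0, 1]; the root is about 0.7390851332.
root = bracketed_newton(lambda x: math.cos(x) - x,
                        lambda x: -math.sin(x) - 1.0,
                        0.0, 1.0)
```

When the derivative is well behaved the Newton steps dominate and convergence is quadratic; a pathological derivative only degrades this to bisection, never to divergence.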

Polynomials in OCaml

I just posted some code on GitHub to manipulate polynomials in one variable in OCaml. It's pretty simplistic, and only works for polynomials whose coefficients are floating-point real numbers (so beware of roundoff error!), but I have found it to be useful in many quick-and-dirty situations. To grab it, use
git clone git://github.com/farr/ocaml-poly.git

08 January 2009

Profiling Using MzScheme and Errortrace

Lately I've needed to do some profiling in PLT Scheme, and the usual process in DrScheme hasn't been working for me. If you don't know, here's the usual process:

  1. Language --> Choose Language
  2. Show Details
  3. Debugging and Profiling
  4. Run Program
  5. View --> Show Profile
Robby suggested using the errortrace library directly from mzscheme. It worked perfectly. Here's how you can duplicate my success. (More information is available in the errortrace docs.)

First, get your program (or the parts of it you want to profile) in shape to be run easily from the REPL. I already have a run-tests.ss module which runs all my tests and exercises the code in a pretty realistic way, so I just entered (require "run-tests.ss"); you should do something similar. Then:

  1. Fire up mzscheme:
    mzscheme -i -l errortrace
    The -i tells mzscheme to start in interpreter mode; the -l errortrace says "load errortrace first thing".
  2. Set a couple of parameters for errortrace:
    (instrumenting-enabled #t)
    (profiling-enabled #t)
    (profiling-record-enabled #t)
  3. Run your program. For example:
    (require "run-tests.ss")
  4. Output the profile information:
    (output-profile-results #t #t)
    or, to save the results to a file,
    (with-output-to-file "profile.dat"
      (lambda () (output-profile-results #t #t)))
    The two #ts tell errortrace to output the path-to-function-source information and to sort in terms of total time taken.
And that's it! I now know where my slow program is spending its time (and I think I even understand why). Fixing it is another matter, though....

03 December 2008

Interpolation in OCaml

I just wrote some code to do polynomial interpolation in OCaml. The full darcs repository is at http://web.mit.edu/farr/www/onumerics/; here's the guts of the code. Eventually (depending on my time and level of continued interest), this may turn into a more general OCaml numerical library. As usual, it's released under the GPL.

(*  interp.ml: Interpolation.   
    Copyright (C) 2008 Will M. Farr 

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.  *)

let interp xs ys x = 
  let n = Array.length xs in 
  assert(Array.length ys = n);
  let ytab = Array.make n ys in 
  for i = 1 to n - 1 do 
    let m = n - i in 
    let last_col = ytab.(i-1) and 
        this_col = Array.make m 0.0 in 
    for j = 0 to m - 1 do 
      let xlow = xs.(j) and 
          xhigh = xs.(j+i) and 
          ylow = last_col.(j) and 
          yhigh = last_col.(j+1) in 
      this_col.(j) <- (x-.xlow)*.yhigh/.(xhigh-.xlow) +. (x-.xhigh)*.ylow/.(xlow-.xhigh)
    done;
    ytab.(i) <- this_col
  done;
  ytab.(n-1).(0)

Here's how it works. Initially, we have two arrays of points, xs and ys, and a point at which we want to evaluate the interpolated polynomial, x. This means we have an array of 0-order interpolating polynomials (the constants ys). We can use a linear interpolation between neighboring points to construct the values of first-order interpolating polynomials at x. Then we can linearly interpolate between the neighboring first-order polynomials to construct second-order interpolating polynomials. We iterate this process until we have a single value, which is the value of the n-point interpolating polynomial at x. In the case of four points, here is a graphical representation of the procedure:

x0: y0
       y01
x1: y1     y012
       y12      y0123
x2: y2     y123
       y23
x3: y3
The y's subscripts indicate the points involved in the interpolation. Each y is a linear interpolation of the y values above and below it to the left at the corresponding x points.
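For comparison, the same column-by-column tableau can be transcribed in Python (a hypothetical sketch of the algorithm described above, essentially Neville's scheme; the names are mine, not from the repository):

```python
def interp(xs, ys, x):
    """Evaluate at x the polynomial through the points (xs[i], ys[i]).

    Column 0 of the tableau holds the zeroth-order "polynomials" (the ys);
    each later column linearly interpolates neighboring entries of the
    previous column, until a single value remains.
    """
    n = len(xs)
    assert len(ys) == n
    col = list(ys)
    for i in range(1, n):
        col = [((x - xs[j]) * col[j + 1] + (xs[j + i] - x) * col[j])
               / (xs[j + i] - xs[j])
               for j in range(n - i)]
    return col[0]

# Three points lying on y = x^2 determine the degree-2 interpolant exactly,
# so extrapolating to x = 3 should give 9.
value = interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)
```

As with the OCaml version, the coefficients are floating point, so expect roundoff error for large n or nearly coincident xs.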

11 June 2008

Very Basic Matrix Library for PLT Scheme

I just uploaded to PLaneT a very simple matrix library. The library provides a matrix data structure, iterators over matrices and vectors, algebraic operations on matrices, and matrix-vector and matrix-matrix products. There is also a module language which provides all of Scheme except the +, -, * and / operations, which are instead generalized to all reasonable combinations of scalar, vector and matrix operands.

I feel like there is a need for this sort of library in PLT Scheme, even though its set of operations is pretty impoverished. Maybe people can build on it to do actual linear algebra in Scheme. I always find myself re-implementing these sorts of trivial matrix operations (since they're simple, it doesn't take very long); maybe now I'll just grab the PLaneT library instead.

By the way, I'm particularly proud of the sequence nature of the matrix struct, and the for/vector, for/matrix, and in-matrix forms. These take advantage of the new machinery for user-defined iterations and comprehensions in PLT Scheme version 4 (have a look at the source code if you're interested in how it's done). (Version 4, by the way, represents a tremendous improvement from version 3, in every part of the system, but most particularly in the documentation tools---have a look at the new documentation for my library here to see what I mean.)

07 September 2007

Quick Note on Continuations to comp.lang.scheme

I just posted a quick note on continuations to c.l.s. I'm afraid it's pretty basic, and doesn't really do the topic justice, but maybe the original poster will find it helpful. I'm noting it here because no one ever seems to explain continuations the way I did in my note (at least, I haven't seen any explanations like this), which uses "return points" to illustrate what continuations do. I don't fully understand continuations (in particular, I have a lot of trouble with the delimited continuations that PLT Scheme has recently acquired), but thinking of them the way I talk about in the c.l.s. post is how I manage to muddle through using them. How do others think about call/cc? That is, what images go through your head while you figure out how to do what you want with call/cc in code---not how you formalize call/cc, or how you would explain it to someone else.