Sometimes imperative works out better?

Submitted by metaperl on Tue, 04/18/2006 - 8:45pm.

From the OCaml book we have the following:

Certain algorithms are easier to write in this (imperative) programming style. Take for instance the computation of the product of two matrices. Even though it is certainly possible to translate it into a purely functional version, in which lists replace vectors, this is neither natural nor efficient compared to an imperative version.

Submitted by Cale Gibbard on Wed, 04/19/2006 - 7:21am.

In a lazy language, that example is no longer an example, since lists are essentially what loops were in your imperative language (they serve the same purpose, and have basically the same properties wrt complexity). Also, there's nothing which says that pure functional code can't use proper arrays. (You just don't get mutable arrays without some monad being involved.)
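To make that concrete, here is a minimal sketch (my own example, with a made-up name): the list pipeline below plays the role of a for loop, with elements produced on demand rather than materialized up front.

```haskell
-- The lazy list acts as the loop: filter and map produce elements
-- on demand as sum consumes them, much as an imperative loop would
-- generate one value per iteration.
sumOddSquares :: Integer -> Integer
sumOddSquares n = sum (map (^ 2) (filter odd [1 .. n]))

main :: IO ()
main = print (sumOddSquares 9)   -- 1 + 9 + 25 + 49 + 81 = 165
```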

As for naturality,

matrixMultiply a b =
   [[sum (zipWith (*) x y) | y <- transpose b] | x <- a]

looks rather nice to me, and gets to the heart of what the matrix multiplication algorithm is doing.
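Here it is as a complete program, in case anyone wants to try it (`transpose` comes from `Data.List`):

```haskell
import Data.List (transpose)

-- Each entry of the result is the dot product of a row of a
-- with a column of b (a column of b is a row of transpose b).
matrixMultiply :: Num a => [[a]] -> [[a]] -> [[a]]
matrixMultiply a b =
    [[sum (zipWith (*) x y) | y <- transpose b] | x <- a]

main :: IO ()
main = print (matrixMultiply [[1, 2], [3, 4]] [[5, 6], [7, 8]])
-- prints [[19,22],[43,50]]
```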

With arrays, you can write something rather similar to the imperative version without any imperative constructs at all. It's a lot of index-wrangling, but then, so is the imperative version:

matrixMultiply a b
    | (ar2 - ar1) /= (bc2 - bc1) = error "matrixMultiply: incompatible sizes"
    | otherwise =
        array ((ac1,br1),(ac2,br2))
          [((i,j), sum $ zipWith (*) [a ! (i,k) | k <- [ar1 .. ar2]]
                                     [b ! (k,j) | k <- [bc1 .. bc2]])
          | i <- [ac1..ac2], j <- [br1..br2]]
  where ((ac1,ar1),(ac2,ar2)) = bounds a
        ((bc1,br1),(bc2,br2)) = bounds b                      

(Note that this code allows for different indexing systems; it would simplify things to insist that matrices use 0-based or 1-based indices, but I didn't do that.)
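For comparison, here is my own simplified variant that does insist on 1-based (row, column) indices, which makes the index-wrangling easier to follow:

```haskell
import Data.Array

-- a is m-by-k, b must be k-by-n; the result is m-by-n.
-- All matrices are indexed from (1,1).
matMul :: Num a => Array (Int, Int) a -> Array (Int, Int) a -> Array (Int, Int) a
matMul a b
    | ak /= bk  = error "matMul: incompatible sizes"
    | otherwise = array ((1, 1), (m, n))
        [ ((i, j), sum [a ! (i, k) * b ! (k, j) | k <- [1 .. ak]])
        | i <- [1 .. m], j <- [1 .. n] ]
  where
    (_, (m, ak)) = bounds a
    (_, (bk, n)) = bounds b

main :: IO ()
main = do
    let a = listArray ((1, 1), (2, 2)) [1, 2, 3, 4]
        b = listArray ((1, 1), (2, 2)) [5, 6, 7, 8]
    print (elems (matMul a b))   -- prints [19,22,43,50]
```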

Submitted by Cale Gibbard on Wed, 04/19/2006 - 7:32am.

Of course, there are certain algorithms, particularly those which are stated in terms of a large or undetermined number of mutable cells, where one really does want an imperative language. I think a good example is something like Knuth's Dancing Links, which involves a toroidally (doubly-circularly) linked matrix, and lots of updates to pointers in order to hide certain rows, and later restore them.

That's the point at which you want to break out the ST monad, as it would be quite hard to translate that into a pure setting. However, in general it's quite surprising how often there's a clean and easy functional equivalent to imperative algorithms in a lazy language.
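To show the shape of that, here is a toy example (nothing like Dancing Links, just an illustration of the idiom): mutate an unboxed array in place inside ST, then hand back a pure result.

```haskell
import Control.Monad (forM_)
import Data.Array (elems)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)

-- Destructively double every cell of a mutable STUArray; runSTUArray
-- freezes it at the end, so doubleAll is a pure function from the
-- outside despite the in-place updates.
doubleAll :: [Int] -> [Int]
doubleAll xs = elems $ runSTUArray $ do
    let n = length xs
    arr <- newListArray (1, n) xs
    forM_ [1 .. n] $ \i -> do
        v <- readArray arr i
        writeArray arr i (2 * v)
    return arr

main :: IO ()
main = print (doubleAll [1, 2, 3])   -- prints [2,4,6]
```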

In a strict functional language, this happens significantly less often, so the elegant solution built from higher-order functions and lists quite often ends up with worse asymptotic complexity than a carefully written imperative solution, because it computes things it never needs.
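A tiny illustration of that point (my own example): under lazy evaluation the pipeline below terminates because only the prefix that's actually demanded is ever computed; a strict `map` over `[1 ..]` would diverge, and over a large finite list it would square every element before `head` discards almost all of them.

```haskell
-- head demands one element, so filter and map only run far enough
-- to produce the first square exceeding 1000 (32^2 = 1024).
firstBigSquare :: Integer
firstBigSquare = head (filter (> 1000) (map (^ 2) [1 ..]))

main :: IO ()
main = print firstBigSquare   -- prints 1024
```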

Submitted by Greg Buchholz on Wed, 04/19/2006 - 6:25pm.
Well, this might be straying off-topic, but the code below seems pretty neat. I don't think you can do it this concisely in O'Caml...
instance Num a => Num [a] where
    (+) = zipWith (+)    -- the other Num methods are left undefined

main = do print $ [1,2,3] + [4,5,6]
          print $ [[1,2,3],[4,5,6]] + [[7,8,9],[0,1,2]]
          print $ [[[[[[1,2,3]]]]]] + [[[[[[4,5,6]]]]]]
