Relax ordering constraints in monadic computation

Here is some food for thought.

When I write monadic code, the monad imposes an ordering on the operations performed. For example, if I write in the IO monad:

do a <- doSomething
   b <- doSomethingElse
   return (a + b)

I know doSomething will be executed before doSomethingElse.

Now, consider the equivalent code in a language like C:

return (doSomething() + doSomethingElse());

The semantics of C don't actually specify the order in which these two function calls are evaluated, so the compiler is free to move things around as it pleases.

My question is this: How would I write monadic code in Haskell that also leaves this evaluation order undefined? Ideally, I would reap some benefits when my compiler's optimizer looks at the code and starts moving things around.

Here are some possible techniques that don't get the job done, but are in the right "spirit":

  • Write the code in functorial style, that is, write plus doSomething doSomethingElse and let plus schedule the monadic calls. Drawbacks: You lose sharing on the results of the monadic actions, and plus still makes a decision about when things end up being evaluated.
  • Use lazy IO, that is, unsafeInterleaveIO, which defers the scheduling to the demands of lazy evaluation. But lazy is different from strict with undefined order: in particular, I do want all of my monadic actions to get executed.
  • Lazy IO, combined with immediately seq'ing all of the arguments. In particular, seq does not impose ordering constraints.
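For concreteness, here is a minimal sketch of the first two techniques; the names plus and lazyPlus are mine, and this is only an illustration of the bullets above, not a solution:

```haskell
import Control.Applicative (liftA2)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Functorial style: let the combinator schedule the two actions.
-- Note that liftA2 over IO still runs its arguments left-to-right,
-- so the ordering decision is merely moved into plus, not removed.
plus :: IO Int -> IO Int -> IO Int
plus = liftA2 (+)

-- Lazy IO: defer each action until its result is demanded.
-- Here the order depends on which of x and y is forced first, but an
-- action whose result is never demanded may not run at all.
lazyPlus :: IO Int -> IO Int -> IO Int
lazyPlus mx my = do
  x <- unsafeInterleaveIO mx
  y <- unsafeInterleaveIO my
  return (x + y)
```

Both versions type-check and compute the same sum, which is exactly why neither is satisfying: plus still fixes an order, and lazyPlus gives up the guarantee that every action executes.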

In this sense, I want something more flexible than monadic ordering but less flexible than full-out laziness.

asked by Don Stewart, 5 May 2011 at 17:10