In Haskell, if I write
    fac n = facRec n 1
      where facRec 0 acc = acc
            facRec n acc = facRec (n-1) (acc*n)
and compile it with GHC, will the result be any different than if I used
    fac 0 = 1
    fac n = n * fac (n-1)
I could easily just write `fac n = product [1..n]`
and avoid the whole thing, but I'm interested in how an attempt at tail recursion works out in a lazy language. I understand that I can still get a stack overflow because unevaluated thunks build up in the accumulator, but does anything actually happen differently (in terms of the resulting compiled program) when I use an accumulator than when I just state the naive recursion? Is there any benefit to leaving out the tail recursion, other than improved legibility? And does the answer change at all if I use `runhaskell` to run the computation instead of compiling it first?
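For concreteness, here's the strict-accumulator variant I'd compare all of these against. It's just a sketch: the name `facStrict` is mine, and I'm assuming the `BangPatterns` extension to force the accumulator at each step, which is what I'd expect to stop the thunk chain from growing:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Strict accumulator: the bang pattern forces acc to weak head normal
-- form on every recursive call, so no chain of (*) thunks builds up.
facStrict :: Integer -> Integer
facStrict n = go n 1
  where
    go 0 !acc = acc
    go m !acc = go (m - 1) (acc * m)

main :: IO ()
main = print (facStrict 20)
```

(I realize GHC's strictness analysis may do this transformation on my original `facRec` anyway when compiling with optimizations; part of my question is whether that's the case.)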