What is the lifetime of a memoized value in a functional language like Haskell?

In a pure functional language with lazy semantics (such as Haskell), results of computations are memoized so that further evaluations of a function with the same inputs do not recompute the value but get it directly from the cache of memoized values.

I am wondering whether these memoized values get recycled at some point.

  1. If so, it means the memoized values must be recomputed at a later time, and the benefits of memoization are not so exciting, IMHO.
  2. If not, then OK, it is clever to cache everything... but does it mean that a program, if run for a sufficiently long time, will consume more and more memory?

Imagine a program performing intensive numerical analysis: for example, finding the roots of hundreds of thousands of mathematical functions using a dichotomy (bisection) algorithm.

Every time the program evaluates one of these functions at a specific real number, the result will be memoized. But the probability that exactly the same real number appears again during the run is tiny, so this would lead to a memory leak (or at least really poor memory usage).
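To make the scenario concrete, here is a minimal bisection sketch (the function and interval are made up for illustration). Each recursive call produces a fresh midpoint, so even if the results of `f x` were cached, the same `x` would almost never recur:

```haskell
-- Hypothetical bisection (dichotomy) root finder.
-- Assumes f is continuous and f lo, f hi have opposite signs.
bisect :: (Double -> Double) -> Double -> Double -> Double -> Double
bisect f lo hi eps
  | hi - lo < eps       = mid
  | f lo * f mid <= 0   = bisect f lo mid eps   -- root is in [lo, mid]
  | otherwise           = bisect f mid hi eps   -- root is in [mid, hi]
  where mid = (lo + hi) / 2

main :: IO ()
main = print (bisect (\x -> x * x - 2) 0 2 1e-9)  -- approximates sqrt 2
```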

My idea is that maybe memoized values are simply "scoped" to something in the program (for example, to the current continuation, the call stack, etc.), but I was unable to find anything practical on the subject.
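By "scoped" I mean something like the difference between a top-level constant applicative form (CAF), which GHC evaluates at most once and may retain for the life of the program, and a let-bound value, which becomes garbage as soon as it is unreachable. A sketch of what I have in mind (I may be misreading how GHC actually behaves):

```haskell
-- A top-level CAF: evaluated at most once; in GHC, the evaluated
-- portion can be retained for the rest of the program's run.
bigTable :: [Integer]
bigTable = map (^ 2) [1 ..]

useLocal :: Integer
useLocal =
  -- `local` is a single thunk, evaluated once and shared between
  -- both uses below; it is collectable once this expression is done.
  let local = sum [1 .. 100000 :: Integer]
  in local + local

main :: IO ()
main = print (bigTable !! 9, useLocal)  -- prints (100,10000100000)
```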

I admit I haven't looked deeply at the Haskell compiler's implementation (of laziness?), but could someone please explain to me how it works in practice?


EDIT: OK, I understand my mistake from the first few answers: pure semantics implies referential transparency, which in turn does not imply automatic memoization; it just guarantees that memoizing is always safe.
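In other words (if I have understood the answers correctly): GHC shares a *named* thunk and evaluates it at most once, but it does not cache applications like `f x` across separate call sites. A small sketch of what I now believe happens, using `Debug.Trace` to observe evaluation (compiled without optimisation, since `-O` could common up the duplicated call):

```haskell
import Debug.Trace (trace)

expensive :: Int -> Int
expensive n = trace "evaluating!" (n * n)

main :: IO ()
main = do
  -- `shared` is one thunk: "evaluating!" appears once on stderr.
  let shared = expensive 5
  print (shared + shared)
  -- Two separate applications are NOT memoized:
  -- "evaluating!" appears twice more.
  print (expensive 5 + expensive 5)
```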

I think a few articles on the web are misleading about this, because from a beginner's point of view it sounds as if the referential transparency property is so cool precisely because it allows implicit memoization.
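So memoization is something you opt into explicitly, and referential transparency just makes it safe. The classic example I found while digging further is Fibonacci memoized through a shared lazy list (assuming I have read it correctly):

```haskell
-- Explicit memoization: `fibs` is a single shared lazy list, so each
-- value is computed at most once and lives as long as `fibs` is live.
fibs :: [Integer]
fibs = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)  -- reads shared results

main :: IO ()
main = print (fib 50)  -- fast, because sub-results are shared via fibs
```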

asked by sclv, 5 May 2011 at 16:44