
In my previous question (Can Turing machines be converted into equivalent Lambda Calculus expressions with a systematic approach?), I got the answer that it is indeed possible.

And as I have read before, every program written in any programming language can be converted to a Turing machine. And of course, since evaluating a lambda expression involves no side effects and no prescribed evaluation order, parallelization is possible without limit: the computation can be broken down so that each lambda function is computed on a separate machine.

So with these three facts in mind, an interesting question comes to mind. Since every program written in every programming language has an equivalent Turing machine, Turing machines are convertible to lambda calculus expressions through an algorithm, and lambda expressions are infinitely parallelizable, can every program be parallelized automatically and infinitely?

EDIT: I think I have to clarify one thing. By infinitely parallelizing, I mean parallelizing up to the point where it still benefits us, so arguments about the sheer number of parallel tasks are not valid. For example, with a five-core CPU, one can utilize all of the cores with this approach.

Ashkan Kazemi

3 Answers


If you're working in the strict lambda calculus, everything can be automatically parallelized. In particular, when evaluating a function application, the function and the argument can always be evaluated in parallel.
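Here is a minimal Haskell sketch of the idea, using `par` and `pseq` from the `parallel` package's `Control.Parallel` module (it needs GHC's `-threaded` runtime to actually run on multiple cores). Since in Haskell the function part of an application is usually already a value, the sketch instead evaluates the two arguments of a binary application in parallel; `parApply2` is a name made up for this illustration:

```haskell
import Control.Parallel (par, pseq)

-- Spark evaluation of x on another core while this thread forces y,
-- then perform the application. `par` is only a hint: the runtime
-- decides whether a spare core actually picks the spark up.
parApply2 :: (a -> b -> c) -> a -> b -> c
parApply2 f x y = x `par` (y `pseq` f x y)

main :: IO ()
main = print (parApply2 (+) (sum [1 .. 1000000 :: Int])
                            (sum [1 .. 2000000 :: Int]))
```

Note the `pseq`: it is exactly the synchronization point described next, where both sides must be finished before the application can proceed.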

However, it cannot be infinitely parallelized. There are inherent data dependencies: the result of a function application can't be determined until both the function and the argument have been evaluated, meaning that you need to wait for both threads to finish and then synchronize.

This is still relevant with your clarified definition of infinitely. For example, if you have 5 processors, it's possible that a particular program can only ever use 4 processors, because of the data dependencies.

Moreover, while this is automatic, it is not "performance for free." In practice, there is non-trivial overhead to creating and synchronizing threads. It is also difficult to do this in a way that scales to exactly the available number of processors: if you have 5 cores, the automatic parallelization might generate 6 threads, and in general, it's not possible to know at compile time how many threads will be active at any given moment.

So, you can automatically make a program that runs massively parallel, but with the current state of affairs, it will likely be slower than your original.

It's also worth mentioning that, in practice, this becomes difficult with shared access to resources and IO. For example, a real world program might write to a disk, which can cause problems if done in parallel without control. This is something that can't be modeled by the standard lambda calculus.
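As a toy illustration of the IO problem (a sketch using `forkIO` from base's `Control.Concurrent`), two threads writing to the same handle interleave their output nondeterministically:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_, void)

-- Two threads writing to the same handle: the interleaving of their
-- lines is nondeterministic and can change from run to run.
main :: IO ()
main = do
  void . forkIO $ forM_ [1 .. 3 :: Int] $ \i ->
    putStrLn ("thread A, line " ++ show i)
  forM_ [1 .. 3 :: Int] $ \i ->
    putStrLn ("thread B, line " ++ show i)
  threadDelay 100000  -- crude wait so thread A can finish (sketch only)
```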

Joey Eremondi

No, lambda expressions are not "infinitely parallelizable" (whatever that means).

For instance, consider the lambda expression $\lambda x . x$. This is the identity function. Applying it takes $1$ step of computation (a single $\beta$-reduction). That cannot be parallelized; you can't somehow speed up the computation by using a parallel computer, and you certainly can't speed it up "infinitely".

So no, it's not true that every lambda expression can be "infinitely parallelized", and it's not true that every program can be "infinitely parallelized".

D.W.

"Infinite parallelization" is not studied too much in CS because processors like other computational resources eg "time/ space" are almost always regarded as finite but here are at least two contexts where it shows up.

In the case of parallelization, Amdahl's law states that if $P$ is the proportion of a program that can be made parallel (i.e., benefit from parallelization), and $1 - P$ is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using $N$ processors is $S(N) = \frac{1}{(1 - P) + \frac{P}{N}}$. In the limit, as $N$ tends to infinity, the maximum speedup tends to $\frac{1}{1 - P}$.
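As a quick numerical check of the formula (a sketch; `amdahl` is a name chosen here, not from any library):

```haskell
-- Amdahl's law: maximum speedup with n processors when a fraction p
-- of the program is parallelizable.
amdahl :: Double -> Double -> Double
amdahl p n = 1 / ((1 - p) + p / n)

main :: IO ()
main = do
  print (amdahl 0.9 5)    -- ~3.57x speedup with 5 processors
  print (amdahl 0.9 1e9)  -- approaches the 1/(1-0.9) = 10x limit
```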

So it is more of a theoretical concept or abstraction, possibly used in mathematical theorems, that does not have direct applications, and it may break down somewhat where continuous math is used to represent discrete problems.

In parallel computing there is an informal concept of "granularity". Fine-grained problems are somewhat analogous to many grains of sand and can be split among many processors; roughly, these are also known as embarrassingly parallel problems. But, to continue the analogy, one cannot split a grain of sand: there are still atomic operations in a computation that cannot be split further. Less fine-grained problems have bigger "grains" that cannot be split as easily.

vzn