Monday, December 23, 2013

Concurrency vs. Parallelism

Over the past few years there seems to have been an increasing number of discussions of the difference between concurrency and parallelism.  These discussions didn't seem very convincing at first, but over time a useful distinction did begin to emerge.  So here is another attempt to distinguish the two:
  • Concurrent programming constructs allow a programmer to simplify their program by using multiple (heavyweight) threads to reflect the natural concurrency in the problem domain.  Because restructuring the program in this way can provide significant benefits in understandability, relatively heavyweight constructs are acceptable.  Performance is also not as critical, so long as it is predictable.
  • Parallel programming constructs allow a programmer to divide and conquer a problem, using multiple (lightweight) threads to work in parallel on independent parts of the problem.  Such restructuring does not necessarily simplify the program, and as such parallel programming constructs need to be relatively lightweight, both syntactically and at run time.  If the constructs introduce too much overhead, they defeat the purpose.  (Both styles are sketched in code below.)
Given sufficiently flexible parallel programming constructs, it is not clear that you also need traditional concurrent programming constructs.  But it may be useful to provide some higher-level notion of process, one that drops the presumption of single-threaded execution that comes with a thread, while bringing some of the benefits of explicitly representing the natural concurrency of the problem domain within the programming language.
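
To make the contrast concrete, here is a minimal sketch in Go (chosen purely as convenient illustration; the names and structure below are assumptions of this sketch, not anything from ParaSail).  The first function uses concurrency to mirror a naturally two-party structure in the problem domain; the second uses lightweight threads purely to divide and conquer:

```go
package main

import (
    "fmt"
    "sync"
)

// Concurrency: the program is *structured* around two naturally
// concurrent activities (a producer and a consumer) because that
// mirrors the problem domain, not because it runs faster.
func producerConsumer() {
    jobs := make(chan int)
    go func() { // producer
        for i := 1; i <= 3; i++ {
            jobs <- i
        }
        close(jobs)
    }()
    for j := range jobs { // consumer
        fmt.Println("handled job", j)
    }
}

// Parallelism: divide and conquer over independent halves of the data,
// purely to finish sooner; the program is not simpler for it.
func parallelSum(xs []int) int {
    if len(xs) == 0 {
        return 0
    }
    if len(xs) == 1 {
        return xs[0]
    }
    mid := len(xs) / 2
    var left int
    var wg sync.WaitGroup
    wg.Add(1)
    go func() { // lightweight thread for one half
        defer wg.Done()
        left = parallelSum(xs[:mid])
    }()
    right := parallelSum(xs[mid:])
    wg.Wait()
    return left + right
}

func main() {
    producerConsumer()
    fmt.Println("sum:", parallelSum([]int{1, 2, 3, 4, 5}))
}
```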

ParaSail is focused on providing parallel programming constructs, but it also provides concurrent objects, which are useful for more traditional concurrent programming approaches, particularly when coupled with explicit use of the "||" operator.  It is an interesting question whether some higher-level process-like construct might also be useful.

7 comments:

  1. I think your definitions make the mistake of defining these concepts in terms of a possible low-level implementation, rather than in terms of the intent, design, and actual structure of a program. I like Bob Harper's definitions here: http://existentialtype.wordpress.com/2011/03/17/parallelism-is-not-concurrency/

    For me, concurrency exists even if there is only one thread. In that sense, concurrency has been around for a long time. The popularity of node.js testifies to the popularity of single-threaded concurrency!

    Other terms that also need careful definition are "asynchronous" and "non-blocking".

    Replies
    1. I think Bob Harper and I generally agree. My goal was to explain the distinction in a couple of sentences. I'm not sure I see this as low-level vs. high-level. I don't happen to believe that determinism vs. non-determinism is such a black-and-white thing. In particular, I believe there are clearly *parallel* search algorithms that are not deterministic. Suppose you are just trying to find one element in a large search graph that satisfies a given predicate. If you don't care which element is chosen, so long as it satisfies the predicate, then a parallel search of the graph might return any appropriate element, which is fundamentally non-deterministic.
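
      Here is a minimal Go sketch of that kind of search (a slice stands in for the search graph, for brevity; the name findAny and the two-way split are assumptions of this sketch). When both halves contain a match, either may be returned, depending on scheduling:

      ```go
      package main

      import "fmt"

      // findAny returns some element of xs satisfying pred, searching the two
      // halves in parallel; which matching element wins can vary run to run.
      func findAny(xs []int, pred func(int) bool) (int, bool) {
          found := make(chan int, 2) // buffered, so a "losing" worker never blocks
          done := make(chan struct{}, 2)
          search := func(part []int) {
              for _, x := range part {
                  if pred(x) {
                      found <- x
                      break
                  }
              }
              done <- struct{}{}
          }
          mid := len(xs) / 2
          go search(xs[:mid])
          go search(xs[mid:])
          for i := 0; i < 2; i++ { // wait for both workers
              <-done
          }
          select {
          case x := <-found:
              return x, true // whichever match arrived first
          default:
              return 0, false // no element satisfies pred
          }
      }

      func main() {
          evens := func(x int) bool { return x%2 == 0 }
          // Both 4 and 8 satisfy the predicate; either may be printed.
          fmt.Println(findAny([]int{1, 4, 3, 8, 5}, evens))
      }
      ```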

    2. There are also sequential algorithms that are not deterministic. Picking individual elements out of a set is a sort of constrained non-determinism similar to your example, since the semantics of sets don't impose any order on the elements. So a parallel search in a set has the same semantics as a sequential search, neither of which is deterministic.

      I would agree with Robert Harper that parallelism is about the asymptotic efficiency of an algorithm, but would add that it preserves the semantics of the algorithm. So if an algorithm is deterministic then the parallel version is deterministic, and if it is non-deterministic then the parallel version is non-deterministic.

    3. Good point. Choosing an "arbitrary" element from a set is effectively non-deterministic, and is still fundamentally sequential. Dijkstra, in his book "A Discipline of Programming", actually preferred non-deterministic semantics for his "if" and "do" guarded constructs, and found that they often simplified proofs. I know many folks are uncomfortable with non-determinism, and it can make debugging more painful. One option is to have a "repeatable" mode for a non-deterministic algorithm to simplify debugging.
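
      For instance, a minimal sketch of such a "repeatable" mode in Go (the flag and function names are invented for illustration; findAny stands in for a non-deterministic parallel search like the one sketched earlier in this thread):

      ```go
      package main

      import "fmt"

      // stand-in for a non-deterministic parallel search; here it just
      // scans right-to-left so that the two modes visibly differ
      func findAny(xs []int, pred func(int) bool) (int, bool) {
          for i := len(xs) - 1; i >= 0; i-- {
              if pred(xs[i]) {
                  return xs[i], true
              }
          }
          return 0, false
      }

      var repeatable = true // set while debugging; false otherwise

      func findAnyMode(xs []int, pred func(int) bool) (int, bool) {
          if repeatable {
              for _, x := range xs { // deterministic: leftmost match always wins
                  if pred(x) {
                      return x, true
                  }
              }
              return 0, false
          }
          return findAny(xs, pred) // non-deterministic choice
      }

      func main() {
          evens := func(x int) bool { return x%2 == 0 }
          // With repeatable set, this always returns 4; otherwise it could be 8.
          fmt.Println(findAnyMode([]int{1, 4, 3, 8, 5}, evens))
      }
      ```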

  2. Why does it matter whether the threads are heavyweight or lightweight? Maybe your definitions of those terms differ from mine?

    Replies
    1. "Heavyweight" threads are more expensive to create and to switch between, often involving kernel system calls. "Lightweight" threads are less expensive to create, are managed in user space rather than kernel space, and often don't have their own stack, but instead "piggyback" on the stack provided by a "server" process. Parallel programming constructs have more recently been standardizing on "work stealing" for scheduling of light-weight threads onto a smaller number of heavyweight server processes, each server having its own double-ended queue of light-weight threads, managed generally in a LIFO fashion, but using FIFO when a server runs out of light-weight threads server and "steals" from another server's queue. Concurrent programming approaches tend to use a priority-based, preemptive or round-robin scheduling approach, with each thread having its own stack, allowing threads to be preempted at almost any time, but potentially requiring significant context-switch time. Lightweight threads tend to run to completion without preemption.

  3. Parallelism is when you toss two balls in the air, one with each hand, and catch each with the hand that threw it. Concurrency is when you toss three balls in the air with only two hands.
