In some sense, a return statement in a concurrent loop is easy to understand, if we presume we are in a function searching (in parallel) for some item in a data structure, and we want to return the item as soon as any thread finds it. An exit statement in a concurrent loop is a bit harder to understand, since we would need some way to record the result of the winning thread. Perhaps we would want to allow some code associated with the exit statement that is executed after all of the other threads have been aborted. For example, "exit loop then record winner;" might be a syntax indicating that the record winner code is to be executed after the loop has effectively been stopped, but still in the context of the winning thread, with visibility into the local variables of the loop body containing the exit loop then ... construct.
For example, using the generalized for loop from the previous posting:
var Winner : Node := null;
for T := Root then T.Left || T.Right while T != null concurrent loop
   if T.Key == Desired_Key then
      exit loop then Winner := T;
   end if;
end loop;

At this point Winner is either still null, if no node in the tree has the Desired_Key, or is equal to some node whose Key equals the Desired_Key. If there is more than one such node, it would be non-deterministic which such node Winner designates.
Clearly this kind of brute-force search of the tree would not be needed if the tree were organized based on Key values; this example presumes the Key does not control the structure of the tree. Also note that if this were a function whose whole purpose was to find the node of interest, the exit loop then ... statement could presumably be replaced by simply return T, and we wouldn't need the local variable Winner at all.
The "exit loop then Winner := T;" syntax sounds a bit awkward. Perhaps better would be "exit loop with Winner := T;".
This also brings to mind the idea of a "continue loop with ..." statement. Rather than putting the next values up into the loop header, you could imagine that in some cases it would be useful to decide on the next value(s) for the loop variable during the loop body, using, say, "continue loop with T := T.Left;" rather than unconditionally walking both Left and Right. Similarly, if the nodes are not all the same kind (e.g. in a parse tree), then how to walk the subtree beneath a node may depend on the kind of node. In such cases, you would need to use something like a "continue loop with ..." construct to specify the next node(s) to be walked. Presumably in such a case you would omit the "then ..." part from the loop header, and it would not be meaningful to specify that the loop is concurrent, since you wouldn't know the next value(s) until reaching the "continue loop with ..." statement.
Actually, if we presume that the loop body involves multiple statements running in parallel, then the "continue loop with ..." statement could run in parallel with the rest of the loop body. For example:
for T := Root while T != null loop
Process(T);
|| continue loop with T := T.Left;
|| continue loop with T := T.Right;
end loop;
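As an aside, a rough runnable analog of that parallel walk can be written in Haskell with concurrently_ from the async package (the names Tree and walkPar and the per-node action are mine, not part of the proposal):

import Control.Concurrent.Async (concurrently_)

data Tree = Leaf | Node Tree Int Tree

-- Handle the current node in parallel with the walks of both subtrees,
-- analogous to Process(T) running alongside the two
-- "continue loop with" arms above.
walkPar :: (Int -> IO ()) -> Tree -> IO ()
walkPar _       Leaf         = return ()
walkPar process (Node l x r) =
  process x `concurrently_` (walkPar process l `concurrently_` walkPar process r)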
That example in your comment ("continue loop with ...") looks like a "tail recursion syntax" -- like Clojure's "loop / recur," for example:
http://clojure.org/special_forms
Clojure is strict about "recur" syntax -- it takes exactly the same number of arguments as the loop has bindings. Your "continue loop with ..." syntax means less work for programmers (they don't have to write out all the loop variables that don't change values in child iterations) but perhaps a bit more work for you (since you would have to deduce that information, especially for parallel tasks).
It is nice to see very similar approaches in other languages. This tends to increase the sense that this is a good underlying model. The fact that Clojure relies on position in the "recur" expressions to handle multiple loop variables, rather than some kind of named notation, seems to be a bit of a limitation and somewhat more error-prone, one would think. And as you point out, using a named notation allows only a subset of the loop variables to be rebound on the next iteration of the loop.
Why do you prefer this approach over simple recursion? It seems that you could implement the intentional race by just having a primitive operator (say, '|') where 'a | b' evaluates a and b in parallel, waits for one to yield a non-null value, and returns that value, aborting the other if it's still going. And obviously, if both finish with null results, it returns null. (The same idea could apply if a and b had Maybe types, so it's not specific to pointers.)
Then you could implement your tree search example as a local function... (forgive the syntax errors)
function Search(T : Node) -> Node is
   if T == null or T.Key == Desired_Key then
      T
   else
      Search(T.Left) | Search(T.Right)
   end if
end Search

Winner := Search(Root)
Or if you want to execute the comparison in parallel with the child searches...
function Search(T : Node) -> Node is
   if T != null then
      (if T.Key == Desired_Key then T else null)
         | Search(T.Left) | Search(T.Right)
   else
      null
   end if
end Search
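For concreteness, the '|' operator itself admits a short runnable rendering in Haskell, using the async library and Maybe in place of null (raceMaybe is an invented name; this is a sketch of the semantics described above, not a claim about any existing operator):

import Control.Concurrent.Async (withAsync, waitEither, wait)

-- 'raceMaybe a b' evaluates a and b in parallel; the first Just wins, and
-- leaving the withAsync scope cancels the losing branch. If the first to
-- finish yields Nothing, the other branch's answer decides; the result is
-- Nothing only if both miss.
raceMaybe :: IO (Maybe a) -> IO (Maybe a) -> IO (Maybe a)
raceMaybe a b =
  withAsync a $ \aa ->
    withAsync b $ \ab -> do
      first <- waitEither aa ab
      case first of
        Left  (Just x) -> return (Just x)
        Right (Just x) -> return (Just x)
        Left  Nothing  -> wait ab
        Right Nothing  -> wait aa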
There's obviously a strong parallel between using recursion and using your branching for loop. Maybe it's just a matter of preference, but the recursive approach seems clearer to me. That could just be because I'm not as familiar with your approach and syntax, but it's worth considering whether increasing the complexity of the language with extra constructs and syntax is justified when a similar approach is available using simpler building blocks. Do you see cases where your method is a clear win over recursion?
It is certainly true that you can replace all loops with recursion. However, from a readability point of view, at least some folks find loops easier to read, and they avoid the need to create a recursive function for each loop body. And it seems that most LISP-like languages have adopted the notion of a "do" or "for" loop which is (usually) defined in terms of recursion, which seems a pretty strong argument in favor of providing such syntax.
ReplyDeleteThere are two reasons I've seen to rely on loops, rather than to use recursion and rely on tail-call optimization (TCO):
1. Performance transparency
2. Debugging transparency
#1 means that loops don't consume stack, and recursive calls do. If you care about consuming stack then you use loops. If you rely on TCO, then users who care have to pay attention to the optimizer's warnings about whether it was able to apply TCO.
#2 means that it's harder to debug recursive function calls to which TCO was applied. Turning off TCO eats stack space. Of course debugging deep levels of recursion is hard anyway, so the importance of #2 is not clear to me.
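A small Haskell illustration of #1 (the function names are mine): the first version is not tail-recursive and holds a stack frame for every level, while the second makes the iteration a tail call on a strict accumulator and so runs in constant stack.

{-# LANGUAGE BangPatterns #-}

-- Not a tail call: the '+' happens after the recursive call returns,
-- so each level of recursion keeps a frame alive.
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)

-- Tail call with a strict accumulator: effectively a loop, constant stack.
sumLoop :: Int -> Int
sumLoop = go 0
  where
    go !acc 0 = acc
    go !acc n = go (acc + n) (n - 1)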
Lisp dialects differ in their view of loops. Common Lisp (representing the state of Lisp implementations at the time the ANSI standard was formulated) tends to use loops, because the standard does not require tail-call optimization (TCO) for recursion. Scheme (a from-scratch minimalist Lisp dialect) guarantees TCO, so the preferred idiom in that language is recursion rather than loops.
My personal view is that "for" loops are harder to read and debug than sequence / explicit vector operations (e.g., "map (function, vector) -> vector"). "For" loops are also harder to optimize. C compilers, for example, are notoriously conservative about loop-carried dependencies. They often require the programmer to add annotations (e.g., "#pragma ivdep") to reassure the compiler that the loop iterations are independent. This affects both single-processor and parallel performance.
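As a tiny example of the vector style (in Haskell; the names are mine), the independence that the C programmer must assert with a pragma is simply a property of map:

-- Each output element depends only on the corresponding input element,
-- so there is no loop-carried dependence for a compiler to worry about.
scaled :: [Double] -> [Double]
scaled xs = map (* 2.0) xs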
The cases where vector-style operations are less useful are also the cases where recursion is likely more natural (like the above "for" loop over a tree).
The trick is to use proper tail calling rather than tail-recursion optimization in your language, as Scheme and Haskell do. This means that every call (not just every recursive call) that is syntactically in tail position is a tail call. Then you don't have to wonder what the optimizer does: if a function body is "a(); b()", then b is tail-called and a is not, period. This cleans up some powerful techniques like state machines: instead of having a variable that holds an integer representation of the state and a case statement that dispatches on it, a state procedure simply tail-calls the next state procedure. In effect, the state variable becomes the program counter.
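For instance, here is a toy state machine in Haskell, where each state is a procedure that tail-calls its successor (the two states are invented for illustration; the machine just tracks the parity of the input's length):

-- Two mutually tail-calling "state procedures": no state variable and no
-- dispatching case statement; the currently executing procedure is the state.
evenState, oddState :: [a] -> Bool
evenState []     = True          -- input ended while in state "even"
evenState (_:xs) = oddState xs   -- tail call: transition to state "odd"
oddState  []     = False
oddState  (_:xs) = evenState xs  -- tail call: transition back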
I realize now that the paragraph I wrote about "for" loops being harder to optimize shouldn't apply, given that your "for" loops are nonsequential by default. Silly me ;-)
Python, of course, opted to replace "map" and its ilk with list comprehensions and generators. There is something to be said for paring away superfluous syntax and committing to a single programming idiom in one's language. When I said that "for" loops are harder to read and debug, I meant the C or C++ kind of "for" loops, rather than the Python kind.