    var Seen : Array<Atomic<Boolean>, Indexed_By => Node_Id> :=
      [for I in 1 .. |DGraph.G| => Create(#false)];

    *Outer*
    for Next_Set => Root_Set loop  // Start with the initial set of roots
        for N in Next_Set concurrent loop  // Look at each node in set
            if not Value(Seen[N]) then
                Set_Value(Seen[N], #true);
                if Is_Desired_Node(DGraph.G[N]) then
                    // Found a node
                    return N;
                end if;
                // Continue outer loop with successors of this node
                continue loop Outer with Next_Set => DGraph.G[N].Succs;
            end if;
        end loop;
    end loop Outer;
    // No node found
    return null;
It has been a bit hard to explain what it means for an inner concurrent loop to continue an outer loop. More recently, we have developed an alternative explanation. The basic idea is to appeal to the notion of a worklist algorithm. Many (sequential) algorithms are organized around a list of work items to be processed: the algorithm keeps going until the worklist is empty, or, for a search, until an item of interest is found. For example, the worklist approach to data flow analysis algorithms is described here:
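To make the worklist formulation concrete, here is a minimal sketch (in Python rather than ParaSail, so it can be run directly; the function name and parameters are illustrative, not from any particular analysis framework) of a worklist-driven forward dataflow analysis that iterates to a fixpoint:

```python
def dataflow_fixpoint(nodes, preds, succs, transfer):
    """Worklist-driven forward dataflow analysis: keep re-processing
    nodes whose inputs may have changed, until the worklist is empty
    (i.e., until a fixpoint is reached).  `transfer` maps a node and
    its input set to that node's output set."""
    out = {n: frozenset() for n in nodes}
    work = list(nodes)                # start with every node on the worklist
    while work:                       # continue until the worklist is empty
        n = work.pop()
        # Input of a node is the union of the outputs of its predecessors.
        in_n = frozenset().union(*(out[p] for p in preds[n]))
        new_out = transfer(n, in_n)
        if new_out != out[n]:         # result changed, so...
            out[n] = new_out
            work.extend(succs[n])     # ...successors must be re-processed
    return out
```

The essential shape is the same as in the text: pull an item off the worklist, process it, and push any newly generated work back on.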
We used to describe ParaSail's parallel iterators as managing a "bag of threads," but now it seems easier to describe each iterator as having its own list of work items (i.e., a worklist), with the loop continuing until the worklist is empty, or until we encounter a return or exit (a.k.a. break) statement.
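Under this reading, the graph-search example above has a direct sequential analogue. This is a sketch in Python (names are illustrative): `continue loop Outer with Next_Set => ...` becomes "append the successors to the worklist," and `return N` ends the search early just as in the ParaSail version:

```python
from collections import deque

def find_desired(graph, roots, is_desired):
    """Sequential worklist rendering of the concurrent graph search:
    loop until the worklist is empty, or until a return ends it early."""
    seen = set()
    work = deque(roots)               # the iterator's worklist, seeded with roots
    while work:
        n = work.popleft()
        if n in seen:
            continue
        seen.add(n)                   # analogue of Set_Value(Seen[N], #true)
        if is_desired(n):
            return n                  # found a node: the whole search stops
        work.extend(graph[n])         # analogue of `continue loop Outer with ...`
    return None                       # worklist empty: no node found
```

The concurrent version differs only in that multiple work items may be processed at once, which is why `Seen` uses atomic booleans in the ParaSail code.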
One interesting detail at this point is that for a breadth-first search you want the worklist to be processed as a queue, in FIFO order, while for a depth-first search you want to process it as a stack, in LIFO order. The work-stealing scheduler we use does some of both: it uses FIFO when stealing, and LIFO when work items are served by the same core that generated them. This argues for perhaps giving the programmer more control when using an explicit continue statement, allowing FIFO or LIFO processing to be specified. However, it is not entirely clear whether that added complexity is worthwhile.
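The FIFO/LIFO distinction can be seen with one worklist loop where only the service discipline changes. In this Python sketch (again illustrative, not ParaSail), FIFO yields a breadth-first visit order and LIFO a depth-first one:

```python
from collections import deque

def traverse(graph, root, order):
    """Visit every reachable node with the same worklist code,
    varying only how the worklist is served: 'fifo' pops from the
    front (breadth-first), 'lifo' pops from the back (depth-first)."""
    seen = {root}
    work = deque([root])
    visited = []
    while work:
        n = work.popleft() if order == "fifo" else work.pop()
        visited.append(n)
        for s in graph[n]:
            if s not in seen:
                seen.add(s)
                work.append(s)        # new work always goes on the back
    return visited
```

A hybrid scheduler like the work-stealing one described above would fall somewhere between these two extremes, which is exactly why the resulting order is hard to characterize for the programmer.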