Parallelization Workflow
The Parallelization Workflow pattern runs subtasks in parallel to improve performance.
Depending on whether these subtasks return the same type of data, there are two types of parallelization workflows.
Subtasks return different types of data
This kind of parallelization workflow decomposes a large task into smaller subtasks, which run in parallel. The agent's final result is assembled from the execution results of the subtasks.
A typical example is report writing: the agent creates subtasks to gather information on different areas, then assembles their results into the final report.
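The report-writing example above can be sketched as follows. The research functions and the topic are hypothetical stand-ins; in a real agent each subtask would call an LLM to gather information on its area.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subtasks: each gathers information on one area of the report.
# In a real agent, each would prompt an LLM instead of returning canned text.
def research_market(topic: str) -> str:
    return f"Market findings on {topic}"

def research_competitors(topic: str) -> str:
    return f"Competitor findings on {topic}"

def research_risks(topic: str) -> str:
    return f"Risk findings on {topic}"

def write_report(topic: str) -> str:
    subtasks = [research_market, research_competitors, research_risks]
    # Run all subtasks in parallel, then assemble the final report.
    with ThreadPoolExecutor() as pool:
        sections = list(pool.map(lambda subtask: subtask(topic), subtasks))
    return "\n\n".join(sections)
```

Each subtask here returns a different kind of content, and the final report is simply the assembly of all sections.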
Subtasks return same type of data
This kind of parallelization workflow uses multiple subtasks for voting or confirmation. The subtasks may produce different results for the same input, and the agent uses these results to determine its final result.
For the code generation example described in the Evaluator-Optimizer pattern, the evaluation agent can run three parallel subtasks that evaluate the code using three different models. Each subtask returns a result of passed or not passed, and the agent takes the majority result as its final result.
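The majority-vote evaluation above can be sketched like this. The model names and their verdicts are placeholders; a real subtask would send the code to the named model for evaluation.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def evaluate(model: str, code: str) -> str:
    # Placeholder: a real implementation would ask the given model
    # to evaluate the code and return "passed" or "not passed".
    canned_verdicts = {"model-a": "passed", "model-b": "passed", "model-c": "not passed"}
    return canned_verdicts[model]

def majority_vote(code: str) -> str:
    models = ["model-a", "model-b", "model-c"]
    # All subtasks receive the same input and return the same type of result.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda m: evaluate(m, code), models))
    # The result with the most votes becomes the agent's final result.
    return Counter(results).most_common(1)[0][0]
```

Because all subtasks return the same type of result, no per-subtask input transformation is needed and the votes can be counted directly.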
Implementation
This pattern consists of a main task and a flexible number of subtasks. The main task and each subtask are implemented using the Task Execution pattern.
Result Types of Subtasks
Subtasks may return results of different types or the same type.
Different Types
If subtasks return results of different types, they typically also require inputs of different types. In this case, before a subtask executes, the original task input must be transformed into the type that subtask requires.
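One way to sketch this input transformation is to pair each subtask with its own transform from the original input. All names below are illustrative, not part of the pattern itself.

```python
from concurrent.futures import ThreadPoolExecutor

# Original task input, shared by all subtasks.
original_input = {"topic": "solar energy", "audience": "executives"}

# Each subtask needs a different slice of the original input.
def to_research_input(task: dict) -> str:
    return task["topic"]

def to_style_input(task: dict) -> str:
    return task["audience"]

def research(topic: str) -> str:
    return f"facts about {topic}"

def style_guide(audience: str) -> str:
    return f"tone suited to {audience}"

# Pair each transform with its subtask, then run them in parallel.
subtasks = [(to_research_input, research), (to_style_input, style_guide)]

with ThreadPoolExecutor() as pool:
    results = list(
        pool.map(lambda pair: pair[1](pair[0](original_input)), subtasks)
    )
```

The transform step isolates each subtask from the shape of the original input, so subtasks can be added or removed without changing the others.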
Same Type
If all subtasks return results of the same type, the original input can be passed to the subtasks directly.
Assembling Strategy
When all subtasks finish execution, there are two strategies for assembling their results.
The first assembling strategy does not use an LLM: it simply takes the results of all subtasks and combines them using code logic.
The second assembling strategy does use an LLM: the assembled results are passed to an LLM for further generation.
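The two assembling strategies can be sketched side by side. The `llm` callable here is a stand-in for a real model call, which the source does not specify.

```python
def assemble_with_code(sections: list[str]) -> str:
    # Strategy 1: combine subtask results with plain code logic, no LLM.
    return "\n\n".join(sections)

def assemble_with_llm(sections: list[str], llm) -> str:
    # Strategy 2: pass the combined results to an LLM for further generation.
    prompt = "Merge these sections into a coherent report:\n" + "\n\n".join(sections)
    return llm(prompt)

# Stand-in LLM for demonstration; a real agent would call an actual model.
fake_llm = lambda prompt: "Report based on " + str(prompt.count("\n\n") + 1) + " sections"
```

The code-only strategy is cheaper and deterministic; the LLM strategy is useful when the final result needs to be rewritten or summarized rather than merely concatenated.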
Reference Implementation
See this page for reference implementation and examples.