
Task Execution

Task Execution is the simplest agentic pattern and the building block of other patterns. In this pattern, an LLM is instructed to execute a task and return structured output.

Implementation

This pattern is implemented using Prompt Template and Structured Output.

The agent defines a prompt template with variables. The template contains the instructions for the task the LLM should complete. When executing the task, the agent prepares a context object with values for those template variables and uses it to populate the prompt. The populated prompt is sent to the LLM together with an instruction to generate structured output. The structured output can be used directly as the task execution result, or processed further to produce the final result.

The flow chart below shows the basic steps.

Guides

The implementation of this pattern is straightforward: the prompt is sent to an LLM, and the LLM's response is the output of the task.
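
A minimal sketch of this flow is shown below. The task, the template, and the callLlm helper are all hypothetical; any chat client that supports structured output can play that role.

Task execution flow (sketch)
import java.util.Map;

// Hypothetical sketch of the task execution flow; all names are illustrative.
public class TaskExecutionFlow {

  // Prompt template with a {count} variable (see the Example section below).
  private static final String TEMPLATE = "Goal: Generate {count} users";

  public static void main(String[] args) {
    // 1. Prepare a context object with values for the template variables.
    Map<String, String> context = Map.of("count", "3");

    // 2. Populate the prompt from the template.
    String prompt = TEMPLATE;
    for (var entry : context.entrySet()) {
      prompt = prompt.replace("{" + entry.getKey() + "}", entry.getValue());
    }

    // 3. Send the populated prompt to the LLM with an instruction to return
    //    structured output. callLlm is a stand-in for whatever chat client is
    //    used; its structured result is the output of the task.
    // List<User> users = callLlm(prompt, User.class);
    System.out.println(prompt);
  }
}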

Scope of Objective

When applying this pattern, the scope of the objective the agent should achieve is very important. If the scope is too large, the LLM cannot reliably generate meaningful results. If the scope is too small, we end up with too many agents for different small tasks.

Modern LLMs are powerful enough to handle complicated tasks, so we can start with a large task scope. If the LLM cannot generate meaningful results, we can narrow the scope by breaking the task down into multiple smaller tasks. For example, instead of asking one agent to generate all test data for a system, we can create smaller tasks such as generating test users, which is the example shown later on this page.

LLM Options

For task execution, we want the results from the LLM to be predictable and reproducible, so the temperature is usually set to 0.

If the expected output has a relatively fixed length, we can also set the maximum number of output tokens.
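
As a sketch, the options below pin the model, set the temperature to 0, and cap the output length. This assumes Spring AI's OpenAiChatOptions builder; the exact builder method names may differ between library versions, and other clients expose equivalent settings.

LLM options (sketch)
// Sketch only: assumes Spring AI's OpenAiChatOptions builder; method names
// may differ between versions, and other clients expose equivalent settings.
import org.springframework.ai.openai.OpenAiChatOptions;

public class TaskExecutionLlmOptions {

  public static OpenAiChatOptions defaultOptions() {
    return OpenAiChatOptions.builder()
        .model("gpt-4o-mini")  // a normal (non-reasoning) model, see Models below
        .temperature(0.0)      // predictable, reproducible results
        .maxTokens(1024)       // cap output length when the result size is known
        .build();
  }
}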

Models

When selecting the model for a task execution agent, we should use normal models rather than reasoning models. Reasoning models are powerful, but the tasks executed by a task execution agent should be simple and straightforward. This keeps task execution agents easy to implement and test, and also enables reuse and composition of these agents.

For OpenAI, use gpt-4o or gpt-4o-mini; for DeepSeek, use DeepSeek V3.

Prompt Template

The task to be executed is defined in the prompt template. To make sure that the LLM can generate meaningful results, the prompt template needs to be carefully crafted. Since our goal is to execute a task and get the result, we can clearly define the objective and ask the LLM to follow detailed instructions.

  • Simply list all requirements.
  • Be restrictive when possible.
  • If there are steps to finish this task, list those steps.

The sample below shows sections that can be included in a prompt template. A prompt template doesn't need to include all of these sections.

Sample prompt template
Background: // background information for complicated tasks

Goal: // Short summary of the task objective

Requirements: // Constraints of the output
- Requirement 1
- Requirement 2
- ....

Thinking steps: // Steps to execute the task
- Step 1
- Step 2
- ....

Example

The task we want the LLM to execute is generating sample users for testing. The system already has an internal model for users, and we want the LLM to generate test users based on this model.

Below is the User model defined in the system. It uses nested records to describe a complicated structure.

User model
package com.javaaidev.agenticpatterns.examples.taskexecution;

import java.util.List;

public record User(
    String id,
    String name,
    String email,
    String mobilePhone,
    List<Address> addresses) {

  public enum AddressType {
    HOME,
    OFFICE,
    OTHER,
  }

  public record Address(
      String id,
      AddressType addressType,
      String countryOrRegion,
      String provinceOrState,
      String city,
      String addressLine,
      String zipCode) {

  }
}

Below is the prompt template to generate users. count is a template variable that specifies the number of users to generate. The template uses the Requirements section to describe constraints on the generated users, listing requirements for each property of the User model.

Prompt template for user generation
Goal: Generate {count} users

Requirements:
- Id should be a version 4 random UUID.
- Name should be using the format "$firstName $lastName".
- Email address should be using the format "$firstName.$lastName@$domain".
- For an address,
  - Country or region must use ISO 3166 alpha-2 code.
  - For province/state/city, they should be generated based on the country or region.
  - Address line can be fake.
  - Zip code should use the format based on the country or region.
- When generating multiple users, choose different countries or regions for those users.
- For a user, generate 1 to 3 addresses. At least one address has the type HOME.

When the task is executed, the output looks like the following:

[
  {
    "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
    "name": "John Doe",
    "email": "john.doe@example.com",
    "mobilePhone": "+1234567890",
    "addresses": [
      {
        "id": "c9bf9e57-1685-4c89-bafb-ff5af830be8a",
        "addressType": "HOME",
        "countryOrRegion": "US",
        "provinceOrState": "NY",
        "city": "New York",
        "addressLine": "123 Fake Street",
        "zipCode": "10001"
      }
    ]
  }
]
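
Since the response is structured JSON that matches the User model, it can be mapped directly onto that model. Below is a minimal sketch using Jackson, assuming the LLM output is available as a raw JSON string; a client with built-in structured output support can do this mapping for you.

Mapping the output to the User model (sketch)
// Sketch: deserialize the LLM's JSON output into the existing User model with Jackson.
// Assumes this class sits next to the User record shown above.
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;

public class UserOutputParser {

  private static final ObjectMapper MAPPER = new ObjectMapper();

  // The prompt asks for a JSON array of users, so map the output to List<User>.
  public static List<User> parse(String llmOutput) throws Exception {
    return MAPPER.readValue(llmOutput, new TypeReference<List<User>>() {});
  }
}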

Reference Implementation

See this page for reference implementation and examples.