Troubleshoot payload and gRPC message size limit errors
Temporal enforces size limits on the data that passes between the Temporal Client, Workers, and the Temporal Service. There are two distinct limits, each producing different error messages and behaviors, and they require different solutions:
Payload size limit
The Temporal Service enforces a size limit on each individual payload. On Temporal Cloud the limit is 2 MB; on self-hosted deployments it defaults to 2 MB and is configurable. A payload is the serialized binary data for the input and output of Workflows and Activities.
Error messages
The error message depends on which operation carried the oversized payload and which SDK version is in use. Examples include:
- WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE
- [TMPRL1103] Attempted to upload payloads with size that exceeded the error limit.
- BadScheduleActivityAttributes: ScheduleActivityTaskCommandAttributes.Input exceeds size limit
- WORKFLOW_TASK_FAILED_CAUSE_BAD_UPDATE_WORKFLOW_EXECUTION_MESSAGE
Error behavior
The behavior when a payload exceeds the size limit depends on the SDK version.
- Python SDK 1.23.0+: The SDK fails the Workflow Task with cause WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE. The Workflow is not terminated and remains open, so you can deploy a fix and allow the Workflow to continue.
- All other SDK versions: The Temporal Service rejects the request and terminates the Workflow. You'll need to resolve the issue and restart the Workflow.
How to resolve
- Offload large payloads to an object store to reduce the risk of exceeding payload size limits:
  - Pass references to the stored payloads within the Workflow instead of the actual data.
  - Retrieve the payloads from the object store when needed during execution.
This is called the claim check pattern. You can implement your own claim check pattern by using a custom Payload Codec, or use External Storage.
This is the most reliable way to avoid hitting payload size limits. Consider implementing the claim check pattern for Workflows and Activities that have the potential to receive or return large payloads, even if they are currently within the limit.
External Storage is currently in Pre-release. All APIs are experimental and may be subject to backwards-incompatible changes. Join the #large-payloads Slack channel to provide feedback or ask for help.
- Use compression with a custom Payload Codec for large payloads. This addresses the immediate issue, but if payload sizes continue to grow, the problem can arise again.
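The claim check pattern described above can be sketched without any Temporal dependency. The following is a minimal illustration, assuming a dict-backed stand-in for the object store and a hypothetical 1 KB offload threshold; in production you would implement this inside a custom Payload Codec backed by a real store such as S3.

```python
import uuid

# Hypothetical stand-in for an external object store (e.g. an S3 bucket).
OBJECT_STORE: dict[str, bytes] = {}

# Payloads above this threshold get offloaded; the value is an
# illustrative assumption, not a Temporal constant.
THRESHOLD_BYTES = 1024

def store_payload(data: bytes) -> dict:
    """Offload a large payload and return a small claim-check reference."""
    if len(data) <= THRESHOLD_BYTES:
        return {"inline": data}  # small payloads travel as-is
    key = str(uuid.uuid4())
    OBJECT_STORE[key] = data
    return {"claim_check": key}  # only this small reference crosses the wire

def load_payload(ref: dict) -> bytes:
    """Resolve a claim-check reference back into the original payload."""
    if "claim_check" in ref:
        return OBJECT_STORE[ref["claim_check"]]
    return ref["inline"]

# The Workflow passes only the small reference; the Activity
# retrieves the real data from the store when it needs it.
large = b"x" * 5000
ref = store_payload(large)
assert load_payload(ref) == large
```

Because only the reference is recorded in the Event History, payload size stays bounded no matter how large the underlying data grows.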
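The core of a compression codec can be shown with the standard library alone. This is a sketch of the encode/decode idea only; the function names are hypothetical, and a real implementation would wrap these calls in a Temporal Payload Codec that operates on Payload objects.

```python
import zlib

def encode_payload(data: bytes) -> bytes:
    """Compress payload bytes before they are sent to the Temporal Service."""
    return zlib.compress(data)

def decode_payload(data: bytes) -> bytes:
    """Decompress payload bytes received from the Temporal Service."""
    return zlib.decompress(data)

# Highly repetitive data compresses well, buying headroom under the limit.
original = b"record," * 10_000
compressed = encode_payload(original)
assert decode_payload(compressed) == original
assert len(compressed) < len(original)
```

Compression ratios depend heavily on the data: JSON and text shrink substantially, while already-compressed formats (images, archives) barely shrink at all, which is why this mitigation can stop working as payloads grow.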
gRPC message size limit
All communication between the Temporal Client, Workers, and the Temporal Service uses gRPC, which enforces a 4 MB limit on each request. This limit applies to the full request, including all payload data and command metadata. For example, when a Workflow schedules multiple Activities in a single Workflow Task, the Worker sends one request containing all those commands to schedule the Activities and their inputs.
A Workflow can hit this limit even when every individual payload is under 2 MB. Scheduling several Activities with moderate-sized inputs, or hundreds of Activities with tiny inputs in the same Workflow Task can push the combined request past 4 MB. Activity results are also subject to this limit.
Error messages
The error message depends on which operation carried the oversized gRPC message and which SDK version is in use.
- WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE
- [TMPRL1103] Attempted to upload payloads with size that exceeded the error limit.
- WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE
- StartToCloseTimeout (Activities only; see error behavior below)
Error behavior
The behavior when a gRPC message exceeds the size limit depends on the SDK version.
- Python SDK 1.23.0+: The SDK fails the Workflow Task with cause WORKFLOW_TASK_FAILED_CAUSE_PAYLOADS_TOO_LARGE. The Workflow is not terminated and remains open, so you can deploy a fix and allow the Workflow to continue. For Activities, the Activity fails with an explicit error instead of timing out silently.
- All other SDK versions: The behavior depends on where the oversized message originates:
  - Workflow Tasks: When a Workflow Worker completes a Workflow Task, it sends all the commands the Workflow produced (such as Activity schedules and their inputs) back to the Temporal Service. If the combined size of the commands and their inputs exceeds 4 MB, the SDK catches the gRPC error and sends a failed Workflow Task response with cause WORKFLOW_TASK_FAILED_CAUSE_GRPC_MESSAGE_TOO_LARGE. Because replay produces the same oversized request, the Workflow gets stuck in a retry loop that isn't visible in the Event History.
  - Activity Tasks: When an Activity Worker completes an Activity Task, it sends the Activity result back to the Temporal Service. If that result exceeds 4 MB, the gRPC call fails and the Worker can't deliver it. The server never receives the completion and eventually times out the Activity, surfacing as a StartToCloseTimeout. The actual ResourceExhausted gRPC error only appears in Worker logs.
How to resolve
- Break large batches of commands into smaller batches:
  - Workflow-level batching:
    - Modify the Workflow to process Activities or Child Workflows in smaller batches.
    - Iterate through each batch, waiting for completion before moving to the next.
  - Workflow Task-level batching:
    - Execute Activities in smaller batches within a single Workflow Task.
    - Introduce brief pauses or sleeps between batches.
- If the request is large because of payload sizes rather than the number of commands, refer to the Payload size limit section for solutions.
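The batching approach above can be sketched with plain asyncio. This is an illustrative skeleton, not Temporal SDK code: `process_item` is a hypothetical stand-in for an Activity invocation (in a real Workflow it would be something like `workflow.execute_activity(...)`), and the batch size of 3 is arbitrary.

```python
import asyncio

async def process_item(item: int) -> int:
    """Hypothetical stand-in for executing an Activity."""
    await asyncio.sleep(0)  # simulate asynchronous work
    return item * 2

async def run_in_batches(items: list[int], batch_size: int) -> list[int]:
    """Schedule work in small batches, waiting for each batch to finish
    before starting the next, so no single Workflow Task emits an
    oversized gRPC request full of Activity-schedule commands."""
    results: list[int] = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        # Only batch_size commands are outstanding at once.
        results.extend(await asyncio.gather(*(process_item(i) for i in batch)))
    return results

print(asyncio.run(run_in_batches(list(range(10)), batch_size=3)))
# → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Waiting on each batch is what bounds the request size: because the Workflow blocks between batches, the commands are spread across multiple Workflow Tasks instead of accumulating in one.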