MaxCompute: Limits

Last Updated: Aug 21, 2025

This topic describes the limits of MaxCompute MapReduce. If these limits are exceeded, your business may be affected.

The following list summarizes the limits of MaxCompute MapReduce. For each item, the value range, category, configuration item, default value, whether the item is configurable, and a description are given.

Memory occupied by an instance
  • Value range: [256 MB, 12 GB]
  • Category: Memory limit
  • Configuration items: odps.stage.mapper(reducer).mem and odps.stage.mapper(reducer).jvm.mem
  • Default value: 2,048 MB + 1,024 MB
  • Configurable: Yes
  • Description: The memory occupied by a single map or reduce instance. The memory consists of two parts: the framework memory, which is 2,048 MB by default, and the Java Virtual Machine (JVM) heap memory, which is 1,024 MB by default.
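
Configurable items such as this one can be set on the job configuration before the job is submitted. The following Java sketch is a minimal, hypothetical example: it assumes that the JobConf class from the MaxCompute MapReduce Java SDK accepts these keys through its generic set method, and the memory values (in MB) are placeholders.

```java
import com.aliyun.odps.mapred.conf.JobConf;

public class MemoryConfigExample {

    // Returns a JobConf with larger memory settings for map and reduce instances.
    public static JobConf buildJobConf() {
        JobConf job = new JobConf();

        // Framework memory per instance, in MB (default: 2,048 MB).
        // Values must stay within the [256 MB, 12 GB] range described above.
        job.set("odps.stage.mapper.mem", "4096");
        job.set("odps.stage.reducer.mem", "4096");

        // JVM heap memory per instance, in MB (default: 1,024 MB).
        job.set("odps.stage.mapper.jvm.mem", "2048");
        job.set("odps.stage.reducer.jvm.mem", "2048");

        return job;
    }
}
```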

Number of resources
  • Value range: 256
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: A single job can reference a maximum of 256 resources. Each table or archive is counted as one resource.

Number of inputs and outputs
  • Value range: 1,024 and 256
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: The number of inputs for a single job cannot exceed 1,024, and the number of outputs cannot exceed 256. A partition of a table is counted as one input. The total number of different tables cannot exceed 64.
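
Because each partition counts as one input, jobs that read many partitions approach the 1,024-input limit quickly. The following Java sketch is a minimal example of declaring inputs and outputs; the table and partition names are placeholders, and it assumes the InputUtils, OutputUtils, and TableInfo helpers from the MaxCompute MapReduce Java SDK.

```java
import com.aliyun.odps.data.TableInfo;
import com.aliyun.odps.mapred.conf.JobConf;
import com.aliyun.odps.mapred.utils.InputUtils;
import com.aliyun.odps.mapred.utils.OutputUtils;

public class InputOutputExample {

    // Declares two partitions of one table as inputs and one table as the output.
    public static void configureIo(JobConf job) {
        // Each partition counts as one input toward the 1,024-input limit.
        InputUtils.addTable(
                TableInfo.builder().tableName("sales_log").partSpec("ds=20250820").build(), job);
        InputUtils.addTable(
                TableInfo.builder().tableName("sales_log").partSpec("ds=20250821").build(), job);

        // Each output table counts toward the 256-output limit.
        OutputUtils.addTable(TableInfo.builder().tableName("sales_daily_agg").build(), job);
    }
}
```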

Number of counters
  • Value range: 64
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: The number of custom counters in a single job cannot exceed 64. The group name and counter name cannot contain the number sign (#), and the combined length of the two names cannot exceed 100 characters.
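
Custom counters are obtained from the task context. The following Java sketch is a minimal, hypothetical mapper that increments one custom counter; the group name, counter name, and record layout are placeholders, and it assumes the Counter and TaskContext APIs from the MaxCompute MapReduce Java SDK.

```java
import java.io.IOException;

import com.aliyun.odps.counter.Counter;
import com.aliyun.odps.data.Record;
import com.aliyun.odps.mapred.MapperBase;

public class CountingMapper extends MapperBase {

    private Counter emptyRows;

    @Override
    public void setup(TaskContext context) throws IOException {
        // The group name and counter name must not contain '#' and their
        // combined length must not exceed 100 characters.
        emptyRows = context.getCounter("DataQuality", "empty_rows");
    }

    @Override
    public void map(long recordNum, Record record, TaskContext context) throws IOException {
        // Count records whose first column is NULL.
        if (record.getString(0) == null) {
            emptyRows.increment(1L);
        }
    }
}
```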

Number of map instances
  • Value range: [1, 100000]
  • Category: Quantity limit
  • Configuration item: odps.stage.mapper.num
  • Default value: N/A
  • Configurable: Yes
  • Description: The number of map instances for a single job is calculated by the framework based on the split size. If no input table is specified, you can set the odps.stage.mapper.num parameter to specify the number of map instances. The value must be in the range of [1, 100000]. A configuration sketch follows the next item.

Number of reduce instances
  • Value range: [0, 2000]
  • Category: Quantity limit
  • Configuration item: odps.stage.reducer.num
  • Default value: N/A
  • Configurable: Yes
  • Description: By default, the number of reduce instances for a single job is one quarter of the number of map instances. You can explicitly set a value in the range of [0, 2000]; a job can have a maximum of 2,000 reduce instances. If a reduce instance processes far more data than a map instance, the reduce stage can be slow, and you may need to increase the number of reduce instances.
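
Both instance counts are set on the job configuration before the job is submitted. The following Java sketch is a minimal, hypothetical example; the values are placeholders, and it assumes that the MaxCompute MapReduce JobConf provides a generic set method and a setNumReduceTasks setter, as used in the SDK sample code.

```java
import com.aliyun.odps.mapred.conf.JobConf;

public class ParallelismExample {

    // Sets the map and reduce instance counts before the job is submitted.
    public static void configureInstanceCounts(JobConf job) {
        // Effective only when the job has no input table; otherwise the
        // framework derives the map count from the split size. Range: [1, 100000].
        job.set("odps.stage.mapper.num", "500");

        // By default the reduce count is a quarter of the map count;
        // explicit values must stay within [0, 2000].
        job.setNumReduceTasks(200);
    }
}
```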

Number of retries
  • Value range: 3
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: A failed map or reduce instance is retried a maximum of three times. Some non-retriable exceptions cause the job to fail directly.

Local debug mode
  • Value range: A maximum of 100 instances
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: In local debug mode:
    • The number of map instances is 2 by default and cannot exceed 100.
    • The number of reduce instances is 1 by default and cannot exceed 100.
    • The number of downloaded records for one input is 100 by default and cannot exceed 10,000.

Number of times a resource is repeatedly read
  • Value range: 64
  • Category: Quantity limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: A single map or reduce instance can read the same resource a maximum of 64 times.
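
Because a single instance can read the same resource at most 64 times, a common pattern is to read a resource once in setup() and cache it in memory. The following Java sketch is a minimal, hypothetical example; the resource name dim_table is a placeholder, and it assumes the task context exposes readResourceTable as in the MaxCompute MapReduce Java SDK. The table resource must also be attached to the job when it is submitted.

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

import com.aliyun.odps.data.Record;
import com.aliyun.odps.mapred.MapperBase;

public class ResourceCachingMapper extends MapperBase {

    // In-memory copy of the table resource, loaded once per instance.
    private final Set<String> dimKeys = new HashSet<>();

    @Override
    public void setup(TaskContext context) throws IOException {
        // Read the resource a single time instead of once per record,
        // which keeps the instance far below the 64-read limit.
        Iterator<Record> it = context.readResourceTable("dim_table");
        while (it.hasNext()) {
            dimKeys.add(it.next().getString(0));
        }
    }

    @Override
    public void map(long recordNum, Record record, TaskContext context) throws IOException {
        // Look up the cached data instead of re-reading the resource.
        if (dimKeys.contains(record.getString(0))) {
            // ... emit or transform the record as needed ...
        }
    }
}
```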

Resource size
  • Value range: 2 GB
  • Category: Length limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: The total size of resources referenced by a single job cannot exceed 2 GB.

Split size
  • Value range: Greater than or equal to 1 MB
  • Category: Length limit
  • Configuration item: odps.stage.mapper.split.size
  • Default value: 256 MB
  • Configurable: Yes
  • Description: The framework determines the number of map instances based on the specified split size.
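
When an input table is present, adjusting the split size is the main way to influence the number of map instances. The following Java sketch is a minimal, hypothetical example; the value (in MB) is a placeholder, and it assumes the JobConf generic set method accepts this key.

```java
import com.aliyun.odps.mapred.conf.JobConf;

public class SplitSizeExample {

    // Controls how input data is split, and therefore how many map instances run.
    public static void configureSplitSize(JobConf job) {
        // Value in MB; larger splits mean fewer map instances. Default: 256 MB.
        job.set("odps.stage.mapper.split.size", "512");
    }
}
```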

Content length of a STRING column
  • Value range: 8 MB
  • Category: Length limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: The content length in a STRING column of a MaxCompute table cannot exceed 8 MB.

Worker execution timeout
  • Value range: [1, 3600], in seconds
  • Category: Time limit
  • Configuration item: odps.function.timeout
  • Default value: 600
  • Configurable: Yes
  • Description: The period after which a map or reduce worker times out if it neither reads or writes data nor sends heartbeats by using context.progress(). The default value is 600 seconds.
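
Workers that spend a long time computing without any I/O can send the heartbeat explicitly with context.progress(), and the timeout itself can be raised through the configuration item above. The following Java sketch is a minimal, hypothetical example; the per-record computation and the timeout value are placeholders, and it assumes the ReducerBase and TaskContext APIs from the MaxCompute MapReduce Java SDK.

```java
import java.io.IOException;
import java.util.Iterator;

import com.aliyun.odps.data.Record;
import com.aliyun.odps.mapred.ReducerBase;
import com.aliyun.odps.mapred.conf.JobConf;

public class LongRunningReducer extends ReducerBase {

    @Override
    public void reduce(Record key, Iterator<Record> values, TaskContext context) throws IOException {
        long processed = 0;
        while (values.hasNext()) {
            Record value = values.next();
            // ... expensive per-record computation on value, with no I/O ...
            // (output writing is omitted for brevity)
            processed++;
            if (processed % 10000 == 0) {
                // Heartbeat so the worker is not treated as timed out
                // while it is busy computing.
                context.progress();
            }
        }
    }

    // Raising the timeout itself, in seconds (at most 3,600).
    public static void configureTimeout(JobConf job) {
        job.set("odps.function.timeout", "1800");
    }
}
```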

Supported field types for table resources referenced by MapReduce
  • Value range: BIGINT, DOUBLE, STRING, DATETIME, and BOOLEAN
  • Category: Data type limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: When a MapReduce task references a table resource, an error is reported if the table contains fields of unsupported data types.

Reading data from Object Storage Service (OSS)
  • Value range: None
  • Category: Feature limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: MapReduce cannot read data from OSS.

Support for new data types in MaxCompute V2.0
  • Value range: None
  • Category: Feature limit
  • Configuration item: None
  • Default value: N/A
  • Configurable: No
  • Description: MapReduce does not support the new data types in MaxCompute V2.0.

Note

MaxCompute MapReduce jobs are not supported in projects that have schemas enabled. If you upgrade your project to support schemas, you can no longer run MaxCompute MapReduce jobs.