
Object Storage Service: Implement intelligent tiered storage using last access time-based lifecycle rules

Last Updated: Sep 17, 2025

Use lifecycle rules with a last access time policy to automatically monitor data access patterns and identify cold data. Then, transition the cold data to a different storage class. This process implements intelligent tiered storage and reduces your storage costs.

Scenario description

A multimedia website needs to determine whether its data is hot or cold based on the last access time. Traditionally, this requires manual log analysis. With last access time-based lifecycle rules, OSS identifies and tiers the data automatically.

In this scenario, data is categorized and stored in different paths within the examplebucket bucket. The goal is to transition some data to a lower-cost storage class after a specified period.

Storage path: data/

Storage scenario:
  • Stores WMV live streaming videos. They are accessed infrequently in the first two months after upload and rarely accessed afterward.
  • Stores MP4 movie video data. Most files are frequently accessed within each six-month period.

Lifecycle policy: Transition to the Infrequent Access storage class 200 days after the last access. (The data remains in the Infrequent Access tier even if accessed again.)

Result: The storage class is transitioned after the specified number of days. Data that does not match the rule remains in the Standard storage class.

Storage path: log/

Storage scenario: Stores a large amount of log data. A few files are accessed several times within the last three months. All files have almost no access records six months after upload.

Lifecycle policy:
  • Transition to the Infrequent Access storage class 120 days after the last access. (The data remains in the Infrequent Access tier even if accessed again.)
  • Transition to the Archive storage class 250 days after the last access.

Result: The storage class is transitioned after the specified number of days. Data that does not match the rule remains in the Standard storage class.

Note
  • With a last access time-based lifecycle policy, OSS automatically identifies and tiers hot and cold data. For example, frequently accessed MP4 videos in the data/ path remain in the Standard storage class. MP4 videos that are not accessed for six months are transitioned to the Infrequent Access storage class after 200 days. If you configure a lifecycle rule based on the last modification time, data in the data/ path can only be transitioned or deleted based on its last modification time. You cannot implement intelligent tiered storage based on data access frequency.

  • The lifecycle policies and recommended actions in the preceding scenario are for reference only. Configure lifecycle rules based on your specific business needs.
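The decision the lifecycle engine makes for each object reduces to comparing the number of days since the object's last access against the per-prefix thresholds in the preceding scenario. The following Python sketch models that decision for this scenario only; the `POLICIES` table and `target_storage_class` helper are illustrative names, not part of any OSS SDK.

```python
from datetime import datetime, timezone

# Per-prefix transition thresholds from the scenario: (days since last access,
# target storage class). Thresholds are listed longest first, so the coldest
# matching tier wins.
POLICIES = {
    "data/": [(200, "IA")],
    "log/": [(250, "Archive"), (120, "IA")],
}

def target_storage_class(key: str, last_access: datetime, now: datetime) -> str:
    """Return the storage class this scenario's rules would assign to an object."""
    idle_days = (now - last_access).days
    for prefix, transitions in POLICIES.items():
        if key.startswith(prefix):
            for days, storage_class in transitions:  # longest threshold first
                if idle_days >= days:
                    return storage_class
    # No matching rule, or the object has not been idle long enough.
    return "Standard"

now = datetime(2025, 9, 17, tzinfo=timezone.utc)
print(target_storage_class("data/live.wmv", datetime(2025, 1, 1, tzinfo=timezone.utc), now))  # idle 259 days -> IA
print(target_storage_class("log/app.log", datetime(2024, 12, 1, tzinfo=timezone.utc), now))   # idle 290 days -> Archive
```

For example, a log object idle for 130 days lands in Infrequent Access, and only after 250 idle days does the second transition move it to Archive.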

Prerequisites

  • Enable access tracking.

  • To transition objects from the Standard or Infrequent Access storage class to Archive, Cold Archive, or Deep Cold Archive, submit a ticket to request this feature.

    Important

    After your application is approved, if a last access time-based lifecycle rule transitions an object from Standard or IA to Archive, Cold Archive, or Deep Cold Archive, the last access time recorded for the transitioned object is the time when access tracking was enabled for the bucket.
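Because enabling access tracking sets every object's last access time to the enable time, no access time-based transition can fire earlier than the enable date plus the rule's day count. A quick back-of-the-envelope sketch (the dates are examples only):

```python
from datetime import date, timedelta

access_tracking_enabled = date(2025, 9, 17)  # example enable date

# Earliest dates at which each rule in this scenario can first apply, because
# every object's last access time starts at the enable date.
earliest = {
    "data/ -> IA (200 days)": access_tracking_enabled + timedelta(days=200),
    "log/ -> IA (120 days)": access_tracking_enabled + timedelta(days=120),
    "log/ -> Archive (250 days)": access_tracking_enabled + timedelta(days=250),
}

for rule, day in earliest.items():
    print(rule, "can first apply on", day.isoformat())
```

Objects accessed after the enable date restart their own clocks, so these are lower bounds, not schedules.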

Procedure

Use the OSS console

  1. Enable access tracking.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

    3. In the navigation pane on the left, choose Data Management > Lifecycle.

    4. On the Lifecycle page, turn on the Enable Access Tracking switch.

      Note

      After you enable access tracking, OSS sets the last access time for all objects in the bucket to the time when access tracking was enabled.

  2. Configure lifecycle rules.

    1. On the Lifecycle page, click Create Rule.

    2. In the Create Lifecycle Rule panel, configure lifecycle rules for the data/ prefix and the log/ prefix as described in the following sections.

      Lifecycle rule for the data/ prefix

      Configure the required parameters for the data/ prefix lifecycle rule as follows. Keep the default settings for other parameters.

      Status: Select Enable.

      Applied To: Select Match By Prefix.

      Prefix: Enter data/.

      Object Lifecycle: Select Specify Days.

      Lifecycle-based Rules: From the drop-down list, select Last Access Time. Enter 200 days. The data is automatically transitioned to Infrequent Access (Data Remains in the IA Tier After Being Accessed).

      Lifecycle rule for the log/ prefix

      Configure the required parameters for the log/ prefix lifecycle rule as follows. Keep the default settings for other parameters.

      Status: Select Enable.

      Applied To: Select Match By Prefix.

      Prefix: Enter log/.

      Object Lifecycle: Select Specify Days.

      Lifecycle-based Rules:
      • From the drop-down list, select Last Access Time. Enter 120 days. The data is automatically transitioned to Infrequent Access (Data Remains in the IA Tier After Being Accessed).
      • From the drop-down list, select Last Access Time. Enter 250 days. The data is automatically transitioned to Archive.

    3. Click OK.

Use Alibaba Cloud SDKs

Only the Java, Python, and Go SDKs support creating lifecycle rules based on the last access time. Before you create a last access time-based lifecycle rule, you must enable access tracking for the specified bucket.

  1. Enable access tracking.

    Java

    import com.aliyun.oss.*;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.common.comm.SignVersion;
    import com.aliyun.oss.model.AccessMonitor;

    public class Demo {
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint.
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket.
            String bucketName = "examplebucket";
            // Specify the region in which the bucket is located. Example: cn-hangzhou.
            String region = "cn-hangzhou";

            // Create an OSSClient instance.
            // Call the shutdown method to release associated resources when the OSSClient is no longer in use.
            ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
            clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);
            OSS ossClient = OSSClientBuilder.create()
                    .endpoint(endpoint)
                    .credentialsProvider(credentialsProvider)
                    .clientConfiguration(clientBuilderConfiguration)
                    .region(region)
                    .build();

            try {
                // Enable access tracking for the bucket. To change the access tracking status back to Disabled later, first make sure that the bucket has no lifecycle rules based on the last access time.
                ossClient.putBucketAccessMonitor(bucketName, AccessMonitor.AccessMonitorStatus.Enabled.toString());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught a ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

    Python

    # -*- coding: utf-8 -*-
    import oss2
    from oss2.credentials import EnvironmentVariableCredentialsProvider

    # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
    auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
    # Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
    endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
    # Specify the ID of the region that maps to the endpoint. Example: cn-hangzhou. This parameter is required if you use the V4 signature algorithm.
    region = "cn-hangzhou"
    # Specify the name of the bucket.
    bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

    # Enable access tracking for the bucket. To change the access tracking status back to Disabled later, first make sure that the bucket has no lifecycle rules based on the last access time.
    bucket.put_bucket_access_monitor("Enabled")

    Go

    package main

    import (
    	"context"
    	"flag"
    	"log"

    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
    )

    // Define global variables.
    var (
    	region     string // Region in which the bucket is located.
    	bucketName string // Name of the bucket.
    )

    // Initialize command-line parameters.
    func init() {
    	flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
    	flag.StringVar(&bucketName, "bucket", "", "The name of the bucket.")
    }

    // Enable access tracking for the bucket.
    func main() {
    	// Parse command-line parameters.
    	flag.Parse()
    	// Check whether the name of the bucket is specified.
    	if len(bucketName) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, bucket name required")
    	}
    	// Check whether the region is specified.
    	if len(region) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, region required")
    	}

    	// Load the default configurations and specify the credential provider and region.
    	cfg := oss.LoadDefaultConfig().
    		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
    		WithRegion(region)

    	// Create an OSS client.
    	client := oss.NewClient(cfg)

    	// Create a request to enable access tracking for the bucket.
    	request := &oss.PutBucketAccessMonitorRequest{
    		Bucket: oss.Ptr(bucketName),
    		AccessMonitorConfiguration: &oss.AccessMonitorConfiguration{
    			Status: oss.AccessMonitorStatusEnabled, // Enable access tracking.
    		},
    	}

    	// Enable access tracking.
    	putResult, err := client.PutBucketAccessMonitor(context.TODO(), request)
    	if err != nil {
    		log.Fatalf("failed to put bucket access monitor %v", err)
    	}

    	// Display the result.
    	log.Printf("put bucket access monitor result: %#v\n", putResult)
    }
  2. Configure a lifecycle rule based on the last access time for the prefixes data/ and log/.

    Java

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.model.*;
    import java.util.ArrayList;
    import java.util.List;

    public class Lifecycle {
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint.
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket.
            String bucketName = "examplebucket";

            // Create an OSSClient instance.
            // Call the shutdown method to release associated resources when the OSSClient is no longer in use.
            OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

            try {
                String ruleId1 = "rule1";
                String ruleId2 = "rule2";
                // Specify the data/ prefix.
                String matchPrefix = "data/";
                // Specify the log/ prefix.
                String matchPrefix2 = "log/";

                SetBucketLifecycleRequest request = new SetBucketLifecycleRequest(bucketName);

                // In lifecycle rule 1, transition all objects with the data/ prefix to the Infrequent Access storage class 200 days after the last access. Objects that are accessed again remain in the Infrequent Access storage class.
                List<LifecycleRule.StorageTransition> storageTransitions = new ArrayList<LifecycleRule.StorageTransition>();
                LifecycleRule.StorageTransition storageTransition = new LifecycleRule.StorageTransition();
                storageTransition.setStorageClass(StorageClass.IA);
                storageTransition.setExpirationDays(200);
                storageTransition.setIsAccessTime(true);
                storageTransition.setReturnToStdWhenVisit(false);
                storageTransitions.add(storageTransition);
                LifecycleRule rule = new LifecycleRule(ruleId1, matchPrefix, LifecycleRule.RuleStatus.Enabled);
                rule.setStorageTransition(storageTransitions);
                request.AddLifecycleRule(rule);

                // In lifecycle rule 2, transition all objects with the log/ prefix to the Infrequent Access storage class 120 days after the last access. Objects that are accessed again remain in the Infrequent Access storage class.
                List<LifecycleRule.StorageTransition> storageTransitions2 = new ArrayList<LifecycleRule.StorageTransition>();
                LifecycleRule.StorageTransition storageTransition2 = new LifecycleRule.StorageTransition();
                storageTransition2.setStorageClass(StorageClass.IA);
                storageTransition2.setExpirationDays(120);
                storageTransition2.setIsAccessTime(true);
                storageTransition2.setReturnToStdWhenVisit(false);
                storageTransitions2.add(storageTransition2);

                // In the same rule, transition all objects with the log/ prefix to the Archive storage class 250 days after the last access.
                LifecycleRule.StorageTransition storageTransition3 = new LifecycleRule.StorageTransition();
                storageTransition3.setStorageClass(StorageClass.Archive);
                storageTransition3.setExpirationDays(250);
                storageTransition3.setIsAccessTime(true);
                storageTransition3.setReturnToStdWhenVisit(false);
                storageTransitions2.add(storageTransition3);
                LifecycleRule rule2 = new LifecycleRule(ruleId2, matchPrefix2, LifecycleRule.RuleStatus.Enabled);
                rule2.setStorageTransition(storageTransitions2);
                request.AddLifecycleRule(rule2);

                VoidResult result = ossClient.setBucketLifecycle(request);
                System.out.println("Status code: " + result.getResponse().getStatusCode() + ", set lifecycle succeeded");
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught a ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

    Python

    # -*- coding: utf-8 -*-
    import oss2
    from oss2.credentials import EnvironmentVariableCredentialsProvider
    from oss2.models import LifecycleRule, BucketLifecycle, StorageTransition

    # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
    auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
    # Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com.
    # Specify the name of the bucket. Example: examplebucket.
    bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

    # In lifecycle rule 1, transition all objects with the data/ prefix to the Infrequent Access storage class 200 days after the last access. Objects that are accessed again remain in the Infrequent Access storage class.
    rule1 = LifecycleRule('rule1', 'data/', status=LifecycleRule.ENABLED)
    rule1.storage_transitions = [StorageTransition(days=200,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_IA,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False)]

    # In lifecycle rule 2, transition all objects with the log/ prefix to the Infrequent Access storage class 120 days after the last access (objects remain in the Infrequent Access storage class when accessed again), and to the Archive storage class 250 days after the last access.
    rule2 = LifecycleRule('rule2', 'log/', status=LifecycleRule.ENABLED)
    rule2.storage_transitions = [StorageTransition(days=120,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_IA,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False),
                                 StorageTransition(days=250,
                                                   storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE,
                                                   is_access_time=True,
                                                   return_to_std_when_visit=False)]

    lifecycle = BucketLifecycle([rule1, rule2])

    # Configure the lifecycle rules.
    result = bucket.put_bucket_lifecycle(lifecycle)
    print('Lifecycle rules set. Status code: ' + str(result.status))

    Go

    package main

    import (
    	"context"
    	"flag"
    	"log"

    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
    	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
    )

    // Define global variables.
    var (
    	region     string // Region in which the bucket is located.
    	bucketName string // Name of the bucket.
    )

    // Initialize command-line parameters.
    func init() {
    	flag.StringVar(&region, "region", "", "The region in which the bucket is located.")
    	flag.StringVar(&bucketName, "bucket", "", "The name of the bucket.")
    }

    func main() {
    	// Parse command-line parameters.
    	flag.Parse()
    	// Check whether the name of the bucket is specified.
    	if len(bucketName) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, bucket name required")
    	}
    	// Check whether the region is specified.
    	if len(region) == 0 {
    		flag.PrintDefaults()
    		log.Fatalf("invalid parameters, region required")
    	}

    	// Load the default configurations and specify the credential provider and region.
    	cfg := oss.LoadDefaultConfig().
    		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
    		WithRegion(region)

    	// Create an OSS client.
    	client := oss.NewClient(cfg)

    	// Create a request to configure lifecycle rules for the bucket.
    	request := &oss.PutBucketLifecycleRequest{
    		Bucket: oss.Ptr(bucketName),
    		LifecycleConfiguration: &oss.LifecycleConfiguration{
    			Rules: []oss.LifecycleRule{
    				{
    					// rule1: transition objects with the data/ prefix to IA 200 days after the last access. Objects that are accessed again remain in IA.
    					ID:     oss.Ptr("rule1"),
    					Status: oss.Ptr("Enabled"),
    					Prefix: oss.Ptr("data/"),
    					Transitions: []oss.LifecycleRuleTransition{
    						{
    							Days:                 oss.Ptr(int32(200)),
    							StorageClass:         oss.StorageClassIA,
    							IsAccessTime:         oss.Ptr(true), // Transition based on the last access time.
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    					},
    				},
    				{
    					// rule2: transition objects with the log/ prefix to IA 120 days after the last access (objects remain in IA when accessed again), and to Archive 250 days after the last access.
    					ID:     oss.Ptr("rule2"),
    					Status: oss.Ptr("Enabled"),
    					Prefix: oss.Ptr("log/"),
    					Transitions: []oss.LifecycleRuleTransition{
    						{
    							Days:                 oss.Ptr(int32(120)),
    							StorageClass:         oss.StorageClassIA,
    							IsAccessTime:         oss.Ptr(true),
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    						{
    							Days:                 oss.Ptr(int32(250)),
    							StorageClass:         oss.StorageClassArchive,
    							IsAccessTime:         oss.Ptr(true),
    							ReturnToStdWhenVisit: oss.Ptr(false),
    						},
    					},
    				},
    			},
    		},
    	}

    	// Configure the lifecycle rules for the bucket.
    	result, err := client.PutBucketLifecycle(context.TODO(), request)
    	if err != nil {
    		log.Fatalf("failed to put bucket lifecycle %v", err)
    	}

    	// Display the result.
    	log.Printf("put bucket lifecycle result: %#v\n", result)
    }

Use the ossutil command-line tool

ossutil 2.0

  1. Enable access tracking.

    1. Configure access tracking in a local file named config1.xml.

      <?xml version="1.0" encoding="UTF-8"?>
      <AccessMonitorConfiguration>
        <Status>Enabled</Status>
      </AccessMonitorConfiguration>
    2. Set the access tracking status for the target bucket.

      ossutil api put-bucket-access-monitor --bucket bucketname --access-monitor-configuration file://config1.xml
  2. Configure last access time-based lifecycle rules for the data/ and log/ prefixes.

    1. Configure the following lifecycle rules in a local file named config2.xml.

      <?xml version="1.0" encoding="UTF-8"?>
      <LifecycleConfiguration>
        <Rule>
          <ID>rule1</ID>
          <Prefix>data/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>200</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
        </Rule>
        <Rule>
          <ID>rule2</ID>
          <Prefix>log/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>120</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
          <Transition>
            <Days>250</Days>
            <StorageClass>Archive</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
        </Rule>
      </LifecycleConfiguration>
    2. Set the lifecycle rules for the target bucket.

      ossutil api put-bucket-lifecycle --bucket bucketname --lifecycle-configuration file://config2.xml
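Before you apply the rules, you can sanity-check the lifecycle XML locally. The following Python sketch is a hypothetical pre-flight check, not an ossutil feature; it parses the same configuration as config2.xml (inlined here for self-containment) and lists each access time-based transition:

```python
import xml.etree.ElementTree as ET

def summarize_lifecycle(xml_text: str):
    """Return (prefix, days, storage_class) for each access time-based transition."""
    root = ET.fromstring(xml_text)
    transitions = []
    for rule in root.findall("Rule"):
        prefix = rule.findtext("Prefix")
        for t in rule.findall("Transition"):
            # Access time-based transitions carry IsAccessTime=true.
            if t.findtext("IsAccessTime") == "true":
                transitions.append((prefix, int(t.findtext("Days")), t.findtext("StorageClass")))
    return transitions

# The same rules as config2.xml, inlined; in practice, read the file instead.
CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration>
  <Rule><ID>rule1</ID><Prefix>data/</Prefix><Status>Enabled</Status>
    <Transition><Days>200</Days><StorageClass>IA</StorageClass>
      <IsAccessTime>true</IsAccessTime><ReturnToStdWhenVisit>false</ReturnToStdWhenVisit></Transition></Rule>
  <Rule><ID>rule2</ID><Prefix>log/</Prefix><Status>Enabled</Status>
    <Transition><Days>120</Days><StorageClass>IA</StorageClass>
      <IsAccessTime>true</IsAccessTime><ReturnToStdWhenVisit>false</ReturnToStdWhenVisit></Transition>
    <Transition><Days>250</Days><StorageClass>Archive</StorageClass>
      <IsAccessTime>true</IsAccessTime><ReturnToStdWhenVisit>false</ReturnToStdWhenVisit></Transition></Rule>
</LifecycleConfiguration>"""

for prefix, days, storage_class in summarize_lifecycle(CONFIG):
    print(f"{prefix} -> {storage_class} after {days} days since last access")
```

This catches typos such as a missing IsAccessTime element before the configuration is uploaded to the bucket.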

ossutil 1.0

  1. Enable access tracking.

    1. Configure access tracking in a local file named config1.xml.

      <?xml version="1.0" encoding="UTF-8"?>
      <AccessMonitorConfiguration>
        <Status>Enabled</Status>
      </AccessMonitorConfiguration>
    2. Set the access tracking status for the target bucket.

      ossutil access-monitor --method put oss://examplebucket/ config1.xml
  2. Configure last access time-based lifecycle rules for the data/ and log/ prefixes.

    1. Configure the following lifecycle rules in a local file named config2.xml.

      <?xml version="1.0" encoding="UTF-8"?>
      <LifecycleConfiguration>
        <Rule>
          <ID>rule1</ID>
          <Prefix>data/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>200</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
        </Rule>
        <Rule>
          <ID>rule2</ID>
          <Prefix>log/</Prefix>
          <Status>Enabled</Status>
          <Transition>
            <Days>120</Days>
            <StorageClass>IA</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
          <Transition>
            <Days>250</Days>
            <StorageClass>Archive</StorageClass>
            <IsAccessTime>true</IsAccessTime>
            <ReturnToStdWhenVisit>false</ReturnToStdWhenVisit>
          </Transition>
        </Rule>
      </LifecycleConfiguration>
    2. Set the lifecycle rules for the target bucket.

      ossutil lifecycle --method put oss://examplebucket config2.xml

Use REST APIs

If your program requires a high degree of customization, you can make REST API requests directly. This requires you to manually write code to calculate signatures. For more information, see PutBucketAccessMonitor and PutBucketLifecycle.
