What is the maximum number of files that can be replicated from a volume?

There are limits for storage objects that you should consider when planning and managing your storage architecture.

Limits are often platform dependent. Refer to the Hardware Universe to learn the limits for your specific configuration.

Limits are listed in the following sections:

  • Volume limits

  • FlexClone file and FlexClone LUN limits

Storage limits for Cloud Volumes ONTAP are documented in the Cloud Volumes ONTAP Release Notes.

Volume limits

| Storage object | Limit | Native storage | Storage arrays |
| --- | --- | --- | --- |
| Array LUNs | Minimum size for root volume¹ | N/A | Model-dependent |
| Files | Maximum size | 16 TB | 16 TB |
| Files | Maximum per volume³ | Volume size dependent, up to 2 billion | Volume size dependent, up to 2 billion |
| FlexClone volumes | Hierarchical clone depth⁴ | 499 | 499 |
| FlexVol volumes | Maximum per node¹ | Model-dependent | Model-dependent |
| FlexVol volumes | Maximum per node per SVM⁵ | Model-dependent | Model-dependent |
| FlexVol volumes | Minimum size | 20 MB | 20 MB |
| FlexVol volumes | Maximum size¹ | Model-dependent | Model-dependent |
| FlexVol volumes for primary workloads | Maximum per node² | Model-dependent | Model-dependent |
| FlexVol root volumes | Minimum size¹ | Model-dependent | Model-dependent |
| LUNs | Maximum per node⁵ | Model-dependent | Model-dependent |
| LUNs | Maximum per cluster⁵ | Model-dependent | Model-dependent |
| LUNs | Maximum per volume⁵ | Model-dependent | Model-dependent |
| LUNs | Maximum size | 16 TB | 16 TB |
| Qtrees | Maximum per FlexVol volume | 4,995 | 4,995 |
| Snapshot copies | Maximum per volume⁶ | 255/1023 | 255/1023 |
| Volumes | Maximum per cluster for NAS | 12,000 | 12,000 |
| Volumes | Maximum per cluster with SAN protocols configured | Model-dependent | Model-dependent |

Notes:

  1. In ONTAP 9.3 and earlier, a volume can contain up to 255 Snapshot copies. In ONTAP 9.4 and later, a volume can contain up to 1023 Snapshot copies.

  2. Beginning with ONTAP 9.7, the maximum supported number of FlexVol volumes on AFF platforms with at least 128 GB of memory has increased to 2,500 FlexVol volumes per node; however, only 1,000 volumes per node can be active (primary workloads) at one time.

    For platform-specific information and for the latest support details, see Hardware Universe.

  3. 2 billion = 2 × 10⁹.

  4. The maximum depth of a nested hierarchy of FlexClone volumes that can be created from a single FlexVol volume.

  5. This limit applies only in SAN environments. For more information, see SAN Configuration.

  6. You can use a SnapMirror cascade deployment to increase this limit.
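
The file-count limit above is a per-volume property that scales with the volume size, so a workload that creates very large numbers of small files can exhaust inodes long before the volume runs out of space. As an illustrative sketch only (the SVM and volume names are placeholders, and field names can vary between ONTAP releases), you can compare the current file count against the configured maximum from the ONTAP CLI:

    # Show how many files (inodes) the volume can hold and how many are in use
    volume show -vserver svm1 -volume vol1 -fields files,files-used

    # Raise the maximum number of files for the volume, within the platform limit
    volume modify -vserver svm1 -volume vol1 -files 50000000

    # List the Snapshot copies currently held by the volume (see the 255/1023 limit above)
    volume snapshot show -vserver svm1 -volume vol1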

FlexClone file and FlexClone LUN limits

| Limit | Native storage | Storage arrays |
| --- | --- | --- |
| Maximum per file or LUN¹ | 32,767 | 32,767 |
| Maximum total shared data per FlexVol volume | 640 TB | 640 TB |

Note:

  1. If you try to create more than 32,767 clones, ONTAP automatically creates a new physical copy of the parent file or LUN.

    In both cases, the file systems are allocated from shared block storage volumes, which you create in advance.

    If you need more space on a file system, you can add additional storage volumes to the deployment using the Attach Storage Volumes operation.
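
    The Attach Storage Volumes operation handles the underlying disk definition for you. If you performed the equivalent steps manually, they would roughly correspond to defining a new NSD and adding it to the file system with IBM Spectrum Scale commands. The sketch below is illustrative only; the device name, NSD name, server list, and stanza syntax are assumptions and vary by Spectrum Scale release:

      # Contents of /tmp/newdisk.stanza (NSD stanza; exact syntax varies by release):
      #   %nsd: device=/dev/sdx nsd=nsd_data01 servers=server1,server2 usage=dataAndMetadata

      # Create the NSD, then add it to the existing file system
      /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/newdisk.stanza
      /usr/lpp/mmfs/bin/mmadddisk filesystem1 -F /tmp/newdisk.stanza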

    Adding shared volumes to a mirrored configuration

    When you deploy a Primary configuration and attach Mirror and Tiebreaker deployed configurations, you must ensure that the space allocated on both the Primary and Mirror configurations for a certain shared file system is identical.

    For example, you would not allocate 10 GB of shared volume storage to the Primary configuration file system and only 5 GB to the Mirror configuration for the same file system, because replication would stop working as soon as the data exceeded 5 GB. Also keep in mind that a mirrored configuration of this type can consume up to twice the storage, because all data is replicated to the Mirror.

    Adding shared volumes to a passive configuration

    When you deploy a Primary configuration and attach a Passive deployed configuration, you must ensure that the space allocated on both the Primary and Passive configurations is identical.

    You can add volumes to both sides as needed by using the Attach Storage Volumes operation on both the Primary and Passive deployments. Attach any additional volumes to the Passive deployment prior to attaching additional volumes to the Primary deployment.

    Creating file sets

    File sets are created and linked under the file system, for example:

    /gpfs/filesystem1/fileSet1
    /gpfs/filesystem1/fileSet2
    /gpfs/filesystem2/fileSet1
    /gpfs/filesystem2/fileSet2
    ...
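
    If you are not using the pattern operations, file sets such as those in the listing above could be created and linked manually with IBM Spectrum Scale commands. This is a minimal sketch using the example names from the listing; adjust the device and junction path to your environment:

      # Create a file set in file system "filesystem1", then link it under the file system root
      /usr/lpp/mmfs/bin/mmcrfileset filesystem1 fileSet1
      /usr/lpp/mmfs/bin/mmlinkfileset filesystem1 fileSet1 -J /gpfs/filesystem1/fileSet1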

    File set names must adhere to the following conventions:

    • Names are character strings and must be less than 256 characters in length.
    • Names must be unique within a file system.
    • Names must use only the following characters: a-z, A-Z, 0-9, -, _
    • The name root is reserved for the file set of the file system root directory.
    • The name must contain no spaces (this is specific to the IBM Spectrum Scale pattern).
    • Quota values must end with a unit of measure (g or G for GB, m or M for MB, k or K for KB, t or T for TB).

    File sets do have maximum size quotas, and an error occurs if users consume more data than the quota allows. These quotas are set by using IBM Spectrum Scale Client operations. Note, however, that quotas are enforced only if the user is non-root. In this case, directory access must be configured within the client virtual machines (and potentially server virtual machines) by using standard AIX® ACL and permission commands.
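
    For illustration, a file set quota of the form described above (a value ending in a unit of measure) corresponds to a Spectrum Scale block quota. The following sketch assumes the example names filesystem1 and fileSet1 and sets a 10 GB soft and hard limit; exact options can vary by release:

      # Set a 10 GB block quota (soft:hard) on the file set
      /usr/lpp/mmfs/bin/mmsetquota filesystem1:fileSet1 --block 10G:10G

      # Report current usage and limits for the file set
      /usr/lpp/mmfs/bin/mmlsquota -j fileSet1 filesystem1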

    Some workloads use the methodology of creating the file set on every client virtual machine. This practice is tolerated: a warning message is generated if a second attempt is made to create the same file set, and the quota limits of the second and subsequent attempts are ignored.

    The warning message is displayed as follows:

    WARNING: fileset  already exists

    You can check for existing file set names that are already in use by using either of the following methods:

    • Run the Status operation, which identifies file sets that are associated with a specified file system.
    • Run the following command to see which file sets exist on a virtual machine in the Server cluster:
      /usr/lpp/mmfs/bin/mmlsfileset 

    As a best practice, check the logs after running the IBM Spectrum Scale Client Policy (or alternatively, after running the IBM Spectrum Scale Client script packages) to see if the warning message is written to the log.

    Be careful to create file sets using unique names. If you enter the same name as another user, you might mistakenly overwrite their data, because you could both be linked to the same file set.

    Isolating file sets

    File sets serve as shared folders between tenants of the file system. Within a single server deployment, this is considered a trusted tenant model, with no isolation between tenants.

    For example, suppose a file system administrator creates file systems A and B. A pattern deployment is configured to use File System A with File set 001.

    When the pattern is deployed, a connection to File System A is made, and File set 001 is created if it is not already present. Now, other deployments with this configuration can share data within that file set. If another system also had a deployment requesting File System A and File set 001, it could alter the data in the file set.

    To isolate the data in File System A from the second system, configure the second deployment to use File System B on the same server.

    As another option, you might deploy a second server with File System A. Then define a separate cloud group mapped to the new server deployment, to isolate the associated deployments.

    Removing shared volumes from the file system

    Be careful when removing shared volumes from the file system. Take this action only after careful analysis of the problem that requires the volume to be removed.

    There are two ways to remove a shared volume:

    • Use the Remove Network Shared Disk(s) operation to remove the disk by its NSD name.
    • Use the Detach Shared Volumes operation to remove the disk by its shared volume name.

    In certain cases, you might need to unmount file systems and shut down nodes before disks can be removed successfully. You can use the operations in the Administrative section, which perform these tasks for you, or you can run the equivalent IBM Spectrum Scale commands yourself.
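
    A minimal sketch of the equivalent manual sequence is shown below; the file system, NSD, and node names are placeholders, and which steps are required depends on the state of the cluster:

      # Remove the disk from the file system (data is migrated off it), then delete its NSD definition
      /usr/lpp/mmfs/bin/mmdeldisk filesystem1 nsd_data01
      /usr/lpp/mmfs/bin/mmdelnsd nsd_data01

      # If the removal fails, unmount the file system on all nodes and retry;
      # shut down GPFS on a node only if necessary
      /usr/lpp/mmfs/bin/mmumount filesystem1 -a
      /usr/lpp/mmfs/bin/mmshutdown -N node1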

    What is the maximum size of all replicated files allowed within a replication group?

    DFS Replication supports the following limits:

    • Size of all replicated files on a server: 100 terabytes.
    • Number of replicated files on a volume: 70 million.
    • Maximum file size: 250 gigabytes.

    What is the default capacity of the staging folder?

    The default size of each staging folder is 4,096 MB. This is not a hard limit, however. It is only a quota that is used to govern cleanup and excessive usage based on high and low watermarks (90 percent and 60 percent of staging folder size, respectively).
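
    For example, with the default 4,096 MB staging quota, staging cleanup starts when the folder reaches approximately 3,686 MB (90 percent of 4,096 MB) and continues until usage falls to approximately 2,458 MB (60 percent of 4,096 MB).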

    What is storage replica in Windows Server 2022?

    Storage Replica is a Windows Server technology that enables replication of volumes between servers or clusters for disaster recovery. It also enables you to create stretch failover clusters that span two sites, with all nodes staying in sync.

    How often does DFSR replicate?

    By default, DFS Replication replicates 24 hours per day, 7 days per week, using full bandwidth; this is the recommended configuration.