Mar 11, 2025
Advantages of SSD Cache and Auto-Tiering in Enterprise Applications
In enterprise storage architectures, improving read/write performance while controlling costs is crucial. Enterprises often must adopt high-cost, scaled-out, all-flash storage to meet the performance and capacity demands of applications and compute nodes. ASUS VS320D storage utilizes two key functions—SSD Cache and Auto-Tiering—to optimize performance, capacity, and cost. Below, we detail the key differences and advantages of the SSD cache and auto-tiering technologies used in the ASUS VS320D solution.
Two Key Differences of the ASUS Solution
SSD cache
ASUS SSD cache is a large, second-level cache of enterprise-level SSDs that sits between the main memory cache and the HDDs. The SSD read and write cache improves the system's random IOPS by copying frequently accessed random data to the SSDs, which serve reads and writes far faster than HDDs.
Auto-tiering
ASUS auto-tiering is a cost-effective way to dynamically place "hot" data on an SSD or a fast hard drive and "cold" data on a lower-cost, high-capacity drive. This allows you to optimize application performance without straining your budget or sacrificing capacity.
In the ASUS VS320D solution, a proprietary algorithm continuously analyzes the system, monitoring data usage and ranking data by how often it is accessed. The system uses this information to decide where each piece of data should be stored.
Key Advantages of the ASUS Solution
SSD Cache
High-Speed Response—SSDs have very low read/write latency; time needed to cache hot data is greatly reduced, improving application performance
Easy Deployment—SSD read/write cache can integrate with existing storage architecture and is relatively simple to configure
Random Access—Perfect for applications needing random access, such as VM, VDI, database and web services
Auto-Tiering
Cost Efficiency—Balances performance and cost by storing hot data on high-performance SSDs and cold data on cheaper HDDs
Dynamic Data Management—Automates data analysis and movement, reducing manual intervention and management costs
Scalability—Maintains performance and cost optimization as data volume grows, and supports capacity expansion of the cold tier via SAS JBOD
How the SSD Cache and Auto-Tiering Systems Work
SSD Cache
SSD cache technology leverages the strengths of both HDDs and SSDs to meet the capacity and performance requirements of enterprise applications in a cost-effective way. Data is primarily stored on HDDs, while SSDs serve as an extended cache for many I/O operations. One of the major benefits of using an SSD cache is improved application performance, especially for workloads with frequent I/O activity. Frequently accessed read data is copied to the SSD cache; write data is stored in the SSD cache temporarily and then flushed to an HDD when a certain threshold is met. This gives the application an immediate performance boost from the SSD's faster speeds. ASUS SSD cache enables applications to deliver optimal performance by absorbing bursts of read/write loads at SSD speeds.
Before the write threshold is reached (writes go to the HDD volume):
1. A host requests to write data.
2. Data is written to the HDD volume.
3. The status is returned to the host.
4. The SSD cache is populated once the write threshold is reached.
After the write threshold is reached (write-back through the SSD cache):
1. A host requests to write data.
2. Data is written to the SSD cache.
3. The status is returned to the host.
4. Data is flushed to the HDD volume at the appropriate time.
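The two write paths above can be sketched as a toy model. This is purely illustrative: the class, the threshold values, and the flush trigger are assumptions for the sketch, not the ASUS VS320D firmware.

```python
# Toy model of an SSD write cache: write-through to HDD until a
# per-block write threshold is reached, then write-back via the SSD
# cache with deferred flushing. All names and values are assumed.

WRITE_THRESHOLD = 3   # writes to a block before it is considered hot (assumed)
FLUSH_LIMIT = 4       # dirty SSD blocks held before flushing to HDD (assumed)

class SSDWriteCache:
    def __init__(self):
        self.write_counts = {}   # block -> number of writes observed
        self.ssd = {}            # block -> dirty data held in the SSD cache
        self.hdd = {}            # block -> data on the HDD volume

    def write(self, block, data):
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        if self.write_counts[block] >= WRITE_THRESHOLD:
            # Hot block: write-back path; status returns after the SSD write.
            self.ssd[block] = data
            if len(self.ssd) >= FLUSH_LIMIT:
                self.flush()
        else:
            # Cold block: the write goes straight to the HDD volume.
            self.hdd[block] = data
        return "ok"  # status returned to the host

    def flush(self):
        # Dirty data is flushed from SSD to HDD "at the appropriate time".
        self.hdd.update(self.ssd)
        self.ssd.clear()

cache = SSDWriteCache()
for i in range(4):
    cache.write("blk0", f"v{i}")   # blk0 turns hot on its third write
cache.write("blk1", "x")           # blk1 stays cold, written to HDD
```

After this sequence, the latest version of the hot block sits in the SSD cache awaiting a flush, while the cold block went directly to the HDD volume.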
Another important benefit of the cache system is improved TCO (Total Cost of Ownership). The SSD cache copies "hot" (frequently used) data to the SSDs in large sets. Because this caching boosts IOPS performance, users can satisfy the remainder of their storage needs with low-cost, high-capacity HDDs. Pairing a small number of SSDs with many HDDs offers the best performance at the lowest cost, with optimal power efficiency.
Auto-Tiering
Auto-tiering is a completely automated feature that implements a set of tiering policies to ensure the best storage performance in various environments. Tiering policies determine how new allocations and ongoing relocations apply within a volume. An algorithm makes data-relocation decisions based on the activity level of each unit, ranking the order of relocation across all volumes within each separate pool. The system combines this information with each volume's tiering policy to create a candidate list for data movement.
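The ranking step can be sketched in a few lines. This is a hypothetical simplification; the actual ASUS algorithm is proprietary, and the extent names, access counts, and slot budget below are invented for illustration.

```python
# Hypothetical sketch of auto-tiering candidate selection: rank data
# extents by activity level, then mark the hottest extents as
# candidates for the fast tier and the rest for the capacity tier.

def build_candidate_list(access_counts, ssd_slots):
    """Rank extents by access count (descending). The top `ssd_slots`
    extents are candidates for the SSD tier; the remainder belong on
    the capacity tier. Returns (promote_to_ssd, keep_on_hdd)."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return ranked[:ssd_slots], ranked[ssd_slots:]

# Example: five extents, room for two on the SSD tier.
counts = {"e1": 120, "e2": 3, "e3": 45, "e4": 980, "e5": 7}
hot, cold = build_candidate_list(counts, ssd_slots=2)
print(hot)   # ['e4', 'e1']  -> SSD tier candidates
print(cold)  # ['e3', 'e5', 'e2']  -> capacity tier
```

A real implementation would also weigh the per-volume tiering policy and relocation cost before moving anything; the sketch only shows the activity ranking the text describes.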
The ASUS VS320D solution allows a thick provisioning pool or a thin provisioning pool to be transferred to an auto-tiering pool. Thick or thin provisioning pools without disk groups of mixed types can be converted via the Add Disk Group option.
Tier Categories
Auto-tiering involves at least two tiers. In dual-controller systems, the automated tiering pool segregates disk drives into three categories.
Tier 1: SSD drives for extreme performance tier
Tier 2: SAS drives (15K or 10K rpm SAS HDD) for performance tier
Tier 3: Nearline SAS drives (7.2K or lower rpm SAS HDD) for capacity tier
Thick Provisioning vs Thin Provisioning
A thick provisioning pool has preconfigured space. After the transfer to auto-tiering, the original disk group in the thick provisioning pool becomes the lowest tier. When the auto-tiering process runs, hot data is copied to a higher tier but still occupies its original block space; if the data turns cold, it returns to the original block. So the total capacity of the pool does not change, even after adding the capacities of higher tiers.
Thin provisioning allocates space dynamically. If hot data is moved up to a higher tier, the original block space is released, so the total capacity of the pool is the sum of all tiers.
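The capacity difference comes down to simple arithmetic, illustrated below with assumed figures (the tier sizes are examples, not VS320D specifications).

```python
# Illustrative pool-capacity accounting (assumed figures, in TB).
# Thick pool: hot data is *copied* up and the original block stays
# allocated, so the higher tier adds no usable pool capacity.
# Thin pool: hot data is *moved* up and the original block is
# released, so every tier contributes to usable capacity.

hdd_tier = 100  # original (lowest) tier capacity, TB - example value
ssd_tier = 10   # added higher tier capacity, TB - example value

thick_pool_capacity = hdd_tier             # higher tier adds no space
thin_pool_capacity = hdd_tier + ssd_tier   # sum of all tiers

print(thick_pool_capacity)  # 100
print(thin_pool_capacity)   # 110
```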
SSD Cache + Auto-Tiering
The SSD cache and auto-tiering features can work together and complement each other. A key difference between tiering and caching is that tiering moves data to the SSD instead of simply caching it. Tiering can move data from slower storage to faster storage and vice versa, while the SSD cache is essentially a one-way process: when the cache is finished with the data it was accelerating, it simply invalidates it rather than copying it back to the HDD. The important difference between moves and copies is that a cache does not need the redundancy that tiering does (in the read-only cache case). Tiering holds the only copy of the data, potentially for a considerable period of time, so it needs full data redundancy such as RAID or mirroring.
VS320D Benefits
SSD Cache Uses
Database Acceleration—For applications needing fast query and read/write speeds, such as relational databases
Virtualization Environments—For workloads with high IOPS requirements in environments like VMware vSphere, MS Hyper-V and Proxmox
Web Servers—For fast responses to user requests for an enhanced user experience
Auto-Tiering Uses
Enterprise Applications—For tiered storage of data (since the frequency of accessing most data decreases over time)
Big Data Analytics—For environments with massive data volumes and varying data access frequencies, where cost-effective storage is essential
Mixed Workloads—For applications that need to balance read/write performance and capacity, such as enterprise media servers and backup systems
Conclusion
The ASUS VS320D series solutions provide SSD cache and auto-tiering functions, each addressing different needs and scenarios. While SSD cache primarily solves the challenge of high-speed access and is ideal for small, frequently accessed data environments, auto-tiering offers significant advantages in large-scale data management by balancing performance and cost through automated tiering technology.