unRAID Cache

The cache drive feature of unRAID provides faster data capture. Generally speaking, by using a cache alongside an array of 3 or more devices, you can achieve up to 3x write performance. When data is written to a user share that has been configured to use the cache device, all of that data is initially written directly to the dedicated cache device. Because this device is not part of the array, the write speed is unimpeded by parity calculations. Then an unRAID process called “the mover” copies the data from the cache to the array at a time and frequency of your choosing. Once the data has been successfully copied, the space on the cache drive is freed up again to front-end further write operations to cache-enabled user shares.
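To make the write path concrete, here is a minimal Python sketch of the cache-then-mover flow. It is illustrative only: unRAID’s actual mover is a scheduled OS-level script, and the function names and directories below are hypothetical stand-ins for a cache device and an array disk.

    import shutil
    import tempfile
    from pathlib import Path

    # Stand-ins for unRAID's mount points (e.g. /mnt/cache and /mnt/disk1);
    # temporary directories are used so the sketch runs anywhere.
    CACHE = Path(tempfile.mkdtemp(prefix="cache_"))
    ARRAY = Path(tempfile.mkdtemp(prefix="array_"))

    def write_to_share(name: str, data: bytes) -> None:
        """New writes land on the fast cache device, bypassing parity."""
        (CACHE / name).write_bytes(data)

    def run_mover() -> None:
        """The scheduled 'mover': copy cached files to the parity-protected
        array, then free the cache space for the next burst of writes."""
        for f in CACHE.iterdir():
            if f.is_file():
                shutil.copy2(f, ARRAY / f.name)  # this copy incurs the parity cost
                f.unlink()                       # cache space reclaimed

    write_to_share("video.mkv", b"...")
    run_mover()
    print(sorted(p.name for p in ARRAY.iterdir()))  # ['video.mkv']

In the real system the second write is the slow one, since the array copy triggers parity updates; the cache absorbs the burst so the user never waits on that step.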
With a single cache device, data captured there is at risk, as a parity device doesn’t protect it. However, you can build a cache with multiple devices, both to increase your cache capacity and to protect that data. The grouping of multiple devices in a cache is referred to as building a cache pool. The unRAID cache pool is created through a unique twist on traditional RAID 1 (a sketch of the mirroring principle follows the list below). Here are just some of the benefits:
  • Improved Data Protection – With a single cache device, there’s a possibility that you can lose your data if the device fails before the data gets moved to the array. With a cache pool, however, all write operations are replicated across two separate disks to ensure that the loss of any one drive in the pool does not cause a system outage.
  • Increased System Uptime – If a cache pool device fails, the system will continue to operate as normal. No need to drop everything to deal with a system outage. You can simply change the device when convenient.
  • Better Scalability – Add more devices of different sizes to your pool as you need to and grow on-demand.
  • Optimized for SSDs – unRAID now has native support for TRIM, which lets an SSD reclaim deleted blocks efficiently and sustain its write performance when used as a cache device. Benefits of SSDs vs. HDDs:
    • They don’t require time to ‘spin up’ or consume a lot of power to operate (they are fast and efficient);
    • They are also smaller, so you can fit more of them into a smaller space for highly compact, crazy fast storage;
    • When used for storing large quantities of smaller files (e.g., metadata), SSDs can provide a faster response time for these files to the application compared to spinning hard disks; and
    • SSDs are ideal for supporting virtual machines. The performance benefit of running a VM from an SSD is comparable to the difference a desktop user sees when moving from a spinning disk to an SSD.
  • Optimized for Virtualization – Virtual machines and applications can have their data reside on the cache pool permanently for overall improved performance, while mass-storage content kept on the array remains accessible to those virtual instances via VirtFS (for KVM Virtual Machines) and Docker (for Containers). Given the desire for snappy responsiveness in application and machine performance, using the cache pool for virtual machine/application storage is a no-brainer. Using SSDs in the cache pool extends this benefit even further.
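As promised above, here is a minimal Python sketch of the RAID-1-style mirroring that gives the pool its data protection. It models devices as in-memory dictionaries and is purely illustrative; unRAID’s pool offers more than plain RAID 1 (hence the “unique twist”), and the device names are hypothetical.

    # The mirroring principle behind the cache pool, as a RAID-1-style sketch.
    # The device names and in-memory dicts below are purely hypothetical.
    devices: dict[str, dict[str, bytes]] = {"sdb": {}, "sdc": {}}

    def mirrored_write(name: str, data: bytes) -> None:
        """Every write is replicated to every member of the pool."""
        for blocks in devices.values():
            blocks[name] = data

    def read_after_failure(name: str, failed: str) -> bytes:
        """Any surviving member can still serve the data, so losing one
        device does not cause an outage; replace it when convenient."""
        for dev, blocks in devices.items():
            if dev != failed and name in blocks:
                return blocks[name]
        raise OSError(f"{name}: no surviving replica")

    mirrored_write("appdata.img", b"vm disk bytes")
    print(read_after_failure("appdata.img", failed="sdb"))  # b'vm disk bytes'

Because every write lands on two separate disks, the failure of any single pool member leaves a complete copy intact, which is exactly the uptime benefit described in the list above.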
[Figure: unRAID Cache Pool and Array]
