Glossary


Applications and Data Storage

The nature of today's applications, combined with user expectations, means that more and more is expected of data storage infrastructures. Often, applications cannot be taken offline for backup or for data to be moved from one device to another.

SharePoint, VDI, SQL and Oracle workloads all demand fast reads and writes (I/O operations per second, or IOPS) from the storage. This means that organisations need to consider both capacity and performance when investigating storage requirements.

Tiering

Tiering is a data storage method in which data is placed on media according to the response times and performance required. For example, a heavily used application will need its data stored on high-performing disk drives, while data that only needs to be archived can be stored on tape.

If high-speed, high-capacity storage could be afforded for all data within the organisation, tiering would not be an issue. However, in many businesses this is not the case, so some form of tiering is used to balance cost and performance.

Clearly there is a wide range in between these extremes, which is where today's tiered storage infrastructures fit. These can involve two or more tiers, each with differing performance characteristics.

The mechanism by which data is "scored" and subsequently assigned to the most appropriate tier is key to making these systems work well. Because moving data between tiers can involve large quantities of data, many systems execute these transitions perhaps just once per day, minimising the negative impact on performance.
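
As a purely illustrative sketch (not any particular vendor's algorithm – the tier names, thresholds and scoring rule are all assumptions), a tiering policy might score each data extent on recent access frequency and then run a single migration pass per day:

    # Hypothetical tiering sketch: score each extent by recent access frequency
    # and assign it to a tier in a once-daily batch pass, so that bulk data
    # movement does not disturb daytime workloads.
    from dataclasses import dataclass

    TIERS = ["ssd", "fast_hdd", "archive"]   # fastest to slowest (assumed names)
    THRESHOLDS = [100, 10]                   # accesses/day needed for the faster tiers

    @dataclass
    class Extent:
        name: str
        accesses_today: int = 0
        tier: str = "fast_hdd"

    def score(extent: Extent) -> int:
        # A real array would also weight reads vs writes, recency and I/O size.
        return extent.accesses_today

    def nightly_migration(extents: list[Extent]) -> None:
        for ext in extents:
            s = score(ext)
            if s >= THRESHOLDS[0]:
                ext.tier = TIERS[0]          # hot data up to SSD
            elif s >= THRESHOLDS[1]:
                ext.tier = TIERS[1]
            else:
                ext.tier = TIERS[2]          # cold data down to the archive tier
            ext.accesses_today = 0           # start a fresh scoring window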

Hybrid Storage

Hybrid storage is a term used to describe a system that blends solid state drives (SSDs) and hard disk drives (HDDs) in order to offer high-speed data access at a low cost.

SSDs provide high IOPS but at a relatively high cost, whereas HDDs provide much lower IOPS but their lower cost means that high data capacity is affordable and practical.

Hybrid storage arrays use SSDs as a high speed tier or cache. This way the data which is most likely to be accessed is provided from the highest speed hardware. These arrays automate the process of making data available in this cache layer so that applications - and administrators - can view the array simply as a black box that delivers a defined amount of storage.

Auto or Hot Tiering

Auto-tiering represents the ability of the tiering system to automatically transition data between storage tiers based on some policy and/or algorithm. This approach minimises the complexity of administration, freeing up much valuable management time.

With the cost of SSDs (solid state drives) falling while the capacity of HDDs (hard disk drives) increases, this kind of approach is attractive. However, without careful tuning of the tiering algorithms, auto-tiering can lead to thrashing, with data being copied constantly between tiers.
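
A minimal sketch of one way to avoid that thrashing – assuming, purely for illustration, that promotion and demotion are governed by separate thresholds (hysteresis) rather than a single cut-off:

    # Hypothetical illustration: with a single threshold, data hovering around it
    # bounces between tiers on every pass; separate promote/demote thresholds
    # leave a dead band that keeps such data where it is.
    PROMOTE_AT = 100   # accesses/day needed to move up to SSD (assumed figure)
    DEMOTE_AT = 40     # must fall below this before moving back down (assumed figure)

    def next_tier(current_tier: str, accesses_per_day: int) -> str:
        if current_tier == "hdd" and accesses_per_day >= PROMOTE_AT:
            return "ssd"
        if current_tier == "ssd" and accesses_per_day < DEMOTE_AT:
            return "hdd"
        return current_tier   # anywhere in between: stay put and avoid churn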

Caching vs. Tiering

Auto-tiering is the technique used in almost all hybrid storage products. It is typically controlled by some form of policy that looks at how data has been used historically. This follows from the fact that data is typically used very frequently in its first few days, but thereafter it tends to be accessed less and can therefore afford to move to a slower storage tier.

Tiering is therefore a reactive technique: it uses historical access patterns to predict future data usage.

Caching comes in two forms – write-back and write-through caching. The cache will be some form of solid state memory, whether a small amount built into a hard drive or a large SSD.

Write-back caching takes ownership of the write, acknowledging it to the application immediately. Policy then dictates when the data is written to disk and at what stage it is removed from the cache. Write-back caching is suited to highly random workloads such as VDI (virtual desktop infrastructure).

Write-through caching writes the data to the HDD first, then copies it into the cache for future reads. Again, policy dictates the lifespan of the data in the cache. This form of caching is well suited to sequential workloads.
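
A simplified sketch of the difference – the dictionaries standing in for an SSD cache and an HDD back end are assumptions for illustration only:

    # Simplified contrast between the two caching modes described above.
    cache: dict[int, bytes] = {}   # stands in for the SSD cache
    disk: dict[int, bytes] = {}    # stands in for the HDD back end

    def write_back(block: int, data: bytes) -> None:
        cache[block] = data        # the write is acknowledged to the application here;
                                   # policy decides when the dirty block is flushed to disk

    def flush(block: int) -> None:
        disk[block] = cache[block] # the deferred write, performed later under policy control

    def write_through(block: int, data: bytes) -> None:
        disk[block] = data         # committed to the HDD first...
        cache[block] = data        # ...then kept in the cache for future reads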

Write-back caching is less common than write-through where SSDs are concerned. This is because write-back caching hits the SSD with many more writes, increasing the number of write-erase cycles and thus shortening the SSDs' life.

Both types of caching generally have lower capacities than auto-tiering, which leads to more cache misses – read requests that must go back to the HDDs for their data.

FC vs iSCSI

Fibre Channel (FC) storage is designed from the start to handle storage traffic. Seen as a high cost solution, FC delivers low latency and high throughput while enabling the application server's CPUs to handle applications rather than dealing with storage.

FC is normally implemented with dedicated Host Bus Adapters (HBAs) and switches. The HBAs take on all the processing of the Fibre Channel Protocol (FCP), which is itself dedicated solely to storage, delivering low switching latency. FC is available at speeds from 1Gbps to 20Gbps.

In addition to higher costs, FC and FCP are expensive to support over long distances. If you need to configure alternate arrays at secondary sites then you'll need fibre links.

iSCSI is a storage networking protocol built on top of TCP/IP, meaning that you don't need additional networking hardware to support this kind of storage system. This makes it comparatively inexpensive.

In terms of performance, iSCSI is slower than FC due to the overhead required to encapsulate SCSI commands within TCP/IP. Implemented properly, however, this overhead adds only a small amount of latency, which is often acceptable. In highly transactional I/O environments it may, of course, be an issue.

The server takes on more responsibility with iSCSI, handling the creation and processing of storage commands. However, where long-distance transactions are required, iSCSI's use of standard TCP/IP makes this easy and inexpensive to deploy.
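
The sketch below is conceptual only – the framing and field layout are invented for illustration and are not the real iSCSI PDU format – but it shows where the encapsulation overhead comes from: the SCSI command and its data are wrapped into a message that TCP/IP must then carry, with headers built and parsed at each end.

    # Conceptual sketch only: a simplified SCSI WRITE command being wrapped for
    # transport over TCP/IP. Real iSCSI PDUs carry a much richer header.
    import socket
    import struct

    def send_scsi_write(sock: socket.socket, lun: int, lba: int, data: bytes) -> None:
        cdb = struct.pack(">BIH", 0x2A, lba, len(data) // 512)   # simplified WRITE(10) CDB
        pdu = struct.pack(">BH", lun, len(cdb)) + cdb + data     # hypothetical framing
        sock.sendall(pdu)   # TCP/IP then adds its own headers; all this wrapping and
                            # unwrapping is the overhead referred to above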

Fibre Channel over Ethernet (FCoE) runs over Ethernet but uses its own protocol rather than TCP/IP. This delivers lower end-to-end latency, since no TCP/IP header has to be created and interpreted. However, for long distances a bridge is required to connect to the remote LAN.

Storage Virtualisation

Virtualisation is all about abstraction – adding a layer between, say, the application and the device so that many device properties that previously had to be managed no longer need to be considered.

In the case of storage, device properties that can be challenging to manage include physical space, performance, location and the magnitude of IOPS. By virtualising storage these issues can be managed in a much more practical way.

Storage virtualisation comes in two forms – file level and block level. Block-level virtualisation either replaces or enhances the existing disk controllers, presenting virtualised storage to the server's operating system. File-level virtualisation uses server-level software that determines data usage at the file level.
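
As a minimal sketch of the block-level idea (the device names and mapping table are hypothetical), the operating system sees one contiguous virtual volume while a mapping layer decides where each block physically lives:

    # Minimal sketch of block-level virtualisation: the server addresses virtual
    # blocks, and a mapping table resolves each one to a physical device and block.
    virtual_map: dict[int, tuple[str, int]] = {
        0: ("array-a", 1052),
        1: ("array-b", 77),
        2: ("array-a", 1053),
    }

    def resolve(virtual_block: int) -> tuple[str, int]:
        # The OS asks for a virtual block; the virtualisation layer finds the real one.
        device, physical_block = virtual_map[virtual_block]
        return device, physical_block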

Storage Virtualisation Methods

Host-based storage virtualisation installs software on the server which intercepts I/O requests. These are then managed by the software before being passed on to the operating system. The File Area Network (FAN) uses this technique.

Network-based storage virtualisation employs a Fibre Channel switch that sits between the storage and the server, virtualising I/O requests. The server's operating system knows nothing of this, so there is no configuration or management at the host.

Array-based storage virtualisation uses storage arrays with a master array that controls all I/O for the other arrays. Clearly this master array has to have sufficient performance to manage all I/O requests and, as a potential single point of failure, it must be made robust, probably by employing redundant arrays for resilience. The array-based approach offers the greatest benefit, with centralised management and seamless data migration.

Why Storage Virtualisation

With centralised management, better visibility, more flexibility and better utilisation of data storage devices, there are plenty of reasons to embrace storage virtualisation.

Without changing how the application operates, your data can be moved between devices, giving an apparently endless supply of storage. And thin provisioning lets you allocate much less physical storage than the application thinks it has available – giving far more flexibility and better use of finite resources.
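
A sketch of the thin provisioning idea, with the class and figures purely illustrative:

    # Thin provisioning sketch: the application is told it has a large volume,
    # but physical blocks are only consumed when they are first written.
    class ThinVolume:
        def __init__(self, advertised_blocks: int):
            self.advertised_blocks = advertised_blocks   # what the application sees
            self.allocated: dict[int, bytes] = {}        # what is physically consumed

        def write(self, block: int, data: bytes) -> None:
            if block >= self.advertised_blocks:
                raise ValueError("beyond advertised capacity")
            self.allocated[block] = data                 # allocate on first write

        def physical_usage(self) -> int:
            return len(self.allocated)                   # usually far below the advertised size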

Asynchronous or Synchronous Replication

Replication deals with automatically creating a copy of data on both a local and a replica store. The idea is that, should a disaster occur, applications can quickly swap from one store to the other with little or no data loss. Synchronous replication is the gold standard, guaranteeing that a write operation is complete only when both the local and replicated stores have committed the operation.

Asynchronous replication offers potentially higher performance through only considering the write operation complete when the local write is committed. The write to the replica store then occurs without holding up the application. The trade-off is that, should something go wrong, it is possible for data not yet committed to the replica to get lost.
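
The essential difference is the point at which the write is acknowledged to the application, sketched below (the store objects and their commit() method are placeholders, not a real replication API):

    # Contrast of when the application sees the write as complete.
    import threading

    def synchronous_write(local_store, replica_store, data) -> None:
        local_store.commit(data)
        replica_store.commit(data)   # wait for the remote store as well...
        # ...only now is the write acknowledged to the application

    def asynchronous_write(local_store, replica_store, data) -> None:
        local_store.commit(data)     # acknowledged as soon as the local write lands
        threading.Thread(target=replica_store.commit, args=(data,)).start()
        # if a disaster strikes before this background copy completes, that data is lost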

Synchronous replication generally requires a relatively short-distance, low-latency network. Asynchronous replication can occur over much longer distances, with less concern about achieving optimum latency.

The choice of asynchronous or synchronous depends on the application. If instant fail-over without data loss is required then synchronous is the choice. However if some delay and loss of transactions can be accepted, particularly in recognition of the lower cost, then asynchronous could be the way to go.

Compression and de-duplication technologies are often used with replication systems in order to improve performance while minimising the amount of data stored.

Storage Snapshot

A storage snapshot is a set of pointers that together can reference a data set at a single point in time. Why is this useful? It's a way of getting around problems with traditional backups.

With a backup, the data being backed up must not be in the process of changing. This means that either the applications have to be shut down for the duration of the backup, or some form of locking API must be used to temporarily guarantee read-only access to the data while it is backed up. This is fine for applications that do not need 24x7 access.

For applications that cannot be shut down, a snapshot provides a "backup" solution. Data remains on the disk with the snapshot simply pointing to the version at the snapshot's point in time.

Snapshots can either be "copy on write" or "split-mirror". "Copy on write" snapshots take an initial snapshot, then thereafter record only changed or new data – much like an incremental backup. This offers the quick recovery required, but every snapshot from the initial one onwards must be available.

A "split mirror" snapshot always records all data in the volume. The snapshot is therefore always the same size and recovery only requires

Backup Technologies

Tape used to be the undisputed king of backup. It is still used a great deal today, but disk is taking over.

Disk (HDD) costs have fallen dramatically while capacity has increased. Arrays of disks have much better access times than tape, and disks can be accessed randomly rather than serially. If an application has to be taken offline to perform a backup then backup speed is key, making the faster disk-based backup attractive in this kind of environment.

However, tape is cheap – it has no ongoing running costs beyond a climate-controlled environment. In addition, purchasing extra space means buying only new tapes rather than more drive hardware. That said, tape is increasingly better suited to an archive role than a backup role.

Disk to Disk (D2D)

The backup application simply backs up to a low-cost appliance or SATA disk array. This approach moves completely away from the tape model, enabling multiple streams of data to be directed to the backup device, and the recovery of single files without the need to scan an entire archive.

Using disk cartridges is possible too, enabling backups to be removed and transported to alternative sites if necessary.

Virtual Tape Library (VTL)

To gain extra speed, the VTL uses disk in place of tape. As far as the backup application is concerned it is still talking to a tape drive, but an extremely fast one.

This can be taken a step further with "disk-to-disk-to-tape" or D2D2T, where the disk is used more like a cache. D2D2T enables the backup process to be fast, while the copying from disk to tape can be done without disrupting any live applications.

Continuous data protection (CDP)

CDP, sometimes called real-time data protection, captures and records data changes in real time. Depending on the implementation and settings this can have an impact on data response times, but it does enable the recovery of data at more or less any point in time.
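
A sketch of the idea – a timestamped journal of changes that can be replayed to rebuild the data as it stood at a chosen moment (the journal structure is an assumption for illustration):

    # Continuous data protection sketch: every change is journalled with a
    # timestamp so the volume can be reconstructed at (almost) any point in time.
    import time

    journal: list[tuple[float, int, bytes]] = []   # (timestamp, block, data)

    def record_write(block: int, data: bytes) -> None:
        journal.append((time.time(), block, data))

    def recover_to(point_in_time: float) -> dict[int, bytes]:
        state: dict[int, bytes] = {}
        for ts, block, data in journal:
            if ts <= point_in_time:
                state[block] = data                # replay changes up to the chosen moment
        return state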

Deduplication

This technique can only be used on disk, where a backup set can be examined for repeating data. Where duplicates are found, only one copy of the data is stored, reducing the storage requirement.
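
A sketch of how block-level deduplication might work, using a content hash to spot repeats (the structures are illustrative, not any product's design):

    # Deduplication sketch: blocks are identified by a hash of their contents,
    # so repeated data in the backup set is stored only once.
    import hashlib

    unique_blocks: dict[str, bytes] = {}   # content hash -> stored block
    backup_refs: list[str] = []            # the backup is just a list of hashes

    def add_block(data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in unique_blocks:
            unique_blocks[digest] = data   # new data: store it once
        backup_refs.append(digest)         # a duplicate costs only a reference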

Choice of backup/recovery method

This depends largely on two factors – RPO, or recovery point objective, and RTO, or recovery time objective. RPO refers to the amount of data you can afford to lose, or how much you can realistically re-create. RTO is the maximum length of time recovery can take before the business is in trouble.

If you can afford to lose 8 hours of data and wait 8 hours for data recovery then tape backups might do the job. However if data loss over minutes would be significant then a disk-based continuous data protection solution would make more sense.
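
A trivial illustration of that reasoning, with all figures hypothetical:

    # Does a given backup scheme meet the business's RPO and RTO?
    def meets_objectives(backup_interval_hrs: float, restore_time_hrs: float,
                         rpo_hrs: float, rto_hrs: float) -> bool:
        # Worst case, the most recent backup is a full interval old (RPO),
        # and the business must wait for the whole restore (RTO).
        return backup_interval_hrs <= rpo_hrs and restore_time_hrs <= rto_hrs

    print(meets_objectives(8, 8, rpo_hrs=8, rto_hrs=8))      # 8-hourly tape backups: True
    print(meets_objectives(8, 8, rpo_hrs=0.25, rto_hrs=1))   # minutes-level RPO: False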
