Object Storage is a relatively recent development in the field of unstructured data management. Organisations that are first to adopt new technologies can reap great rewards, but innovation carries a certain amount of risk. Managed well, that risk is worth taking, and Object Storage can become integral to the operations of companies worldwide.
Object Storage stores data on the same underlying media as traditional file systems, typically hard disk or solid-state drives. Where it differs is in operation: it is a far cry from the familiar files and folders of the desktop computer, which have been the hallmark of how people have managed data since the late 1960s.
Limitations of Traditional Storage Methods
File systems operate as a tree of folders in which hidden index structures record where specific information is stored on a disk. Since the development of file-based storage, the sheer amount of data to be processed and stored has increased dramatically. As the data held in a file-based system grows, those index structures grow with it, and the time taken to traverse them slows access to every piece of data on the system.
Functionally, data stored as objects has no tiered file structure and no central index file. This allows Object Storage to sidestep most practical size limits: vast amounts of data sit in a single flat namespace, so a massive quantity of data can be held in one virtualised store. With data volumes in many organisations growing by more than 50% every year, Object Storage is well placed to unlock that value at scale. Services like YouTube, iCloud and Netflix rely heavily on this unstructured data architecture to store, and provide access to, vast amounts of data.
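The flat namespace described above can be sketched in a few lines. This is a conceptual, in-memory model, not a real client library: objects are addressed by a single key plus metadata, rather than by a path resolved through a tree of directory indexes.

```python
class ObjectStore:
    """Minimal in-memory model of a flat object namespace."""

    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # No directories exist; '/' in a key is just another character.
        self._objects[key] = (data, metadata or {})

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # Metadata lookup without fetching the object body.
        _, metadata = self._objects[key]
        return metadata


store = ObjectStore()
store.put("backups/2024/db.dump", b"...",
          {"content-type": "application/octet-stream"})
print(store.get("backups/2024/db.dump"))  # looked up by key, no tree traversal
```

Because every lookup is a single key-to-object mapping, access time does not degrade as the namespace grows the way a deep directory tree can.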
Primary Uses of Object Storage
Object Storage is primarily used for:
- Innovative new backup solutions
- Big data repositories
- Large archives
- Large static content (e.g. websites)
Dimensions of Object Storage
Object Storage is well suited to managing data lakes and other kinds of long-term unstructured data. By departing from file-based architecture, it avoids some of the fundamental limitations of older storage designs. While Object Storage excels in its big data and web-services design centre, it may not be the best choice for smaller workloads, or where performance and full compatibility with older applications are primary considerations. Object Storage excels with massive amounts of data and dominates in terms of durability, making it an industry standard in backup and recovery, but some solutions can be pricey.
Breaking data into segments and spreading them across multiple systems, combined with the use of metadata to verify and replicate that data, creates an ultra-durable storage system. Consistency can be an issue, as it takes time to connect to and update data across hosts that may be distributed around the globe. This is offset by how durable Object Storage is: with the addition of layered erasure coding, services like NetApp's StorageGRID achieve eleven nines of availability and fifteen nines of durability.
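The durability mechanism above can be illustrated with the simplest form of erasure coding: a single XOR parity fragment. This is a toy sketch only; production systems such as StorageGRID use richer schemes (e.g. Reed-Solomon) that tolerate multiple simultaneous losses. Here, any one lost fragment can be rebuilt from the others.

```python
def xor_bytes(a, b):
    # Combine two equal-length fragments byte by byte.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(fragments):
    """Return the original fragments plus one XOR parity fragment."""
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_bytes(parity, frag)
    return fragments + [parity]

def recover(fragments_with_one_missing):
    """Rebuild the single missing fragment (marked as None)."""
    present = [f for f in fragments_with_one_missing if f is not None]
    rebuilt = present[0]
    for frag in present[1:]:
        rebuilt = xor_bytes(rebuilt, frag)
    return rebuilt


data = [b"alpha", b"bravo", b"delta"]  # equal-length fragments
stored = encode(data)                  # spread across four hosts
stored[1] = None                       # one host is lost
print(recover(stored))                 # prints b'bravo'
```

Spreading the four fragments across four hosts means the loss of any single host costs no data, which is the basic trade behind the very high durability figures quoted above.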
How Object Storage is provided can also have a great impact on how effective it is for any given use. Access fees can be prohibitive, as Object Storage offerings tend to be delivered by external providers rather than internal IT teams. The price of accessing or recovering data can be high depending on the terms of service, and in some cases a full retrieval is required to recover even a small portion of the data, making the cost excessive.
CAS Platforms for Transactional Data
Object Storage was designed to exceed the capacity limitations of other storage architectures, but it should be used as one part of a broader catalogue of data management solutions. The industry has shown over the last 20 years that a storage device can be large, fast, or cheap, but rarely all three at once.
However, the industry is beginning to adopt object-compatible, flash-backed platforms that automatically tier 'cold', less-used blocks to an object store. This type of solution is becoming commonplace, with services like NetApp FabricPool being deployed across a range of data-heavy industries, especially those reliant on transactional data. Small, transactional data is kept on flash storage until it goes 'cold'; slow, archival data is then tiered down to the object store.
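The tiering policy just described can be sketched as a simple rule over last-access times. The names and the cooling period here are illustrative assumptions, not the FabricPool API: blocks untouched for longer than the cooling period are demoted from flash to the object tier.

```python
COOLING_PERIOD = 60 * 60 * 24 * 31  # e.g. 31 days in seconds; policy-dependent

def tier_blocks(blocks, now):
    """Split blocks into those kept on flash and those sent to object storage.

    `blocks` maps block id -> last access timestamp (seconds);
    `now` is the current timestamp in the same units.
    """
    flash, object_tier = [], []
    for block_id, last_access in blocks.items():
        if now - last_access > COOLING_PERIOD:
            object_tier.append(block_id)  # cold: demote to the object store
        else:
            flash.append(block_id)        # hot: keep on flash
    return flash, object_tier


now = 10_000_000
blocks = {"b1": now - 100, "b2": now - COOLING_PERIOD * 2}
print(tier_blocks(blocks, now))  # (['b1'], ['b2'])
```

The effect is that the expensive, fast tier holds only the working set, while the bulk of rarely-touched data sits on the cheaper, larger object tier.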
Object stores and Content Addressable Storage (CAS) platforms are great solutions, but be sure you understand the pros and cons of any data solution you adopt.
While Object Storage is not a 'one size fits all' solution, the architecture has fewer true limitations than older storage methods. Overcoming scalability limits is valuable in itself, and Object Storage is also seeing constant innovation. As it continues to develop and more organisations transition to the Cloud, we may see it become the standard across a far wider range of use cases and applications.