Saturday, April 5, 2014

Top Five Hot Data Storage Trends

As the speed of change in business continues to accelerate, the pressure on organizations to deliver cost-effective, high-performing IT services around the clock has never been greater. Information is an organization’s most valuable asset, and often its most costly to maintain. Users are accessing data more frequently, and from more devices, than ever before. As a result, new solutions (including flash storage and converged infrastructures) are better suited to a world where digital data creation is growing by around 50 percent per year and organizations must keep pace while staying within budget.

Today’s organizations understand that the ability to quickly store, access and analyze data, while managing it securely and cost-effectively, can mean the difference between business success and failure. Enterprises will continue to demand IT infrastructures that let them deliver quality services rapidly and efficiently.

Here are five enterprise storage trends that Dell expects to heat up this year.

Flash Storage Economics
Flash storage has been gathering momentum as use cases increase, technologies advance and, frankly, overall costs decrease. Flash storage’s ability to handle data at much faster rates than traditional spinning disk has organizations weighing performance against cost. While performance elevates flash over traditional spinning disk in the storage hierarchy, cost has remained the number one barrier to adoption – until now. According to a 2013 survey by 451 Research, hybrid flash arrays (arrays that combine flash and disk drives) are the leading choice for organizations, followed by server-side flash and all-flash, which has traditionally been the most expensive variant.

More and more, organizations are seeking vendors that break through traditional cost boundaries to deliver flash at significantly lower prices. For example, combining multiple flash drive types (e.g. MLC, or multi-level cell, and SLC, or single-level cell) with automated tiering (autonomously assigning data and applications to the most appropriate storage medium) is a proven way for customers to get all-flash performance at costs comparable to today’s disk prices.
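
To make that economics argument concrete, here is a minimal sketch of the blended cost-per-GB math behind a hybrid SLC/MLC layout. All prices and tier fractions below are illustrative assumptions, not actual list prices from any vendor or year, and blended_cost_per_gb is a hypothetical helper, not a real tool.

```python
# Illustrative sketch of blended flash economics. All prices are made-up
# placeholder figures, not actual list prices from any vendor or year.

PRICE_PER_GB = {
    "slc_flash": 4.00,  # assumed: fast, write-endurant "hot" tier
    "mlc_flash": 1.20,  # assumed: cheaper, read-optimized tier
    "disk": 0.60,       # assumed: traditional spinning disk
}

def blended_cost_per_gb(mix):
    """Weighted $/GB for capacity split across tiers.

    mix maps tier name -> fraction of total capacity; fractions sum to 1.
    """
    return sum(PRICE_PER_GB[tier] * frac for tier, frac in mix.items())

# A small SLC tier fronting a large MLC tier vs. an all-SLC array:
hybrid = blended_cost_per_gb({"slc_flash": 0.10, "mlc_flash": 0.90})
all_slc = blended_cost_per_gb({"slc_flash": 1.00})
disk = blended_cost_per_gb({"disk": 1.00})

print(f"Hybrid SLC/MLC: ${hybrid:.2f}/GB")   # 0.1*4.00 + 0.9*1.20 = $1.48/GB
print(f"All-SLC:        ${all_slc:.2f}/GB")  # $4.00/GB
print(f"Spinning disk:  ${disk:.2f}/GB")     # $0.60/GB
```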

Flash at the Server
In today’s world, consumers across the globe demand instantaneous results, and there is no better example of this need for speed than the appetite for rapid online transactions. Flash cache technology brings the most frequently accessed data closer to compute resources by placing flash on the server system bus, minimizing the trips data makes between server and storage across the network, improving response time and accelerating both read and write performance.
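
The core idea can be shown in a few lines. Below is a minimal sketch of a server-side read cache, assuming a simple LRU eviction policy and write-through behavior; the FlashReadCache class and the dict standing in for the SAN are illustrative inventions, not any product’s actual design. Integrated products like the one described next typically do far more, including write-back caching and cache coherency across servers.

```python
# Minimal sketch of a server-side read cache: frequently accessed blocks are
# served from local flash instead of traversing the network to the SAN.
# The SAN here is a plain dict standing in for remote storage.

from collections import OrderedDict

class FlashReadCache:
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # stands in for the SAN volume
        self.capacity = capacity_blocks       # local flash budget, in blocks
        self.cache = OrderedDict()            # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # cache hit: refresh recency
            return self.cache[block_id]
        data = self.backing[block_id]         # miss: fetch over the "network"
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used block
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data         # write-through keeps the SAN authoritative
        if block_id in self.cache:
            self.cache[block_id] = data       # keep the cached copy coherent

san = {n: f"block-{n}" for n in range(1000)}
cache = FlashReadCache(san, capacity_blocks=64)
print(cache.read(42))   # miss: fetched from the SAN, now cached locally
print(cache.read(42))   # hit: served from local flash
```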

While point products for flash at the server exist today, organizations will gain more value from integrated server and SAN flash technology – such as Dell’s Fluid Cache for SAN, which becomes available this year – that vastly improves response times without sacrificing availability for consumers in industries such as healthcare, finance and retail, where instant transactions can redefine the customer experience. For applications like databases, server-based flash cache can reduce data access latency by as much as 90 percent. With point products, users are forced into a siloed management approach or one that sacrifices traditional SAN data protection features such as snapshots and replication. With an integrated approach, server-side flash is managed as a “Tier 0” from the SAN, so users can treat flash in the server like another tier of storage and benefit from traditional SAN features and the cost advantages of automated tiering.

Convergence
As enterprises look toward converged infrastructures, the complexities associated with heterogeneity within their environments will be front and center. The driving force behind converged infrastructures is the opportunity to increase efficiency and agility in operations, applications and service management. The benefits go far beyond “one throat to choke” to include reduced cost of running applications, faster infrastructure deployments, simplicity and speed of management, and improved time-to-value for application and cloud deployments.

The road to convergence will be easier because organizations can choose between physical converged infrastructure offerings – where server, storage, networking and management are included in the same chassis – and a software-based management layer that aggregates customers’ heterogeneous infrastructure investments into a virtual converged infrastructure.

Software-Defined Storage – Real Trend or Hype? 
The much-discussed concept of software-defined storage (SDS) has found its way into both enterprise storage and the broader software-defined conversation. However, there is considerable debate and market confusion over the true definition of SDS, much as in the early days of defining “cloud computing.” The allure of SDS is flexibility and, more significantly, a reduction in the overall cost of storage. Vendors that manufacture both servers and storage arrays already offer SANs built on the lowest-cost industry-standard servers, with help from economies of scale. Offerings dubbed “SDS” today typically don’t provide the full-featured benefits of traditional SANs, and it’s uncommon to see these vendors provide full service on both the storage software and the hardware on which it resides. As user appetites grow and available SDS offerings mature, the real-world benefits and models for the software-defined data center will become clearer, and even more innovative solutions will emerge.

Automation – Make Your Machines Work for You
Innovative storage vendors place a significant focus on automation and easier-to-manage storage environments, not only to reduce storage complexity but also to reduce costs. Innovations like automated tiering, snapshots, virtual server and desktop integration/optimization, and deduplication and compression are all ways for organizations to apply “under-the-covers” automation that lowers overall storage costs.
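
Deduplication is the easiest of these to show in miniature. Below is a hedged sketch of block-level dedup, assuming fixed-size blocks addressed by a SHA-256 content hash; the DedupStore class is an illustrative invention that ignores real-world concerns like variable-size chunking and hash-collision handling.

```python
# Hedged sketch of block-level deduplication: identical blocks are stored
# once and referenced by their content hash. A simplification of what
# "under-the-covers" dedup engines do.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # content hash -> block bytes (stored once)
        self.files = {}     # file name -> ordered list of content hashes

    def write(self, name, data, block_size=4096):
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicates cost nothing
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

store = DedupStore()
payload = b"A" * 8192              # two identical 4 KB blocks
store.write("vm1.img", payload)
store.write("vm2.img", payload)    # a full copy, but no new blocks stored
assert store.read("vm2.img") == payload
print(f"logical: {2 * len(payload)} bytes, physical: "
      f"{sum(len(b) for b in store.blocks.values())} bytes")
```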

Automated tiering, meanwhile, helps organizations manage data when and where they need it, in the most cost-effective storage tier. Many organizations buy storage capable of more performance than they will ever consume because they aren’t confident in their ability to measure and configure precise real-time demand for performance. Storage systems with automated tiering let users leave it to the system to determine the optimal tier for data workloads. Over time, data gravitates toward the media choice that best fits its actual needs and budget considerations. Expect automated tiering to open the floodgates for flash adoption in the coming year, as blending tiers of MLC and SLC flash drives, for example, will enable customers to get all-flash performance for what they spent on spinning disks last year.
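
As a rough illustration of the mechanism, here is a minimal tiering pass, assuming placement is re-ranked periodically by recent I/O counts; the tier names, capacity shares and retier function are illustrative assumptions, not any vendor’s actual policy.

```python
# Hedged sketch of automated tiering: periodically rank blocks by recent
# access count and map the hottest data to the fastest (most expensive) tier.

TIERS = ["tier0_slc", "tier1_mlc", "tier2_disk"]  # fastest -> cheapest
TIER_SHARE = [0.05, 0.25, 0.70]                   # fraction of blocks per tier

def retier(access_counts):
    """Return block_id -> tier, hottest blocks placed on the fastest tier.

    access_counts maps block_id -> I/Os observed in the last sampling window.
    """
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement, start = {}, 0
    for tier, share in zip(TIERS, TIER_SHARE):
        end = start + max(1, round(len(ranked) * share))
        for block_id in ranked[start:end]:
            placement[block_id] = tier
        start = end
    for block_id in ranked[start:]:               # remainder lands on the cheapest tier
        placement[block_id] = TIERS[-1]
    return placement

counts = {"db_index": 9000, "db_table": 1200, "log_archive": 3, "backup": 1}
for block, tier in retier(counts).items():
    print(f"{block:12s} -> {tier}")
```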

Furthermore, automating processes such as storage provisioning, snapshots and integration with virtualization software can eliminate a substantial number of mundane IT staff hours spent implementing and managing a successful storage environment. Smart automation leads to easier-to-use systems that lower the overall total cost of storage for users.
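
As a simple example of the kind of scripted automation this describes, here is a hedged sketch of a daily snapshot job with basic retention; take_snapshot and delete_snapshot are hypothetical stand-ins for whatever API or CLI a given array exposes, and the in-memory snapshot list stands in for state the array itself would track.

```python
# Sketch of scripted snapshot automation: one snapshot per day, with
# snapshots older than the retention window expired automatically.
# take_snapshot/delete_snapshot are hypothetical, not a real array API.

from datetime import datetime, timedelta

RETENTION_DAYS = 7
snapshots = []   # in a real script this state would come from the array

def take_snapshot(volume, now):
    name = f"{volume}-{now:%Y%m%d-%H%M}"
    snapshots.append({"name": name, "created": now})
    print(f"created snapshot {name}")

def delete_snapshot(snap):
    snapshots.remove(snap)
    print(f"expired snapshot {snap['name']}")

def daily_job(volume, now=None):
    now = now or datetime.now()
    take_snapshot(volume, now)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    for snap in [s for s in snapshots if s["created"] < cutoff]:
        delete_snapshot(snap)   # expire anything outside the window

# Simulate ten daily runs; older snapshots age out automatically.
start = datetime(2014, 4, 5, 1, 0)
for day in range(10):
    daily_job("sales_db", now=start + timedelta(days=day))
print([s["name"] for s in snapshots])
```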

There’s a lot of storage innovation in the days ahead. The continued data explosion and advancements in technology will keep driving momentum in these key areas as storage users demand innovative solutions to cost-effectively keep pace.

-------------------------------------------------------------------------
Source: ALAN ATKINSON - http://www.greendatacenternews.org/articles/share/695257/