Power Efficiency Measurement – Our Experts Make It Clear – Part 4

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us for the final installment of our journey to better power efficiency – Part 4: Impact of Storage Architectures on Power Efficiency Measurement.

And if you missed our earlier segments, click on the titles to read them: Part 1: Key Issues in Power Efficiency Measurement, Part 2: Impact of Workloads on Power Efficiency Measurement, and Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives.  Bookmark this blog series and explore the topic further in the SNIA Green Storage Knowledge Center.

Impact of Storage Architectures on Power Efficiency Measurement

Ultimately, the interplay between hardware and software storage architectures can have a substantial impact on power consumption. Optimizing these architectures based on workload characteristics and performance requirements can lead to better power efficiency and overall system performance.

Different hardware and software storage architectures can lead to varying levels of power efficiency. Here’s how they impact power consumption.

Hardware Storage Architectures

  1. HDDs vs SSDs:
    Solid State Drives (SSDs) are generally more power-efficient than Hard Disk Drives (HDDs) due to their lack of moving parts and faster access times. SSDs consume less power during both idle and active states.
  2. NVMe® vs SATA SSDs:
    NVMe (Non-Volatile Memory Express) SSDs often have better power efficiency compared to SATA SSDs. NVMe’s direct connection to the PCIe bus allows for faster data transfers, reducing the time components need to be active and consuming power. NVMe SSDs also support multiple power states, allowing performance to be balanced against power draw.
  3. Tiered Storage:
    Systems that incorporate tiered storage with a combination of SSDs and HDDs optimize power consumption by placing frequently accessed data on SSDs for quicker retrieval and minimizing the power-hungry spinning of HDDs.
  4. RAID Configurations:
    Redundant Array of Independent Disks (RAID) setups can affect power efficiency. RAID levels like 0 (striping) and 1 (mirroring) may have different power profiles due to how data is distributed and mirrored across drives.
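The duty-cycle arithmetic behind these device comparisons can be sketched in a few lines of Python. The power figures below are hypothetical placeholders chosen for illustration, not measurements; real values vary widely by model and should come from vendor datasheets or direct measurement.

```python
# Hypothetical active/idle power figures (watts) for illustration only.
DEVICE_POWER = {
    "hdd": {"active": 9.0, "idle": 5.5},
    "ssd": {"active": 5.0, "idle": 0.05},
}

def energy_wh(device: str, active_hours: float, idle_hours: float) -> float:
    """Estimate energy in watt-hours for a simple active/idle duty cycle."""
    p = DEVICE_POWER[device]
    return p["active"] * active_hours + p["idle"] * idle_hours

# A drive that is active 2 hours and idle 22 hours per day:
hdd = energy_wh("hdd", 2, 22)
ssd = energy_wh("ssd", 2, 22)
print(f"HDD: {hdd:.1f} Wh/day, SSD: {ssd:.1f} Wh/day")
```

Even with invented numbers, the sketch shows why idle-state draw dominates for mostly-idle drives, which is where SSDs gain much of their advantage.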

Software Storage Architectures

  1. Compression and Deduplication:
    Storage systems using compression and deduplication techniques can affect power consumption. Compressing data before storage can reduce the amount of data that needs to be read and written, potentially saving power.
  2. Caching:
    Caching mechanisms store frequently accessed data in faster storage layers, such as SSDs. This reduces the need to access power-hungry HDDs or higher-latency storage devices, contributing to better power efficiency.
  3. Data Tiering:
    Similar to caching, data tiering involves moving data between different storage tiers based on access patterns. Hot data (frequently accessed) is placed on more power-efficient storage layers.
  4. Virtualization:
    Virtualized environments can lead to resource contention and inefficiencies that impact power consumption. Proper resource allocation and management are crucial to optimizing power efficiency.
  5. Load Balancing:
    In storage clusters, load balancing ensures even distribution of data and workloads. Efficient load balancing prevents overutilization of certain components, helping to distribute power consumption evenly.
  6. Thin Provisioning:
    Allocating storage on demand rather than pre-allocating can lead to more efficient use of storage resources, which indirectly affects power efficiency.
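To make the caching and data-tiering ideas above concrete, here is a minimal sketch of access-frequency-based tier placement. The block names and the hot threshold are invented for illustration; production tiering engines use far richer heuristics (recency, sequentiality, capacity budgets).

```python
from collections import Counter

def plan_tiers(access_log, hot_threshold=10):
    """Place blocks accessed >= hot_threshold times on the SSD tier
    and the rest on the HDD tier. A toy model of frequency-based tiering."""
    counts = Counter(access_log)
    ssd = {b for b, n in counts.items() if n >= hot_threshold}
    hdd = set(counts) - ssd
    return ssd, hdd

# 'blk0' is hot; 'blk1' and 'blk2' are cold.
log = ["blk0"] * 12 + ["blk1"] * 3 + ["blk2"]
ssd_tier, hdd_tier = plan_tiers(log)
print(sorted(ssd_tier), sorted(hdd_tier))
```

The power angle: the more of the hot set that lands on the SSD tier, the longer the HDD tier can stay spun down or idle.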

Just What is an IOTTA?  Inquiring Minds Learn Now!

SNIA’s twelve Technical Work Groups collaborate to develop and promote vendor-neutral architectures, standards, and education for the management, movement, and security of technologies related to handling and optimizing data. One of the more distinctive work groups is the SNIA Input/Output Traces, Tools, and Analysis Technical Work Group (IOTTA TWG).

SNIA Compute, Memory, and Storage Initiative recently sat down with IOTTA TWG Chairs Geoff Kuenning of Harvey Mudd College and Tom West of hyperI/O LLC to learn about some exciting new developments in their work activities and how SNIA members and colleagues can get involved.

Q: What does the IOTTA TWG do?

A: The IOTTA TWG is for those interested in the use of empirical data/metrics to better understand the actual operation and performance characteristics of storage I/O, especially as they pertain to application workloads. We summarize our work in this SNIA video: https://www.youtube.com/watch?v=4EVW5IHHhEk

One of our most important activities is to sponsor a collaborative worldwide repository for storage-related I/O trace collection and analysis tools, application workloads, I/O traces, and best practices around such topics.

Q: What are the goals of the IOTTA Repository collaboration?

A: The primary goal of the IOTTA Repository collaboration is to create a worldwide repository for storage related I/O trace files, associated tools, and other related information, all of which are made available free of charge to the storage research and development communities in both academia and industry.

Repository data is often cited in research publications, with 627 citations to date listed on the IOTTA Repository website.

Q: Why is keeping and sharing information by way of a Repository important?

A: The IOTTA Repository provides a common facility through which a broad community (including storage vendors, storage users, and the academic community) can avail themselves of a variety of storage related I/O traces (especially contemporary I/O traces). We like to think of it as a “One-Stop-Shop”.

Q: What kind of information are you gathering for the Repository?  Is some information more important than others?

A: The Repository contains a wide variety of storage related I/O trace types, including Block I/O, HPC Summaries, Key-Value Traces, NFS Traces, Parallel Traces, Static Snapshots, System Call Traces, and Workload Summaries.

Reliability Traces are the latest category of traces added to the IOTTA Repository. Generally, the Reliability Traces category includes records of storage system reliability, for example, long-term records of hard-drive failures.

The IOTTA Repository additionally provides an off-site link to traces that cannot be included directly within the repository (e.g., unable to obtain permission to host a particular trace within the repository).

Q: Who downloads this information? What groups can make use of this information?

A: Academic institutions are among the most frequent downloaders of Repository information, along with storage companies.

Practitioners can make use of various IOTTA Repository traces to gain a better understanding of actual I/O storage operation activity within various environments and scenarios.  Traces can also be used as a basis for benchmarking and testing proposed solutions.
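As an illustration of the kind of analysis such traces support, the sketch below summarizes a toy block I/O trace in Python. The CSV layout here is invented for this example; actual IOTTA traces come in a variety of formats, documented per trace on the repository site.

```python
import csv
import io

# Invented sample trace: timestamp, op (R/W), offset (bytes), size (bytes).
SAMPLE = """ts,op,offset,size
0.001,R,4096,4096
0.002,R,8192,8192
0.003,W,0,4096
"""

def summarize(trace_text):
    """Tally operation counts and bytes moved from a CSV block I/O trace."""
    reads = writes = read_bytes = write_bytes = 0
    for row in csv.DictReader(io.StringIO(trace_text)):
        size = int(row["size"])
        if row["op"] == "R":
            reads += 1
            read_bytes += size
        else:
            writes += 1
            write_bytes += size
    return {"reads": reads, "writes": writes,
            "read_bytes": read_bytes, "write_bytes": write_bytes}

stats = summarize(SAMPLE)
print(stats)
```

Summaries like this are often the first step before feeding a trace into a replay tool or benchmark harness.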

SNIA IOTTA TWG members receive a monthly report that shows the number and types (i.e., trace names) of the traces downloaded during the month, including the downloader region (e.g., Asia, Europe, North America). The report also includes company/institution names associated with the downloaders. More information on joining the IOTTA TWG is at http://iotta.snia.org/faqs/joinIOTTA.

Q: What is some of the latest information in the Repository?

A: In February 2024, we posted NVMe drive reliability traces collected by Alibaba. The collection includes both fail-stop and fail-slow data for a large drive population in Alibaba’s servers.

Q: What is the importance of these traces?

A: The authors of the associated USENIX ATC 2022 paper indicate that the Alibaba Fail-Stop dataset is the first large-scale public dataset of real-world operational data for NVMe SSDs.  From their analysis of the dataset, they identified a series of major reliability changes in NVMe SSDs.

In addition, the authors of the associated USENIX FAST 2023 paper indicate that the Alibaba Fail-Slow dataset is the first large-scale, clear-labeled public dataset on real-world operational traces aiming at fail-slow detection (i.e., where the drive continues to run but with poor performance). Based upon the dataset, the authors have provided a root cause analysis on fail-slow drives.

With the growing importance of NVMe SSDs in the data center, it is critical to understand the reliability of hardware in the cloud.  The Repository provides the traces download and also links to the papers and presentation videos that discuss these large-scale SSD reliability studies.

Q: What new activity would you like to see in the Repository?

A: We’d like to see more trace downloads for analysis.  Most downloads today are related to benchmarking and replay.  Trace activity could also feed into a simulated computer system to test behaviors such as failure handling.

We would also like to see more input of data related to tape storage. The Repository does not have much information on cold storage and multilevel storage between hot and cold storage.

Finally, we would like feedback on how people are using what they download – for analysis, reliability, benchmarks and other areas they have found the downloads useful. We also want to know what else you would like to be able to download.  You can contact us directly at iottachairs@snia.org.

Thanks for your time and the great information about the IOTTA Repository.  Learn more about the IOTTA Repository on their FAQ page.

Power Efficiency Measurement – Our Experts Make It Clear – Part 2

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us on our journey to better power efficiency as we continue with Part 2: Impact of Workloads on Power Efficiency Measurement.  And if you missed Part 1: Key Issues in Power Efficiency Measurement, you can find it here.  Bookmark this blog and check back in March and April for the continuation of our four-part series. And explore the topic further in the SNIA Green Storage Knowledge Center.

Part 2: Impact of Workloads on Power Efficiency Measurement

Workloads are a significant driving force behind power consumption in computing systems. Different tasks and applications place diverse demands on hardware, leading to fluctuations in the amount of power used. Here’s a breakdown of how workloads can influence power consumption:

  • CPU Utilization. The CPU’s power consumption increases as it processes tasks, with more demanding workloads that involve complex calculations or multitasking leading to higher CPU utilization and, consequently, elevated power usage.
  • Memory Access is another key factor. Accessing memory modules consumes power, and workloads that heavily rely on frequent memory read and write operations can significantly contribute to increased power consumption.
  • Disk Activity, particularly read and write operations on storage devices (whether HDDs or SSDs), consumes power. Workloads that involve frequent data access or large file transfers can lead to an uptick in power consumption.
  • GPU Usage plays a crucial role, especially in tasks like gaming, video editing, and machine learning. High GPU utilization for rendering complex graphics or training deep neural networks can result in substantial power consumption.
  • Network Communication tasks, such as data transfers, streaming, or online gaming, require power from both the CPU and the network interface. The extent of communication and data throughput can significantly affect overall power usage.
  • In devices equipped with displays, Screen Brightness directly impacts power consumption. Brighter screens consume more power, which means workloads involving continuous display usage contribute to higher power consumption.
  • I/O Operations encompass interactions with peripherals like storage devices or printers. These operations can lead to short bursts of power consumption, especially if multiple devices are connected.
  • Understanding the contrast between Idle and Active States is essential. Different workloads can transition devices between these states, with idle periods generally exhibiting lower power consumption. However, certain workloads may keep components active even during seemingly idle times.
  • Dynamic Voltage and Frequency Scaling are prevalent in many systems, allowing them to adjust the voltage and frequency of components based on workload demands. Increased demand leads to higher clock speeds and voltage, ultimately resulting in more significant power consumption.
  • Background Processes also come into play. Background applications, updates, and system maintenance tasks can impact power consumption, even when the user isn’t actively engaging with the device.
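The Dynamic Voltage and Frequency Scaling point above can be made concrete with the classic CMOS dynamic-power model, P = C·V²·f: because voltage rises along with frequency, power grows faster than performance. The capacitance, voltage, and frequency values below are arbitrary illustrative numbers, not figures for any real processor.

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Classic CMOS dynamic power model: P = C_eff * V^2 * f."""
    return c_eff * voltage**2 * freq_hz

# Illustrative operating points: a base state and a boosted state.
base = dynamic_power(1e-9, 0.9, 2.0e9)
boost = dynamic_power(1e-9, 1.1, 3.0e9)
print(f"{boost / base:.2f}x power for 1.5x frequency")
```

This superlinear relationship is why DVFS-aware workload scheduling can save disproportionate amounts of energy for modest performance trade-offs.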

In practical terms, comprehending how various workloads affect power consumption is vital for optimizing energy efficiency. For instance, laptops can extend their battery life by reducing screen brightness, closing unnecessary applications, and selecting power-saving modes.

Moreover, SSDs are designed with optimizations for background processes in mind. Garbage collection and NAND Flash cell management often occur during idle periods or periods of low-impact workloads.

Likewise, data centers and cloud providers strategically manage workloads to minimize energy consumption and operational costs while upholding performance standards.

Feedback Needed on New Persistent Memory Performance White Paper

A new SNIA Technical Work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) focuses on describing the relationship between traditional block-IO NVMe SSD-based storage and the migration to Persistent Memory block- and byte-addressable storage.

The PM PTS WP reviews the history and need for storage performance benchmarking beginning with Hard Disk Drive corner case stress tests, the increasing gap between CPU/SW/HW Stack performance and storage performance, and the resulting need for faster storage tiers and storage products. 

The PM PTS WP discusses the introduction of NAND Flash SSD performance testing that incorporates pre-conditioning and steady state measurement (as described in the SNIA Solid State Storage PTS), the effects of – and need for testing using – Real World Workloads on Datacenter Storage (as described in the SNIA Real World Storage Workload PTS for Datacenter Storage), the development of the NVM Programming model, the introduction of PM storage and the need for a Persistent Memory PTS.

The PM PTS focuses on the characterization, optimization, and test of persistent memory storage architectures – including 3D XPoint, NVDIMM-N/P, DRAM, Phase Change Memory, MRAM, ReRAM, STRAM, and others – using both synthetic and real-world workloads. It includes test settings, metrics, methodologies, benchmarks, and reference options to provide reliable and repeatable test results. Future tests would use the framework established in the first tests.

The SNIA PM PTS White Paper targets storage professionals involved with: 

  1. Traditional NAND Flash based SSD storage over the PCIe bus;
  2. PM storage utilizing PM aware drivers that convert block IO access to loads and stores; and
  3. Direct In-memory storage and applications that take full advantage of the speed and persistence of PM storage and technologies. 
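The difference between block-style access (category 1) and byte-addressable load/store access (categories 2 and 3) can be sketched in Python. An ordinary temporary file stands in for persistent memory here; on real PM hardware, a DAX-mounted filesystem would bypass the page cache, but the programming-model contrast is the same.

```python
import mmap
import os
import tempfile

# Create a 4 KiB backing file standing in for a PM region.
path = os.path.join(tempfile.mkdtemp(), "pm.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Block-style access: whole buffers through the file read/write API.
with open(path, "r+b") as f:
    f.write(b"block")

# Byte-addressable access: individual bytes via memory-mapped
# loads and stores, no explicit read/write calls.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:4] = b"byte"        # a "store"
        first = bytes(m[0:4])   # a "load"
print(first)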

The PM PTS WP discussion on the differences between byte and block addressable storage is intended to help professionals optimize application and storage technologies and to help storage professionals understand the market and technical roadmap for PM storage.

Eden Kim, chair of the SNIA Solid State Storage TWG and a co-author, explained that SNIA is seeking comment from Cloud Infrastructure, IT, and Data Center professionals looking to balance server and application loads, integrate PM storage for in-memory applications, and understand how response time and latency spikes are being influenced by applications, storage and the SW/HW stack. 

The SNIA Solid State Storage Technical Work Group (TWG) has published several papers on performance testing and real-world workloads, and the SNIA PM PTS White Paper includes both synthetic and real-world workload tests.  The authors are seeking comment on the PM PTS WP from industry professionals, researchers, academics, and other interested parties, and welcome anyone interested in participating in development of the PM PTS.

Use the SNIA Feedback Portal to submit your comments.

Judging Has Begun – Submit Your Entry for the NVDIMM Programming Challenge!

We’re 11 months into the Persistent Memory Hackathon program, and over 150 software developers have taken the tutorial and tried their hand at programming persistent memory systems.   AgigA Tech, Intel, SMART Modular, and Supermicro, members of the SNIA Persistent Memory and NVDIMM SIG, have now placed persistent memory systems with NVDIMM-Ns into the SNIA Technology Center as the backbone of the first SNIA NVDIMM Programming Challenge.

Interested in participating?  Send an email to PMhackathon@snia.org to get your credentials.  And do so quickly, as the first round of review for the SNIA NVDIMM Programming Challenge is now open.  Read More

Your Questions Answered – Now You Can Be a Part of the Real World Workload Revolution!

The SNIA Solid State Storage Initiative would like to thank everyone who attended our webcast: How To Be Part of the Real World Workload Revolution.  If you haven’t seen it yet, you can view the on demand version here.  You can find the slides here.

Eden Kim and Jim Fister led a discussion on the testmyworkload (TMW) tool and data repository, discussing how a collection of real-world workload data captures can revolutionize design and configuration of hardware, software and systems for the industry.   A new SNIA white paper available in both English and Chinese authored by Eden Kim, with an introduction by Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis, discusses how we can all benefit by sharing traces of our digital workloads through the SNIA SSSI Real-World Workload Capture program.

Read More

Calling All Real-World Workloads

Video streaming is an easy-to-understand workload from the I/O perspective, right?  It’s pretty obvious that it’s a workload heavy on long, streaming reads. The application can be modeled with a consistent read flow, and the software tests should be easy.  However, an analysis of the real-world workload shows something very different. At the disk level, the reads turn out to be a rapid flow of 4k and 8k block reads from a solid-state disk.  Further, other processes on the system add a small amount of 4k and 8k writes in the midst of the reads. All of this impacts the application (and the SSD), which was likely tested primarily on the basis of long, streaming reads.
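A minimal sketch of how such a capture might be summarized, with invented sample events rather than real trace data, shows how quickly the true request-size mix surfaces:

```python
from collections import Counter

def size_histogram(io_events):
    """Tally request sizes from (op, size_bytes) trace events to reveal
    the real block-level pattern behind a 'streaming' workload."""
    return Counter(f"{op} {size // 1024}k" for op, size in io_events)

# A toy slice of a captured video-streaming trace: mostly small reads,
# with a few interleaved writes from background processes.
events = [("R", 4096)] * 5 + [("R", 8192)] * 3 + [("W", 4096)] * 2
hist = size_histogram(events)
print(hist)
```

A histogram like this is exactly the kind of evidence that separates the assumed "long sequential read" model from what the device actually sees.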

Understanding the real-world characteristics of a workload can be a significant advantage in the development of new hardware, new systems, and new applications.   The SNIA Solid State Storage Initiative (SSSI) and SSSI member company Calypso Systems are providing an opportunity to build a repository of workloads for the industry to use for real-world testing, as outlined in a new SSSI white paper How to Be a Part of the Real-World Workload Revolution. This paper is also available in Chinese at the SSSI Knowledge Center White Papers page.

By going to the TestMyWorkload site, anyone can participate by providing a trace capture of an I/O workload that can be used by others to develop better products. The capture itself traces the block transfers, but does not capture actual data.  Any workload replay would use representative blocks, so there are no concerns about data security or integrity from these captures.

The repository can be used by any participant to test hardware and software, and can help system vendors and users optimize configurations for the best performance based on real-world data.  By participating in this effort, organizations and individuals can provide insight and gain from the knowledge of all the contributors.

Follow these three steps to be a part of the revolution today!

1.  Read the white paper.

2.  Download the free capture tools at TestMyWorkload.com.

3. Mark your calendar and register HERE to learn more in the free SNIA webcast How to Be a Part of the Real-World Workload Revolution on July 9 at 11:00 am Pacific/2:00 pm Eastern.

Your Questions Answered on Non-Volatile DIMMs


by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

SNIA’s Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to their most recent webcast: NVDIMM: Applications are Here!  You can view the webcast on demand.

Viewers had many questions during the webcast.  In this blog, the NVDIMM SIG answers those questions and shares the SIG’s knowledge of NVDIMM technology. Read More

How Many IOPS? Users Share Their 2017 Storage Performance Needs

New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis which details IT manager requirements for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need. Read More