30 Speakers Highlight AI, Memory, Sustainability, and More at the May 21-22 Summit!

SNIA Compute, Memory, and Storage Summit is where solutions, architectures, and community come together. Our 2024 Summit – taking place virtually on May 21-22, 2024 – is the best example to date, featuring a stellar lineup of 30 speakers in sessions on artificial intelligence, the future of memory, sustainability, critical storage security issues, the latest on CXL®, UCIe™, and Ultra Ethernet, and more.

“We’re excited to welcome executives, architects, developers, implementers, and users to our 12th annual Summit,” said David McIntyre, Compute, Memory, and Storage Summit Chair and member of the SNIA Board of Directors. “Our event features technology leaders from companies like Dell, IBM, Intel, Meta, Samsung – and many more – to bring us the latest developments in AI, compute, memory, storage, and security in our free online event.  We hope you will attend live to ask questions of our experts as they present and watch those you miss on-demand.”

Artificial intelligence sessions sponsored by the SNIA Data, Networking & Storage Forum feature J Michel Metz of the Ultra Ethernet Consortium (UEC) on powering AI’s future with the UEC, John Cardente of Dell on storage requirements for AI, Jeff White of Dell on edgenuity, and Garima Desai of Samsung on creating a sustainable semiconductor industry for the AI era. Other AI sessions include Manoj Wadekar of Meta on the evolution of hyperscale data centers from CPU-centric to GPU-accelerated AI, Paul McLeod of Supermicro on storage architecture optimized for AI, and Prasad Venkatachar of Pliops on generative AI data architecture.

Memory sessions begin with Jim Handy and Tom Coughlin on how memories are driving big architectural changes. Ahmed Medhioub of Astera Labs will discuss breaking through the memory wall with CXL, and Sudhir Balasubramanian and Arvind Jagannath of VMware will share their memory vision for real-world applications.

Compute sessions include Andy Walls of IBM on computational storage and real-time ransomware detection, JB Baker of ScaleFlux on computational storage real-world deployments, Dominic Manno of Los Alamos National Labs on streamlining scientific workflows with computational storage, and Bill Martin and Jason Molgaard of the SNIA Computational Storage Technical Work Group on computational storage standards.

CXL will be featured with a CXL Consortium panel on increasing AI and HPC application performance with CXL fabrics, a presentation from Larrie Carr of Rambus on proprietary interconnects and CXL, and a session from Samsung and Broadcom on bringing unique customer value with CXL accelerator-based memory solutions.

Richelle Ahlvers and Brian Rea of the UCIe Consortium will discuss enabling an open chiplet ecosystem with UCIe.

The Summit will also dive into security with a number of presentations on this important topic.

And there is much more, including a memory Birds-of-a-Feather session, a live Memory Workshop and Hackathon featuring CXL exercises, and opportunities to chat with our experts! Check out the agenda and register for free!

Power Efficiency Measurement – Our Experts Make It Clear – Part 4

Measuring power efficiency in datacenter storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier!

In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization.

Join us for the final installment of our journey to better power efficiency – Part 4: Impact of Storage Architectures on Power Efficiency Measurement.

And if you missed our earlier segments, click on the titles to read them:  Part 1: Key Issues in Power Efficiency Measurement,  Part 2: Impact of Workloads on Power Efficiency Measurement, and Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives.  Bookmark this blog series and explore the topic further in the SNIA Green Storage Knowledge Center.

Impact of Storage Architectures on Power Efficiency Measurement

Ultimately, the interplay between hardware and software storage architectures can have a substantial impact on power consumption. Optimizing these architectures based on workload characteristics and performance requirements can lead to better power efficiency and overall system performance.

Different hardware and software storage architectures can lead to varying levels of power efficiency. Here’s how they impact power consumption.

Hardware Storage Architectures

  1. HDDs vs SSDs:
    Solid State Drives (SSDs) are generally more power-efficient than Hard Disk Drives (HDDs) due to their lack of moving parts and faster access times. SSDs consume less power during both idle and active states (a back-of-the-envelope comparison follows this list).
  2. NVMe® vs SATA SSDs:
    NVMe (Non-Volatile Memory Express) SSDs often have better power efficiency compared to SATA SSDs. NVMe’s direct connection to the PCIe bus allows for faster data transfers, reducing the time components need to be active and consuming power. NVMe SSDs are also performance optimized for different power states.
  3. Tiered Storage:
    Systems that incorporate tiered storage with a combination of SSDs and HDDs optimize power consumption by placing frequently accessed data on SSDs for quicker retrieval, minimizing the power-hungry spinning of HDDs.
  4. RAID Configurations:
    Redundant Array of Independent Disks (RAID) setups can affect power efficiency. RAID levels like 0 (striping) and 1 (mirroring) may have different power profiles due to how data is distributed and mirrored across drives.
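To make the device-level comparison concrete, here is a minimal sketch of the core efficiency arithmetic – work delivered per watt consumed. All device figures are illustrative assumptions for the sketch, not measured data:

```python
# Back-of-the-envelope power-efficiency comparison in IOPS per watt.
# All device figures below are illustrative assumptions, not measurements.

devices = {
    # name: (random-read IOPS, active power in watts, idle power in watts)
    "7200 RPM HDD": (200, 7.0, 5.0),
    "SATA SSD": (90_000, 4.0, 1.2),
    "NVMe SSD": (700_000, 8.5, 2.0),
}

def iops_per_watt(iops: int, active_watts: float) -> float:
    """Efficiency metric: operations delivered per watt consumed."""
    return iops / active_watts

for name, (iops, active_w, idle_w) in devices.items():
    print(f"{name:>12}: {iops_per_watt(iops, active_w):>9,.0f} IOPS/W "
          f"(idle draw: {idle_w} W)")
```

Note that even though a high-end NVMe SSD may draw more active power than an HDD, it delivers orders of magnitude more operations per watt, which is why power efficiency is assessed as work per unit of energy rather than raw power draw.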

Software Storage Architectures

  1. Compression and Deduplication:
    Storage systems using compression and deduplication techniques can affect power consumption. Compressing data before storage can reduce the amount of data that needs to be read and written, potentially saving power.
  2. Caching:
    Caching mechanisms store frequently accessed data in faster storage layers, such as SSDs. This reduces the need to access power-hungry HDDs or higher-latency storage devices, contributing to better power efficiency.
  3. Data Tiering:
    Similar to caching, data tiering involves moving data between different storage tiers based on access patterns. Hot data (frequently accessed) is placed on more power-efficient storage layers (a minimal policy sketch follows this list).
  4. Virtualization:
    Virtualized environments can lead to resource contention and inefficiencies that impact power consumption. Proper resource allocation and management are crucial to optimizing power efficiency.
  5. Load Balancing:
    In storage clusters, load balancing ensures even distribution of data and workloads. Efficient load balancing prevents overutilization of certain components, helping to distribute power consumption evenly.
  6. Thin Provisioning:
    Allocating storage on-demand rather than pre-allocating can lead to more efficient use of storage resources, which indirectly affects power efficiency.
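To illustrate the caching and tiering ideas above, here is a minimal sketch of a frequency-based placement policy. The access log, tier names, and hot-data threshold are illustrative assumptions, not part of any SNIA specification:

```python
# Minimal sketch of a frequency-based tiering policy: "hot" blocks are
# placed on the faster, more power-efficient SSD tier; "cold" blocks stay
# on the HDD capacity tier. All names and numbers here are illustrative.
from collections import Counter

HOT_THRESHOLD = 3  # accesses per observation window; a tunable policy knob

access_log = ["blk7", "blk7", "blk2", "blk7", "blk9", "blk2", "blk7"]
access_counts = Counter(access_log)

placement = {
    block: "ssd-tier" if count >= HOT_THRESHOLD else "hdd-tier"
    for block, count in access_counts.items()
}

for block in sorted(placement):
    print(f"{block}: {access_counts[block]} accesses -> {placement[block]}")
```

A real system would also demote data as it cools and account for the energy cost of the migrations themselves, but the power win comes from the same principle: serve frequent accesses from the most efficient tier.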

Open Standards Featured at FMS 2023

SNIA welcomes colleagues to join them at the upcoming Flash Memory Summit, August 8-10, 2023 in Santa Clara, CA.

SNIA is pleased to join standards organizations CXL Consortium™ (CXL™), PCI-SIG®, and Universal Chiplet Interconnect Express™ (UCIe™) in an Open Standards Pavilion, Booth #725, in the Exhibit Hall.  The SNIA Compute, Memory, and Storage Initiative (CMSI) will feature SNIA member companies in a computational storage cross-industry demo by Intel, MinIO, and Solidigm and a Data Filtering demo by ScaleFlux; a software memory tiering demo by VMware; a persistent memory workshop and hackathon; and the latest work on SSD form factors E1 and E3 by the SNIA SFF TA Technical Work Group. The SNIA Storage Management Initiative (SMI) will showcase SNIA Swordfish® management of NVMe SSDs on Linux with demos by Intel, Samsung, and Solidigm.

CXL will discuss their advances in coherent connectivity.  PCI-SIG will feature their PCIe 5.0 (32 GT/s) and PCIe 6.0 (64 GT/s) architectures and industry adoption, as well as development of the upcoming PCIe 7.0 specification (128 GT/s).  UCIe will discuss their new open industry standard establishing a universal interconnect at the package level.

SNIA STA Forum will also be in Booth #849 – learn more about the SCSI Trade Association joining SNIA.

These demonstrations and discussions will augment FMS program sessions in the SNIA-sponsored System Architecture Track on memory, computational storage, CXL, and UCIe standards.  A SNIA mainstage session on Wednesday August 9 at 2:10 pm will discuss Trends in Storage and Data: New Directions for Industry Standards.

SNIA colleagues and friends can receive a $100 discount off the 1-, 2-, or 3-day full conference registration by using code SNIA23.

Visit snia.org/fms to learn more about the exciting activities at FMS 2023 and join us there!

So just what is an SSD?

It seems like an easy enough question, “What is an SSD?” but surprisingly, most of the search results for this question quickly become confused about media, controllers, form factors, storage interfaces, performance, reliability, and different market segments.

The SNIA SSD SIG has spent time demystifying various SSD topics like endurance, form factors, and the different classifications of SSDs – from consumer to enterprise and hyperscale SSDs.

“Solid state drive is a general term that covers many market segments, and the SNIA SSD SIG has developed a new overview of ‘What is an SSD?’,” said Jonmichael Hands, SNIA SSD Special Interest Group (SIG) Co-Chair. “We are committed to helping make storage technology topics, like endurance and form factors, much easier to understand coming straight from the industry experts defining the specifications.”

The “What is an SSD?” page offers a concise description of what SSDs do, how they perform, how they connect, and also provides a jumping-off point for more in-depth clarification of the many aspects of SSDs. It joins an ever-growing category of 20 one-page “What Is?” answers that provide a clear, concise, vendor-neutral definition of often-asked technology terms, a description of what they are, and how each of these technologies works.  Check out all the “What Is?” entries at https://www.snia.org/education/what-is

And don’t miss other topics of interest from the SNIA SSD SIG, including the Total Cost of Ownership Model for Storage and SSD videos and presentations in the SNIA Educational Library.

Your comments and feedback on this page are welcome.  Send them to askcmsi@snia.org.

Your Questions Answered on Persistent Memory, CXL, and Memory Tiering

With the persistent memory ecosystem continuing to evolve with new interconnects like CXL™ and applications like memory tiering, our recent Persistent Memory, CXL, and Memory Tiering – Past, Present, and Future webinar was a big success.  If you missed it, watch it on demand HERE!

Many questions were answered live during the webinar, but we did not get to all of them.  Our moderator Jim Handy from Objective Analysis, and experts Andy Rudoff and Bhushan Chithur from Intel, David McIntyre from Samsung, and Sudhir Balasubramanian and Arvind Jagannath from VMware have taken the time to answer them in this blog. Happy reading!

Q: What features or support are required from a CXL-capable endpoint, e.g. an accelerator, to support memory pooling? Any references?

A: You will have two interfaces, one for the primary memory accesses and one for the management of the pooling device. The primary memory interface is CXL.mem, and the management interface will be via CXL.io or via a sideband interface. In addition, you will need to implement a robust failure-recovery mechanism, since the blast radius is much larger with memory pooling.

Q: How do you recognize weak information security (in CXL)?

A: CXL has multiple features around security and there is considerable activity around this in the Consortium.  For specifics, please see the CXL Specification or send us a more specific question.

Q: If the system (e.g. x86 host) wants to deploy CXL memory (Type 3) now, is there any OS kernel configuration or BIOS configuration needed to make the hardware run with VMware (ESXi)? How easy or difficult is this setup process?

A: A simple CXL Type 3 Memory Device providing volatile memory is typically configured by the pre-boot environment and reported to the OS along with any other main memory.  In this way, a platform that supports CXL Type 3 Memory can use it without any additional setup and can run an OS that contains no CXL support; the memory will appear as memory belonging to another NUMA node.  That said, using an OS that does support CXL enables more complex management, error handling, and more complex CXL devices.
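On Linux, for example, such a device typically surfaces as a CPU-less NUMA node. Here is a minimal sketch, using standard sysfs paths, that lists NUMA nodes and flags the memory-only ones; whether a given node is actually CXL-backed is an inference this sketch cannot confirm:

```python
# Minimal sketch: list Linux NUMA nodes and flag CPU-less (memory-only)
# nodes, which is how a CXL Type 3 volatile-memory device typically
# appears to the OS. Requires a Linux host with sysfs mounted.
import glob
import os

for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_path)
    with open(os.path.join(node_path, "cpulist")) as f:
        cpulist = f.read().strip()
    # An empty cpulist marks a memory-only node; whether it is CXL-backed
    # is an inference the script cannot verify by itself.
    kind = "memory-only (possibly CXL)" if not cpulist else "CPU + memory"
    print(f"{node}: cpus=[{cpulist or 'none'}] -> {kind}")
```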

Q: There was a question on “Hop” length. Would you clarify?

A: In the webinar around minute 48, it was stated that a Hop was 20ns, but this is not correct. A Hop is often spoken of as “around 100ns.”  The Microsoft Azure Pond paper quantifies it four different ways, which range from 85ns to 280ns.

Q: Do we have any idea how much longer the latency will be?  

A: The language CXL folks use is “Hops.”  An address going into CXL is one Hop, and data coming back is another.  In a fabric it would be twice that, or four Hops.  The latency for a Hop is somewhere around 100ns, although other figures are also cited.
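Putting those numbers together, here is a quick worked example of the added round-trip latency under the hop assumptions above (85-280ns per Hop, with ~100ns typical):

```python
# Worked example of the hop arithmetic described above. Per-hop figures
# use the ~100 ns typical value and the 85-280 ns range quoted from the
# Microsoft Azure Pond paper.
HOP_LOW_NS, HOP_TYPICAL_NS, HOP_HIGH_NS = 85, 100, 280

for label, hops in [("Direct-attached round trip", 2), ("Through a fabric", 4)]:
    print(f"{label}: {hops} Hops -> ~{hops * HOP_TYPICAL_NS} ns typical "
          f"({hops * HOP_LOW_NS}-{hops * HOP_HIGH_NS} ns range)")
```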

Q: For memory semantic SSD:  There appears to be a trend among 2LM device vendors to presume the host system will be capable of providing telemetry data for a device-side tiering mechanism to decide what data should be promoted and demoted.  Meanwhile, software vendors seem to be focused on the devices providing telemetry for a host-side tiering mechanism to tell the device where to move the memory.  What is your opinion on how and where tiering should be enforced for 2LM devices like a memory semantic SSD?

A: Tiering can be managed both by the host and within computational storage drives that could have an integrated compute function to manage local tiering – think edge applications.

Q: Re VM performance in tiering: It appears you’re comparing the performance of two VMs against one.  It looked like the performance of each individual VM on the tiering system was slower than the DRAM-only VM.  Can you explain why we should weigh the performance of two VMs against the one VM?  Is the proposal that we otherwise would have required those two VMs to run on separate NUMA nodes, and now they’re running on the same NUMA node?

A: Here the use case was lower TCO and increased memory capacity, along with aggregate VM performance, versus running fewer VMs on DRAM alone. In this use case, the DRAM per NUMA node was 384GB and the Tier2 memory per NUMA node was 768GB. The VM RAM was 256GB.

In the DRAM-only case, if we have to run business-critical workloads, e.g. Oracle with VM RAM = 256GB, we could only run one VM (256GB) per NUMA node (DRAM = 384GB); we cannot over-provision memory in the DRAM-only case, as every NUMA node has 384GB only. So potentially we could run four such VMs (VM RAM = 256GB) with NUMA node affinity set, as we did in this use case, or perhaps five such VMs without NUMA node affinity, without completely maxing out the server RAM.  Remember, we set NUMA node affinity in this use case to eliminate any cross-NUMA latency.

Now with Tier2 memory in the mix, each NUMA node has 384GB DRAM and 768GB Tier2 memory, so theoretically one could run 16-17 such VMs (VM RAM = 256GB). Hence we are able to increase resource utilization, run more workloads, increase transactions, and so on – lower TCO, increased capacity, and an aggregate performance improvement.
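As a quick back-of-the-envelope check of those counts (the per-node sizes come from the answer above; the four-NUMA-node server is an assumption inferred from the four-VM DRAM-only figure):

```python
# Back-of-the-envelope check of the VM counts quoted above. Per-node
# sizes come from the answer; NUM_NODES = 4 is an assumption inferred
# from the "four VMs with affinity" DRAM-only figure.
DRAM_PER_NODE_GB = 384
TIER2_PER_NODE_GB = 768
VM_RAM_GB = 256
NUM_NODES = 4  # assumption

dram_only_vms = (DRAM_PER_NODE_GB // VM_RAM_GB) * NUM_NODES
tiered_vms = ((DRAM_PER_NODE_GB + TIER2_PER_NODE_GB) * NUM_NODES) // VM_RAM_GB

print(f"DRAM only, with NUMA affinity: {dram_only_vms} VMs")  # 4
print(f"DRAM + Tier2 memory:           {tiered_vms} VMs")     # 18 theoretical
```

The theoretical ceiling of 18 sits just above the 16-17 quoted above, which leaves some headroom rather than completely maxing out the server RAM.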

Q: CXL is changing very fast – we have seen three protocol versions in two years. As a new consumer of CXL, what are the top three advantages of adopting CXL right away versus waiting for a couple more years?

A: All versions of CXL are backward compatible.  Users should have no problem using today’s CXL devices with newer versions of CXL, although they won’t be able to take advantage of any new features that are introduced after the hardware is deployed.

Q: (What is the) ideal when using Agilex FPGAs as accelerators?

A: CXL 3.0 supports multiple accelerators via the CXL switching fabric. This is good for memory sharing across heterogeneous compute accelerators, including FPGAs.

Thanks again for your support of SNIA education, and we invite you to write askcmsi@snia.org for your ideas for future webinars and blogs!

It’s A Wrap – But Networking and Education Continue From Our C+M+S Summit!

Our 2023 SNIA Compute+Memory+Storage Summit was a success! The event featured 50 speakers in 40 sessions over two days. Over 25 SNIA member companies and alliance partners participated in creating content on computational storage, CXL™ memory, storage, security, and UCIe™. All presentations and videos are free to view at www.snia.org/cms-summit.

“For 2023, the Summit scope expanded to examine how the latest advances within and across compute, memory and storage technologies should be optimized and configured to meet the requirements of end customer applications and the developers that create them,” said David McIntyre, Co-Chair of the Summit.  “We invited our SNIA Alliance Partners Compute Express Link™ and Universal Chiplet Interconnect Express™ to contribute to a holistic view of application requirements and the infrastructure resources that are required to support them,” McIntyre continued.  “Their panel on the CXL device ecosystem and usage models and presentation on UCIe innovations at the package level along with three other sessions on CXL added great value to the event.”

Thirteen computational storage presentations covered what is happening in NVMe™ and SNIA to support computational storage devices and define new interfaces with computational storage APIs that work across different hardware architectures.  New applications for high performance data analytics, discussions of how to integrate computational storage into high performance computing designs, and new approaches to integrate compute, data and I/O acceleration closely with storage systems and data nodes were only a few of the topics covered.

“The rules by which the memory game is played are changing rapidly and we received great feedback on our nine presentations in this area,” said Willie Nelson, Co-Chair of the Summit.  “SNIA colleagues Jim Handy and Tom Coughlin always bring surprising conclusions and opportunities for SNIA members to keep abreast of new memory technologies, and their outlook was complemented by updates on SNIA standards on memory-to-memory data movement and on JEDEC memory standards; presentations on thinking memory, fabric attached memory, and optimizing memory systems using simulations; a panel examining where the industry is going with persistent memory, and much more.”

Additional highlights included an EDSFF panel covering the latest SNIA specifications that support these form factors, sharing an overview of platforms that are EDSFF-enabled, and discussing the future for new product and application introductions; a discussion on NVMe as a cloud interface; and a computational storage detecting ransomware session.

New to the 2023 Summit – and continuing to get great views – was a “mini track” on Security, led by Eric Hibbard, chair of the SNIA Storage Security Technical Work Group with contributions from IEEE Security Work Group members, including presentations on cybersecurity, fine grain encryption, storage sanitization, and zero trust architecture.

Co-Chairs McIntyre and Nelson encourage everyone to check out the video playlist and send your feedback to askcmsi@snia.org. The “Year of the Summit” continues with networking opportunities at the upcoming SmartNIC Summit (June), Flash Memory Summit (August), and SNIA Storage Developer Conference (September).  Details on all these events and more are at the SNIA Event Calendar page.  See you soon!

50 Speakers Featured at the 2023 SNIA Compute+Memory+Storage Summit

SNIA’s Compute+Memory+Storage Summit is where architectures, solutions, and community come together. Our 2023 Summit – taking place virtually on April 11-12, 2023 – is the best example to date, featuring a stellar lineup of 50 speakers in 40 sessions covering topics including computational storage real-world applications, the future of memory, critical storage security issues, and the latest on SSD form factors, CXL™, and UCIe™.

“We’re excited to welcome executives, architects, developers, implementers, and users to our 11th annual Summit,” said David McIntyre, C+M+S Summit Co-Chair, and member of the SNIA Board of Directors.  “We’ve gathered the technology leaders to bring us the latest developments in compute, memory, storage, and security in our free online event.  We hope you will watch live to ask questions of our experts as they present, and check out those sessions you miss on-demand.”

Memory sessions begin with Watch Out – Memory’s Changing! where Jim Handy and Tom Coughlin will discuss the memory technologies vying for the designer’s attention, with CXL™ and UCIe™ poised to completely change the rules. Speakers will also cover thinking memory, optimizing memory using simulations, providing capacity and TCO to applications using software memory tiering, and fabric attached memory.

Compute sessions include Steven Yuan of StorageX discussing the Efficiency of Data Centric Computing, and presentations on the computational storage and compute market, big-disk computational storage arrays for data analytics, NVMe as a cloud interface, improving storage systems for simulation science with computational storage, and updates on SNIA and NVM Express work on computational storage standards.

CXL and UCIe will be featured with presentations on CXL 3.0 and Universal Chiplet Interconnect Express™ On-Package Innovation Slot for Compute, Memory, and Storage Applications.

The Summit will also dive into security with an introductory view of today’s storage security landscape and additional sessions on zero trust architecture, storage sanitization, encryption, and cyber recovery and resilience.

For 2023, the Summit is delighted to present three panels – one on Exploring the Compute Express Link™ (CXL™) Device Ecosystem and Usage Models moderated by Kurtis Bowman of the CXL Consortium, one on Persistent Memory Trends moderated by Dave Eggleston of Microchip, and one on Form Factor Updates, moderated by Cameron Brett of the SNIA SSD Special Interest Group.

We will also feature the popular SNIA Birds-of-a-Feather sessions. On Tuesday April 11 at 4:00 pm PDT/7:00 pm EDT, you can join to discuss the latest compute, memory, and storage developments, and on Wednesday April 12 at 3:00 pm PDT/6:00 pm EDT, we’ll be talking about memory advances.

Learn more in our Summit preview video, check out the agenda, and register for free to access our Summit platform!

“Year of the Summit” Kicks Off with Live and Virtual Events

For 11 years, SNIA Compute, Memory and Storage Initiative (CMSI) has presented a Summit featuring industry leaders speaking on the key topics of the day.  In the early years, it was persistent memory-focused, educating audiences on the benefits and uses of persistent memory.  In 2020 it expanded to a Persistent Memory+Computational Storage Summit, examining that new technology, its architecture, and use cases.

Now in 2023, the Summit is expanding again to focus on compute, memory, and storage.  In fact, we’re calling 2023 the Year of the Summit – a year to get back to meeting in person and offering a variety of ways to listen to leaders, learn about technology, and network to discuss innovations, challenges, solutions, and futures.

We’re delighted that our first event of the Year of the Summit is a networking event at MemCon, taking place March 28-29 at the Computer History Museum in Mountain View, CA.

At MemCon, SNIA CMSI member and IEEE President-elect Tom Coughlin of Coughlin Associates will moderate a panel discussion on Compute, Memory, and Storage Technology Trends for the Application Developer.  Panel members Debendra Das Sharma of Intel and the CXL™ Consortium, David McIntyre of Samsung and the SNIA Board of Directors, Arthur Sainio of SMART Modular and the SNIA Persistent Memory Special Interest Group, and Arvind Jagannath of VMware and SNIA CMSI will examine how applications and solutions available today offer ways to address enterprise and cloud provider challenges – and they’ll provide a look to the future.

SNIA leaders will be on hand to discuss work in computational storage, the Smart Data Accelerator Interface (SDXI), SSD form factor advances, and persistent memory trends.  Share a libation or two at the SNIA-hosted networking reception on Tuesday evening, March 28.

This inaugural MemCon event is perfect to start the conversation, as it focuses on the intersection between systems design, memory innovation (emerging memories, storage & CXL) and other enabling technologies. SNIA colleagues and friends can register for MemCon with a 15% discount using code SNIA15.

April 2023 Networking!

We will continue the Year with a newly expanded SNIA Compute+Memory+Storage Summit coming up April 11-12 as a virtual event.  Complimentary registration is now open for a stellar lineup of speakers, including Stephen Bates of Huawei, Debendra Das Sharma of Universal Chiplet Interconnect Express™, Jim Handy of Objective Analysis, Shyam Iyer of Dell, Bill Martin of Samsung, Jake Oshins of Microsoft, Andy Rudoff of Intel, Andy Walls of IBM, and Steven Yuan of StorageX.

Summit topics include Memory’s Headed for Change, High Performance Data Analytics, CXL 3.0, Detecting Ransomware, Meeting Scaling Challenges, Open Standards for Innovation at the Package Level, and Standardizing Memory to Memory Data Movement. Great panel discussions are on tap as well.  Kurt Lender of the CXL Consortium will lead a discussion on Exploring the CXL Device Ecosystem and Usage Models, Dave Eggleston of Microchip will lead a panel with Samsung and SMART Modular on Persistent Memory Trends, and Cameron Brett of KIOXIA will lead an SSD Form Factors Update.   More details at www.snia.org/cms-summit.

Later in 2023…

Opportunities for networking will continue throughout 2023. We look forward to seeing you at the SmartNIC Summit (June 13-15), Flash Memory Summit (August 8-10), SNIA Storage Developer Conference (September 18-21), OCP Global Summit (October 17-19), and SC23 (November 12-17). Details on SNIA participation coming soon!

Our Storage Life on the Edge Webcast Series Continues….

The second webcast in our Storage Life on the Edge series is coming up on March 22, 2022 at 10:00 am Pacific time.  This panel, moderated by Bill Martin, SNIA Compute, Memory, and Storage Initiative Chair, takes a deeper dive to focus on edge use cases in the computational storage space.

Our panelists Mayank Saxena from Samsung, Stephen Bates from Eideticom, and Tong Zhang from ScaleFlux will discuss edge to cloud use cases where storage and compute resources need to be deployed in practical topologies that deliver the very best in application performance. They’ll examine high performance edge data needs, database acceleration solutions, meeting retail chain challenges, and more. You won’t want to miss their panel discussion and the chance to ask your questions live.

Register here to attend. We’ll look forward to seeing you!

Why Cryptocurrency and Computational Storage?

Our new SNIA Compute, Memory, and Storage webcast focuses on a hot topic – storage-based cryptocurrency.

Blockchains, cryptocurrency, and the internet of markets are working to transform finance, wealth, safety, digital security, and trust. Storage-based cryptocurrencies had a breakout year in 2021. Proof of Space and Time is a new blockchain consensus that uses storage capacity to secure the blockchain. Decentralized file storage will enable alternatives to hyperscale data centers for hosting files and objects. Understanding the TCO of a storage system and optimizing the utilization of the storage hardware is critical in scaling these systems.

Join our speakers, Jonmichael Hands of Chia Network and Eli Tiomkin of NGD Systems, for this discussion on how a new approach of auto-plotting SSDs combined with computational storage can lower TCO. Registration is free for this webcast on Tuesday, February 15 at 10:00 am Pacific time. Click on the link to register and see you there! https://www.brighttalk.com/webcast/663/526154