
    NVM Big at Storage Developer Conference SDC Precon

    September 19th, 2015

I’ll be speaking at SNIA’s SDC Pre-Conference this Sunday, Sept 20, about the new Intel-Micron 3D XPoint memory.  I was surprised to find that my talk won’t be unique.  There are about 15 papers at this conference that will be discussing NVM, or persistent memory.

    What’s all this fuss about?

    Part of it has to do with the introduction by Micron & Intel of their 3D XPoint (pronounced “Crosspoint”) memory.  This new product will bring nonvolatility, or persistence, to main memory, and that’s big!

Intel itself will present a total of seven papers to tell us all how it envisions this technology being used in computing applications.  Seven other companies, in addition to Objective Analysis (my company), will also discuss this hot new topic.

    SNIA is really on top of this new trend.  This organization has been developing standards for nonvolatile memory for the past couple of years, and has published an NVM Programming Model to help software developers produce code that will communicate with nonvolatile memory no matter who supplies it.  Prior to SNIA’s intervention the market was wildly inconsistent, and all suppliers’ NVDIMMs differed slightly from one another, with no promise that this would become any better once new memory technologies started to make their way onto memory modules.

    Now that Intel and Micron will be producing their 3D XPoint memory, and will be supplying it on industry-standard DDR4 DIMMs, it’s good to know that there will be a standard protocol to communicate with it.  This will facilitate the development of standard software to harness all that nonvolatile memory has to offer.

    As for me, I will be sharing information from my company’s new report on the Micron-Intel 3D XPoint memory.  This is new, and it’s exciting.  Will it succeed?  I’ll discuss that with you there.

    Free Booklet: How SSD Controllers Maximize SSD Life

    February 5th, 2013

SNIA’s SSSI has introduced a new booklet: How Controllers Maximize SSD Life.  This 20-page volume, which can be downloaded as a free PDF from the SNIA website, is a compilation of a series of posts on The SSD Guy blog.

    The booklet explains the tricks SSD controller designers use to extend NAND flash life far beyond the limits posed by NAND endurance specifications of 10,000 or fewer erase/write cycles.
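The arithmetic behind that concern is simple.  Here’s a back-of-envelope lifetime model (my own simplification, not taken from the booklet) showing why the write-amplification factor a controller achieves matters so much:

```python
# Back-of-envelope SSD lifetime estimate (a simplified model; the
# booklet's techniques -- wear leveling, over-provisioning, write
# coalescing -- all work to raise endurance or lower the
# write-amplification factor, WAF).

def ssd_lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day, waf):
    """Years until the rated program/erase cycles are exhausted."""
    total_writable_gb = capacity_gb * pe_cycles        # raw NAND endurance
    nand_writes_per_day = host_writes_gb_per_day * waf # what the NAND sees
    return total_writable_gb / nand_writes_per_day / 365

# A 256GB drive rated at 10,000 cycles with 20GB/day of host writes:
ideal    = ssd_lifetime_years(256, 10_000, 20, waf=1.0)  # perfect controller
mediocre = ssd_lifetime_years(256, 10_000, 20, waf=3.0)  # 3x amplification
print(ideal, mediocre)
```

The gap between the two results is exactly the kind of difference a well-designed controller makes.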

    The series was written by this blogger with important help from companies like Intel, SMART Storage, Marvell, and Calypso Systems.

    Hard copies are available through the SNIA Solid State Storage Initiative.

    Introducing SNIA’s Workload I/O Capture Program

    January 17th, 2013

SNIA’s Solid State Storage Initiative (SSSI) recently rolled out its new Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications’ I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.).
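The metrics involved are all simple deltas of per-drive counters sampled over an interval.  Here’s a minimal illustration of the arithmetic (my own sketch with made-up field names, not WIOCP code):

```python
# Derive WIOCP-style metrics from two snapshots of a drive's raw I/O
# counters (field names are illustrative, not from the actual tool).

def io_metrics(before, after, elapsed_s):
    ops = (after["reads"] - before["reads"]) + (after["writes"] - before["writes"])
    xfer_bytes = after["bytes"] - before["bytes"]
    busy_ms = after["io_time_ms"] - before["io_time_ms"]
    return {
        "IOPS": ops / elapsed_s,
        "MB/s": xfer_bytes / elapsed_s / 1e6,
        "avg response (ms)": busy_ms / ops if ops else 0.0,
    }

# Two snapshots of one drive's counters, ten seconds apart:
before = {"reads": 1_000, "writes": 500,   "bytes": 40_000_000,  "io_time_ms": 900}
after  = {"reads": 2_500, "writes": 1_000, "bytes": 120_000_000, "io_time_ms": 2_100}
print(io_metrics(before, after, elapsed_s=10))
```

Note that nothing here looks at file names or data contents, which is how a tool like this can avoid collecting anything sensitive.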

    The WIOCP helps users to identify “Hot Spots” where storage performance is creating bottlenecks.  SNIA hopes that users will help the association to collect real-use statistics on workloads by uploading their results to the SNIA website.

    Using this information SNIA member companies will be able to improve the performance of their solid state storage solutions, including SSDs and flash storage arrays.

    How it Works

The WIOCP software is a safe and thoroughly tested tool that runs unobtrusively in the background to constantly capture a large set of SSD and HDD I/O metrics that are useful both to the computer user and to SNIA.

Users simply enter the drive letters for the drives whose I/O operations metrics are to be collected.  The program does not record anything that might be sensitive, such as details of your actual workload (for example, the files you’ve accessed).  Results are presented in clear and accessible report formats.

    How can WIOCP Help You?

    Users can collect (and optionally display in real time) information reflecting their current environment and operations with the security of a tool delivered with digital authentication for their protection.

    The collected I/O metrics will provide information useful to evaluate an SSD system environment.

Statistics from a wide range of applications will be collected, and can be used with the SSS Performance Test Specification to help users determine which SSD should perform best for them.

    How can Your Participation Help SNIA and the SSSI?

    The WIOCP provides unique, raw information that can be analyzed by SNIA’s Technical Work Groups (TWGs) including the IOTTA TWG to gain insights into workload characteristics, key performance metrics, and SSD design tradeoffs.

The collected data from all participants will be aggregated and made publicly available for download and analysis.  No personally identifiable information is collected – participants will benefit from this information pool without compromising their privacy or confidentiality.

    Downloading the WIOCP

    Help SNIA get started on this project by clicking HERE and using the “Download Key Code”: SSSI52kd9A8Z.

    The WIOCP tool will be delivered to your system with a unique digital signature.  The tool only takes a few minutes to download and initialize, after which users can return to the task at hand!

    If you have any questions or comments, please contact: SSSI_TechDev-Chair@SNIA.org

    Quick PTS Implementation

    November 11th, 2011

Need an abbreviated version of the SNIA SSD Performance Test Specification (PTS) in a hurry?  Jamon Bowen of Texas Memory Systems (TMS) whipped up a simple implementation of certain key parts of the PTS that can be run on a Linux system and interpreted in Excel.

    It’s a free download on his Storage Tuning blog.

This is a boon for anyone who wants to run an internal preliminary test before pursuing a more formal route.

The bash script uses the Flexible I/O tester (fio) to run through part of the SSSI PTS.  Fio does the heavy lifting, and the script manages it.  The script outputs comma-separated value (CSV) data, and the download includes an Excel pivot table that helps format the results and select the measurement window.

    Since this is a bare-bones implementation the SSD must be initialized manually before the test script is run.

The test runs the IOPS Test from the PTS.  This test covers a range of block sizes and read/write ratios, iterating until the device reaches steady state (with a maximum of 25 iterations).  Altogether the test takes over a day to run.
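The structure of that loop is easy to sketch.  Something like the following generates the fio command matrix for one round, using the PTS IOPS test’s block sizes and read/write mixes and standard fio flags (Mr. Bowen’s actual script may differ in its settings):

```python
# Sketch of the PTS IOPS test loop: each round sweeps the full
# block-size / read-write-mix matrix with fio, and rounds repeat
# (up to 25) until the measured IOPS settle into the PTS
# steady-state window.  Device path is a placeholder.

BLOCK_SIZES = ["1024k", "128k", "64k", "32k", "16k", "8k", "4k", "512"]
READ_MIXES  = [100, 95, 65, 50, 35, 5, 0]   # percent reads
MAX_ROUNDS  = 25

def fio_cmd(dev, bs, read_pct, runtime_s=60):
    """Build one fio invocation for a single cell of the matrix."""
    return ["fio", "--name=pts-iops", f"--filename={dev}",
            "--rw=randrw", f"--rwmixread={read_pct}", f"--bs={bs}",
            "--ioengine=libaio", "--direct=1", "--time_based",
            f"--runtime={runtime_s}", "--output-format=terse"]

round_one = [fio_cmd("/dev/sdb", bs, mix)
             for bs in BLOCK_SIZES for mix in READ_MIXES]
print(len(round_one))   # 8 block sizes x 7 mixes = 56 cells per round
```

At 56 cells per round, a minute or so per cell, and up to 25 rounds, the total adds up to roughly a day, which squares with the runtime quoted above.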

    Once the test is complete, the downloadable pivot tables allow users to select the steady-state measurement window and report the data in a recommended format.

    See Mr. Bowen’s blog at http://storagetuning.wordpress.com/2011/11/07/sssi-performance-test-specification/ for details on this valuable download.

    PCs: Better Boost from Flash than DRAM!

    July 19th, 2011

    Objective Analysis has just published a new study with a somewhat surprising finding – that PCs get a bigger performance improvement by adding a dollar’s worth of NAND flash than by adding a dollar’s worth of DRAM.

    This finding is the result of a series of nearly 300 benchmarks in which the company tested PCs with a variety of DRAM and NAND flash sizes running industry-standard benchmarks: PCMark, SYSmark, HDxPRT, and others.

In a nutshell, the benchmarks showed that, dollar for dollar, NAND yields a greater performance improvement to a PC than DRAM does.  Once PC users and OEMs discover this phenomenon there should be a mass migration of PC architectures to systems with paired storage (there’s a SNIA Webcast on this), perhaps in hybrid HDDs, and this will present difficulties to DRAM makers, whose biggest market is the PC.

Oddly enough, the study shows that the HDD is likely to remain in PCs for some time to come, since well-designed DRAM-flash-HDD configurations perform nearly as fast as DRAM-SSD systems, at prices and capacities similar to those of a conventional DRAM-HDD system.  Future PC users are likely to opt for adding NAND flash, rather than DRAM, to their systems when they upgrade.

    The report is available for purchase at http://Objective-Analysis.com/Reports.html#DRAM-NAND.

    Comments and questions are more than welcome.

    Kaminario – A New Name for Solid State Storage

    June 28th, 2010

    An Israeli start-up named Kaminario is attacking Texas Memory Systems’ home turf with a DRAM SSD that offers speeds as fast as 1.5 million IOPS.  While TMS has built itself a comfortable niche business using custom hardware, Kaminario’s K2 SSD, announced on June 16, is made using standard off-the-shelf low-profile blade servers from Dell.  Only the software is proprietary.

    DRAM SSDs are an interesting product that serves niches which flash SSDs are unlikely to penetrate.  Objective Analysis’ new report on Enterprise SSDs explores the price and speed dynamics that separate these two technologies.  See the Objective Analysis Reports page for more information.

Some of the K2’s internal servers are dedicated to handling I/O, and are called “io Directors.”  The performance of the storage system scales linearly with the number of io Directors used – a pair of io Directors provides 300K IOPS, and ten io Directors will support 1.5M IOPS.  Below the io Directors are other servers called “Data Nodes,” which manage the storage.  Capacity scales linearly with the addition of Data Nodes.  Today’s limit is 3.5TB, but this number will increase over time.
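Those quoted figures work out to roughly 150K IOPS per io Director.  A trivial sanity check on the linear-scaling claim:

```python
# K2 performance scales linearly with io Directors, per the figures
# Kaminario quotes: a pair delivers 300K IOPS, ten deliver 1.5M.
IOPS_PER_DIRECTOR = 150_000

def k2_iops(n_directors):
    return n_directors * IOPS_PER_DIRECTOR

print(k2_iops(2))    # 300,000 -- matches the two-director figure
print(k2_iops(10))   # 1,500,000 -- matches the ten-director figure
```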

Redundancy is a key feature of the Kaminario K2: there are at least two of every device (io Directors, Data Nodes, and HDDs per Data Node), since the DRAM-based data is written to HDDs in the event of an unexpected power failure.  The system can communicate with the host through a range of interfaces, with FCoE offered at introduction.

    Kaminario’s K2 boasts a significantly smaller footprint and price tag than HDD-based systems with competing IOPS levels.

    To find out more about Kaminario visit Kaminario.com

    Violin Memory wants to Replace your Storage Array

    June 21st, 2010

    Violin Memory introduced their 3000 series memory appliance in mid-May.  This million-plus-IOPS device piles 10-20 terabytes of NAND flash storage into a single 3U cabinet at a price that Violin’s management claims is equivalent to that of high-end storage arrays.

    The system, introduced at $20/GB, or $200,000, is intended to provide enough storage at a low enough price to eliminate any need to manage hot data into and out of a limited number of small solid state drives.  Instead, Violin argues, the appliance’s capacity is big enough and cheap enough that an entire database can be economically stored within it, giving lightning-fast access to the entire database at once.

    Note that Violin acquired Gear6 a month later, in mid-June.  This seems to reveal that the company is hedging its bets, taking advantage of a distressed caching company’s expertise to assure a strong position in architectures based upon a smaller memory appliance managed by caching software.

    There is a good bit of detail about how and why both of these approaches make sense in Objective Analysis’ newest Enterprise SSD report.  See the Objective Analysis Reports page for more information.

But in regard to the 3000 series, CIOs whose databases are even larger than 10TB will be comforted to hear that Violin will be introducing appliances with as much as 60TB of storage by year-end.

Violin’s 3000 series can be configured through a communications module to support nearly any interface: Fibre Channel, 10Gb Ethernet, FCoE, or PCIe, with Violin offering to support “Even InfiniBand, if asked.”  Inside are 84 modules, each built of a combination of DRAM and both SLC and MLC NAND flash, configured to assure data and pathway redundancy.

    This high level of redundancy and fault management is one of Violin’s hallmarks.

    Violin’s website is Violin-Memory.com

    Nimbus: No Fast HDDs

    June 16th, 2010

San Francisco’s Nimbus Data Systems launched a solid state storage system in late April that is intended to replace all of the HDDs in a system except the slow disks used in near-line storage.  Nimbus holds the view that solid state drives eliminate the need for fast disk storage, and that in the future all data centers will be built using only SSDs for speed and capacity drives (slow HDDs) for mass storage.  This viewpoint is gaining a growing following.

Nimbus’ S-Class Enterprise Flash Storage System uses a proprietary 6Gb/s SAS flash module, rather than off-the-shelf SSDs, to keep the costs of its systems low.  Storage capacity is 2.5-5.0TB per 2U enclosure, and can be scaled up to 100TB.  Throughput is claimed to be 500K IOPS through 10Gb Ethernet connections.  Prices are roughly $8/GB.

Although Nimbus previously sold systems based on a mix of SSDs and HDDs, the company has moved away from using HDDs, and expects data center managers to adopt this new approach.

    There’s merit to this argument, but it will probably take a few years before CIOs agree on the role of NAND flash vs. enterprise HDDs vs. capacity HDDs in the data center. There’s a lot more detail on the approaches being considered for flash in the enterprise data center in Objective Analysis’ new Enterprise SSD report.  See the Objective Analysis Reports page for more information.

    You can find out more at  NimbusData.com

    New Article: Solid State Drives for Energy Savings

    June 7th, 2010

A new article, co-authored by Tom Coughlin and me, can now be read on the SNIA Europe website.  “Solid State Drives for Energy Savings” explains the energy benefits that IT managers are discovering as they bring SSDs into their data centers.

The article is a quick two-pager, and it introduces SNIA’s new Total Cost of Ownership (TCO) Calculator, a clever tool that helps estimate the power, rack space, and other savings that come along with converting fast storage from enterprise HDDs to SSDs.
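The savings flow from a handful of inputs: drive count, watts per drive, rack units occupied, and purchase price.  Here’s a toy version of the same kind of arithmetic (all numbers invented for illustration; SNIA’s calculator uses many more inputs):

```python
# Toy TCO comparison of a fast-storage tier built from enterprise
# HDDs vs. one built from SSDs.  All figures are made up for
# illustration -- this is not SNIA's actual calculator.

def tier_cost(n_drives, watts_each, price_each, drives_per_u,
              usd_per_kwh=0.10, usd_per_u_year=200, years=3):
    power   = n_drives * watts_each / 1000 * 24 * 365 * years * usd_per_kwh
    space   = -(-n_drives // drives_per_u) * usd_per_u_year * years  # ceil
    capital = n_drives * price_each
    return capital + power + space

# Hypothetical: 40 fast HDDs vs. 8 SSDs delivering the same IOPS.
hdd = tier_cost(40, watts_each=15, price_each=400,  drives_per_u=12)
ssd = tier_cost(8,  watts_each=6,  price_each=1200, drives_per_u=12)
print(hdd, ssd)
```

Even with a higher price per drive, the SSD tier comes out ahead once power and rack space are counted, which is the point the TCO Calculator makes far more rigorously.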

[Update: After clicking the above link, it will be necessary to download the April 2010 edition of Storage Networking Times in order to read the article.]

    SSDs Strong at MySQL Conference

    April 21st, 2010

    The MySQL Conference, a gathering of programmers who share database strategies, was held in the Santa Clara Convention Center this week.  One hot topic was SSDs.

    My favorite session was hosted by Fusion-io.  They rounded up four satisfied customers who discussed how Fusion-io SSDs had benefited them.

Craigslist found that in their systems the RAID card was their most frequent point of failure.  That doesn’t help much when the whole point of a RAID system is to prevent failures!  Although the company is only halfway through its first SSD deployment, it is seeing power reductions of two-thirds while its tests run surprisingly faster than they had on RAID systems.  SSDs are allowing Craigslist to move to two instances of MySQL on a single server, with two ioDrives on each server.  The bottleneck has moved from storage to the software.

Cloudmark, a net security firm that blocks spam for over 1 billion in-boxes, started with a system that consumed 48U of servers and storage, and their needs ballooned to the point that they worried they couldn’t keep pace.  By adding SSDs they were able to reduce their systems to 17U, shedding 180 drives along the way.  Their CPUs now operate at maximum capacity, something they had never seen before.  Cloudmark’s present system runs 22U of servers with 22 ioDrives in a RAID configuration that is significantly cheaper than a SAN.  The SAN costs $500K; they have been able to replace it with two servers, each with an ioDrive, for about $20K all told.

Answers.com, the 18th-ranked Internet site in the US and 31st worldwide, is a staunch user of HP hardware.  When HP announced its support of the ioDrive, Answers.com bought a couple and found that they could avoid the purchase of four additional servers by buying a 320GB Fusion-io drive for the price of about two servers.  When they compared performance against their SAS-based systems, Answers.com was astounded to get a ten-fold improvement in complex queries, from 350 to 3,500 per second.  Restoring the system from backup dropped from over 6 hours to 12½ minutes!  CPU loading dropped from 30% to 18%.  The company’s old topology had five data centers with four servers per data center.  Today a single server per data center does the trick.  Their tests indicate that they could cut the number of servers to 1/9th the original count, but conservative policies prevent them from trying this.  The current configuration has been in place for one year, and the company still has more processing power than it needs.

Percona is an important MySQL consulting firm, helping clients through system analysis, coding, and even training.  Percona ran extensive tests of Fusion-io and Intel mainstream SSDs against enterprise HDDs and even HDD RAID systems.  The conclusion was that the Fusion-io ioDrives were a hands-down winner in database applications.

All in all, these four users present an extremely compelling case for SSDs in general, and Fusion-io drives in particular (although we should keep in mind that Fusion-io selected the speakers for this panel).  With the results these firms experienced, it’s clear that now is the time for all data center managers to stand up and pay attention!