Jun 11, 2013
 

Original article by Chris Mellor at The Register

Enterprise flash array vendor Whiptail….

…By developing the WT-1100 for the entry-level market, Whiptail is positioning itself below Violin Memory's market position. That could, in turn, put other flash array vendors such as Nimbus Data under pricing pressure.

A 12TB ACCELA is listed at $588,000; that’s $49,000/TB. On that basis, a 4TB WT-1100 could cost $196,000: El Reg feels this would be far too high for a branch office/SME customer. For comparison, a 16TB 4-disk WD Sentinel 1U rackmount storage server is listed at $2,349 retail. But that only comes with a measly dual Atom processor combo running the show.

Whiptail says the WT-1100 starting price is under $20,000. For comparison, ten per cent of the theoretical equivalent 4TB ACCELA price would be $19,600.
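The extrapolation above is simple per-TB arithmetic; a quick sketch (figures from the article, purely illustrative):

```python
# Price per TB of the 12TB ACCELA, a hypothetical 4TB WT-1100 at the
# same rate, and ten per cent of that figure.
accela_price = 588_000              # 12TB ACCELA list price, USD
per_tb = accela_price / 12          # USD per TB
wt1100_at_accela_rate = 4 * per_tb  # hypothetical 4TB price
ten_percent = wt1100_at_accela_rate / 10

print(per_tb, wt1100_at_accela_rate, ten_percent)
# 49000.0 196000.0 19600.0
```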

May 14, 2013
 

Original article by Chris Mellor at The Register

The VC-backed storage crew at Kaminario make three types of K2 array: K2-D, which has DRAM modules; K2-H, with both DRAM and flash; and K2-F, which is all-flash and uses Fusion-io flash cards. But despite a market which is somewhat put off by the high cost of all-flash products – and despite a slew of rivals entering the same space – Kaminario is now focusing on the K2-F all-flash product. The K2-H hybrid product and K2-D details can still be found on the site though.

Version 4 of the K2-F array has quintupled the density of its MLC flash: the previous K2-F had 16TB in 18U (0.9TB/U), while the refreshed one has jumped to 84TB in 18U (4.6TB/U). The K2 capacity has climbed from 100TB to 120TB.

The read/write bandwidth is 30GB/sec, around four times more than the original 8GB/sec for this array. The read latency is 280 microsecs, a tad slower than the previous 260 microsecs, but the write latency has improved from 150 to 120 microsecs.

The random IOPS are up to 2.1 million, well above the K2-F’s original 600,000 IOPS.
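The multiples quoted above check out with simple arithmetic (note the article rounds 84/18 down to 4.6TB/U):

```python
# Density and performance multiples for the K2-F v4 refresh.
old_density = 16 / 18             # TB per U, previous K2-F
new_density = 84 / 18             # TB per U, v4
bandwidth_factor = 30 / 8         # "around four times" the original 8GB/sec
iops_factor = 2_100_000 / 600_000 # 2.1 million vs 600,000 IOPS

print(round(old_density, 1), round(new_density, 2), bandwidth_factor, iops_factor)
# 0.9 4.67 3.75 3.5
```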

Other improvements coming with V4.0 of the SPEAR (Scale-out PErformance ARchitecture) OS include:

– Non-disruptive upgrades
– High-availability with guaranteed performance during component failure and recovery, with a maximum of 25 per cent performance impact
– Hot-swap SAS drives – Kaminario has changed from using Fusion-io PCIe flash cards to SSDs
– OpenStack support via a K2 driver for the Folsom release of OpenStack that represents a K2 array as OpenStack block storage
– RESTful API for external third-party software to use when managing and controlling the array
– Virtually instantaneous K-Snaps – snapshots with consistency groups across multiple OS volumes
– VAAI support

Did we say Kaminario has halved the price of its K2 array? It has, although no actual pricing numbers have been supplied. The company hopes that this will bring new, more budget-limited mid-sized customers through its doors.

Kaminario is facing intensifying all-flash array competition. Let’s just review the competition:

Huawei
IBM – with FlashSystem (acquired TMS RamSan) arrays
Nimbus Data – with its Gemini and E-Series arrays
SolidFire – with a cloud service provider-focused array
Skyera – with Skyhawk
Violin Memory – with its 6000 series products
Whiptail – with INVICTA and ACCELA arrays

This year we expect EMC’s XtremIO and NetApp’s FlashRay products to debut, with HP and HDS products coming and Dell expected to deliver an all-flash array too. That’s 15 competitors noted in just a quick look. The space between legacy disk drive arrays and all-flash arrays has been shrunk by hybrid flash-disk drive arrays from all the incumbent disk drive array vendors, offering near-flash speed at near-disk array cost, and also by startups like Nimble Storage, Tegile and Tintri. The all-flash arrays should have a speed advantage over them, but also a cost disadvantage.

Technologies like compression and deduplication can lower the effective cost/GB of a flash array and help offset its cost disadvantage versus the hybrid arrays, which offer near-flash speed at – in the three startups’ case – lower than mainstream vendor array cost. Niche market strategies like those of GreenBytes and Tintri (VDI) and SolidFire (cloud) differentiate the vendors adopting them. The other, general-purpose flash array vendors face strengthening incumbent competition.

A brief summing up: we have 11 general-purpose flash array startups or effective startups – Astute, Huawei, NexGen, Nimbus, Skyera, Violin and Whiptail among them – facing six incumbents for customers’ hearts, minds and wallets. There will be blood, and not everybody will survive. The brutal prospect is that more than half of the startups will simply fall by the wayside; their survival depends on getting niche, getting big or getting bought by a good Samaritan incumbent. Miss all of those options and they will have to get out.

 May 14, 2013  Kaminario
May 14, 2013
 

Original Article at SearchVirtualStorage

Tintri’s latest VM storage appliance adds VM auto-alignment and a reporting tool that identifies latency from the guest operating system through to the storage.

The Tintri VM storage appliances only support storage with VMware, although Tintri executives say they expect to add support for other hypervisors. The appliances do not support physical servers.

The Tintri VMstore T540 uses a mix of solid-state drives (SSDs) and SATA disk, the same as the VMstore T445, which began shipping in April. The T540 is a 3U dual-controller box with 13.5 TB of usable disk capacity (26.5 TB total) and 2.4 TB of multi-level cell (MLC) flash. Each box can handle more than 200 virtual machines, said Chris Bennett, Tintri’s vice president of marketing.

Tintri’s original T445 system is a 4U single-controller system with 8.5 TB of usable storage and 1.44 TB of flash. Bennett said the startup will continue to sell the T445 as an entry-level system. The VM storage appliances are NFS-attached today, but Bennett said an iSCSI version may follow. Tintri is also planning more mature storage management features such as replication on a VM basis for future releases.

VMstore nodes can be clustered as NFS shares through 10 Gigabit Ethernet (GbE).

Bennett said VMstore is designed to generate 99% of its I/O from flash. Tintri also claims the system uses inline deduplication and delivers sub-millisecond latency.

What makes Tintri different from other SSD systems is its integration with VMware. VMstore communicates with the VMware vCenter Server API to determine which VMs are active on the array. Instead of using volumes, LUNs and RAID groups, VMstores map I/O requests directly to the virtual disk on which they occur. The tight VM integration lets VMstore monitor and control I/O performance for each virtual disk.

Tintri claims its system automatically aligns the storage layer to the guest file system, so administrators don’t have to manually realign them to avoid performance degradation over time. The VMstore management dashboard also identifies latency for each VM and virtual disk to help troubleshoot performance problems.
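The per-virtual-disk monitoring described above can be pictured with a toy sketch (hypothetical names; this is not Tintri’s code): each I/O request is attributed to the virtual disk it targets rather than to a LUN or RAID group, so latency can be reported per VM and per virtual disk.

```python
from collections import defaultdict

# vdisk id -> list of observed latencies in microseconds
latency_us = defaultdict(list)

def record_io(vdisk_id: str, lat_us: float) -> None:
    """Tag each completed I/O with the virtual disk it belongs to."""
    latency_us[vdisk_id].append(lat_us)

def avg_latency(vdisk_id: str) -> float:
    """Average latency for one virtual disk, for a per-VM dashboard."""
    samples = latency_us[vdisk_id]
    return sum(samples) / len(samples) if samples else 0.0

record_io("vm01/disk0", 250.0)
record_io("vm01/disk0", 350.0)
print(avg_latency("vm01/disk0"))   # 300.0
```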

Pricing for the T540 starts at $90,000, including four 10 GbE ports. The T445 costs $64,000.

Ed Lee, Tintri’s architect, said most of the vendor’s early customers use VMstore for specific applications, such as performance-hungry databases.

“Maybe there’s an application they tried to virtualize and failed, so they try running those applications on us, and then they may migrate other apps,” he said.

Tintri’s challenge will be keeping any edge it has managing VMs as the large storage vendors work more closely with VMware Inc. to take advantage of VMware vStorage APIs for Array Integration (VAAI). At VMworld in August, VMware previewed next-generation VAAIs that enable administrators to provision storage without using LUNs, RAID groups and NAS mount points. EMC Corp., NetApp Inc., Dell Inc., IBM, Hewlett-Packard Co. and Hitachi Data Systems Corp. are working with VMware on these features. But no storage vendors have said they were working on features such as auto-alignment or I/O visibility from VM to storage.

Ray Lucchesi, Silverton Consulting president, said he hasn’t seen other vendors as tightly integrated with VMware as Tintri.

“Tintri is laser-focused on VMware, and tightly coupled to VMware APIs,” Lucchesi said. “I haven’t seen other vendors drill down to the virtual machine and produce the same statistics from the I/O level. I don’t know if other storage vendors are working on that level of integration with VMware. If they are, they’re not showing it yet.”

 May 14, 2013   Articles With Pricing, Tintri
May 6, 2013
 
This post highlights the most popular Solid State Disk/Flash vendors and provides a chart to help decipher their costs. This data has been aggregated from various sources so no claims are made as to its accuracy.

In some cases the manufacturers provide a link to “Self-Service Pricing” via EchoQuote™ so you can get up-to-date pricing information quickly, often in minutes (last column).

Top 10 Solid State/Flash Array Vendors in Alphabetical order:

Vendor – Category – Pricing

Astute Networks – Flash Memory Arrays – Not Available
Fusion-io – Solid-State PCI Express Cards (Nexsan acquisition may put it on path to full appliance gear) – Not Available; range $2-$5/GB
Nimbus Data – Flash Memory Arrays – Not Available; per a 2012 article, $150K for a 10TB dual configuration
OCZ – Flash PCI Express Cards – Not Available; range $2-$5/GB
Skyera – Flash Memory Arrays – Not Available
Texas Memory Systems – PCI Cards, Flash Memory Arrays – Not Available
Virident – PCI Cards, Flash PCI Express Cards (FlashMax II) – Starts at $6,000
Violin Memory – PCI Cards, Flash PCI Express Cards, Flash Memory Arrays – Velocity cards come in 1.37, 2.75, 5.5 and 11TB raw capacity versions at a list price of $6/GB for all of them except the entry-level 1.37TB card, which lists at $3/GB
Whiptail – PCI Cards, Flash Memory Arrays – From $50K to $250K for multi-terabyte arrays
May 3, 2013
 

Original article at Anandtech

The Fusion-io ioScale comes in capacities from 400GB up to 3.2TB (in a single half-length PCIe slot), making it one of the highest-density commercially available drives. Compared to traditional 2.5″ SSDs, the ioScale provides significant space savings, as you would need several 2.5″ SSDs to build a 3.2TB array. The ioScale doesn’t need RAID for parity as it has built-in redundancy similar to SandForce’s RAISE: some of the NAND die is reserved for parity data, so the data can be rebuilt even if one or more NAND dies fail.
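The RAISE-like redundancy described above can be illustrated with a toy XOR-parity sketch (an illustration of the general idea only, not Fusion-io’s actual scheme): reserving one die’s worth of parity lets the drive rebuild the contents of any single failed die from the survivors.

```python
from functools import reduce

def xor_parity(chunks):
    """XOR equal-length byte chunks together (one toy 'die' each)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))

dies = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]   # toy data dies
parity = xor_parity(dies)                        # stored on the reserved die

# Die 1 fails; rebuild it from the surviving dies plus the parity die.
rebuilt = xor_parity([dies[0], dies[2], parity])
print(rebuilt == dies[1])   # True
```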

The ioScale is all MLC NAND based, although Fusion-io couldn’t specify the process node or manufacturer because it sources its NAND from multiple suppliers (which makes sense given the volumes Fusion-io requires). Different grades of MLC are also used, but Fusion-io promises that all its SSDs will meet their specifications regardless of the underlying components.

The same applies to the controller: Fusion-io uses multiple controller vendors, so it couldn’t specify the exact controller used in the ioScale. One reason is extremely short design intervals, because the market and technology are evolving very quickly. Most of Fusion-io’s drives are sold to huge data companies or governments, who are obviously deeply involved in the design of the drives and also do their own validation and testing, so it makes sense to provide a variety of slightly different drives. In the past I’ve seen at least Xilinx FPGAs used in Fusion-io’s products, so it’s quite likely that the company stuck with something similar for the ioScale.

What’s rather surprising is that the ioScale is a single-controller design, even at up to 3.2TB. Usually such high-capacity drives use a RAID approach, where multiple controllers are put behind a RAID controller to make the drive appear as a single volume. There are benefits to that approach too, but using a single controller often results in lower latency (no overhead added by the RAID controller), lower prices (fewer components needed) and a smaller footprint.

The ioScale has previously been available to clients buying in big volumes (think tens of thousands of units) but starting today it will be available in minimum order quantities of 100 units.

Pricing starts at $3.89 per GB, which puts the 400GB model at $1,556. For Open Compute platforms, Fusion-io is offering an immediate 30% discount, which puts the ioScale at just $2.72/GB. For comparison, a 400GB Intel SSD 910 currently retails at $2,134, so the ioScale is rather competitive on price, which is one of Fusion-io’s main goals. Volume discounts obviously play a major role, so the quoted prices are just a starting point.
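The quoted figures line up with straightforward per-GB arithmetic, taking the 400GB entry capacity as the base:

```python
per_gb = 3.89                      # USD per GB, list
entry_price = per_gb * 400         # 400GB entry model
discounted = per_gb * (1 - 0.30)   # Open Compute 30% discount

print(round(entry_price, 2), round(discounted, 2))
# 1556.0 2.72
```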

Mar 7, 2013
 

Original article by Maria Deutscher at Silicon Angle

It’s turning out to be a big week for flash. Violin Memory, one of the largest suppliers of SSD-based storage solutions, announced a new product line-up called Velocity. The line-up features PCIe server cards with 1.37, 2.75, 5.5 and 11TB of raw storage.

These four capacity levels will be priced at $4,200, $16,900, $33,800 and $67,500, respectively. With the new line-up, Violin is making a strategic play in the pricing game, simplifying and streamlining the supply chain. The deal with Toshiba is an important factor in Violin’s plan, empowering the storage provider to better manage the ebb and flow of NAND pricing.

Now for the specs. Sustained performance using 4KB blocks measures 120,000 IOPS for the 1.37TB card, with the larger cards scaling up to 270,000 IOPS for the 5.5TB model and 540,000 IOPS for the 11TB model. This figure jumps above one million IOPS in setups where 512-byte blocks are used instead.

The new Violin cards run barebones firmware that supports Oracle, Microsoft SQL Server and VMware. Compatibility with management software from other vendors will probably roll out in future updates.

The other important feature of Velocity, besides the performance and support, is that the cards are completely self-contained. This means software can be booted faster and without help from the host server – something the competition, namely Fusion-io’s rival PCIe cards, can’t do.

The launch is backed by Toshiba, one of Violin’s earliest backers and its top distributor in Japan. The manufacturer will leverage Velocity to enhance its storage portfolio.

Hiroyuki Sato, storage products division veep at Toshiba Corporation Semiconductor & Storage Products Company, said: “The PCIe card market is important to Toshiba’s customers. Expanding our strategic relationship with Violin Memory will allow us to bring the valuable Violin enterprise intellectual property to a broad range [of] industry-leading solutions in our future product offerings.”

Flash is hot, and Toshiba is not the only whale in the ocean looking to make the most of the technology. Earlier this week Seagate had a PCIe update all its own, while last month NetApp announced an all-flash array for software-driven environments that’s more scalable and efficient than less abstracted solutions.

 March 7, 2013  Violin Memory
Jul 18, 2012
 

Original article at NexGen Storage

Pricing and availability

The NexGen n5 Series n5-100 and n5-50 systems will be available on August 20, 2012 and the n5-150 system will be available September 30, 2012 through authorized NexGen resellers. List pricing for the three new n5 Series systems will range from $55,000 to $108,000.

NexGen Storage today announced the expansion of its n5 Series of storage systems with new PCIe solid-state offerings. The NexGen n5 Series offers several solid-state configurations that deliver a range of performance levels and price points, and each n5 system offers both 10GbE and 1GbE network options. With higher performance and capacity in a compact, 3U footprint, the new offerings provide:

• 5x to 10x lower $ per GB than all-SSD arrays with equivalent performance; and
• 10x more IOPS per rack unit versus disk-based storage systems.

With its expanded offering, NexGen makes enterprise-class solid-state storage capabilities available and affordable for mainstream customers to meet targeted performance requirements in mixed workload environments.

“There is tremendous end-user value to be gained from judiciously employing solid-state storage as part of an overall storage approach. Our research indicates a clear shift toward end-users viewing solid-state technology as applicable for increasingly broad data center deployment and usage, and not just for specific applications or isolated workloads,” said Mark Peters, senior analyst at Enterprise Strategy Group. “By offering high-end solid-state capabilities at affordable price points, NexGen’s n5 Series solutions are well-positioned to bring that solid-state storage value into mainstream data centers.”

Each n5 Series storage system provides active-active high availability and delivers the full power of NexGen’s cutting-edge capabilities, including:

Predictable Performance with Storage QoS. Provides predictable, guaranteed application storage performance. IT administrators can set performance levels for all applications and manage performance as easily as capacity.
Service Levels for Total Control. Automatically shifts resources from non-critical to mission-critical applications as needed to ensure performance is maintained for an organization’s more critical applications, even if the system is compromised.
The Lowest $/GB and $/IOP. Moves data real-time between high-performing solid-state and economical disk drives to offer industry-leading price/performance.
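The Storage QoS behaviour described in the list above can be pictured with a toy per-application IOPS budget (hypothetical names; an illustration of the general idea, not NexGen’s implementation):

```python
# A simple token bucket caps IOPS per application, so a noisy
# non-critical workload can't starve mission-critical ones.
class IopsBucket:
    def __init__(self, iops_limit: int):
        self.limit = iops_limit
        self.tokens = iops_limit

    def refill(self) -> None:
        """Called once per second to restore the full IOPS budget."""
        self.tokens = self.limit

    def admit(self) -> bool:
        """Admit one I/O if any budget remains this second."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = IopsBucket(iops_limit=2)
results = [bucket.admit() for _ in range(3)]
print(results)   # [True, True, False]
```

After a `refill()` the third I/O would be admitted again, which is how a per-second performance level becomes as easy to manage as a capacity quota.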

“NexGen’s innovative solid-state storage systems and Storage QoS allow us to deliver extremely efficient, high quality IT services to our organization,” said Robert Samples, senior systems engineer at Kansas City Urology Care. “NexGen’s n5 systems have a very small footprint compared with my existing storage, which chews up a ton of power, takes up roughly 15U of rack space and costs a fortune every year in maintenance and support. By comparison, the NexGen n5 takes up only 3U of rack space and utilizes about one-third of the power.”

“Organizations can achieve higher storage efficiency along with more consistent performance levels through smart, right-sized solid-state storage deployments,” said Rick Merlo, vice president of sales, NexGen Storage. “NexGen’s n5 Series gives organizations the ability to meet varying performance, capacity and price point requirements.” Read more about this on NexGen’s blog.
