Original article at Scale Computing: “4 Hidden Infrastructure Costs for SMB”
Infrastructure complexity is not unique to companies that operate enterprise datacenters. Just because a business or organization is small does not mean it is exempt from the feature needs of big enterprises. Small and mid-size organizations require fault tolerance, high availability, mobility, and flexibility as much as anyone. Unfortunately, the complexity of traditional datacenter and virtualization architecture hits the SMB the hardest. Here are four hidden costs that can cripple the SMB IT budget.
1 – Training and Expertise
Setting up a standard virtualization infrastructure can be complex; it requires virtualization, networking, and storage expertise. In larger enterprises, that expertise is often spread across dozens of admins through new hires, formal training, or consulting. In the SMB datacenter, however, with limited budgets and only a handful of admins, or even just one, expertise can be harder to come by. Self-led training and research can take costly hours out of every week, and admins may only have time to reach the minimum level of expertise needed to maintain an infrastructure, without the ability to optimize it. Lack of expertise affects infrastructure performance and stability, preventing the organization from realizing the full return on its infrastructure investment.
2 – Support Run-Around
A standard virtualization infrastructure has components from a number of different vendors, including the storage vendor, server vendor, and hypervisor vendor, to name just the basics. Problems arising in the infrastructure are not always easy to diagnose, and with multiple vendors and vendor support centers in the mix, this can lead to a lot of finger pointing. Admins can spend hours, if not days, calling support engineers from different vendors to pinpoint an issue. Long troubleshooting times can mean long outages and lost productivity because of vendor support run-around.
3 – Admin Burn-Out
The complexity of standard virtualization environments, with multiple vendor solutions and multiple layers of hardware and software, means longer nights and weekends spent on maintenance tasks such as firmware updates, refreshing hardware, adding capacity, and dealing with outages caused by non-optimized architecture. Admins of these complex architectures often cannot detach long enough to enjoy personal time off because of the risk of an outage. Administrators who spend long nights and weekends dealing with infrastructure issues are less productive in their daily tasks and have less energy and focus for initiatives that improve process and performance.
4 – Brain Drain
Small IT shops are particularly susceptible to brain drain. Knowledge of all of the complex hardware configurations and application requirements is concentrated in a very small group, in some cases a single administrator. While those individuals are around, there is no problem, but when one leaves, for whatever reason, it creates a knowledge gap that may never be filled. Rebuilding that knowledge, or redesigning systems to match the expertise of the remaining or replacement staff, can carry huge costs.
Although complexity has hidden costs for small, medium, and enterprise datacenters alike, complexity designed for the enterprise and passed down to the SMB makes those costs more acute. When choosing an infrastructure solution for a small or mid-size datacenter, it is important to weigh these hidden costs against the cost of investing in solutions whose automation and management reduce the need for specialized expertise, support run-around, and after-hours administration. Modern hyperconverged infrastructures like HC3 from Scale Computing offer simplicity, availability, and scalability to eliminate hidden infrastructure costs.
Scale Computing, the market leader in hyperconverged storage, server and virtualization solutions for midsized companies, today announced an SSD-enabled entry to its HC1000 line of hyperconverged infrastructure (HCI) solutions for less than $25,000, designed to meet the critical needs in the SMB market for simplicity, scalability and affordability.
HC1150 combines virtualization with flash
The HC1150 combines virtualization with servers and high-performance flash storage to provide a complete, highly available datacenter infrastructure solution at the lowest possible price. Offering the full set of features found in the HC2000 and HC4000 family clusters, the entry-level HC1150 makes the most efficient use of system resources, particularly RAM, to manage storage and compute, leaving more resources available for running additional virtual machines. The sub-$25,000 price point also includes a year of industry-leading premium support at no additional cost.
Scale Computing’s HC3 platform brings storage, servers, virtualization, and high availability together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 solutions lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications optimized and running. The integration of flash-enabled automated storage tiering into Scale’s converged HC3 system adds hybrid storage, combining SSD and spinning disk through HyperCore Enhanced Automated Tiering (HEAT). Scale’s HEAT technology uses a combination of built-in intelligence, data access patterns, and workload priority to automatically optimize data across disparate storage tiers within the cluster.
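Scale has not published HEAT’s internals, but the general pattern of heat-based tiering that the description implies can be sketched in a few lines. Everything below is an illustrative assumption — the class name, the additive scoring, and the decay scheme are ours, not Scale’s implementation:

```python
from collections import defaultdict

class HeatTierer:
    """Illustrative heat-based tiering: the hottest blocks, up to SSD
    capacity, are placed on flash; everything else stays on spinning disk.
    A simplified sketch, not Scale's actual HEAT implementation."""

    def __init__(self, ssd_capacity_blocks, decay=0.5):
        self.ssd_capacity = ssd_capacity_blocks
        self.decay = decay                  # how quickly old accesses fade
        self.heat = defaultdict(float)      # block id -> heat score

    def record_access(self, block, workload_weight=1.0):
        # Each access adds heat, weighted by workload priority.
        self.heat[block] += workload_weight

    def decay_heat(self):
        # Run periodically so that recent I/O dominates the score.
        for block in self.heat:
            self.heat[block] *= self.decay

    def placement(self):
        # Rank blocks by heat; flash holds only the hottest ones.
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        on_ssd = set(ranked[:self.ssd_capacity])
        return {b: ("ssd" if b in on_ssd else "hdd") for b in self.heat}
```

A real implementation would track heat per extent rather than per block and rebalance asynchronously, but the ranking-plus-capacity-cutoff decision is the essence of automated tiering.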
HC1150 priced as a low-cost entry into the SMB space
The HC1150 was not the only new addition to the HC1000 family. The new HC1100, which replaces the previous HC1000 model, provides a significant increase in compute and performance. Improvements include an increase in RAM per node from 32GB to 64GB; an increase in base CPU per node from 4 cores to 6 cores; and a change from SATA drives to higher-capacity 7200 RPM NL-SAS drives. The HC1100 also brings the first use of Intel Broadwell CPUs in the HC1000 family. All of these improvements come at no increase in cost over the HC1000. Additionally, the HC1150 scales with all other members of the HC3 family for maximum flexibility and to accommodate future growth.
Scale Computing’s HC1150, as with its entire line of hyperconverged solutions, is currently available through the company’s channel with end user pricing starting at $24,500. For additional information or to purchase, interested parties can contact Scale Computing representatives at www.scalecomputing.com/scale-computing-pricing-and-quotes.
The field of all-in-one hyperconverged platforms has grown considerably over the past 24 months. While some smaller startups, such as Nimboxx, could not gain enough traction early on, others have been delivering robust, time-tested gear for several years. Here’s a run-down of a few of the top names.
Scale Computing differentiates its HC3 and HC3x hyperconverged platforms with ease of use and simplicity. The company chose to standardize on KVM as its single hypervisor and built its own management layer, delivering high functionality without the need for a virtual storage appliance, coupled with object-based storage that the hypervisor accesses directly. Reliance on KVM also eliminates licensing fees for commercial hypervisors, making the product attractive to smaller organizations. Scale Computing recently added integrated disaster recovery capabilities to its HC3 platform.
Pivot3 is hardly a newbie; founded in 2002, it focuses on converging virtual servers, storage, and networks. The company says it launched its first hyperconverged appliance in 2008, when a casino asked for a secure and cost-effective way to store video streams (at the time, Pivot3 called it “serverless computing”).
Using Scalar Erasure Coding, Pivot3 developed vSTAC OS, which the company says “allows any program running on one appliance within the cluster to access resources across all the appliances in the cluster.” Pivot3 focuses on the video surveillance and virtual desktop markets and counts more than 1,300 customers worldwide. Recently, Pivot3 acquired NexGen Storage to flesh out its all-flash offerings.
Hyperconvergence pioneer Nutanix launched its first product in 2011 and initially focused on a message of “ban the SAN.” Today, the company’s Virtual Computing Platform provides integrated compute and storage through servers running a standard hypervisor and the Nutanix OS. According to Gartner’s report on integrated systems, Nutanix’s technology is unique in that “the storage and compute elements are natively converged to create a much tighter level of integration”; a node-based approach that “enables theoretically limitless additions of new compute or storage bandwidth in very small increments.”
Nutanix, which released what it claims was the industry’s first all-flash hyperconverged array last year, has raised $317 million in funding, filed 43 patents, and touts an annualized sales run rate of $300 million. Last year, the company inked an OEM deal with Dell to offer converged appliances built with Nutanix software running on Dell PowerEdge servers.
Gridstore offers a hyperconverged solution purpose-built for Microsoft Hyper-V. The startup’s hyperconverged appliances come in both all-flash and hybrid versions. Unlike other scale-out storage products, which use standard storage protocols such as SMB or iSCSI, Gridstore places much of the work of managing the scale-out cluster in the client, as a virtual controller. Gridstore may have an advantage in the market if it can capitalize on its position as the first Hyper-V-optimized storage system.
Dell’s Nutanix-Based XC Series
Dell’s first Nutanix-based hyperconverged solution is the XC730xd, which is based on Dell’s PowerEdge R730xd rack-mount server platform. The XC730xd, built on Intel Xeon E5-2600 v3 processors, fits up to 32 TB of storage capacity in a 2U enclosure, or about 60 percent more capacity than the previous model based on the PowerEdge R720xd servers. The second model, the XC630, is based on Dell’s 1U PowerEdge R630 platform and can be configured with up to 9.6 TB of storage capacity.
EMC VSPEX BLUE
The EMC VSPEX BLUE hyperconverged infrastructure appliance delivers compute, storage, networking, and management through VMware EVO: RAIL and EMC software. EMC claims the solution goes from power-on to provisioning virtual machines in less than 15 minutes.
Included with the appliance is VSPEX BLUE Manager, which provides access to electronic services and automated patch and software update notifications; VSPEX BLUE Market, which gives access to pre-validated solutions; and EMC Secure Remote Support for monitoring of the appliance.
Hewlett-Packard entered the hyperconverged infrastructure market in December with its HP ConvergedSystem 200-HC StoreVirtual. Based on the company’s StoreVirtual virtualized storage solution, it provides advanced data services, disaster recovery, and heterogeneous interoperability across physical and virtual application domains. HP ConvergedSystem 200-HC StoreVirtual includes the converged management of HP OneView for VMware vCenter, as well as robust VMware vSphere integration. A version running the HP Helion cloud platform was released recently.
HP also recently unveiled its HP ConvergedSystem 200-HC EVO: RAIL, a new hyperconverged appliance based on the VMware EVO: RAIL platform. It combines HP ProLiant SL servers with the VMware suite, including VMware vSphere, vCenter Server, and VMware Virtual SAN.
The SteelFusion 4.0 from San Francisco-based Riverbed Technology targets the simplification of branch-office IT support by virtualizing and consolidating 100 percent of data and servers from remote sites into data centers to centralize data security and IT management. SteelFusion does this with a series of hyper-converged appliances that are deployed in a remote office to run applications over a WAN using data stored in a central data center.
New with SteelFusion is FusionSync, which provides seamless branch continuity by ensuring all branch data is accessible across private and hybrid cloud environments. This, according to Riverbed, gives remote offices the ability to withstand and recover from data center failures with zero downtime.
Another pioneer in the hyperconvergence space, SimpliVity came out of stealth mode in 2012. The startup’s OmniCube platform combines compute, hypervisor, storage services and network switching on x86 server hardware with centralized management. OmniCube goes further than other integrated systems by incorporating features such as built-in VM backup, in-line data deduplication, compression and optimization at the source, according to Gartner.
The Tintri VMstore appliance consists of a fully redundant box containing flash and spinning storage, designed to simplify the task of providing storage for VMs while adding performance.
Unlike traditional networked storage systems, even those that also integrate flash and spinning disks, there are no LUNs, volumes or tiers, which Tintri says present barriers to virtualization because they have no intrinsic meaning at the VM level. Instead, each I/O request maps to the particular virtual disk on which it occurs, the system accesses the vCenter Server API to monitor and control I/O performance at virtual disk level, and you manage in terms of virtual disks and VMs.
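The shift from LUN-level to VM-level management described above comes down to the granularity at which I/O is tracked. A minimal sketch of the idea, with entirely hypothetical names (this is not Tintri's API or data model):

```python
class PerVDiskStats:
    """Illustrative sketch: account for I/O per (VM, virtual disk) pair
    rather than per LUN or volume, so performance can be monitored and
    controlled at the granularity an admin actually reasons about."""

    def __init__(self):
        self.stats = {}  # (vm, vdisk) -> {"ios": count, "bytes": total}

    def record_io(self, vm, vdisk, nbytes):
        # Each request is mapped back to the virtual disk it targets.
        s = self.stats.setdefault((vm, vdisk), {"ios": 0, "bytes": 0})
        s["ios"] += 1
        s["bytes"] += nbytes

    def hottest_vdisks(self, n=3):
        # Report the busiest virtual disks, not the busiest LUNs.
        return sorted(self.stats.items(),
                      key=lambda kv: kv[1]["ios"], reverse=True)[:n]
```

With statistics keyed this way, per-VM QoS or troubleshooting (“which VM is saturating the array?”) falls out naturally, which is the barrier LUN-oriented systems run into.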
Scale Computing, a hyper-convergence leader, has modified its HC3 hyper-converged system, adding a pair of hybrid arrays that allocate NAND flash as a tunable tier of primary storage.
The Scale flash HC2150 and HC4150 node hardware is similar to the vendor’s HC3 2000 and HC3 4000 models, except with additional RAM and increased processor cores (from four to eight).
Scale isn’t going all-flash with the latest rollout. Each hybrid HC2150 node is equipped with a single 400 GB single-level cell NAND solid-state drive (SSD) and three 3 TB SAS hard disk drives (HDDs). Raw capacity per three-node HC2150 cluster is 12 TB, including 1.2 TB of flash storage. The HC2150 ships with 128 GB of RAM per node and can be upgraded to 256 GB of RAM per node.
Each HC4150 hyper-converged node supports two 400 GB NAND drives and 2.4 TB of flash per cluster, with up to six SATA drives. The 4150 ships with 384 GB of RAM and can be upgraded to 512 GB per node.
List price for a baseline Scale flash HC2150 cluster is $61,500, and $106,625 for the HC4150 cluster. The prices include HyperCore data services for remote and local snapshots, multisite virtual machine replication and failover/failback capabilities.
Excerpt from Howard Marks of Network Computing http://www.networkcomputing.com/servers-storage/hyperconverged-stacks-from-simplivity-an/240007306
In last week’s post, “The Hyperconverged Infrastructure,” we explored the industry trend toward integrated storage, compute and networking stacks, and its latest development, in which vendors combine the compute and storage components into a single, hyperconverged, scale-out building block. Two vendors, SimpliVity and Scale Computing, used last month’s VMworld to reveal new hyperconverged systems.
The OmniCube’s software also provides inline data deduplication, which not only expands the available storage in the cluster but also reduces the amount of internode data replication traffic needed to make the system able to survive a node failure. Since OmniCube’s storage subsystem was designed specifically to host vSphere VMs, it has enough context to manage storage on a VM rather than a volume basis–including per-VM application-consistent snapshots and replication.
Since the data is deduplicated in ingest, OmniCubes use significantly less WAN bandwidth for replication than other storage systems. SimpliVity even has a software instance of the OmniCube stack that can run on a public cloud so organizations can use cloud providers for disaster recovery.
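The reason deduplication at ingest saves WAN bandwidth is that a block whose content the remote site already holds never needs to be shipped again; only its fingerprint does. A simplified model of that idea (content-hash based, with invented names; not SimpliVity's actual wire protocol):

```python
import hashlib

class DedupReplicator:
    """Illustrative inline dedup for replication: a block crosses the WAN
    only the first time its content hash is seen; subsequent copies send
    just the hash as a reference. A sketch of the concept, not
    SimpliVity's implementation."""

    def __init__(self):
        self.remote_hashes = set()  # fingerprints the remote site holds
        self.bytes_sent = 0

    def replicate(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.remote_hashes:
            # Remote already has this content: send the reference only.
            self.bytes_sent += len(digest)
        else:
            # First sighting: send the data plus its fingerprint.
            self.remote_hashes.add(digest)
            self.bytes_sent += len(block) + len(digest)
        return digest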
If SimpliVity’s offering was just a scale-out hybrid storage system with inline dedupe and the ability to do per-VM application-consistent snapshots as well as replicate VMs to a public cloud provider, it would join Tintri as one of my top storage systems for virtualization. Add in that I can run my workloads on the same system or from other vSphere hosts so I can scale compute and storage separately, and I start thinking it might be too good to be true.
By comparison, Scale Computing aimed its HC3 at significantly smaller use cases and customers than SimpliVity. For the past several years, Scale Computing has been selling scale-out unified storage systems for SMB/SME customers built from 1U servers running Linux and an extended version of IBM’s GPFS distributed file system. While most hyperconverged systems use a virtual machine running under a hypervisor as a virtual storage appliance, Scale’s HC3 uses clustered GPFS and runs the KVM hypervisor on top of GPFS.
While KVM, Linux and GPFS provide a reliable platform, they’re not widely known as easy to use and generally require a significantly higher level of technical expertise to install, optimize and administer than most SMBs can muster. Scale addresses this by providing a simple Web UI for administering the whole shebang, from creating file shares to spinning up new virtual machines.
A three-node HC3 cluster, which Scale recommends for up to 30 virtual servers, will cost an SMB or remote office about $25,500, while an eight-node cluster is just less than $68,000. These are all-inclusive prices for servers, storage, hypervisor and the Web management software. Most users would probably pay significantly more for three servers, a low-end disk array and vSphere licenses.
Users can add nodes to the cluster at any time, should they need more compute or storage resources. Scale is even allowing users with storage-only nodes to upgrade to HC3 with a memory and software upgrade.
Of course, for less than $10,000 a node, Scale isn’t providing the same performance as SimpliVity or Nutanix. Each HC3 node has a single quad-core Xeon processor, 32 Gbytes of memory and four 1-Tbyte disk drives. Scale doesn’t currently use flash for acceleration, so small clusters will have rather modest storage performance from a dozen or so 7,200 RPM drives. Luckily, most SMBs have rather modest storage performance needs.
SimpliVity and Scale Computing join Nutanix and Pivot3 in the hyperconverged infrastructure arena. These two examples show that the concept of hyperconvergence can extend from a very modest cost to high-performance systems with leading-edge storage features.
Scale Computing sells servers loaded with the company’s custom storage software for small businesses that don’t necessarily need the petabytes of storage that massive companies need. The company offers 3TB nodes for $12,000, 6TB nodes that cost $15,000, and 12TB nodes that cost $21,000. The technology is designed to be plug and play, allowing businesses to plug in additional storage nodes without having to bring down their services or migrate data.
The Indianapolis, Ind.-based company’s funding comes at a particularly strange time, since cloud-based storage solutions are beginning to dominate the small- to mid-size business space. Cloud computing products are typically much cheaper than building and maintaining storage infrastructure in-house. They also charge per gigabyte of storage, so companies don’t end up paying for wasted space.
There are some concerns with storing information on the public cloud — particularly in regards to security. Most major companies have strict security standards that can’t be fulfilled with public cloud storage services. That’s not to say cloud storage providers like Rackspace aren’t able to keep the data secure. It just means that the companies’ compliance requirements are often too high to effectively use the service.
There are also some performance concerns, because the information still has to be streamed through the internet from cloud storage servers onto a local device. That can lead to some lag, and the lost time can pile up after a while. Devices plugged into a local network are always going to be faster than having to stream information through a broadband connection. But as cloud computing becomes more advanced in the form of compression techniques and faster broadband infrastructure, those concerns are quickly disappearing.
Scale Computing’s most recent round of fundraising was led by Scale Venture Partners and Northgate Capital. Existing investors, which include Benchmark Capital, also participated in this round. Rob Theis, managing director of Scale Venture Partners, will join the company’s board of directors as part of the deal.
Scale Computing has secured more than 200 companies as customers and shipped 1,000 of its storage nodes. Scale Computing has raised a total of $31 million including the most recent round of fundraising.