Blog

  • A Charitable Culture

    January 11th, 2018

    “If you haven't any charity in your heart, you have the worst kind of heart trouble.”

    - Bob Hope

    One of the main reasons I joined the High Availability, Inc. team was the company culture. Having worked closely with High Availability, Inc. during my years at NetApp, I was able to see first-hand how strong that culture was. In my opinion, company culture encompasses many aspects - from positive employee morale, to the readiness to support your teammates, to simply having employees who love going to work every day. Beyond all of this, another factor that matters to me personally is a company’s willingness to give back charitably to its community and to worthy causes.

    Charity is a huge part of my life outside of work. Since 1989, my family and close friends have run the JAG Fund, a charity supporting brain tumor research in memory of my brother. Because of this, I’ve always valued charitable efforts in my professional life, and High Availability, Inc. is certainly no different. After joining the team, I quickly realized that giving back was just as important to High Availability, Inc. as it is to me.

    Here are a few examples that prove it:

    As mentioned previously, my entire family is deeply involved in the JAG Fund. Every year, we host a Black-Tie Gala to raise both support and awareness for brain tumor research. I was thrilled when my teammates at High Availability, Inc. pledged their support for the JAG Fund through a corporate sponsorship of our Black-Tie Gala! Not only did H.A. support the JAG Fund financially, but they also attended the event. In fact, my boss, Steve Eisenhart, was rated as one of the top dancers that evening, and the High Availability, Inc. table was by far the most supportive during our silent auction. They even set a record for their winning bids!

    A second example was High Availability, Inc.’s partnership with Tech Impact, a non-profit organization headquartered in Philadelphia that provides cost-effective IT support for other non-profit organizations. Tech Impact also helps young urban adults move into IT careers through its ITWorks program. Having supported Tech Impact prior to coming to High Availability, Inc., I knew what a worthy organization it was, so I was thrilled when the executive team at H.A. approved a sponsorship for the annual Tech Impact Luncheon. I was fortunate enough to attend the event with several other members of our team and even a few customers. It was amazing to see first-hand the positive impact of our contribution!

    Last, and most impressive in my opinion, was High Availability, Inc.’s support of Hurricane Harvey relief efforts, specifically its contribution to the J.J. Watt Foundation. The aftermath of Hurricane Harvey was absolutely devastating for the state of Texas. The Category 4 storm dropped over 27 trillion gallons of rain on Texas and forced over 30,000 people to seek temporary shelter.

    After the storm passed, High Availability, Inc. acted immediately. The team set up a relief fund program for those affected by Hurricane Harvey, and even matched each contribution dollar-for-dollar. Within just a few minutes, the H.A. team donated over $3,000!

    “Everyone stepped up and was happy to help in any way they could,” said Greg Robertson, CFO and Founder of High Availability, Inc. “I feel very fortunate to be working with individuals who see the importance of helping those in need.”

    Within three days, High Availability, Inc. had raised exactly $10,000!

    High Availability, Inc. plans to double its charitable efforts in 2018. In fact, Steve Eisenhart, CEO and Founder of High Availability, Inc., has implemented the first-ever “H.A. Challenge,” which encourages employees to give a full working day back to a local charity through volunteering.

    Of course, these aren’t the only examples of H.A. giving back, but they are the ones that have stood out most to me since joining the organization just over 18 months ago. Pretty impressive, and a true example of a company embracing a charitable culture!

  • Tips For Meraki WLAN Deployment Part 1: AP Addressing

    December 29th, 2017

    Meraki cloud-managed network infrastructure has brought a new level of manageability to the network, and many of High Availability’s customers have discovered just how easy it is to operate a Meraki-based network infrastructure. In this series of articles, I want to share a few tips for making the deployment itself go just as smoothly. Listed below are five often-overlooked topics essential for a complete and trouble-free Meraki installation.

    1. AP Addressing (Static or DHCP)
    2. Naming
    3. Tagging
    4. Installation Photos
    5. Floorplans

    We will be discussing each of these topics in the next few blog articles.

    AP Addressing

    By default, Meraki access points will request an IP address through the Dynamic Host Configuration Protocol (DHCP). If you are putting your APs on a client-serving network (which can be OK in a small office environment), that’s usually all they need to get started. However, larger, more complex network designs often dictate that access points’ management interfaces live on a dedicated AP VLAN or maybe a common infrastructure management VLAN. In those cases, DHCP may not be enabled by default. There are a few options in this situation.

    First, DHCP can be enabled for the VLAN. If this is done, the approach I like to take is to set that DHCP scope up on the router or layer 3 switch serving the VLAN, rather than on a Windows AD server or similar. Why? The simple fact is that VLANs used for infrastructure are easy to ignore; sometime in the future the DHCP scopes might be migrated or the server decommissioned, with little or no attention paid to the scope defined for a non-user VLAN. In that case, having the layer 3 switch provide DHCP may be the more stable choice. Also, unlike client workstations, there is no strong need to register reverse-DNS PTR records for the APs, so putting the APs’ DHCP scope on a network device keeps all the configuration needed for the APs “in the network.”
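
    To make that concrete, here is a minimal sketch of what such a scope might look like on a Cisco IOS layer 3 switch. The VLAN subnet, gateway, and DNS addresses are hypothetical placeholders; adjust them to match your own AP management VLAN.

    ! Hypothetical AP management scope served directly by the layer 3 switch
    ip dhcp excluded-address 10.20.30.1 10.20.30.10
    !
    ip dhcp pool AP-MGMT
     network 10.20.30.0 255.255.255.0
     default-router 10.20.30.1
     dns-server 10.20.30.5 10.20.30.6
     lease 7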

    Now, there may be times when DHCP addressing for your access points is not feasible or desirable. One support issue I’ve encountered more than once occurs when the DHCP scope that supplies IP addresses to the APs is exhausted (this usually happens when a client-facing subnet is used for the access points). Eventually an AP is unable to renew its IP address, and it stops working. If an exhaustion-prone subnet is the only option for addressing the APs, DHCP may not be a good choice. Perhaps security policy prohibits the management network from providing DHCP service. Or maybe the administrator simply prefers that all infrastructure devices have fixed IP addresses. In these cases, there are two ways to assign static IPs to your Meraki APs.

    If you can initially provide an IP address via DHCP, the AP will check into the Meraki Dashboard, and assigning a static IP is simple.

    First, go into the access point details page under Wireless > Access points, and click the access point in question. To the left of the main browser pane, you will see the IP settings. Click the pencil icon to edit them:

    A small box will appear, in which you can select the addressing type. Here, you can select “Static IP:”

    The box will expand and you can enter the AP addressing details, like this:

    After filling in the appropriate details, click the Save button. The AP will reboot, and should come up at its new, static IP address. Rinse and repeat for the other APs. Note that the “VLAN” field only needs to be populated if the AP’s management VLAN is not the untagged/native VLAN for the connected switch port.
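
    If you have more than a handful of APs to convert, the same change can also be scripted rather than clicked through per device. Below is a minimal Python sketch against what I understand to be the Meraki Dashboard API v1 management interface endpoint; the API key, serial numbers, and addressing are hypothetical placeholders, so treat this as a starting point rather than a finished tool.

    import requests

    API_KEY = "your-dashboard-api-key"   # hypothetical placeholder
    BASE_URL = "https://api.meraki.com/api/v1"
    HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

    # Hypothetical serial-to-address mapping for the APs being converted to static IPs
    AP_ADDRESSES = {
        "Q2XX-XXXX-XXX1": "10.20.30.11",
        "Q2XX-XXXX-XXX2": "10.20.30.12",
    }

    for serial, ip in AP_ADDRESSES.items():
        payload = {
            "wan1": {
                "usingStaticIp": True,
                "staticIp": ip,
                "staticSubnetMask": "255.255.255.0",
                "staticGatewayIp": "10.20.30.1",
                "staticDns": ["10.20.30.5", "10.20.30.6"],
                "vlan": 30,  # only needed if the management VLAN is tagged on the switch port
            }
        }
        response = requests.put(
            f"{BASE_URL}/devices/{serial}/managementInterface",
            headers=HEADERS, json=payload, timeout=30,
        )
        response.raise_for_status()
        print(f"{serial}: static IP {ip} applied")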

    What about an instance where you cannot bring the AP up on DHCP initially? If the AP must be statically addressed from the start, before it can even reach the Meraki Dashboard, you need to locally connect to the AP.

    In this case, power up the factory-fresh AP and it should begin broadcasting an SSID of “Meraki Setup.” Connect to this SSID, and then open your browser and go to “ap.meraki.com.” You should see a status page like this:

    Switch to the “Configure” tab at the top, and when prompted for a Username and Password, enter the serial number of the AP under the Username and click OK, like this:

    The Configure screen will allow you to choose the Uplink addressing mode:

    As before, select Static, then fill in the addressing details and click Save at the bottom of the screen. After restarting, your AP should be able to connect to the Internet and the Meraki Dashboard via the statically-programmed IP address.

    Hopefully, this blog article has been helpful in getting your Meraki access points addressed and connected to the Meraki cloud in a reliable manner. As always, the networking experts at High Availability are available to assist you with your networking project.

    Next time, we will cover tips for deploying your Meraki hardware using the Meraki mobile app and taking advantage of some special Meraki features!

  • 3 Ways To Protect Your Cisco SIP Voice Network

    December 27th, 2017

    The placement of Voice Infrastructure devices on the network presents some challenges to voice engineers. They must satisfy the security requirements of the Enterprise while providing simple and uninterrupted access to external systems such as PSTN SIP trunks and internal systems such as VoIP phones and servers. In this article, we discuss Voice security from a Routing and Switching point of view. This article is not about internal Voice security mechanisms such as authentication and encryption. 

    Obviously, we do not recommend terminating external SIP trunks on Unified Communications Manager servers directly. Therefore, in this article, we will talk about the placement of Voice Gateways on the network and the interaction between the various Voice components. We consider the Voice Gateway to be a CUBE that terminates all media streams. 

    Placing a Voice Gateway on the internal network is a major security risk. It exposes your entire enterprise to an external entity, even if that entity is your trusted voice carrier. Therefore, all our designs require placing the Voice Gateway on a DMZ behind a NextGen firewall such as a Cisco ASA with Firepower services. First, the ASA performs SIP inspection and can apply security ACLs to filter inbound traffic, allowing connections only to specific IPs such as your Voice Gateway’s SIP signaling and media addresses. Second, NextGen firewalls can detect and protect against malware embedded in the data stream.

    The following scenarios are based on some very common WAN designs that we have seen deployed at our customer sites of various sizes.

     

    I. Dedicated PSTN SIP trunk via a dedicated Voice Gateway.

    This is the simplest setup: a primary dedicated SIP trunk over a private T1 or Ethernet WAN connection and an optional backup SIP trunk via the Internet. Note that the backup SIP trunk can also be hosted on a separate Voice Gateway to provide hardware redundancy. The following diagram illustrates this design.

    In this scenario, the Voice Gateway is on a dedicated DMZ behind the NextGen firewall. The firewall has SIP inspection turned on and only allows inbound SIP control traffic sourced from the inside IP of the Voice Gateway to the CUCM. Similarly, an outbound ACL controls SIP signaling traffic from CUCM to the Voice Gateway. SIP media traffic will be automatically allowed based on the negotiated RTP ports.
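
    As an illustration, here is a heavily simplified ASA configuration fragment for this kind of policy. The interface names, object names, and IP addresses are hypothetical placeholders and are not taken from any specific deployment.

    ! Hypothetical addressing: Voice Gateway (CUBE) on the DMZ at 172.16.50.10, CUCM at 10.10.20.10
    object network CUBE_DMZ
     host 172.16.50.10
    object network CUCM
     host 10.10.20.10
    !
    ! Inbound from the DMZ: only SIP signaling from the Voice Gateway to CUCM
    access-list DMZ_IN extended permit tcp object CUBE_DMZ object CUCM eq 5060
    access-list DMZ_IN extended permit udp object CUBE_DMZ object CUCM eq 5060
    access-group DMZ_IN in interface voice_dmz
    !
    ! Outbound from the inside: only SIP signaling from CUCM to the Voice Gateway
    access-list INSIDE_IN extended permit tcp object CUCM object CUBE_DMZ eq 5060
    access-list INSIDE_IN extended permit udp object CUCM object CUBE_DMZ eq 5060
    access-group INSIDE_IN in interface inside
    !
    ! SIP inspection opens the negotiated RTP ports automatically
    policy-map global_policy
     class inspection_default
      inspect sip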

     

    II. Voice Gateway is co-located with your MPLS router

    Today’s most common WAN designs utilize MPLS as the underlay infrastructure for Enterprise WAN connectivity. It is also not uncommon for an MPLS carrier to provide PSTN SIP trunks over the same physical fiber or copper cable. Therefore, to save on hardware, your Voice Gateway might be co-located with your WAN ISR router. If your MPLS/SIP provider separates the two functions and terminates each on a separate VLAN, then it is possible to use VRF (Virtual Routing and Forwarding) to virtualize your router: the Voice portion can use the global routing table while the MPLS portion uses a VRF. Otherwise, both services can share the global routing table. The following diagram illustrates the former scenario:

    In the above diagram, we go a step further by separating MPLS and Voice Routing tables. Note that we are only discussing Voice Security. You still need to configure Quality of Service (QoS) to prioritize your VoIP traffic whether you are using a VRF or not. QoS is outside the scope of this article.
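
    To make the separation concrete, here is a minimal IOS sketch of a co-located router with the SIP trunk in the global routing table and the MPLS service in a VRF, as described above. The VLAN numbers, interface names, and addresses are hypothetical placeholders.

    ! Hypothetical carrier hand-off: VLAN 100 carries SIP, VLAN 200 carries MPLS
    vrf definition MPLS
     address-family ipv4
    !
    interface GigabitEthernet0/0/0.100
     description Carrier SIP trunk (global routing table)
     encapsulation dot1Q 100
     ip address 198.51.100.2 255.255.255.252
    !
    interface GigabitEthernet0/0/0.200
     description Carrier MPLS service (VRF MPLS)
     encapsulation dot1Q 200
     vrf forwarding MPLS
     ip address 203.0.113.2 255.255.255.252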

    As in the previous scenario, the security ACLs, firewall inspection, and Firepower services protect the Enterprise Voice Infrastructure from external attacks.

     

    III. Dedicated Voice Gateway and a shared MPLS/SIP connection

    This is a very common design in which the customer orders an MPLS and a SIP trunk from the same carrier as in the 2nd scenario above. However, the customer is using a dedicated Voice Gateway with an optional backup SIP trunk to the same carrier via the Internet.

    The following diagram illustrates this solution:

    In this scenario, a single Voice Gateway sits behind a NAT firewall. The VG communicates with the SIP provider via the “out_voice” interface and communicates to CUCM and the IP Phones via the “in_voice” interface. NAT is only used when communicating over the backup SIP trunk via the Internet connection. With recent firewall firmware, we see no issues in NAT’ing SIP control and media traffic. As in all previous scenarios, the Voice Gateway sits on a DMZ segment and the CUCM sits on a dedicated internal VLAN. Note that the “in_voice” interface can be connected directly to the internal LAN without going through the firewall.
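
    For the backup Internet trunk, the NAT piece on the firewall can be as simple as a static one-to-one translation of the Voice Gateway’s “out_voice” address, with SIP inspection taking care of the embedded addresses and negotiated RTP ports. The object name and addresses below are hypothetical placeholders, written in ASA 8.3+ object NAT syntax.

    ! Hypothetical: CUBE outside-voice interface at 172.16.60.10, public IP 203.0.113.10
    object network CUBE_OUT_VOICE
     host 172.16.60.10
     nat (voice_dmz,outside) static 203.0.113.10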

    Summary

    While there are many ways to protect your Voice Infrastructure, we have highlighted three common scenarios and listed the advantages of having Voice control and media traffic inspected by a NextGen firewall. Other more complicated designs can be derived from the above scenarios especially if you combine your DMVPN, MPLS, or iWAN technologies into a single device.

  • High Availability, Inc. Hosts Star Wars Movie Premiere

    December 21st, 2017

    Last week, High Availability, Inc. welcomed customers and partners alike to the Movie Tavern in Collegeville, PA for an advance screening of Star Wars: The Last Jedi. The annual movie premiere has been a cornerstone event for High Availability, Inc. since 2012. It’s a chance for the H.A. team to thank customers for their business during the past year, and it allows everyone, even our partners, to kick back and relax.

    “Our movie premiere event is a great chance for our employees, customers, and partners to get together and escape the end-of-year busyness,” said Greg Robertson, Chief Financial Officer for High Availability, Inc. “The event creates the perfect mixture of industry news, education, and fun,” Robertson added.

    However, the event wasn’t as relaxing for our speakers! H.A.’s infamous “Ignite-Style” presentations took place before the film. Speakers from Cisco, NetApp, Quantum, Riverbed, Rubrik, Varonis, Veeam, and High Availability, Inc. each had to present an emerging technology or solution from their organization. The H.A. team, however, set forth some unusually strict rules! Each speaker had only 5 minutes to present and had to use exactly 20 slides – no more, no less. The slides were timed to advance every 15 seconds, whether the speaker was ready or not.

    “The Ignite format was challenging for the speakers, but helpful for the audience,” explained Pat Hopkins, a speaker from Quantum. “The speakers had to prepare a short, fast-moving, but focused message to get their point across. With this style, the audience can quickly absorb valuable information about potential solutions they could bring back to their own organizations. Great format,” Hopkins added.

    The most talked-about presentation of the evening, delivered by Bob McCouch from High Availability, Inc., covered the most popular emerging technologies from each year a Star Wars film was released. McCouch, a Principal Technologist for High Availability, Inc., discussed everything from the rise of the modern cell phone to big data and the Internet of Things.

    In short, the High Availability, Inc. movie premiere was an enormous success! We can’t wait for next year! Thanks to all our customers and partners for participating in the event.


  • Quantum and Veeam Just Keep Getting Better Together

    December 19th, 2017

    In the world of backup and recovery, deduplication appliances have always served a clear purpose: keeping large amounts of data for long periods of time. On the backup software side, Veeam has continued to grow as the platform of choice for virtual environments.

    Quantum recently announced new integration with Veeam in the form of Data Mover software for the DXi and an iBlade for Scalar libraries, both of which integrate very nicely. For now, I would like to focus on the data mover. The Veeam Data Mover is a piece of software that runs directly on the DXi and performs some of the processing for Veeam Backup and Replication. By running the target data mover directly on the DXi, some of the backup and restore operations within Veeam become much more efficient and complete more quickly. The backup data is sent by the Veeam backup server to the data mover, which in turn writes to an NFS share on the DXi for deduplication, compression, and, eventually, storage.

    Configuration is straightforward, and requires only a few steps.  There are a few prerequisites that must be met, which are listed below:

    1. A Veeam License is required, and must be installed on the DXi
    2. The DXi system must be running at least version 3.4 firmware
    3. A memory upgrade may be required on the DXi
    4. The DXi must have NAS support

    My focus for this article is on configuration of the Veeam Data Mover integration, so any of the above listed requirements for the DXi can be addressed by your local Quantum reseller.

    The first step on the DXi is to create the NAS share. Create a new NFS NAS share with deduplication enabled. If replication is needed, Quantum replication can easily be configured on the share as well. Once the share is added, there is a new tab in the 3.4 firmware called “Application Environment.” By checking the “Enable Veeam” box and entering a new password, the DXi software will build the data mover environment. That’s it – a few simple steps and the DXi is ready for Veeam.

    Now that the DXi is ready as a target for the data, we must configure Veeam to send it there. In the Backup Infrastructure tab, add a new Linux server. Use the name (or IP address) associated with a data port on the DXi and add the credentials that were specified during the Quantum configuration. This will add the newly created data mover as a managed Linux server within Veeam.

    Next, once again go to the Backup Infrastructure tab and add a new Backup Repository. The specified type for this repository will be Linux Server. When you add the server and the path screen is displayed, click Populate, then click Next to move to the repository folder path. Click “Browse” and drill down to the newly created NFS NAS share on the DXi. Make sure to select “Decompress backup data blocks” and “Use per-VM backup files.” Default settings can be used for the rest of the wizard.

    Once completed, you will have successfully created a new Veeam Repository on the Quantum DXi which resembles the diagram below:

    The finished product is an extremely efficient way not only to back up and store virtual machines, but also to recover them. You get the Quantum DXi with StorNext data management software to maximize efficiency, as well as industry-leading variable-length deduplication technology. Combined with Veeam to manage the backup and recovery process and move the data, it doesn’t get any better. Recovery of virtual machines, whether instant recovery or individual files, is just as easy and efficient. Protecting your virtual machine data has never been easier.

  • Commvault GO 2017 Recap

    November 13th, 2017

    Last week, thousands of IT professionals gathered in Washington, D.C. for Commvault GO 2017. Commvault GO is Commvault’s annual technical conference for storage and data management professionals. Now in its second year, Commvault GO was created to showcase the newest products and solutions from Commvault and to give end-users an update on the strategic direction of the company.

    The agenda was jam-packed with fascinating presentations, breakout sessions, and technical trainings. Luckily, I created a thorough plan for each day so I could make the most of my time at Commvault GO! However, after walking the floor and attending several breakout sessions and technical trainings, I quickly recognized two recurring themes:

    1. Organizations need to shift to a “Data-Centric” approach
    2. Commvault is making strides to simplify the deployment and management of their solutions

    So, let me break down these two themes:

    Robert Hammer, CEO of Commvault, did a brilliant job in his keynote presentation speaking to the “Data-Centric” approach. Hammer spoke about how customers today are starting to shift their mindset from infrastructure-centric to data-centric due to a dramatic increase in new data generation, which is predicted to grow roughly tenfold from today to 163 zettabytes by 2025 (for those keeping score at home). He explained that customers will be forced to focus less on their IT infrastructure and more on the data stored on and moving between that infrastructure. Commvault’s answer to this challenge is the Commvault Data Management Platform. Simply stated, the platform is designed to help organizations achieve better data insights for compliance, e-discovery, and a variety of other digital transformation use cases.

    Today, customers are trying to figure out how to distribute and share their data across multiple platforms, which makes where the data lives irrelevant. At the same time, data protection is critical. There seems to be a consensus that it’s no longer a matter of “if” a ransomware attack will hit but rather “when.” No matter the severity of the breach, whether just a few servers were affected or your entire system crashed, the goal is to get operations back online quickly and with minimal data loss. With tools like IntelliSnap and Live Sync, customers have the resources to rapidly recover in place or restore operations in another location with minimal downtime, even on completely different infrastructure. The key to making this all work is planning. In fact, every customer that presented spoke about their S.O.P. for ransomware attacks. To say “measure twice and cut once” is an understatement, especially when factoring public cloud into the equation. The industry is trying to navigate these waters carefully, and ultimately we are all going to have to get to this place to continue to innovate and grow in our respective industries.

    The second theme, simplicity, I found very refreshing. While Commvault’s capabilities are vast and go far beyond just backup and recovery, that robust feature set has in the past carried the misperception of the solution being complex to manage. As a result, Commvault has come out with new and innovative solutions that streamline these processes and make them more operationally efficient. The first new streamlined solution, and one of the most talked-about products of the conference, is Commvault Hyperscale.

    Commvault Hyperscale can be used as an appliance or as software. It consolidates all the roles performed by discrete servers in the traditional data protection architecture into a single software defined stack. As mentioned previously, this new offering follows Commvault’s new simplistic and stripped-down approach:

    1. Setup time for the appliance is about 30 minutes
    2. You only buy what you need today, since growing the solution is as simple as adding three nodes at a time, scaling out CPU, memory, and storage resources together
    3. No more data silos or forklift migrations when it’s time to refresh
    4. More resiliency is built into the solution, as losing a drive or even an entire node does not take down the backup environment
    5. Operational efficiency (especially when using the appliance), as there are fewer components to maintain (no storage array, no SAN, etc.)
    6. Supported on 7 platforms and can be consumed as an appliance or as software

    This is definitely a step in the right direction for Commvault. Reducing what used to be several weeks of work to just days is every IT professional’s dream, especially when a lot of automation is involved.

    To summarize, Commvault GO 2017 was fascinating, and I could not be more pleased with the direction Commvault has chosen to take. Their understanding of the customer’s thought process and their simplified solutions show that Commvault has listened to our feedback and is growing as an organization. I can’t wait for Commvault GO 2018!

  • High Availability, Inc. Receives Execution Excellence Award as Regional Partner of the Year at Cisco Partner Summit 2017

    November 9th, 2017

    Last week at Cisco’s annual partner conference in Dallas, Texas, Cisco named High Availability, Inc. its Regional (U.S. East) Partner of the Year for innovation, leadership, and best practice as a Cisco business partner. Bill Volovnik, Partner Account Manager for Cisco, accepted the award on behalf of the entire organization.

     

    “We couldn’t be more excited and honored to receive the America’s Regional Partner of the Year award from Cisco,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. “Cisco has become our most strategic business partner over the last three years. We have an amazing team of sales people and engineers focused on designing, delivering and supporting Cisco’s entire portfolio. We look forward to continuing our success, and strengthening our Cisco partnership even more in FY18.”

     

    Steve Eisenhart, CEO and founder of High Availability, Inc., will formally accept the award on behalf of the entire organization in a private awards ceremony with Cisco later this month.

     

    Cisco Partner Summit Theatre awards reflect the top-performing partners within specific technology markets across the United States. All award recipients are selected by a group of Cisco Global Partner Organization and regional and theatre executives. 

     

    Cisco Partner Summit draws more than 2,100 attendees from Cisco’s ecosystem of partners, representing more than 1,000 companies worldwide from more than 75 countries.

     

  • Introducing NetApp HCI: The Next Generation of Hyper Converged Infrastructure

    October 12th, 2017

    Last week, thousands of IT professionals gathered in Las Vegas for the 2017 NetApp Insight conference. NetApp Insight is NetApp’s annual technical conference for storage and data management professionals. The conference was full of technical sessions, round-table discussions, self-paced hands-on labs, certification courses and much more.

    One of the highlights of this year’s conference was the official unveiling of NetApp HCI, the next generation of hyper converged infrastructure and the very first HCI platform designed for enterprise scale. Chief Product Architect Adam Carter presented a brief overview of NetApp HCI, touching on hardware specifications, an installation/administration demo, performance guarantees, ONTAP Select integration, and Data Fabric consumption options.

    Let’s dive a little deeper into what NetApp HCI has to offer:

    Hardware

    Each 2U chassis holds four half-width (1RU) storage and/or compute nodes. The base model starts with a dual-chassis (4RU) solution consisting of two blank slots for expansion, four storage nodes, and two compute nodes for high availability (HA) and N+1. The minimum configurable model includes 32 cores for VMs, 512GB of memory, and 5.5TB-11TB (depending on storage efficiency) of all-flash capacity. Node sizes can be mixed and matched to achieve the desired host and storage specifications. Each node can push a staggering 50-100k IOPS depending on the type of workload. The best part is that there isn’t a controller VM (CVM) for operations, so the CPU and memory shown are dedicated to VMs (no “HCI tax”).

    Cloud Scale and Datacenter Integration

    Because the network is decoupled, unlike in a traditional HCI model, storage and compute nodes scale independently and can coexist within the same 2U chassis. The ability to incrementally and independently scale storage or compute nodes creates a cloud-like “grow-as-you-need” model. This eliminates the need for a large investment every time there is a requirement to scale out. Nodes can scale into the “100s,” according to Chief Product Architect Adam Carter. For additional cost savings, the cloud scale model eliminates the need to purchase additional ESXi licenses just to add storage nodes, as traditional HCI platforms require.

    To scale, you add the node and run through a simple two step process. The node is then non-disruptively and seamlessly assimilated into the ESXi cluster or single storage pool. The same rule applies to a node failure. After replacing a failed node, the self-healing functionality of the cluster kicks in and brings the node back into the cluster with a short two step process.

    Traditional HCI models are difficult to phase in and often require a full datacenter refresh. NetApp HCI leverages existing SAN/NAS switches and can present iSCSI storage to external servers for permanent use or to utilize a phase out approach. 

    Installation and Administration

    With over 400 steps automated by the NetApp Deployment Engine (NDE), the NetApp HCI cluster is VM ready in under an hour. To deploy NetApp HCI, the user accepts the VMware and NetApp EULAs, sets the admin password, then enters the IP information for storage and VM networking. The HCI cluster then automatically installs the SolidFire Element OS on the storage nodes, installs VMware ESXi on the server nodes, deploys a new vCenter or alternatively integrates with an existing vCenter, then installs NetApp HCI management plugins and the lightweight management VM used for alerting, management, and phone home. The system is now fully VM ready.

    NetApp HCI is integrated with vSphere, eliminating the requirement to learn a new UI. The automatically injected NetApp HCI plugins allow administrators to add new volumes by selecting the size, performance SLA, and target hosts (how big, how fast, and who needs it). The volumes are then automatically created and presented to the appropriate resources. Alerting is further integrated, giving users the ability to view storage-related events directly through vCenter.

    Advanced Storage Services

    Simplicity is one of the key components to most HCI implementations. The tradeoff to simplicity usually means administrative capabilities are limited and unlocking advanced features is cumbersome. NetApp has considerably simplified their APIs and added what I think is a learning feature. Not only did NetApp consolidate and simplify their API commands, they added a checkbox to the GUI which will give administrators the API output. The administrator can then modify the API output to gain higher-level and repeatable management, orchestration, backup, and disaster-recovery options.  

    For even greater advanced functionality, NetApp HCI goes beyond SAN. ONTAP Select comes prepackaged and deployed at no additional CPU, memory, or storage cost. This gives organizations best in class NAS file services for advanced CIFS and NFS configuration and protection.

    Performance Guarantee

    NetApp HCI storage is built on the SolidFire platform, which was originally designed for cloud providers with multiple customers sharing the same infrastructure. Besides building in multitenancy, SolidFire added comprehensive Quality of Service (QoS) capabilities to guarantee performance for all tenants and workloads. NetApp HCI leverages QoS at the aggregate level, or with granular control at the VM level through the automated VVol integration. Administrators can define and enforce performance guarantees with minimum, maximum, and burst settings for each volume or application, independent of capacity.
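
    As an illustration of what those per-volume settings look like programmatically, here is a minimal Python sketch against the SolidFire Element JSON-RPC API that underpins NetApp HCI storage. The cluster address, credentials, API version, volume ID, and IOPS values are hypothetical placeholders; consult the Element API documentation for the exact method signatures in your release.

    import requests

    # Hypothetical cluster management VIP, credentials, and Element API version
    ENDPOINT = "https://cluster-mvip.example.com/json-rpc/10.0"
    AUTH = ("admin", "password")

    # Guarantee 1,000 IOPS, cap at 5,000, and allow short bursts to 8,000 for volume 42
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": 42,
            "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000},
        },
        "id": 1,
    }

    # verify=False assumes the cluster's default self-signed certificate; use proper
    # certificate verification in production
    response = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False, timeout=30)
    response.raise_for_status()
    print(response.json())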

    Data Fabric Ready

    It was also announced that SnapMirror will be added to NetApp HCI in the very near term, enabling administrators to migrate data between NetApp portfolio products. Workloads will soon be able to move between, or be protected at, endpoints across NetApp HCI, FAS/AFF, ONTAP Select, and ONTAP Cloud deployments. Below are the fundamental components of the NetApp Data Fabric ecosystem that HCI can leverage:

    • File services with ONTAP Select
    • Data protection with Snapshot and cloning capabilities
    • Data management and monitoring with Active IQ and OnCommand Insight (OCI)
    • Backup and recovery with AltaVault, Commvault, or Veeam snapshot integration
    • Object services with StorageGRID
    • Replication via SnapMirror
