Blog

  • Intel Vulnerabilities: What We Want Our Cisco Customers to Know

January 23rd, 2018

Earlier this month, security researcher Jann Horn from Google’s Project Zero reported the discovery of serious vulnerabilities in most modern CPUs, most notably Intel’s. According to the researchers, “virtually every user of a personal computer” is at risk.

The vulnerabilities, now collectively referred to as Meltdown and Spectre, exploit performance-enhancement features in modern processors. If exploited, they could give hackers access to passwords, photos, and other sensitive data. Intel CPUs, which drive most computer platforms in the market today, are the most heavily affected because they include a few optimizations that AMD and ARM CPUs do not. ARM CPUs are more common in mobile devices; they are apparently not affected by the Meltdown attack, but most are vulnerable to the Spectre attacks.

These attacks allow one process to access data, including memory page contents, that would otherwise be restricted. According to Google’s research, this could even extend to breaking the VM isolation boundary: a VM running malicious code may be able to access the memory of another VM on the same host. This is detrimental for data centers and even worse for cloud platforms.

Cisco was one of the first networking vendors to come forward with comments on the situation. They assured end-users that there is no attack vector on their appliance platforms unless another vulnerability allowing arbitrary code execution is exploited first. In other words, these attacks require local code to run on the target machine. Since routers, switches, firewalls, and load balancers generally do not allow just anyone to run a piece of software, an attacker would first need another exploit to run arbitrary code, and only then could theoretically execute one of these new attacks. Such arbitrary code execution vulnerabilities are found from time to time, but keeping up with software updates on networking appliances should prevent most attacks. This also goes for virtual appliances (such as ASAv, CSR1000V, NGFWv, etc.), though Cisco acknowledges that if the virtual server platform these systems run on is compromised, the virtual appliances could be impacted.

One exception to the limited impact on routers and switches is platforms that can run user-specified processes in containers, such as the Open Agent Container feature on some Cisco Nexus switches or Open Service Containers on Cisco ASR routers. Always make sure the software you run in containers on your network equipment comes from trusted sources.

That leads us to where the real impact is: server platforms.

Attackers leveraging Spectre and Meltdown will primarily be targeting servers. Even though these attacks are relatively difficult to execute, they still have the potential to surface in the wild. Since server OSes can, by nature, execute arbitrary code, these exploits can be run on any system where malicious code has been placed in some way. As of right now, no viruses, worms, or malware are known to use these attacks, but it’s probably just a matter of time. These exploits cannot be run just by visiting a website, viewing an email, and so on; they need code to run locally on the target machine. But typical malware insertion methods (whether a worm or a phony email attachment) could be used to get that code onto a target machine when combined with other attacks.

Click Here to view the Cisco PSIRT page, which has details on the exposure of their products.

UCS servers are vulnerable because, like most servers today, they use Intel CPUs. This is not Cisco’s fault; they use the affected CPU components, as does nearly every server shipped since around 1995. Per the Cisco PSIRT advisory linked above, microcode updates will be available on February 18th for most Cisco server platforms, which, when applied in concert with the applicable OS updates, should mitigate the vulnerabilities. The UCS update will be a BIOS update that refreshes the CPU microcode.

    Mitigations and Mitigation Impacts:

    For Meltdown, the easiest mitigation is an OS patch. Apple, Microsoft, and Linux all have patches available. Operating Systems derived from Linux will need to be updated once the upstream updates are incorporated into their builds. This is the case for many embedded platforms (including some Cisco routers, switches, etc.).
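If you want to verify whether a given Linux host has actually picked up the kernel-side mitigations, recent patched kernels expose their status under /sys/devices/system/cpu/vulnerabilities. Here is a minimal sketch in Python, assuming a kernel new enough to publish those files (older or unpatched kernels simply will not have them):

    import os
    import glob

    # Patched Linux kernels expose one file per known CPU vulnerability
    # (e.g. meltdown, spectre_v1, spectre_v2). Each file reports whether
    # the CPU is affected and which mitigation, if any, is active.
    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    def report_mitigations():
        if not os.path.isdir(VULN_DIR):
            print("Kernel does not report vulnerability status (likely unpatched or too old).")
            return
        for path in sorted(glob.glob(os.path.join(VULN_DIR, "*"))):
            with open(path) as f:
                print(f"{os.path.basename(path):>12}: {f.read().strip()}")

    if __name__ == "__main__":
        report_mitigations()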

There is a CPU performance impact, since the mitigations disable certain performance features. The extent of that performance hit is highly dependent on workload. Apparently, a single-workload system, or one that does limited context switching (e.g., an end-user laptop or desktop), will not see a significant impact. For the same reason, our engineers suspect that dedicated network appliances (routers, switches) will not see much impact even when they are eventually patched at the OS level.

Where it gets concerning is the server/data center/virtualization environment. Here, context switching is very frequent because multiple workloads run on each CPU. The mitigation patches not only disable some performance enhancements in the CPUs, but also require doing MORE work to secure memory contents before switching to a different task. The numbers generally floating around for impact are 5-30% - that’s a lot. Again, it is hard to predict the impact on a specific workload.
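To get a rough feel for this on your own systems, one approach is to time a tight loop of cheap system calls before and after patching; the kernel-level mitigations make every user-to-kernel transition more expensive, so syscall-heavy and context-switch-heavy workloads show the difference most clearly. The Python snippet below is a purely illustrative sketch (the absolute numbers will vary widely by hardware, OS, and kernel version):

    import os
    import time

    def microseconds_per_stat(iterations=200_000):
        """Time a burst of os.stat() calls; each one crosses the user/kernel boundary."""
        start = time.perf_counter()
        for _ in range(iterations):
            os.stat(".")  # cheap syscall, dominated by the transition cost
        elapsed = time.perf_counter() - start
        return (elapsed / iterations) * 1e6

    if __name__ == "__main__":
        # Run before and after applying the OS patches and compare the results.
        print(f"~{microseconds_per_stat():.2f} microseconds per stat() call")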

Here is a graph showing the impact on two server instances before and after OS patching:

    (Source: https://twitter.com/ApsOps/status/949251143363899392)

    You can see a 10-20% jump in CPU. Ouch.

Click Here to see more examples of impacted workloads. Some show a negligible impact, while others take a significant hit.

    In the link above, Redis and PostgreSQL (both database platforms) saw the most impact.

Mitigating the Spectre attacks is more difficult, and no specific mitigations have yet been released for most platforms.

So, the potential here is that everyone effectively loses CPU capacity in their compute environments. We can’t predict how much, but potentially enough to require a larger server farm to handle a given workload than was previously required. Once patches are applied, end-customers will start to see the impact on their workloads and will have to adjust future sizing appropriately – or, in the worst case, they may need to backfill the “lost” CPU capacity if their workloads were already taxing their CPU resources.

    Tips going forward:

    • Keep an eye on the PSIRT page for Cisco details
• Conduct operating system patch procedures ASAP (despite the potential performance hit)
    • Reach out to your H.A. account manager to set up any upcoming UCS updates

     

  • A Charitable Culture

January 11th, 2018

    “If you haven't any charity in your heart, you have the worst kind of heart trouble.”

    - Bob Hope

One of the main reasons I joined the High Availability, Inc. team was the company culture.  Having worked closely with High Availability, Inc. during my years at NetApp, I was able to see first-hand how strong that culture was.  In my opinion, company culture encompasses many aspects - from positive employee morale, to a readiness to support your teammates, to, ultimately, having employees who love going to work every day.  Beyond this, another important factor to me personally is a company’s willingness to give back charitably to both its community and worthy causes.

Charity is a huge part of my life outside of work.  Since 1989, my family and close friends have run the JAG Fund, a charity supporting brain tumor research in memory of my brother. Because of this, I’ve always valued charitable efforts in my professional life, and after joining the team I quickly realized that giving back was just as important to High Availability, Inc.

Here are a few examples that prove this…

As mentioned previously, my entire family is deeply involved in the JAG Fund. Every year, we host a Black-Tie Gala to raise both support and awareness for brain tumor research. I was thrilled when my teammates at High Availability, Inc. pledged their support for the JAG Fund through a corporate sponsorship of our Black-Tie Gala! Not only did H.A. support the JAG Fund financially, but they also attended the event. In fact, my boss, Steve Eisenhart, was rated one of the top dancers that evening, and the High Availability, Inc. table was by far the most supportive during our silent auction. They even set a record for their winning bids!

A second example was High Availability, Inc.’s partnership with Tech Impact, a non-profit organization headquartered in Philadelphia that provides cost-effective IT support for other non-profit organizations.  Tech Impact also helps young urban adults move into IT careers through its ITWorks program.  Having supported Tech Impact prior to coming to High Availability, Inc., I knew how worthy an organization it was, so I was thrilled when the executive team at H.A. approved a sponsorship for the annual Tech Impact Luncheon. I was fortunate enough to attend the event with several other members of our team and even a few customers.  It was amazing to see first-hand the positive impact of our contribution!

    Lastly, but most impressive in my opinion, was High Availability, Inc.’s support of Hurricane Harvey relief efforts. H.A. specifically contributed to the J.J. Watt Foundation.  The aftermath of Hurricane Harvey was absolutely devastating for the state of Texas. The Category 4 storm dumped over 27 trillion gallons of rain over Texas, and forced over 30,000 people to seek temporary shelter.

    After the storm passed, High Availability, Inc. acted immediately. The team set up a relief fund program for those affected by Hurricane Harvey, and even matched each contribution dollar-for-dollar. Within just a few minutes, the H.A. team donated over $3,000!

    “Everyone stepped up and was happy to help in any way they could,” said Greg Robertson, CFO and Founder of High Availability, Inc. “I feel very fortunate to be working with individuals who see the importance of helping those in need.”

    Within three days, High Availability, Inc. had raised exactly $10,000!

High Availability, Inc. plans to double its charitable efforts in 2018. In fact, Steve Eisenhart, CEO and Founder of High Availability, Inc., has implemented the first-ever “H.A. Challenge,” which encourages employees to give back a full working day to a local charity through volunteering.

Of course, these aren’t the only examples of H.A. giving back, but they are the ones that have stood out most since I joined the organization just over 18 months ago.  Pretty impressive, and a true example of the company embracing a charitable culture!

  • Tips For Meraki WLAN Deployment Part 1: AP Addressing

December 29th, 2017

    Meraki cloud-managed network infrastructure has brought a new level of manageability to the network, and many of High Availability’s customers have found out just how easy it is to operate a Meraki-based network infrastructure. For this series of articles, I wanted to recap a few tips for making the deployment go smoothly as well. Listed below are five often-overlooked topics essential for a complete and trouble-free Meraki installation.

    1. AP Addressing (Static or DHCP)
    2. Naming
    3. Tagging
    4. Installation Photos
    5. Floorplans

    We will be discussing each of these topics in the next few blog articles.

    AP Addressing

    By default, Meraki access points will request an IP address through the Dynamic Host Configuration Protocol (DHCP). If you are putting your APs on a client-serving network (which can be OK in a small office environment), that’s usually all they need to get started. However, larger, more complex network designs often dictate that access points’ management interfaces live on a dedicated AP VLAN or maybe a common infrastructure management VLAN. In those cases, DHCP may not be enabled by default. There are a few options in this situation.

First, DHCP can be enabled for the VLAN. If this is done, the approach I like to take is to set that DHCP scope up on the router or layer 3 switch serving the VLAN, rather than on a Windows AD server or similar. Why? Well, the simple fact is that VLANs used for infrastructure are easy to ignore, and sometime in the future the DHCP scopes might be migrated or the server decommissioned, with little or no attention paid to the scope defined for a non-user VLAN. In this case, the stability of having a layer 3 switch provide DHCP may be desirable. Also, unlike client workstations, the APs have no strong need for reverse-DNS PTR records or the like, so putting the APs’ DHCP scope on a network device keeps all the configuration needed for the APs “in the network.”

Now, there may be times when DHCP addressing for your access points is not feasible or desirable. One support issue I’ve encountered more than once occurs when the DHCP scope that supplies IP addresses to the APs is exhausted (this usually happens when a client-facing subnet is used for the access points). Eventually an AP is unable to renew its IP address, and it stops working. If an exhaustion-prone subnet is the only option for addressing the APs, DHCP may not be a good choice. Perhaps security policy prohibits the management network from providing DHCP service. Or maybe the administrator simply prefers that all infrastructure devices have fixed IP addresses. In these cases, there are two ways to assign static IPs to your Meraki APs.

    If you can initially provide an IP address via DHCP, the AP will check into the Meraki Dashboard, and assigning a static IP is simple.

    First, go into the access point details page under Wireless > Access points, and click the access point in question. To the left of the main browser pane, you will see the IP settings. Click the pencil icon to edit them:

    A small box will appear, in which you can select the addressing type. Here, you can select “Static IP:”

    The box will expand and you can enter the AP addressing details, like this:

    After filling in the appropriate details, click the Save button. The AP will reboot, and should come up at its new, static IP address. Rinse and repeat for the other APs. Note that the “VLAN” field only needs to be populated if the AP’s management VLAN is not the untagged/native VLAN for the connected switch port.
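If you have more than a handful of APs, the same change can also be scripted through the Meraki Dashboard API instead of clicking through each device. The snippet below is only an illustrative sketch: it assumes the v1 management-interface endpoint and field names, and the API key, serial number, and addresses are placeholders - confirm the exact call against the current Dashboard API documentation before using it.

    import requests

    API_KEY = "your-dashboard-api-key"   # placeholder - generate one in the Dashboard
    BASE = "https://api.meraki.com/api/v1"

    def set_static_ip(serial, ip, mask, gateway, dns, vlan=None):
        """Illustrative sketch: push a static management IP to one AP via the Dashboard API."""
        wan1 = {
            "usingStaticIp": True,
            "staticIp": ip,
            "staticSubnetMask": mask,
            "staticGatewayIp": gateway,
            "staticDns": dns,
        }
        if vlan is not None:
            wan1["vlan"] = vlan  # only needed if the management VLAN is tagged on the switch port
        resp = requests.put(
            f"{BASE}/devices/{serial}/managementInterface",
            headers={"X-Cisco-Meraki-API-Key": API_KEY},
            json={"wan1": wan1},
            timeout=30,
        )
        resp.raise_for_status()

    # Hypothetical AP serial and addressing details
    set_static_ip("Q2XX-XXXX-XXXX", "10.10.50.21", "255.255.255.0",
                  "10.10.50.1", ["10.10.10.5", "8.8.8.8"], vlan=50)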

    What about an instance where you cannot bring the AP up on DHCP initially? If the AP must be statically addressed from the start, before it can even reach the Meraki Dashboard, you need to locally connect to the AP.

    In this case, power up the factory-fresh AP and it should begin broadcasting an SSID of “Meraki Setup.” Connect to this SSID, and then open your browser and go to “ap.meraki.com.” You should see a status page like this:

    Switch to the “Configure” tab at the top, and when prompted for a Username and Password, enter the serial number of the AP under the Username and click OK, like this:

    The Configure screen will allow you to choose the Uplink addressing mode:

    As before, select Static, then fill in the addressing details and click Save at the bottom of the screen. After restarting, your AP should be able to connect to the Internet and the Meraki Dashboard via the statically-programmed IP address.

    Hopefully, this blog article has been helpful in getting your Meraki access points addressed and connected to the Meraki cloud in a reliable manner. As always, the networking experts at High Availability are available to assist you with your networking project.

    Next time, we will cover tips for deploying your Meraki hardware using the Meraki mobile app and taking advantage of some special Meraki features!

  • 3 Ways To Protect Your Cisco SIP Voice Network

December 27th, 2017

    The placement of Voice Infrastructure devices on the network presents some challenges to voice engineers. They must satisfy the security requirements of the Enterprise while providing simple and uninterrupted access to external systems such as PSTN SIP trunks and internal systems such as VoIP phones and servers. In this article, we discuss Voice security from a Routing and Switching point of view. This article is not about internal Voice security mechanisms such as authentication and encryption. 

    Obviously, we do not recommend terminating external SIP trunks on Unified Communications Manager servers directly. Therefore, in this article, we will talk about the placement of Voice Gateways on the network and the interaction between the various Voice components. We consider the Voice Gateway to be a CUBE that terminates all media streams. 

Placing a Voice Gateway on the internal network is a major security risk. It exposes your entire enterprise to an external entity, even if that entity is your trusted voice carrier. Therefore, all our designs place the Voice Gateway on a DMZ behind a NextGen firewall such as a Cisco ASA with Firepower services. First, the ASA performs SIP inspection and can apply security ACLs to filter inbound traffic, allowing connections only to and from specific IPs such as your Voice Gateway’s SIP signaling and media address. Second, NextGen firewalls can detect and protect against malware embedded in the data stream.

    The following scenarios are based on some very common WAN designs that we have seen deployed at our customer sites of various sizes.

     

    I. Dedicated PSTN SIP trunk via a dedicated Voice Gateway.

This is the simplest setup: a primary dedicated SIP trunk over a private T1 or Ethernet WAN connection, with an optional backup SIP trunk via the Internet. Note that the backup SIP trunk can also be hosted on a separate Voice Gateway to provide hardware redundancy. The following diagram illustrates this design.

    In this scenario, the Voice Gateway is on a dedicated DMZ behind the NextGen firewall. The firewall has SIP inspection turned on and only allows inbound SIP control traffic sourced from the inside IP of the Voice Gateway to the CUCM. Similarly, an outbound ACL controls SIP signaling traffic from CUCM to the Voice Gateway. SIP media traffic will be automatically allowed based on the negotiated RTP ports.

     

    II. Voice Gateway is co-located with your MPLS router

Today’s most common WAN designs utilize MPLS as the underlay infrastructure for Enterprise WAN connectivity. It is also not uncommon for an MPLS carrier to provide PSTN SIP trunks over the same physical fiber or copper cable. Therefore, to save on hardware, your Voice Gateway might be co-located with your WAN ISR router. If your MPLS/SIP provider separates the two functions and terminates each on a separate VLAN, then it is possible to use VRF (Virtual Routing and Forwarding) to virtualize your router: the Voice portion can use the global routing table while the MPLS portion uses a VRF. Otherwise, both functions can share the global routing table. The following diagram illustrates the former scenario:

    In the above diagram, we go a step further by separating MPLS and Voice Routing tables. Note that we are only discussing Voice Security. You still need to configure Quality of Service (QoS) to prioritize your VoIP traffic whether you are using a VRF or not. QoS is outside the scope of this article.

    As in the previous scenario, the security ACLs, firewall inspection, and Firepower services protect the Enterprise Voice Infrastructure from external attacks.

     

    III. Dedicated Voice Gateway and a shared MPLS/SIP connection

This is a very common design in which the customer orders an MPLS circuit and a SIP trunk from the same carrier, as in the second scenario above. However, the customer uses a dedicated Voice Gateway, with an optional backup SIP trunk to the same carrier via the Internet.

    The following diagram illustrates this solution:

    In this scenario, a single Voice Gateway sits behind a NAT firewall. The VG communicates with the SIP provider via the “out_voice” interface and communicates to CUCM and the IP Phones via the “in_voice” interface. NAT is only used when communicating over the backup SIP trunk via the Internet connection. With recent firewall firmware, we see no issues in NAT’ing SIP control and media traffic. As in all previous scenarios, the Voice Gateway sits on a DMZ segment and the CUCM sits on a dedicated internal VLAN. Note that the “in_voice” interface can be connected directly to the internal LAN without going through the firewall.

    Summary

    While there are many ways to protect your Voice Infrastructure, we have highlighted three common scenarios and listed the advantages of having Voice control and media traffic inspected by a NextGen firewall. Other more complicated designs can be derived from the above scenarios especially if you combine your DMVPN, MPLS, or iWAN technologies into a single device.

  • High Availability, Inc. Hosts Star Wars Movie Premiere

December 21st, 2017

Last week, High Availability, Inc. welcomed customers and partners alike to the Movie Tavern in Collegeville, PA for an advance screening of Star Wars: The Last Jedi. The annual movie premiere has been a cornerstone event for High Availability, Inc. since 2012. It’s a chance for the H.A. team to thank customers for their business during the past year, and it allows everyone, partners included, to kick back and relax.

“Our movie premiere event is a great chance for our employees, customers, and partners to get together and escape the end-of-year busyness,” said Greg Robertson, Chief Financial Officer for High Availability, Inc. “The event creates the perfect mixture of industry news, education, and fun,” Robertson added.

However, the event wasn’t as relaxing for our speakers! H.A.’s infamous “Ignite-style” presentations took place before the film. Speakers from Cisco, NetApp, Quantum, Riverbed, Rubrik, Varonis, Veeam, and High Availability, Inc. each had to present an emerging technology or solution from their organization. The H.A. team, however, set forth some unusually strict rules! Each speaker had only 5 minutes to present and had to use exactly 20 slides – no more, no less. The slides were timed to advance every 15 seconds, whether the speaker was ready or not.

“The Ignite format was challenging for the speakers, but helpful for the audience,” explained Pat Hopkins, a speaker from Quantum.  “The speakers had to prepare a short, fast-moving but focused message to get their point across. With this style, the audience can quickly absorb valuable information about potential solutions they could bring back to their own organizations. Great format,” Hopkins added.

The most talked-about presentation of the evening, delivered by Bob McCouch of High Availability, Inc., covered the most popular emerging technologies from each year a Star Wars film was released. McCouch, a Principal Technologist for High Availability, Inc., discussed everything from the rise of the modern cell phone to big data and the Internet of Things.

In short, the High Availability, Inc. movie premiere was an enormous success! We can’t wait for next year! Thanks to all our customers and partners for participating in the event.

    Click Here to access event pictures

  • Quantum and Veeam Just Keep Getting Better Together

December 19th, 2017

    In the world of backup and recovery, deduplication appliances have always had a great purpose in keeping large amounts of data for long periods of time.  From the backup software perspective, Veeam has continued to grow as the platform of choice for virtual environments.

Quantum recently announced new integrations with Veeam in the form of Data Mover software for the DXi appliances and the iBlade for Scalar libraries, both of which integrate very nicely.  For now, I would like to focus on the Data Mover.  The Veeam Data Mover is a piece of software that runs directly on the DXi and performs some of the processing for Veeam Backup & Replication.  With the target Data Mover running directly on the DXi, some of the backup and restore operations within Veeam become much more efficient and complete more quickly.  The backup data is sent by the Veeam backup server to the Data Mover, which in turn writes to an NFS share on the DXi for deduplication, compression, and eventually storage.

    Configuration is straightforward, and requires only a few steps.  There are a few prerequisites that must be met, which are listed below:

    1. A Veeam License is required, and must be installed on the DXi
2. The DXi system must be running firmware version 3.4 or later
    3. A memory upgrade may be required on the DXi
    4. The DXi must have NAS support

    My focus for this article is on configuration of the Veeam Data Mover integration, so any of the above listed requirements for the DXi can be addressed by your local Quantum reseller.

The first step on the DXi is to create the NAS share.  Create a new NFS NAS share with deduplication enabled.  If replication is needed, Quantum replication can easily be configured on the share as well.  Once the share is added, there is a new tab in the 3.4 firmware called “Application Environment.”  By checking the “Enable Veeam” box and entering a new password, the DXi software will build the Data Mover environment.  That’s it – a few simple steps and the DXi is ready for Veeam.

Now that the DXi is ready as a target for the data, we must configure Veeam to send it there.  In the Backup Infrastructure tab, add a new Linux server.  Use the name (or IP address) associated with a data port on the DXi and add the credentials that were specified during the Quantum configuration.  This adds the newly created Data Mover as a managed Linux server within Veeam.

Next, go back to the Backup Infrastructure tab and add a new Backup Repository.  The specified type for this repository will be Linux Server.  When you add the server and the path screen is displayed, click Populate, then click Next to move to the Repository folder path.  Click “Browse” and drill down to the newly created NFS NAS share on the DXi.  Make sure to select “Decompress backup data blocks” and “Use per-VM backup files.”  Default settings can be used for the rest of the wizard.

    Once completed, you will have successfully created a new Veeam Repository on the Quantum DXi which resembles the diagram below:

The finished product is not only an extremely efficient way to back up and store virtual machines, but also to recover them.  You get the Quantum DXi with StorNext data management software to maximize efficiency, along with industry-leading variable-length deduplication technology.  Combined with Veeam to manage the backup and recovery process and move the data, it doesn’t get any better.  Recovery of virtual machines, whether instant recovery or individual files, is just as easy and efficient.  Protecting your virtual machine data has never been easier.

  • Commvault GO 2017 Recap

November 13th, 2017

    Last week, thousands of IT professionals gathered in Washington, D.C. for Commvault GO 2017. Commvault GO is Commvault’s annual technical conference for storage and data management professionals. Now in its second year, Commvault GO was created to showcase the newest products and solutions from Commvault and to give end-users an update on the strategic direction of the company.

    The agenda was jam-packed with fascinating presentations, breakout sessions, and technical trainings. Luckily, I created a thorough plan for each day so I could make the most of my time at Commvault GO! However, after walking the floor and attending several breakout sessions and technical trainings, I quickly recognized two recurring themes:

    1. Organizations need to shift to a “Data-Centric” approach
    2. Commvault is making strides to simplify the deployment and management of their solutions

    So, let me break down these two themes:

Robert Hammer, CEO of Commvault, did a brilliant job in his keynote presentation speaking to the “Data-Centric” approach. Hammer spoke about how customers today are starting to change their mindset from infrastructure-centric to data-centric due to the dramatic increase in new data generation, which is predicted to grow roughly tenfold from today to 163 zettabytes (for those keeping score at home) by 2025. He explained that customers will be forced to focus less on their IT infrastructure and more on the data stored on and moving between that infrastructure. Commvault’s answer to this challenge is the Commvault Data Management Platform.  Simply stated, the platform is designed to help organizations achieve better data insights for compliance, e-discovery, and a variety of other digital transformation use cases.

Today, customers are trying to figure out how to distribute and share their data across multiple platforms, which makes where the data lives irrelevant. At the same time, data protection is critical.  There seems to be a consensus that it’s no longer a matter of “if” a ransomware attack will hit but rather “when.”  No matter the severity of the breach, whether just a few servers were affected or your entire system crashed, the goal is to get operations back online quickly and with minimal data loss. With tools like IntelliSnap and Live Sync, customers have the resources to rapidly recover in place or restore operations in another location with minimal downtime, even on completely different infrastructure.  The key to making this all work is planning.  In fact, every customer that presented spoke about their S.O.P. for ransomware attacks. To say “measure twice and cut once” is an understatement, especially when factoring public cloud into the equation. The industry is trying to carefully navigate these waters, and ultimately we are all going to have to get to this place to continue to innovate and grow in our respective industries.

The second theme, simplicity, I found very refreshing.  While Commvault’s capabilities are vast and go far beyond just backup and recovery, that robust feature set has in the past carried the misperception that the solution is complex to manage.  In response, Commvault has come out with new and innovative solutions that streamline these processes and make them more operationally efficient. The first of these, and one of the most talked-about products of the conference, is Commvault HyperScale.

Commvault HyperScale can be consumed as an appliance or as software. It consolidates all the roles performed by discrete servers in a traditional data protection architecture into a single software-defined stack. As mentioned previously, this new offering follows Commvault’s simplified, streamlined approach:

1. Setup time is 30 minutes for the appliance
2. You only buy what you need today, since growing the solution is as simple as adding three nodes at a time, scaling out CPU, storage, and memory resources together
3. No more data silos or forklift migrations when it’s time to refresh
4. More resiliency is built into the solution, as losing a drive or even an entire node does not take down the backup environment
5. Operational efficiency (especially when using the appliance), as there are fewer components to maintain (no separate storage, no SAN, etc.)
6. Supported on seven platforms, and can be consumed as an appliance or as software

This is definitely a step in the right direction for Commvault. Reducing what used to be several weeks of work to just days is every IT professional’s dream, especially when a lot of automation is involved.

To summarize, Commvault GO 2017 was fascinating, and I could not be more pleased with the direction Commvault has chosen to take. Their understanding of the customer’s thought process and their simplified solutions show that Commvault has listened to our feedback and is growing as an organization. I can’t wait for Commvault GO 2018!

  • High Availability, Inc. Receives Execution Excellence Award as Regional Partner of the Year at Cisco Partner Summit 2017

November 9th, 2017

Last week at Cisco’s annual partner conference in Dallas, Texas, Cisco named High Availability, Inc. its Regional (U.S. East) Partner of the Year for innovation, leadership, and best practice as a Cisco business partner. Bill Volovnik, Partner Account Manager for Cisco, accepted the award on behalf of the entire organization.

     

    “We couldn’t be more excited and honored to receive the America’s Regional Partner of the Year award from Cisco,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. "Cisco has become our most strategic business partner over the last three years.  We have an amazing team of sales people and engineers focused on designing, delivering and supporting Cisco’s entire portfolio.  We look forward to continuing our success, and strengthening our Cisco partnership even more in FY18.”

     

    Steve Eisenhart, CEO and founder of High Availability, Inc., will formally accept the award on behalf of the entire organization in a private awards ceremony with Cisco later this month.

     

    Cisco Partner Summit Theatre awards reflect the top-performing partners within specific technology markets across the United States. All award recipients are selected by a group of Cisco Global Partner Organization and regional and theatre executives. 

     

Cisco Partner Summit is attended by more than 2,100 global attendees from Cisco’s ecosystem of partners, representing more than 1,000 companies worldwide from more than 75 countries.

     
