Blog

  • High Availability, Inc. Named to 2018 CRN Fast Growth 150 List

    August 9th, 2018

    Recognizing Thriving Solution Providers in the IT Channel

    Audubon, PA, August 9th, 2018 - High Availability, Inc. announced today that CRN®, a brand of The Channel Company, has named High Availability, Inc. to its 2018 Fast Growth 150 list. The list is CRN’s annual ranking of North America-based technology integrators, solution providers and IT consultants with gross sales of at least $1 million that have experienced significant economic growth over the past two years. The 2018 list is based on an increase of gross revenue between 2015 and 2017. The companies recognized this year represent a remarkable combined total revenue of more than $50 billion.

    “It is a tremendous achievement to be named to CRN’s Fast Growth 150 for the fourth year in a row,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. “Our investment in existing and new employees, our strategic partners, new and emerging technologies, and our Cloud and Managed Services Division has led to incredible year-over-year growth. We are fortunate to have such amazing clients who continue to excel and thrive in their own sectors. We are excited about where we are as an organization and look forward to making this list again next year.”

    “CRN’s 2018 Fast Growth 150 list features companies that are growing in an ever-changing, challenging market,” said Bob Skelley, CEO of The Channel Company. “As traditional solution providers move toward a services-focused business model, this extraordinary group has been able to adapt successfully, outperforming competitors and proving themselves channel leaders. We are pleased to recognize these organizations and look forward to their continued success.”

    The complete 2018 Fast Growth 150 list is featured in the August issue of CRN and can be viewed online at www.crn.com/fastgrowth150.

    High Availability, Inc. is a premier solution provider and integrator of data center products and cloud services. High Availability, Inc. solves complex business challenges by architecting and implementing forward-thinking technical solutions, while forming trusting, collaborative relationships. By taking a hands-on, consultative approach, the High Availability, Inc. team creates custom tailored systems and solutions to fit both current requirements and future IT and business needs.

  • What’s New in Azure Storage?

    August 8th, 2018

    Azure is now a fully mature, platform-neutral public cloud provider that gives every business, from the small manufacturer to the global enterprise, the ability to move its entire IT workload to the cloud: web hosting, e-commerce, and internal services such as databases, collaboration, and messaging. But at the end of the day, all of these services, internal and customer/partner facing alike, rely on the lifeblood of modern business: information. That information exists in the form of data.

    Data must be housed somewhere, and with the latest general release for storage, Azure has something for everyone: Azure Files, Azure Blobs, and Azure Disks. So which one is right for your data needs? That, of course, depends on the format and type of data, how you want to access and present it, and how often it needs to be accessed. Let’s take a look.

    Azure Disks

    The most direct cognate of what we think of in traditional terrestrial computers and servers, disks are exactly what they sound like: Azure-hosted disk drives of various sizes and capacities, either HDD or SSD, from 32 GB to 4 TB, with 500 IOPS per disk and throughput of 60 MB/sec. Azure Disks are lift-and-shift certified for a range of Microsoft enterprise applications such as Azure SQL Database, Dynamics AX and CRM, Exchange Server, and more. Choose Azure Disks for applications that require native file system APIs with persistent storage for read and write operations. Azure Disks are the answer when you need to store data that doesn’t need to be accessed outside of the virtual machine the disk is attached to.
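
    For a concrete picture, here is a minimal sketch of provisioning a standalone managed disk with the Azure Python SDK (azure-identity plus azure-mgmt-compute). The subscription ID, resource group, disk name, and region are hypothetical placeholders, and exact method names can vary between SDK versions.

    # A minimal sketch, assuming the azure-identity and azure-mgmt-compute packages.
    # Subscription ID, resource group, disk name, and region are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Request an empty 128 GB Premium SSD managed disk.
    poller = compute.disks.begin_create_or_update(
        "demo-resource-group",
        "demo-data-disk",
        {
            "location": "eastus",
            "sku": {"name": "Premium_LRS"},
            "disk_size_gb": 128,
            "creation_data": {"create_option": "Empty"},
        },
    )
    disk = poller.result()  # long-running operation; blocks until the disk exists
    print(disk.id)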

    An integral feature is built-in, automatic redundancy: all Azure Disks are automatically copied to three locations. This is basic Locally Redundant Storage (LRS), where all copies reside in the same data center but in different racks. You also have the option to increase resiliency with one of the following upgrades: Zone Redundant Storage (ZRS), which places one of the hosted copies on a disk in a second, fully isolated Availability Zone, or Geo Redundant Storage (GRS), which replicates your data to a second data center in a different Azure region for read access in the case of a Microsoft-declared disaster.

    All redundancy models are designed to provide at least 99.999999999% (11 9s) durability for your data (LRS); ZRS provides 12 9s and GRS provides 16 9s. Unless you are a global enterprise the size of, well, Microsoft, your own data center is quite unlikely to provide that kind of data durability.

     

    Azure Files

    Azure Files provides the ability to set up structured file shares that can be mapped the same way network drives are, via the Server Message Block (SMB) protocol. This allows multiple Azure VMs to reference the same files simultaneously. Your on-premises servers can map drive shares the same way if you’ve extended your Active Directory into Azure. Azure Files also allows you to share these file stores with outside users, if needed, with full security provided through shared access tokens, which can be managed centrally, even with expiration dates if you like. Think shared file-exchange locations with business partners, without having to provide them with usernames and passwords or access to your network, whether cloud or on-premises.
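
    To make the shared access token idea concrete, here is a minimal sketch using the azure-storage-file-share Python package. The storage account, key, share, and file path are hypothetical placeholders, and parameter shapes may differ between SDK versions.

    # A minimal sketch, assuming the azure-storage-file-share package.
    # Account name/key, share name, and file path are hypothetical placeholders.
    from datetime import datetime, timedelta
    from azure.storage.fileshare import FileSasPermissions, generate_file_sas

    sas_token = generate_file_sas(
        account_name="mystorageacct",
        share_name="partner-exchange",
        file_path=["contracts", "q3-pricing.xlsx"],
        account_key="<storage-account-key>",
        permission=FileSasPermissions(read=True),      # read-only access
        expiry=datetime.utcnow() + timedelta(days=7),  # token expires in a week
    )

    # The partner receives a plain HTTPS URL; no username, password, or VPN needed.
    url = ("https://mystorageacct.file.core.windows.net/"
           "partner-exchange/contracts/q3-pricing.xlsx?" + sas_token)
    print(url)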

     

    Azure Blob Storage

    If you run an art house, a music service, or any number of businesses that depend on storing a large number of unstructured files that need to be individually accessed without traditional paths, Azure Blob Storage is your Huckleberry. Think of Blob Storage when you need to serve files or images directly to a browser through your spiffy new web site, or when you need to serve files through distributed access from a location off network. “But Doug, I need to serve streaming video or audio!” Glad you asked: Azure Blob. “I need a place to throw my daily backups and keep them for 99 years!” Azure Blob has you covered. Generate a ton of log and diagnostic files that need to be available for analysis by on-premises or Azure-hosted services? Azure Blob.

    The best part? These files are available from anywhere in the world with an internet connection via HTTP or HTTPS, using the Azure REST API, Azure PowerShell, the Azure CLI, or the Azure Storage client libraries. Oh, did I mention that client libraries are available for .NET, Java, Python, PHP, and Ruby? Well, you guessed it: Azure Blob’s got you covered.
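
    Want a taste of those client libraries? Here is a minimal sketch using the azure-storage-blob Python package; the connection string, container name, and blob name are hypothetical placeholders.

    # A minimal sketch, assuming the azure-storage-blob package.
    # The connection string, container, and blob names are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    container = service.get_container_client("nightly-backups")

    # Throw tonight's backup into Blob Storage...
    with open("backup-2018-08-08.tar.gz", "rb") as data:
        container.upload_blob(name="backup-2018-08-08.tar.gz", data=data)

    # ...and pull it back later from anywhere with an internet connection.
    blob = container.get_blob_client("backup-2018-08-08.tar.gz")
    contents = blob.download_blob().readall()
    print(f"retrieved {len(contents)} bytes")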

    So step away from maintaining all that hardware in your SAN. The biggest, most complicated SAN is not what makes you cool. Being able to preserve, protect, and securely serve that data to whom you want (and only whom you want) without worrying about keeping a stock of hot-swappable disks (and the capital expenses) in your server room is what makes you cool. Cool as Azure blue.

  • Top Technology Buzz Words

    July 9th, 2018

    When I was given this assignment, it brought me back to my first day in technology 16 years ago (yes, I’m old, and social media still scares me a bit). I remember thinking that I had no idea what anyone was talking about and wishing everyone would speak in complete sentences and stop using “Buzz Words” I didn’t know. What I really realized was that I had a lot to learn. So how did I come up with the five Buzz Words below? It’s a combination of what I hear every day, talking with peers, and doing some research.

    Artificial Intelligence (AI)

    This is an obvious one, and even people who aren’t in technology have heard of it. Self-driving cars come to mind first. The formal explanation is that AI refers to the autonomous intelligent behavior of software or machines that have a human-like ability to make decisions and to improve over time by learning from experience. Different approaches include statistical methods, computational intelligence, and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. In business intelligence, there has been movement from static reports on what has already happened to real-time analytics assisting businesses with more accurate reporting. The ability to make changes in real time that could impact revenues is incredibly powerful. Think of tracking customers’ movements (not just traffic patterns but eye movements and body language) in stores so you could sell shelf space at a premium daily (think supermarkets). AI enables you to see what is happening at every moment and send alerts when something isn’t following the norm.

    Finally, there is the social impact of AI. Robots have been around for a very long time but performed the same task in repetition. This enabled factories to perform certain tasks at a speed and pace that humans couldn’t compete with, causing job loss. Now robots (software and machines) have the ability to learn in real time, enabling them to take on more human-like tasks and take even more jobs from humans. Think of the success of Uber and how many people the company employs. The real profits come when the cars drive themselves!

    Blockchain

    Blockchain and Bitcoin have been all over the news as well and have been the most talked-about Buzz Words since late 2017. Blockchain, and more specifically Bitcoin, has been constantly discussed at the water cooler ever since Bitcoin went crazy and surpassed the price of gold for the first time. In 2017, its value climbed from less than $1,000 to over $10,000. Although the price has dropped significantly in 2018, the technology is here to stay.

    Everyone has heard of the blockchain, but few understand how it works due to its complexity. The blockchain works with Bitcoin, and depending on the circles you run in, Bitcoin was either:

    • The shadiest thing that has ever happened to the internet
    • The coolest innovation in modern currency, ever
    • A non-factor since you had no idea what it was

    Bitcoin is a system of currency that doesn’t rely on banks, countries, or any outside institutions. This is potentially a very big deal, as there are many people living in developing countries who have to deal with issues like hyperinflation, not being able to exchange their currency for others, and having to exchange currency on the black market. In addition, think of the overhead (cost) the financial system charges us every day to protect, manage, and move money. Bitcoin, together with the protection of Blockchain, has the potential to eliminate this costly layer and crush banking profits.

    Bitcoin has the ability to solve the previously mentioned problems, but the technology underlying Bitcoin, called Blockchain, is where the rubber meets the road. Blockchain is what enables Bitcoin users to exchange currency without anyone being ripped off or receiving “counterfeit” Bitcoin. Basically, blockchain works by keeping a record of each transaction that happens using Bitcoin as a currency. This record is completely transparent to everyone and is part of the fundamental structure of Bitcoin.
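
    To make that transaction record idea concrete, here is a toy Python sketch (not real Bitcoin code) showing how chaining each block to the hash of the previous one makes old records tamper-evident:

    # A toy tamper-evident chain of records (not real Bitcoin code).
    import hashlib
    import json

    def make_block(transactions, prev_hash):
        """Bundle transactions with the previous block's hash, then hash the bundle."""
        body = {"transactions": transactions, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    genesis = make_block(["alice pays bob 1 BTC"], prev_hash="0" * 64)
    second = make_block(["bob pays carol 0.5 BTC"], prev_hash=genesis["hash"])

    # Forging an old transaction changes that block's hash, which no longer
    # matches the prev_hash recorded in every later block, so the fraud shows.
    forged = {"transactions": ["alice pays bob 100 BTC"], "prev_hash": genesis["prev_hash"]}
    forged_hash = hashlib.sha256(json.dumps(forged, sort_keys=True).encode()).hexdigest()
    print(forged_hash == second["prev_hash"])  # False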

    The Blockchain structure makes it very difficult to forge Bitcoin or carry out any sort of fraudulent activity involving the currency. Even though Bitcoin and Blockchain technology are viewed as a threat to the banking system, many banks have started using the technology to move large amounts of money with less time spent on security, thanks to the safety of the Blockchain.

    Serverless Architecture

    Another IT buzzword is Serverless architecture. It refers to an application that relies on third-party services (“BaaS”, Backend as a Service) or on custom code run in ephemeral containers (“FaaS”, Function as a Service). The name can be confusing, but Serverless computing is not running code without servers; it is called Serverless from a developer’s point of view. The person or business that owns the system doesn’t need to buy, rent, or provision servers or machines to run the backend code. Basically, they don’t have to manage servers.
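
    As a minimal sketch of the FaaS side of that definition, here is roughly what such a function looks like using an AWS Lambda-style Python handler. The event fields below are hypothetical; the point is that the platform, not the developer, provisions and patches whatever server actually runs it.

    # A minimal FaaS-style handler sketch (AWS Lambda-style Python signature).
    # The event fields are hypothetical; the platform supplies event/context and
    # runs this code in an ephemeral container, so there is no server to manage.
    import json

    def handler(event, context):
        """All the developer writes is the business logic itself."""
        items = event.get("items", [])
        total = sum(item.get("price", 0) * item.get("qty", 1) for item in items)
        return {
            "statusCode": 200,
            "body": json.dumps({"order_id": event.get("order_id"), "total": total}),
        }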

    It works because it lets developers focus on their central business problem instead of worrying about patching servers yet again or spending too long building complex systems. Unfortunately, not all applications can be implemented Serverless-style; legacy systems and public cloud constraints bring limitations. Serverless architecture does bring benefits like reduced operational and development costs, reduced time to market, and enhanced productivity, but there are some downsides. It is not optimal for high-performance computing workloads because of the resource limits imposed by cloud providers.

    Internet of Things

    Without question, Internet of Things (IoT) is one of the most popular buzzwords in recent memory and will continue to go mainstream as its applications become more physical.

    As society continually becomes more plugged in, the physical world around us is right behind and will become one big information system. The amount of digital data consumed and digested will be endless. Everyday physical objects will be connected to the Internet and to each other, creating a stream of intelligence. The challenge for manufacturers will be creating an end-user experience that is seamless across devices. Imagine an endless set of endpoints where people access applications and information. The use cases are endless: mobile devices, wearables, home electronics, and so on. Basically, anything you want to monitor and have access to will be available.

    So the obvious question is: why aren’t we monitoring everything now? Sounds easy, right? The answer is us, meaning there is a race to own the market. There are too many competing platforms to have seamless integration at this point in time. Think of the big three (Google, Amazon, and Apple); they all have different ecosystems that make cross-platform integration challenging. But, like anything involving money, I am sure it is only a matter of time before they figure it out.

    Digital Detox

    By far, Digital Detox is my favorite technology buzz word, and for one simple reason: I’m a father. I’m not anti-technology, and it’s safe to say the digitization of everything has brought endless advancements to our world. The advancements in healthcare alone are stunning. But at the same time, where has the off button gone? My generation (Gen X) struggles as parents because we remember what it was like to be raised without technology but are forced to embrace it because our children are engulfed in it. The anxiety our children (and adults as well) feel when not connected is very real. The prioritization of a Digital Detox needs to be as important as sleeping or eating.

    Many people have developed “phantom vibration syndrome,” the sensation of feeling or hearing our phone buzz when it isn’t. Every day there is a new article discussing the negative impact of social media and smartphones on our brains. FOMO, the Fear of Missing Out, is a real issue for many of us. Professionally speaking, being always connected has increased efficiencies and improved customers’ access to problem resolution, but at what cost? If we leave our smartphones in the car while we eat, are we bad employees?

    With such an exponential pace of innovation in technology, it can be nice to remind ourselves, and the world around us, that human brains are not machines or computers, and that a digital detox may be all we need to reconnect with ourselves.

     

  • Troubleshooting Tool Enhancements in ASA v9.9 Code – Part 1

    July 9th, 2018

    *** Packet-tracer Enhancements in ASA 9.9 code ***

     

    The packet-tracer and capture utilities built into Cisco ASAs, and now Firepower appliances as of v6.1, are great “go-to” troubleshooting tools for Cisco firewall administrators.  The two utilities can be used in combination or as separate tools, depending on the situation.

     

    Packet-tracer allows the simulation of a particular traffic flow to see how it will be handled as it is processed by the firewall.  The capture utility allows administrators to run packet captures directly on the firewall appliance, which can then be reviewed on the ASA itself or downloaded and reviewed with a protocol analyzer such as Wireshark.

     

    Some of the useful information that the packet-tracer utility provides once a trace is completed includes which interface the packet will exit, whether the flow matches any particular access-lists and whether the traffic is permitted or denied, if and how the traffic will be translated, whether the packet will be encrypted, and many other useful items.  This tool allows us engineers to validate configuration changes, confirming functionality prior to closing out a change request.

     

    With ASA v9.9 code, Cisco has included some extremely welcome additions to the already-awesome packet-tracer and capture utilities.  The enhancement I’m most excited about is the ability to actually send the simulated packet to the destination address so that the remote host receives and processes it.  Transmitting the simulated packet helps verify both that the remote destination host receives the traffic and whether the ASA receives a response, if one is expected.

     

    Using the sample topology below, I wanted to demonstrate the new packet-tracer transmit feature.

    [Topology diagram: R01 (client) - ASA1 - Provider router - ASA2 - R02 (web server)]

    In this sample topology, I will be using R01 as the client workstation and R02 as the destination web server.  Between these two routers sit ASA1, a provider router, and ASA2.  The ASAs have an IKEv2 IPsec VPN tunnel established, and the Provider router is acting as a transit router between the two firewalls.

     

    For these examples, I will be sending a tcp/80 request from R01 (10.0.1.1) acting as the client over to R02 (10.0.2.1) acting as the server.

     

    During this example, I will also be running a packet capture on ASA1 filtering traffic specifically between R01 and R02.

     

    ASA1(config)# sho run access-l cap

    access-list cap extended permit ip host 10.0.1.1 host 10.0.2.1

    access-list cap extended permit ip host 10.0.2.1 host 10.0.1.1

    ASA1(config)#

    ASA1(config)# sho cap

    capture cap type raw-data access-list cap interface inside [Capturing - 0 bytes]

    ASA1(config)#

     

    In the first example below, we will take a look at using packet-tracer without the new “transmit” option.  In the packet-tracer output, we can see each step, including any ACLs being hit, which exit interface the traffic will use, if/how the traffic will be translated, etc.

     

    ASA1(config)# packet-tracer input inside tcp 10.0.1.1 32000 10.0.2.1 80

    Phase: 1

    Type: CAPTURE

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    MAC Access list

    Phase: 2

    Type: ACCESS-LIST

    Subtype:

    Result: ALLOW

    Config:

    Implicit Rule

    Additional Information:

    MAC Access list

    Phase: 3

    Type: ROUTE-LOOKUP

    Subtype: Resolve Egress Interface

    Result: ALLOW

    Config:

    Additional Information:

    found next-hop 198.18.1.1 using egress ifc  outside

    Phase: 4

    Type: UN-NAT

    Subtype: static

    Result: ALLOW

    Config:

    nat (inside,outside) source static inside inside destination static site2 site2 no-proxy-arp route-lookup

    Additional Information:

    NAT divert to egress interface outside

    Untranslate 10.0.2.1/80 to 10.0.2.1/80

    Phase: 5

    Type: ACCESS-LIST

    Subtype: log

    Result: ALLOW

    Config:

    access-group inside in interface inside

    access-list inside extended permit ip any4 any4

    Additional Information:

    Phase: 6

    Type: NAT

    Subtype:     

    Result: ALLOW

    Config:

    nat (inside,outside) source static inside inside destination static site2 site2 no-proxy-arp route-lookup

    Additional Information:

    Static translate 10.0.1.1/32000 to 10.0.1.1/32000

    Phase: 7

    Type: NAT

    Subtype: per-session

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 8

    Type: IP-OPTIONS

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 9

    Type: QOS

    Subtype:     

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 10

    Type: CAPTURE

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 11

    Type: VPN

    Subtype: encrypt

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 12

    Type: NAT

    Subtype: rpf-check

    Result: ALLOW

    Config:      

    nat (inside,outside) source static inside inside destination static site2 site2 no-proxy-arp route-lookup

    Additional Information:

    Phase: 13

    Type: VPN

    Subtype: ipsec-tunnel-flow

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 14

    Type: QOS

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 15

    Type: NAT

    Subtype: per-session

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 16

    Type: IP-OPTIONS

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 17

    Type: CAPTURE

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    Phase: 18

    Type: FLOW-CREATION

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    New flow created with id 37, packet dispatched to next module

                 

    Result:

    input-interface: inside

    input-status: up

    input-line-status: up

    output-interface: outside

    output-status: up

    output-line-status: up

    Action: allow

    ASA1(config)#

     

    With the packet-tracer complete, when we take a look at the capture below, we will see that the ASA processed the packet and created the flow, but the packet did not leave the ASA.

     

    ASA1(config)# sho cap cap

    1 packet captured

       1: 16:38:36.544115       10.0.1.1.32000 > 10.0.2.1.80: S 1550136167:1550136167(0) win 8192

    1 packet shown

    ASA1(config)#

     

     

    In this next example, we will add the "transmit" option to the same packet-tracer.  We will see the same results in the packet-tracer output, only this time the packet will exit the ASA and be sent to the remote destination host.  We can verify this within the capture.

     

    ASA1(config)# packet-tracer input inside tcp 10.0.1.1 32000 10.0.2.1 80 transmit

    Phase: 1

    Type: CAPTURE

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    MAC Access list

    .....

    omitted for brevity

    .....

    Phase: 18

    Type: FLOW-CREATION

    Subtype:

    Result: ALLOW

    Config:

    Additional Information:

    New flow created with id 39, packet dispatched to next module

                 

    Result:

    input-interface: inside

    input-status: up

    input-line-status: up

    output-interface: outside

    output-status: up

    output-line-status: up

    Action: allow

    ASA1(config)#

     

    Taking a look at this new capture, we can see that the packet-tracer output appears the same, but this time the packet was sent to the remote destination, which processed the request and sent a response to the simulated packet back toward the client, R01.  Since R01 did not send the actual request, R01 then responded with a reset (RST) back to R02.

     

    ASA1(config)# sho cap cap

    3 packets captured

       1: 16:43:19.486852       10.0.1.1.32000 > 10.0.2.1.80: S 115026015:115026015(0) win 8192

       2: 16:43:19.611677       10.0.2.1.80 > 10.0.1.1.32000: S 4195304887:4195304887(0) ack 115026016 win 4128 <mss 536>

       3: 16:43:19.623380       10.0.1.1.32000 > 10.0.2.1.80: R 115026016:115026016(0) win 0

    3 packets shown

    ASA1(config)#

     

    As you can see, being able to simulate, and now transmit, a packet to a remote destination address lets us test and verify connectivity on behalf of a particular host.  This can also assist a remote engineer with troubleshooting connectivity on the other end without having to engage other groups or departments and take up their time testing connectivity.

     

     

     

  • FortiGate Firewall Policies

    July 5th, 2018

    Fortinet FortiGates are stateful firewalls that permit or deny access based on firewall policies.  Firewall policies define what to do with traffic that matches specified criteria.  These rules are built from information found in traffic flows (addresses, interfaces, services), along with various other items.

    Firewall policies are processed in a top-down fashion; the first matching policy is applied to the traffic.  Actions can include permit, deny, NAT, authentication, and various other powerful options.  There is an implicit deny at the bottom of the list that drops any traffic not matching a policy higher up.  Logging should be enabled on your firewall policies for monitoring and troubleshooting purposes (for both allowed and violating traffic).

    Policies use various objects such as schedules, NAT rules, service definitions, interfaces, addresses, devices, and users.  These objects serve as matching criteria for applying various actions.  You can match on schedule (recurring or one-time), ingress/egress interfaces, source IP, originating user, device ID, destination IP or service, and so on.  Policies require the source and destination interfaces to be specified, but “any” is an acceptable choice for one or both fields.  Flows must match the source and destination designations to be considered a match.
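
    As a rough illustration of how these objects come together in a policy, here is a hedged Python sketch that creates a policy through the FortiOS REST API. The hostname, API token, and interface/address object names are hypothetical, and the endpoint path assumes FortiOS 6.x-style token authentication; this is a sketch, not a definitive integration.

    # A hedged sketch: creating a firewall policy via the FortiOS REST API.
    # Hostname, API token, and interface/address object names are hypothetical;
    # the endpoint assumes a FortiOS 6.x-style /api/v2/cmdb path.
    import requests

    FGT = "https://fortigate.example.com"
    HEADERS = {"Authorization": "Bearer <api-token>"}

    policy = {
        "name": "allow-web-out",
        "srcintf": [{"name": "internal"}],         # ingress interface
        "dstintf": [{"name": "wan1"}],             # egress interface
        "srcaddr": [{"name": "internal-subnet"}],  # source address object
        "dstaddr": [{"name": "all"}],
        "action": "accept",                        # "deny" would drop the traffic
        "schedule": "always",                      # recurring schedule object
        "service": [{"name": "HTTP"}, {"name": "HTTPS"}],
        "logtraffic": "all",                       # log allowed and violating traffic
    }

    resp = requests.post(f"{FGT}/api/v2/cmdb/firewall/policy",
                         json=policy, headers=HEADERS,
                         verify=False)  # lab only; verify certificates in production
    resp.raise_for_status()
    print(resp.json())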

    Sources can be specified as a subnet, a fully qualified domain name (requires DNS), an IP address or range of IPs, an Internet Service Database (ISDB) object, or a geographic location.  You can also specify a user or device as a source in addition to at least one of the aforementioned source items.  Users can come from the local FortiGate database, a remote authentication server (e.g., RADIUS or LDAP), a certificate, or Fortinet Single Sign-On.  The source user can also be used for authentication prior to permitting network access.

    The FortiClient application is an agent-based option for device identification, while various traffic types can be used for agentless device identification (TCP fingerprinting, LLDP, SSDP, DHCP, etc.).  Note that you cannot use an address in the source field if an ISDB object is set as the source.  The ISDB is updated at periodic intervals to ensure the accuracy of the objects contained within.

    Your policies can also match on destination information, such as a fully qualified domain name (requires DNS), ISDB objects, geographic location, an IP address or range, or a subnet.  You cannot use users or devices as destinations, since they are identified on the ingress interface.  ISDB objects consist of the IP addresses, port numbers, and protocols used by Internet services.

    Policies come in many different types, such as rate limiting, multicast, local (i.e., FortiGate traffic, where the Fortinet device itself is the source or destination), IPv4, and IPv6.

    The deny action drops packets and prevents further processing, while accept allows deeper processing (if configured) or further actions such as NAT.  Deeper processing could be antivirus scanning or web filtering, for instance.

    There is a Learning mode for firewall policies that lets you deploy the FortiGate in what is essentially a monitor-only mode.  Logging is enabled for all traffic, and you will be able to see the data gathered about all flows traversing the device.  All packets are permitted in this mode.  The Learning Reports page displays all logs to assist in building your firewall policies.

    Please be mindful that policy/object deletions and changes are applied immediately.  This can cause outages if not properly tested and implemented during a maintenance window.  All of your modifications should be carefully planned and tested before implementation. 

     

    FortiGate devices are powerful firewalls that offer traditional as well as next generation features that can help secure your network. 

     

    By:
    James Prendergast

    CCIE #51060

  • SD-WAN: Build vs. Buy

    July 5th, 2018

    As interest in the benefits of Software-Defined Wide-Area Networks (SD-WAN) has grown in the past 18 months, we at H.A. see more and more questions from customers about the best way to consume SD-WAN. In this post, I will discuss some of the decision points that may help businesses determine where they fall on the SD-WAN “Buy” vs. “Build” spectrum.

     

    If you need a primer on SD-WAN technology, H.A.’s Jason Bishop wrote a great SD-WAN 101 article previously on this blog.

     

    Early in the emergence of SD-WAN, most people thought of it as a thing an enterprise would own: the enterprise’s architects would research and select an SD-WAN platform and then procure and deploy the components internally (possibly with the help of a VAR). This moves the enterprise away from service provider lock-in; it gives them the ability to build their own WAN over any network transport. In fact, that’s the exact story most SD-WAN solutions tell. Some even explicitly point out that SD-WAN becomes a good way to strong-arm your existing Service Providers (SPs) into lower service rates by reminding them of your ability to ditch them completely and roll your own WAN with whatever low-cost circuits you can get.

     

    As SD-WAN started to come into the marketplace, most traditional WAN providers immediately identified it as a potential threat to their value-add WAN offerings. After all, if SD-WAN gives enterprises the ability to swap underlays (WAN circuits) out at will without impacting the overlays that the business’ data runs over, WAN SPs are put in a position where their services become a pure race to the bottom to avoid being swapped out at any time.

     

    In response, most SPs have begun offering managed SD-WAN solutions in which they provide customer-premises equipment that delivers the SD-WAN functionality over circuits the SP provides (either directly or through their own agreement with a third-party provider).

     

    This got me thinking about the pros and cons of a managed SD-WAN approach versus running your own SD-WAN from an end-user enterprise’s perspective, particularly through the lens of whether managing a WAN (whether traditional or software-defined) is really a “core competency” of any enterprise.

     

    Managed SD-WAN Pros:

     

    • One throat to choke – It’s on the SP to own, manage, and maintain all the transports they may be using for your SD-WAN.
    • Speed of deployment – SD-WAN is hot and everyone wants to know how to get there. Taking the “Buy” path and letting someone else own/operate/manage the solution will certainly be one of the fastest ways to leverage SD-WAN benefits with limited retraining of operations staff.
    • Insulation from market consolidations – If an enterprise selects a service provider for a managed SD-WANaaS solution and the SD-WAN technology vendor the SP chooses does not survive the inevitable market consolidation, that is the SP’s problem, not the enterprise’s.
    • For enterprises with strong strategic partnerships with an SP that previously had difficulty getting circuits into some locales, an SD-WANaaS model may make it easier to keep remote sites “on net” even with no SP-owned/leased transport.

     

    Managed SD-WAN Cons:
     

    • If you are concerned about locking in with an SP and letting them get more ingrained in your environment, managed SD-WAN is clearly not for you.
    • Data privacy concerns come to mind – recently we’ve seen much less trust in the SP, with many enterprises choosing to encrypt data even over a “private” WAN. Managed SD-WAN, with its application-level traffic routing and advanced analytics, puts a lot of information into the SP’s hands, and potentially the hands of any actor or agency with hooks into them.
    • Pricing – Clearly, service providers’ interest in managed SD-WAN offerings stems from concerns about their existing profits eroding in an age of SD-WANs built out of low-cost broadband connections. Will an SP-managed service hold the same pricing advantages as a self-managed one? I would have my doubts.

     

     

    On the other hand, a self-managed SD-WAN solution purchased and operated by the enterprise may be a better fit in some cases.

     

    Self-Deployed SD-WAN Pros:

     

    • Flexibility to select exactly the right product for your needs. Each SD-WAN vendor has some unique features and value propositions, and choosing your own SD-WAN vendor allows you to prioritize those features differently than an SP may have.
    • Aggressive feature deployment: If a solution has new features in software that matter for your business, you can probably get them deployed quicker if you run your own SD-WAN solution. SPs may be less aggressive with rolling out new features.
    • Control: The topology, underlays, and security are completely under control of your business and can be configured however needed.
    • Freedom from service providers – much like “cutting the cord” with your home cable, keeping the circuit providers as a commodity service with no value-add makes it easier to swap them out or go in a different direction if a situation or site requires it.

     

    Self-Deployed SD-WAN Cons:

     

    • Product selection: You must select a vendor for your solution. Few, if any, are interoperable, so mixing vendors isn’t really a feasible option at this stage. A couple of years ago this was a riskier proposition, but in recent months some market consolidation of the most notable startups (such as Viptela and VeloCloud) has occurred as they are acquired by incumbent IT vendors (such as Cisco and VMware).
    • Training: SD-WAN uses newer, sometimes proprietary, encapsulations and routing methods, new terminology, and underlay/overlay concepts. You will need to be comfortable with these topics to reliably manage your own SD-WAN. This may require engineers to be trained or have access to lab environments.

     

     

    Whether you are interested in building your own SD-WAN or purchasing SD-WAN services from a service provider, the benefits of the technology make it very compelling in today’s IT environment. High Availability works with several vendors of SD-WAN solutions, so contact your account manager today to discuss options and let us help find the solution that best fits your organization.

     

  • High Availability, Inc. Named NetApp East “Emerging” Partner of the Year at Second Annual Channel Connect Conference

    June 29th, 2018

    High Availability, Inc. receives NetApp accolade for outstanding achievements in supporting NetApp products in 2018.

    Audubon, PA, June 29th, 2018 - High Availability, Inc. has been named the NetApp East “Emerging” Partner of the Year for its overall FY18 revenue, year over year growth, key account wins and participation in regional partnership activities.

    Over the last decade, High Availability, Inc. has built a strong relationship with every facet of NetApp. As a NetApp Platinum Partner, a NetApp Professional Services Certified Partner, a NetApp FlexPod Premium Partner, a NetApp Cloud Service Provider, and a member of the NetApp Partner CTO Advisory Board, the team at High Availability, Inc. puts a great deal of focus and time into its ever-growing partnership with NetApp.

    "The Channel Partners recognized today have gone above and beyond to support our joint customers in digital transformation,” said Jeff McCullough, vice president, Channel Sales, NetApp. “It’s my honor to congratulate High Availability, Inc. on being named as our East “Emerging” Partner of the Year. NetApp looks forward to continuing to work with you to support our joint customers in their digital transformation.”

    “We are honored to be recognized by NetApp and we are excited about our growing partnership,” said Randy Kirsch, High Availability, Inc. Executive Vice President. “This award reflects our strong partnership with NetApp and their channel team to deliver innovative solutions and leverage emerging technologies to our mutual clients.”

    The 2018 Americas Partner Awards were announced on stage at NetApp’s inaugural Channel Connect Conference (C3) where strategic partner executives from across the Americas region gathered to hear about NetApp’s strategic vision and engage with NetApp executives.

  • High Availability, Inc. Named NetApp Overall Partner of the Year

    June 19th, 2018

    High Availability, Inc. receives NetApp accolade for outstanding commitment and partnership.

    Audubon, PA, June 19th, 2018 - High Availability, Inc. announced today that NetApp, one of its most strategic technology partners, has named High Availability, Inc. the NetApp Overall Partner of the Year – PA/DE District for FY18. The award was given for outstanding services, partnering, and commitment. This is High Availability, Inc.’s sixth consecutive year receiving this accolade from NetApp.

    “Another year, another top recognition for an awesome partner and team at High Availability, Inc., a NetApp partner since their inception in 2000,” said Dan Repka, Channel Development Manager for NetApp.  “Over the past years, the team at High Availability has built a strong, growing, and well-regarded data management practice and expanded into all areas of the data center, helping customers implement varied integrated solutions.  This partner is all in with NetApp as a Platinum partner, a Professional Services Certified partner, a FlexPod Premium partner, a NetApp Cloud Service Provider, and a member of our Partner CTO Advisory Board.  While the company overall has grown at double digits for the past 8 years, their NetApp business has followed that trend by driving transformative solutions within their installed base and new accounts in the commercial, enterprise, SLED, and healthcare market segments.  The team are experts at implementing NetApp data-driven solutions, leveraging our entire portfolio of flash, converged, hyperconverged, and cloud-integrated solutions to empower our mutual customers to change the world with data. Congratulations!” added Repka.
