• High Availability, Inc. Awarded COSTARS Hardware and Software Contract


    June 18th, 2018

    Audubon, PA, June 11th, 2018 - High Availability, Inc. announced today that COSTARS, Pennsylvania’s cooperative purchasing program for eligible public procurement units and state-affiliated entities, has officially named High Availability, Inc. as an official vendor. The COSTARS program offers members the chance to obtain more competitive pricing for professional services.


    COSTARS was created in 2004 by the Department of General Services (DGS) to increase the cooperative purchasing options for local public procurement units (LPPUs) and state-affiliated entities. Initially, the DGS limited LPPUs to only certain statewide contracts, but this quickly changed so members could obtain the best and most beneficial contracts for their organizations. In fact, DGS estimates that more than 10,000 entities within the Commonwealth of Pennsylvania are eligible to become COSTARS members. Several thousand of these have already registered as COSTARS members, and the list of registered members continues to grow.


    “We are thrilled to be given the opportunity to serve even more state-affiliated entities through our newly acquired COSTARS contracts,” said Randy Kirsch, Executive Vice President of High Availability, Inc. “This allows us to vastly expand our network in the state of Pennsylvania!” Kirsch added.


    To learn more about the COSTARS program please visit

  • High Availability, Inc. Named to CRN’s 2018 Solution Provider 500 List


    June 8th, 2018

    CRN's 2018 Solution Provider 500 list ranks the top integrators, service providers and IT consultants in North America by services revenue.

    Audubon, PA, June 4th, 2018 - High Availability, Inc. announced today that CRN®, a brand of The Channel Company, has named High Availability, Inc. to its 2018 Solution Provider 500 list. The Solution Provider 500 is CRN’s annual ranking of the largest technology integrators, solution providers and IT consultants in North America by revenue.

    The Solution Provider 500 is CRN’s predominant channel partner award list, serving as the industry standard for recognition of the most successful solution provider companies in the channel since 1995. The complete list will be published on, making it readily available to vendors seeking out top solution providers to partner with. 

    CRN has also released its 2018 Solution Provider 500: Newcomers list, recognizing 26 companies making their debut in the Solution Provider 500 ranking this year.  

    “We could not be more excited to be recognized on CRN’s Solution Provider 500 for the 6th year in a row,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. "We exceeded our own expectations by jumping 49 spots to #239.  This achievement is a tribute to our dedicated and talented employees, loyal customers and supportive business partners. We will continue to make investments in the right people, the right partners and the right technologies to advance as an organization and improve our position on this list in the future." Eisenhart added.

    “CRN’s Solution Provider 500 list spotlights the North American IT channel partner organizations that have earned the highest revenue over the past year, providing a valuable resource to vendors looking for top solution providers to partner with,” said Bob Skelley, CEO of The Channel Company. “The companies on this year’s list represent an incredible, combined revenue of $320 billion, a sum that attests to their success in staying ahead of rapidly changing market demands. We extend our sincerest congratulations to each of these top-performing solution providers and look forward to their future pursuits and successes.”

    The complete 2018 Solution Provider 500 list will be available online at and a sample from the list will be featured in the June issue of CRN Magazine.

  • Look! Up in the sky, in the clouds! It's my DR, saving the day!


    May 31st, 2018

    Being the superhero of IT is not an easy job.  Fighting crime, helping citizens, keeping peace, all in a day's work.  Avoiding disaster is one of the IT superhero's duties that can be quiet and behind the scenes.  It can also be costly and risky, hard to budget and hard to plan.

    Two of DR's arch enemies - Cost and Risk!

    Fortunately, moving your DR into the cloud can fight cost and fight risk.  Let's look at some of the top reasons a cloud-based DR solution can help save money and leap tall pitfalls.


    Managing a DR Location
    Cost: Buying or leasing a secondary IT location can be a huge infrastructure undertaking.  Moving to the cloud often requires little infrastructure purchasing, as space, power, cooling and connectivity can all be rolled up into a single lower expense.

    Risk: Cloud based data centers are designed with risk avoidance in mind.  Redundant power and internet into the datacenter can achieve nearly 100% uptime.

    Cost:  Purchasing DR equipment can be a costly and shocking endeavor.  Replicating production equipment, with its own maintenance and software licensing costs, is often difficult to justify as a huge capital expenditure from a budget perspective.  Cloud space, with its potential for co-location and virtualization of hardware, makes total economic sense.  Entire DR solutions can be leased from month to month, making for an easier-to-justify operational expense.

    Risk: Many times, due to budget, space, internet speed, or other factors, legacy DR solutions have shortcuts built in to avoid some of these challenges.  Every time a shortcut is introduced, a risk is created.  Having a cloud-based, operational-expense design allows for the design and implementation of a complete solution, without cutting corners.


    Cost: Hiring additional resources to manage DR equipment and connections can be pricey.  Having a cloud provider manage the DR solution can offload the need to hire staff or have staff make frequent trips to and from a legacy DR location.

    Risk:  Having the expertise of dedicated DR professionals at the ready eliminates the risks of relying on internal staff who may not be comfortable with designing, implementing, and maintaining a true DR solution.

    Cost:  Designing a legacy DR solution means creating, deploying and maintaining a replica of a full production environment.  Staff must coordinate with multiple hardware vendors, software vendors, and internet providers to maintain the DR environment.  Having a DR test plan and conducting tests can be a coordination nightmare.  Having a single cloud service provider design, implement and maintain the full DR environment means a single vendor can be tasked with disaster testing and plan validation.

    Risk:  Relying on many partners in the event of a disaster presents challenges and risks in recovery, timing, tasks and ownership if an event should occur.  A single provider that can "flip the DR switch" and fail over a production environment to the DR location removes that complexity.

    Being a superhero can be as simple as making a call and finding a partner up in the clouds.

  • Cisco Unified Communication (UC) Server - Hardware Options


    May 23rd, 2018

    There are various options when it comes to installing Cisco Unified Communication (UC) applications.  I have summarized these options with the pros and cons of each:

    Business Edition (BE) 7000 (BE7K) or Business Edition (BE) 6000 (BE6K)

    • What is it?
      • The Cisco BE6K/BE7K is built on a virtualized UCS that ships ready-for-use with a pre-installed virtualization hypervisor and application installation files. The BE6K/BE7K is a UCS TRC, in that UC applications have been explicitly tested on its specific UCS configuration.
    • Pros
      • Easy to order - one SKU.  That SKU includes everything, including the VMware license
      • All OVA templates and ISO images are preloaded on the server
    • Cons
      • There is little to no flexibility in choosing hardware and software



    UC on Cisco Tested Reference Configuration (TRC) servers

    • What is it?
      • UCS TRCs are specific hardware configurations of UCS server components. These components include CPU, memory, hard disks (in the case of local storage), RAID controllers, and power supplies
    • Pros
      • The ordering process involves more steps than the BE7K, but it is simple compared to a spec-based solution: check the TRC specification against the actual hardware, including CPU, memory, hard drive, VMware, etc.
      • Provides more flexibility compared to the BE6K and BE7K in terms of choosing hardware/software
    • Cons
      • There are still fewer hardware/software options to choose from
      • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires ordering a VMware Foundation or VMware Standard license


    UC on Spec Based servers

    • What is it?
      • Specifications-based UCS hardware configurations are not explicitly validated with UC applications. Therefore, no prediction or assurance of UC application virtual machine performance is made when the applications are installed on UCS specs-based hardware. In those cases, Cisco provides guidance only, and ownership of assuring that the pre-sales hardware design provides the performance required by UC applications is the responsibility of the customer
    • Pros
      • Can leverage existing compute infrastructure, including 3rd party hardware
      • Provide the most flexibility in terms of hardware/software options to choose from
    • Cons
      • Ordering equipment based on specs requires more upfront planning, validation and potential pre-testing
      • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires VMware vCenter


    UC on Cisco HyperFlex

    • What is it?

      • UC on Cisco HyperFlex is available as a TRC
    • Pros
      • Same as TRC servers
      • Provides a more robust and scalable solution
    • Cons
      • This could be an expensive solution, unless it is part of a larger HyperFlex deployment
      • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires VMware vCenter


    Who should be looking for UC on HCI (HyperFlex)?

    1. Server teams with incumbent 3rd-party compute looking for alternative storage
    2. Voice/video teams seeking an HCI alternative to the BE6K or BE7K appliance for UC
    3. UC deployments not on a BExK, where one team is in charge of everything and wants HCI instead of other approaches
    4. UC deployments not on a BExK, with separation of duties where the server team owns VMware/compute/storage and is looking for HCI


    High Level Solution


    The HyperFlex bundle comes with four (4) HX240C nodes and a pair of Cisco 6248 Fabric Interconnects.  The system is managed by HyperFlex (HX) software running on the Cisco 6248.

    The following applications are supported by TRC (Tested Reference Configuration) on HyperFlex:

    • (CUCM) Unified Communications Manager
    • (IMP) Unified Communications Manager – IM & Presence
    • Expressway C & Expressway E
    • (CER) Emergency Responder
    • (PCP) Prime Collaboration Provisioning
    • (PCA) Prime Collaboration Assurance
    • (PCD) Prime Collaboration Deployment
    • (PLM) Prime License Manager (standalone)
    • (CUC) Unity Connection
    • (UCCX) Unified Contact Center Express
    • (TMS) Telepresence Management Suite


    Sample Design


    • Minimum system using HX240c M4SX TRC#1, HX 1.8.1.
    • 4x HX nodes, each with VMware vSphere ESXi 6.0
    • 2x 6200 FI switches
    • VMware vCenter 6.0 for management


    SAN/NAS Best Practices

    General Guidelines

    • Adapters for storage access must follow supported hardware rules

    • Cisco UC apps use a 4-kilobyte block size to determine bandwidth needs.
    • Design your deployment in accordance with the UCS High Availability guidelines  
    • 10GbE networks for NFS, FCoE or iSCSI storage access should be configured using Cisco Platinum class QoS for the storage traffic.
    • Ethernet ports for LAN access and Ethernet ports for storage access may be separate or shared. Separate ports may be desired for redundancy purposes. It is the customer's responsibility to ensure external LAN and storage access networks meet UC app latency, performance and capacity requirements.
    • In the absence of UCS 6100/6200, normal QoS (L3 and L2 marking) can be used starting from the first upstream switch to the storage array.
    • With UCS 6100/6200:
    • FC or FCoE: no additional requirements. Automatically handled by Fabric Interconnect switch.
    • iSCSI or NFS: Follow these best practices:
      • Use a L2 CoS between the chassis and the upstream switch.
      • For the storage traffic, recommend a Platinum class QoS, CoS=5, no drop (Fiber Channel Equivalent)
      • L3 DSCP is optional between the chassis and the first upstream switch.
      • From the first upstream switch to the storage array, use the normal QoS (L3 and L2 marking). Note that iSCSI or NFS traffic is typically assigned a separate VLAN.
      • iSCSI or NFS: Ensure that the traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation (
    • The storage array vendor may have additional best practices as well.
    • If disk oversubscription or storage thin provisioning is used, note that UC apps are designed to use 100% of their allocated vDisk, either for UC features (such as the Unity Connection message store or Contact Center reporting databases) or critical operations (such as spikes during upgrades, backups or statistics writes). While thin provisioning does not introduce a performance penalty, not having physical disk space available when the app needs it can have the following harmful effects:
      • degrade UC app performance, crash the UC app and/or corrupt the vDisk contents.
      • lock up all UC VMs on the same LUN in a SAN


    Link Provisioning and High Availability

    Consider the following example to determine the number of physical Fiber Channel (FC) or 10Gig Ethernet links required between your storage array (such as the EMC Clariion CX4 series or NetApp FAS 3000 Series) and SAN switch (for example, Nexus or MDS Series SAN switches), and between your SAN switch and the UCS Fabric Interconnect switch. This example is presented to give a general idea of the design considerations involved. You should contact your storage vendor to determine the exact requirement.

    Assume that the storage array has a total capacity of 28,000 Input/output Operations Per Second (IOPS). Enterprise grade SAN Storage Arrays have at least two service processors (SPs) or controllers for redundancy and load balancing. That means 14,000 IOPS per controller or service processor. With the capacity of 28,000 IOPS, and assuming a 4 KByte block size, we can calculate the throughput per storage array controller as follows:

    • 14,000 I/O per second * (4000 Byte block size * 8) bits = 448,000,000 bits per second
    • 448,000,000/1024 = 437,500 Kbits per second
    • 437,500/1024 = ~427 Mbits per second

    Adding more overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, it is clear that a 4 Gbps FC interface is enough to handle the entire capacity of one storage array. Therefore, Cisco recommends putting four FC interfaces between the storage array and storage switch to provide high availability.
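    The arithmetic above can be sketched in a few lines of Python. This is a generic back-of-the-envelope estimate using the example's own figures (14,000 IOPS per controller, 4,000-byte blocks), not a sizing tool; contact your storage vendor for exact requirements.

```python
def controller_throughput_mbps(iops_per_controller, block_bytes=4000):
    """Estimate per-controller throughput from IOPS and block size.

    Mirrors the worked example: a 28,000 IOPS array split across two
    controllers, each moving 4,000-byte blocks.
    """
    bits_per_second = iops_per_controller * block_bytes * 8  # 448,000,000
    kbits = bits_per_second / 1024                           # 437,500
    return kbits / 1024                                      # ~427 Mbit/s

# 28,000 IOPS array with two controllers -> 14,000 IOPS each
print(round(controller_throughput_mbps(28000 // 2)))  # -> 427
```

    With protocol overhead added, this lands near the ~600 Mbps per-controller figure cited above, comfortably within a single 4 Gbps FC link.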

    Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details on storage networking, see

  • New Features and Changes in vSphere 6.7


    May 23rd, 2018

    VMware just released vSphere 6.7, an incremental upgrade to the 6.x line of code. Although this is a minor release, it has lots of changes and improvements under the hood.

    HTML5 Client

    Most people I talk to hate the VMware Flash web client, preferring the old C# thick client for its ease of use and quick response. The HTML5 client offers a cleaner, more intuitive interface with better performance than the Flash version, but it is not yet feature-complete.

    With the release of 6.7, a number of features have been added that were previously missing.  All storage workflows are now available, and Update Manager is now integrated.  The client is now 90-95% complete, with VMware promising full functionality by fall of 2018.

    4K Native Drive Support

    Drives with 4k sector size are supported in 6.7 with built-in 512-byte sector emulation for legacy OS support.
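    As a toy illustration (a simplification for intuition, not VMware's actual implementation), 512-byte emulation means a logical 512-byte write lands inside a 4,096-byte physical sector via read-modify-write:

```python
PHYS = 4096    # native physical sector size (4Kn)
LOGICAL = 512  # emulated logical sector size

def rmw_write(disk, logical_sector, data):
    """Apply one 512-byte logical write to a 4Kn disk via read-modify-write."""
    assert len(data) == LOGICAL
    byte_addr = logical_sector * LOGICAL
    phys_index = byte_addr // PHYS           # which physical sector holds it
    offset = byte_addr % PHYS                # byte offset within that sector
    sector = bytearray(disk[phys_index])     # read the whole physical sector
    sector[offset:offset + LOGICAL] = data   # modify just the 512-byte slice
    disk[phys_index] = bytes(sector)         # write the full sector back

# A tiny "disk" of two 4 KB physical sectors
disk = [bytes(PHYS) for _ in range(2)]
rmw_write(disk, 9, b"\xff" * LOGICAL)  # logical sector 9 -> physical sector 1
```

    Eight logical sectors map onto each physical sector, which is why misaligned legacy workloads can pay a read-modify-write penalty on 4K drives.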

    Increased Limits

    Device limits continue to increase in 6.7.

    • Max virtual disks increased from 60 to 256
    • Max ESXi number of devices increased from 512 to 1024
    • Max ESXi paths to devices increased from 2048 to 4096

    VCSA VAMI Improvements

    The Virtual Appliance Management Interface, known as VAMI, is available at https://vcenter:5480.  There have been several major improvements. 

    Services can now be managed directly from here.

    Scheduled backups of VCSA are now available through the backup tab in VAMI.

    Quickboot and Single-Reboot Upgrades

    Quickboot allows the hypervisor to restart without restarting the underlying hardware.  This is useful when applying hypervisor patches and is available only from VUM.  6.7 also enables single-reboot upgrades, which eliminates the second reboot requirement and greatly speeds up the upgrade process.

    DRS initial placement improvements

    vCenter 6.7 includes the improved initial placement introduced in 6.5, but it now works with HA enabled. Performance is said to be 3x faster.

    Support for RDMA (Remote Direct Memory Access)

    RDMA allows transfer of memory contents from one system to another, bypassing the kernel.  This delivers high I/O bandwidth with low latency.  The feature requires an HCA (Host Channel Adapter) on the source and destination systems.

    vSphere Persistent Memory

    Persistent memory, utilizing NVDIMM modules, is supported for hosts and VMs.  This technology enables incredible levels of performance, with speeds 100x faster than SSD storage.

    Deprecated CPUs

    Since new vSphere 6.7 features depend on specific CPU capabilities, a lot of fairly recent legacy CPUs are no longer supported.  Check your hardware for compatibility before planning a 6.7 upgrade. 

    Release notes:


    vSphere 6.7 is a solid update that addresses a lot of pain points in prior releases.  It will be a welcome improvement for businesses of all sizes.



  • Top Tips by CIOs for CIOs


    April 24th, 2018

    Supervising a technology team, no matter the size, is a huge responsibility. From recruiting engineers and technology technicians to transforming your data center – a Chief Information Officer must be able to handle any situation that comes across their desk while simultaneously maintaining the goals and objectives of their organization.

    We asked some of the region’s top CIOs and CTOs for advice and tips for other technology leaders - here are our findings:


    • “Surround yourself with a core group of smart, technically savvy, team oriented, and motivated professionals.  The team will grow from there….similar people find each other.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Listen to everyone’s ideas, turn your ideas into someone else’s, ask probing questions…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Make sure that your team understands mission / objectives and that they have the training / aptitude to be successful.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    • “Manage up and across as much as or more than down, be curious on motives and what makes people tick.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Drive a culture of partnership.  The team is greater than any individual member.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Promote the team and their efforts, not the leaders.”
      • Marty Keane, Chief Technology Officer, Penn Capital


    • “Don’t major in the minors. Identify the big impact projects, issues, and opportunities and address them long term.  Filter out the noise.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Change is inevitable.  Change is uncomfortable. Drip change, don’t pour change, and allow the change to disperse evenly.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Turn ‘No’ into challenges - be flexible.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust


    • “Challenge your staff with additional responsibility and exposure to new opportunities…success and recognition follows.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Practice empathy – understand employee and co-workers experiences; this encourages people to speak up if they are struggling.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products
    • “Learn to stay intrigued, motivated and curious.  Find where you get your inspiration from.  Innovation is forged from such foundations.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Stay humble.  Celebrate your successes, but don’t be complacent.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Know that a leader will hold all to ethical, moral, and performance standards.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products


    • “Right size technical and business solutions based on the needs of the business.  Biggest, best, newest is not always the answer.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “People, planning and process are critical components for ongoing team success.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    •  “Balance technical solutions with sound financial analysis regarding costs, savings, and impact to financial statements.  Speak the language of those paying the bills.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Make bold decisions, pull the plug on failed projects no matter what the costs are…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Align IT services to business needs.  Always be able to point IT actions back to a key business driver.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Provide focus on strategic, tactical and operational needs as required to reach goals.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
  • Why HCI is the Datacenter Architecture of the Future


    April 9th, 2018

    So why talk about data center architecture? As part of the IT community, it can be fun to talk and geek out about the latest technologies or trends. We can appreciate how things used to be done and are excited about where things are headed. But, at some point it must become practical to the business side of the house. Most people in your company and your customers don’t care what you are doing in your datacenter. They could not care less if you are 100% virtualized or still bare metal. They don’t know if you are running public, private, or hybrid cloud.  They might even laugh at how seriously we can use the term cloud.

    There is something to be gained by thinking of the business results of our datacenters and the IT infrastructure contained therein. We look at the results that Google, Facebook, and Amazon are getting, and we study why they made the choices they did. Each one uses an HCI approach in the datacenter.

    Whether a company is virtualized is not really a question we consider anymore, but rather what small percentage is not virtual yet. With very mature virtual operations, there is now a drive to containerize applications and services. We don’t debate if virtualization is better than bare metal since we see the flexibility, consolidation, and ease of management inherent in the ability to run multiple workloads on the same physical server. Treating a server as a file that can be manipulated independent of the underlying hardware is a key feature of virtualization.

    Hyperconverged infrastructure picks up where virtualization left off. It takes the software approach to more of the IT stack. Software is consuming everything. What used to be separate products are just part of the stack or cloud. HCI treats storage as a service to be delivered to the VM or Application layer. Networking is delivered in a similar fashion. Just like the public cloud, everything is abstracted behind a management console and administered as one unit.

    HCI delivers the best of the public cloud experience – initial right-sizing, incremental growth, self-service, and programmability/automation – with the benefits of on-premises infrastructure: security, performance, and control. An extra bonus: over a period of years, it is less expensive to own commodity hardware resources in your datacenter than to rent them in a public cloud for deterministic workloads.

    Your IT operations are a major component of the engine that drives your business. The more these processes can be automated in software, the less friction there is, and the more productivity and profits will increase. The large cloud organizations automate as much as possible so they can iterate improvements faster. Some call this process DevOps: tying IT processes directly to business outcomes, which then feed back into the IT process. This creates a virtuous cycle that is a business enabler instead of a cost center. And the more your datacenter is software-defined, the more flexibility you have in the DevOps process.

    The best HCI vendors will allow you to meld architectures. Whether you call it hybrid cloud or cloud bursting, this approach helps balance cloud costs and avoids a whole new type of vendor lock-in. You should be able to mix mode 1 legacy apps with mode 2 cloud-native apps, so you can migrate as the business is able. It sounds like the data center of the future, except it is possible today.

    In closing, I ask again, why talk about data center architecture? The simple answer is: if you are reading this, it is probably part of your job. Every major infrastructure vendor has some type of HCI offering. It could be software, hardware, or appliance based. With the industry rapidly evolving, it is important to have a trusted technology advisor like HA, Inc. who can help you look at all the options out there and assist in your due diligence. We can help you make an informed decision, so your data center and IT practices are evolving with the industry into a bright future.

  • Configuring AnyConnect Remote Access VPN on Cisco FTD


    April 6th, 2018

    Cisco ASAs have been a part of Cisco’s security product lineup since 2005, replacing the older PIX firewalls.  Over recent years, Cisco has focused a great deal on security, adding more and more solutions for different portions of the network.  One of the newer security solutions came in with the acquisition of Sourcefire back in 2013.  Sourcefire, at the time of the acquisition, was one of the leading intrusion prevention solutions on the market.  Shortly after that acquisition, what was previously known as Sourcefire received a name change to Cisco FirePOWER, then to FirePower, and more recently, Firepower.  Yes, the name changed quite a bit over the past few years.


    Firepower added the Next-Generation Firewall (NGFW) solutions that are now pretty much required in networks of all sizes.  The NGFW feature sets add additional visibility into application networking, user traffic, content filtering, vulnerability monitoring, and much more, providing the security that’s needed.


    Cisco first added their NGFW solution to the Cisco ASA5500X products by adding a Firepower module (SFR) into the firewall appliance.  This SFR module is essentially a hard drive that runs as a Firepower sensor.  Policies are pushed to this module, which directs traffic to be bounced from the ASA over to the sensor for inspection; traffic is then sent back to the ASA for processing.


    In addition to offering the Cisco ASA as a firewall security solution, Cisco added a newer Firepower Threat Defense (FTD) appliance.  The Cisco FTD appliance consolidates some of the ASA functionality and the NGFW features down into a single appliance.  This allows for easier management of the security solutions, with one single management interface, as opposed to having to manage the ASA configuration separately from the NGFW features, which are typically managed from Firepower Management Center (FMC).


    The Cisco FTD appliance carries most (not all) of the features that an ASA would support.  One particular feature that was brought over from the ASA is remote access VPN connectivity.  Some of the remote access features that were ported over from the ASA did not make it over to FTD.  The most notable features that are missing from this Remote Access VPN on FTD solution as of v6.2 are:


    • local user authentication
    • 2-factor authentication
    • ISE Posturing
    • LDAP attribute mapping
    • AnyConnect modules


    *reference the link below for a full list of limitations




    In this article, I will be providing a sample of how to configure a remote access VPN solution on Cisco FTD. 


    This article is going to assume that the FTD appliance is already registered, licensing is acquired, and that the appliance is being managed by FMC. 


    To start the remote access VPN configuration, we first need to apply the AnyConnect licensing to the FTD appliance.  Navigate to System > Licenses > Smart Licenses.



    Select the “Edit Licenses” button on the upper right.

    Select the licensing that was purchased and move your FTD appliance into the right window to assign the license to the appliance.  In this case, “AnyConnect Apex” licensing was selected, and the appliance named “FTD” was moved to the right.  When complete, select “Apply” at the bottom right.



    Now that the licensing has been assigned, we can continue with the building blocks required for the RA VPN connectivity.  The next step would be to create all of the various objects (software package, profile, IP Pool, etc).  These objects will all tie together during the RA VPN config wizard.


    The first object we will create is the software package object.  Navigate to Objects > Object Management > VPN > AnyConnect File


    Here, we will add the VPN client software packages for the different required Operating Systems that will be used in the environment.



    Select “Add AnyConnect File” at the top-right.



    Enter a name, browse to the AnyConnect client package file which can be downloaded using the link below (valid Cisco contract required) and select “AnyConnect Client Image” as the file type.  When complete, select the “Save” button.  Repeat this process for each client type that will be connecting (Windows, Mac, Linux).




    Within this same location, we will add the AnyConnect profile.


    Select “Add AnyConnect File” at the top-right once again.


    Enter a name, browse to the profile, select "AnyConnect Client Profile" as the File Type, and select "Save" when complete.


    • This XML profile can be created using the Cisco AnyConnect Profile Editor tool on a Windows machine.  The Profile Editor can be downloaded using the same link that was provided above.
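    For reference, a minimal client profile produced by the Profile Editor looks similar to the sketch below.  The display name and VPN headend address are placeholders, not values from this lab:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<AnyConnectProfile xmlns="http://schemas.xmlsoap.org/encoding/">
  <ServerList>
    <!-- Each HostEntry appears as a selectable headend in the AnyConnect client -->
    <HostEntry>
      <HostName>Example RA VPN</HostName>
      <HostAddress>vpn.example.com</HostAddress>
    </HostEntry>
  </ServerList>
</AnyConnectProfile>
```

    A profile exported from the Profile Editor will contain additional preference sections; the ServerList is the part most deployments customize.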


    We will now move on to creating the IP pool object.  This pool is the address pool from which remote access clients are assigned an IP address as they connect to the FTD appliance using AnyConnect.


    In FMC, open Objects > Object Management > Address Pools > IPv4 Pools



    Select “Add IPv4 Pools” at the top-right



    Provide a name, enter the pool range and subnet mask, then select "Save".
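    Behind the scenes, FMC renders this object as ASA-style configuration on the FTD appliance.  As a rough illustration (the pool name and addressing below are placeholders, not values from this lab):

```
! Address pool handed out to connecting AnyConnect clients
ip local pool VPN_POOL 10.100.0.1-10.100.0.254 mask 255.255.255.0
```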



    We will now configure an object-group that references this VPN IP Pool


    Open Objects > Object Management > Network


      Select “Add Network > Add Group” at the top-right
    Provide an object name, then manually enter the IP subnet for the VPN pool that was previously created.  Select "Save" when complete.



    An optional configuration that can be added is a split-tunnel list.  Split tunneling allows VPN connectivity to a remote network across a secure tunnel while also allowing local LAN access.  There are a few security concerns with allowing the use of split tunneling, but it is an option.  To configure a split-tunnel list, we will create an Extended Access Control List.


    Navigate to Objects > Object Management > Access List > Extended



    Select “Add Extended Access List” at the top-right



    Provide a name for this new Access-List.

    Select “Add” at the top-right.

    Enter the inside IP space object as the source address.  Leaving all other options at their defaults, select "Add", then "Save".
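    For comparison, on a classic ASA the equivalent split-tunnel access list would look something like the line below.  The subnet shown is a placeholder standing in for the inside IP space:

```
! Traffic matching this ACL is sent through the tunnel; everything else stays local
access-list SPLIT_TUNNEL extended permit ip 10.0.0.0 255.255.0.0 any
```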


    The next object that is needed is a certificate that will be referenced later.


    To create a self-signed certificate, select Objects > Object Management > PKI > Cert Enrollment



    Select “Add Cert Enrollment” at the top-right



    Provide a name for this new certificate and a type of PKCS12, then save.


    The next object to create is for authentication.


    Cisco ASAs offer an option to authenticate remote access VPNs directly against the ASA using local authentication, with users created directly on the ASA.  With v6.2, FTD only supports external authentication using either RADIUS or LDAP authentication servers.  In this lab, authentication will go against a single RADIUS server running Cisco ISE (Identity Services Engine).  Of course, in a production environment, having redundant servers would be the recommended approach.  In that case, this step would be performed twice in order to configure both authentication servers.


    To create the authentication server, open Objects > Object Management > RADIUS Server Group




    Select “Add RADIUS Server Group” at the top-right



    Provide a name (typically enter the server name here).


    Select the “plus” sign to add a server


    Enter the IP address of the RADIUS authentication server, along with the key and then save.


    If adding a second RADIUS server, repeat the process to add the redundant server. 



    Once all RADIUS servers have been added, save changes for the group.
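    The resulting configuration pushed to FTD resembles the ASA-style aaa-server syntax sketched below.  The group name, interface, server IP, and shared key are placeholders:

```
! RADIUS server group and member server for VPN authentication
aaa-server ISE protocol radius
aaa-server ISE (inside) host 10.1.1.10
 key SuperSecretKey
```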


    The final object that will be created will be the VPN Group Policy.  This Group Policy will provide various connectivity attributes for the VPN client.


    Open Objects > Object Management > VPN > Group Policy



    Select “Add Group Policy” at the top-right


    Provide a name for this Group Policy


    Next, a DNS server is defined.  General > DNS/WINS > Primary DNS Server > Add

    Enter a name and the network address of the DNS server.


    Also, on the General tab under Split Tunneling, select "Tunnel networks specified below" for IPv4, select the radio button next to "Extended Access List", and in the drop-down select the previously created split-tunnel list object named "SPLIT_TUNNEL".



    Finally, on the Group Policy, select the AnyConnect tab, select the AnyConnect Profile object previously created, then save.
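    Taken together, these Group Policy settings map to ASA-style configuration along these lines.  The policy name matches the object created above, while the DNS server address is a placeholder:

```
! VPN group policy: DNS plus split tunneling keyed to the SPLIT_TUNNEL ACL
group-policy ANYCONNECT internal
group-policy ANYCONNECT attributes
 dns-server value 10.1.1.5
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value SPLIT_TUNNEL
```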


    At this point, all objects are created and are now ready to run the VPN wizard.


    Navigate to Devices > VPN > Remote Access > Add



    Provide a name, then move the FTD appliance from the available devices into the selected device column.  Then click Next.

    Select AAA Only for the Authentication Method

    Select the ISE object previously created as the Authentication Server

    Select the VPN_POOL IP Pool

    Select the ANYCONNECT object for the Group Policy

    Then click Next



    Check the boxes next to each client image and verify the OS selected.  Then click Next.

    Select the outside interface as the Interface group/Security Zone

    Select the ANYCONNECT_CERT object for the Certificate Enrollment

    Click Next






    Review the summary of the changes being made and click Finish



    The next step is to add a publicly signed certificate that will be associated with the outside interface.


    Open Devices > Certificates



    At the top-right, select Add > PKCS12 File




    Select the FTD device


    For the Cert Enrollment, select the ANYCONNECT_CERT object


    For the PKCS12 File, select the pfx certificate and enter the passphrase.


    Click Add


    The final steps would now be to create a security policy rule as well as a NAT rule.


    Select Policies > Access Control > select the Access Control Policy that is deployed to the FTD appliance.


    Add a new rule


    Name the new policy

    Insert this policy “into Default”

    On the Zones tab, add the “outside” zone as the source and “inside” as the destination zones


    On the "Networks" tab, add the VPN object as the source network and the RFC1918 object as the destination network


    Click “Add” when complete.  Then Save at the top right.



    For the NAT exemption rule, open Devices > NAT


    Modify the existing NAT policy that’s applied to the FTD appliance and add a new rule


    In the Interface Objects tab, add the inside zone as the source and the outside zone as the destination.


    On the Translation tab add:

                - Original Source = internal networks (RFC1918)

                - Original Destination = Address / VPN_POOL

                - Translated Source = internal networks (RFC1918)

                - Translated Destination = VPN_POOL


    Select the Advanced tab and choose the “Do not proxy ARP on Destination Interface” checkbox.  Then click “OK” then “Save” at the top right.
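    This identity (no-NAT) rule corresponds to an ASA-style twice NAT statement similar to the following, where RFC1918 and VPN_POOL stand for the network objects referenced above:

```
! NAT exemption: keep inside-to-VPN-client traffic untranslated
nat (inside,outside) source static RFC1918 RFC1918 destination static VPN_POOL VPN_POOL no-proxy-arp
```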



    At this point, the Remote Access VPN solution has been configured and is ready to be deployed to the FTD appliance.  At the top right of FMC, select “Deploy”.  Choose the FTD appliance that you are enabling remote access VPN on and Deploy the policy.  Deploying this policy takes time but can be monitored from the “Tasks” section next to the Deploy button in the menu bar.



    When the policy has been deployed successfully, remote access VPN can be tested.


    From a web browser on a machine on the outside network, navigate to the outside IP or URL of the FTD appliance.  You should be prompted to enter user credentials.  Enter the username and password and select "Logon".



    Once successfully logged in, you may be prompted to install the AnyConnect client.  If the client is already installed, the VPN will automatically connect.  When connected, the AnyConnect client icon in the PC's task bar will appear as shown below.



    To verify connectivity from within FTD, similar to an ASA, you can check status using the “show vpn-sessiondb detail anyconnect” command.
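    On FTD, these ASA-style show commands are run from the diagnostic CLI.  A quick sketch of checking sessions and, if needed, disconnecting one (the username is a placeholder):

```
! Enter the ASA-style diagnostic CLI from the FTD shell
system support diagnostic-cli
! Summary and detailed views of active AnyConnect sessions
show vpn-sessiondb anyconnect
show vpn-sessiondb detail anyconnect
! Disconnect a specific user's session
vpn-sessiondb logoff name jsmith
```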


    To disconnect from the VPN, right-click on the AnyConnect client and select “Disconnect”


    As you can see, configuring a remote access VPN on FTD does have its limitations and does take a bit of configuration to get working, but it is a rock-solid solution.


    Important caution: Any commands shown in this post are for demonstration purposes only and should always be modified accordingly and used carefully.  Do not run any of these procedures without thorough testing or if you do not fully understand the consequences.  Please contact a representative at High Availability, Inc. if you need assistance with components of your infrastructure as they relate to this post.
