Blog

  • Look! Up in the sky, in the clouds! It's my DR, saving the day!

    May 31st, 2018

    Being the superhero of IT is not an easy job.  Fighting crime, helping citizens, keeping peace - all in a day's work.  Avoiding disaster is one of the IT superhero's duties that can be quiet and behind the scenes.  It can also be costly and risky, hard to budget, and hard to plan.

    Two of DR's arch enemies - Cost and Risk!

    Fortunately, moving your DR into the cloud can fight both cost and risk.  Let's look at some of the top ways a cloud-based DR solution can help save money and leap tall pitfalls.

     

    Managing a DR Location
    Cost: Buying or leasing a secondary IT location can be a huge infrastructure undertaking.  Moving to the cloud often requires little infrastructure purchasing, as space, power, cooling and connectivity can all be rolled up into a single lower expense.

    Risk: Cloud-based data centers are designed with risk avoidance in mind.  Redundant power and internet connections into the datacenter can achieve nearly 100% uptime.


    CapEx
    Cost:  Purchasing DR equipment can be a costly and shocking endeavor.  Replicating production equipment, with its own maintenance and software licensing costs, is often difficult to justify as a huge Capital Expenditure from a budget perspective.  Cloud space, with its potential for co-location and virtualization of hardware, makes total economic sense.  Entire DR solutions can be leased from month to month, making for an easier-to-justify Operational Expense.

    Risk: Many times, due to budget, space, internet speed, or other factors, legacy DR solutions include shortcuts taken to avoid these challenges.  Every time a shortcut is introduced, a risk is created.  A cloud-based, Operational-Expense design allows for the design and implementation of a complete solution, without cutting corners.

     

    Staffing
    Cost: Hiring additional resources to manage DR equipment and connections can be pricey.  Having a cloud provider manage the DR solution can offload the need to hire staff or have staff make frequent trips to and from a legacy DR location.

    Risk:  Having the expertise of dedicated DR professionals at the ready eliminates the risk of relying on internal staff who may not be comfortable designing, implementing, and maintaining a true DR solution.
     

    Complexity
    Cost:  Designing a legacy DR solution means creating, deploying, and maintaining a replica of a full production environment.   Staff must coordinate with multiple hardware vendors, software vendors, and internet providers to maintain the DR environment.  Creating a DR test plan and conducting tests can be a coordination nightmare.  Having a single cloud service provider design, implement, and maintain the full DR environment means a single vendor can be tasked with disaster testing and plan validation.

    Risk:  Relying on many partners in the event of a disaster presents challenges and risks in recovery, timing, tasks, and ownership should an event occur.  A single provider that can "flip the DR switch" and fail over a production environment to the DR location removes that uncertainty.

    Being a superhero can be as simple as making a call and finding a partner up in the clouds.

  • Cisco Unified Communication (UC) Server - Hardware Options

    May 23rd, 2018

    There are various options when it comes to installing Cisco Unified Communication (UC) applications.  I have summarized these options, with the pros and cons of each:

    Business Edition (BE) 7000- BE7K or Business Edition (BE) 6000- BE6K

    • What is it?
      • The Cisco BE6K/BE7K is built on a virtualized UCS server that ships ready for use with a pre-installed virtualization hypervisor and application installation files. The BE6K/BE7K is a UCS TRC in that UC applications have been explicitly tested on its specific UCS configuration.
    • Pros
      • Easy to order - one SKU.  That SKU includes everything, including the VMware license
      • All OVA templates and ISO images come preloaded on the server
    • Cons
      • There is no flexibility in choosing hardware and software

     

     

    UC on Cisco Tested Reference Configuration (TRC) servers

    • What is it?
      • UCS TRCs are specific hardware configurations of UCS server components. These components include CPU, memory, hard disks (in the case of local storage), RAID controllers, and power supplies.
    • Pros
      • The ordering process involves more steps than the BE7K, but it is simple compared to a Spec-based solution - check the TRC specification against the actual hardware, including CPU, memory, hard drive, VMware, etc.
      • Provides more flexibility compared to the BE6K and BE7K in terms of choosing hardware/software
    • Cons
      • There are still fewer hardware/software options to choose from
      • Client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires ordering a VMware Foundation or VMware Standard license

     

    UC on Spec Based servers

    • What is it?
      • Specifications-based UCS hardware configurations are not explicitly validated with UC applications. Therefore, no prediction or assurance of UC application virtual machine performance is made when the applications are installed on UCS specs-based hardware. In those cases, Cisco provides guidance only, and ownership of assuring that the pre-sales hardware design provides the performance required by UC applications rests with the customer.
    • Pros
      • Can leverage existing compute infrastructure, including 3rd party hardware
      • Provides the most flexibility in terms of hardware/software options to choose from
    • Cons
      • Ordering Spec-based equipment requires more upfront planning, validation, and potential pre-testing
      • Client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires VMware vCenter

     

    UC on Cisco HyperFlex

    • What is it?

      • UC on Cisco HyperFlex is available as a TRC
    • Pros
      • Same as TRC servers
      • Provides a more robust and scalable solution
    • Cons
      • This can be an expensive solution, unless it is part of a larger HyperFlex deployment
      • Client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
      • Requires VMware vCenter

     

    Who should be looking for UC on HCI (HyperFlex)?

    1. Server team with incumbent 3rd-party compute looking for alternative storage
    2. Voice/video team seeking HCI alternative to BE6K or BE7K appliance for UC
    3. UCS shops (non-BExK) where one team is in charge of everything and wants HCI instead of other approaches
    4. UCS shops (non-BExK) with separation of duties, where the server team owns VMware/compute/storage and is looking for HCI

     

    High Level Solution

     

    The HyperFlex bundle comes with four (4) HX240C nodes and a pair of Cisco 6248 Fabric Interconnects.  The system is managed by HyperFlex (HX) software running on the Cisco 6248s.

    The following applications are supported by TRC (Tested Reference Configuration) on HyperFlex:

    • (CUCM) Unified Communications Manager
    • (IMP) Unified Communications Manager – IM & Presence
    • Expressway C & Expressway E
    • (CER) Emergency Responder
    • (PCP) Prime Collaboration Provisioning
    • (PCA) Prime Collaboration Assurance
    • (PCD) Prime Collaboration Deployment
    • (PLM) Prime License Manager (standalone)
    • (CUC) Unity Connection
    • (UCCX) Unified Contact Center Express
    • (TMS) Telepresence Management Suite

     

    Sample Design

    Assumptions

    • Minimum system using HX240c M4SX TRC#1, HX 1.8.1.
    • 4x HX nodes, each with VMware vSphere ESXi 6.0
    • 2x 6200 FI switches
    • VMware vCenter 6.0 for management

     

    SAN/NAS Best Practices

    General Guidelines

    • Adapters for storage access must follow supported hardware rules

    • Cisco UC apps use a 4-kilobyte block size to determine bandwidth needs.
    • Design your deployment in accordance with the UCS High Availability guidelines  
    • 10GbE networks for NFS, FCoE, or iSCSI storage access should be configured using Cisco Platinum Class QoS for the storage traffic.
    • Ethernet ports for LAN access and Ethernet ports for storage access may be separate or shared. Separate ports may be desired for redundancy purposes. It is the customer's responsibility to ensure external LAN and storage access networks meet UC app latency, performance, and capacity requirements.
    • In the absence of UCS 6100/6200, normal QoS (L3 and L2 marking) can be used starting from the first upstream switch to the storage array.
    • With UCS 6100/6200:
    • FC or FCoE: no additional requirements. Automatically handled by Fabric Interconnect switch.
    • iSCSI or NFS: Follow these best practices:
      • Use an L2 CoS between the chassis and the upstream switch.
      • For the storage traffic, a Platinum class QoS is recommended: CoS=5, no drop (Fibre Channel equivalent).
      • L3 DSCP is optional between the chassis and the first upstream switch.
      • From the first upstream switch to the storage array, use the normal QoS (L3 and L2 marking). Note that iSCSI or NFS traffic is typically assigned a separate VLAN.
      • iSCSI or NFS: Ensure that the traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation (http://www.imaginevirtuallyanything.com/us/).
    • The storage array vendor may have additional best practices as well.
    • If disk oversubscription or storage thin provisioning is used, note that UC apps are designed to use 100% of their allocated vDisk, either for UC features (such as the Unity Connection message store or Contact Center reporting databases) or critical operations (such as spikes during upgrades, backups, or statistics writes). While thin provisioning does not introduce a performance penalty, not having physical disk space available when the app needs it can have the following harmful effects:
      • Degrade UC app performance, crash the UC app, and/or corrupt the vDisk contents
      • Lock up all UC VMs on the same LUN in a SAN

     

    Link Provisioning and High Availability

    Consider the following example to determine the number of physical Fibre Channel (FC) or 10 Gigabit Ethernet links required between your storage array (such as the EMC CLARiiON CX4 series or NetApp FAS 3000 series) and SAN switch (for example, Nexus or MDS series SAN switches), and between your SAN switch and the UCS Fabric Interconnect switch. This example is presented to give a general idea of the design considerations involved. You should contact your storage vendor to determine the exact requirements.

    Assume that the storage array has a total capacity of 28,000 Input/output Operations Per Second (IOPS). Enterprise-grade SAN storage arrays have at least two service processors (SPs) or controllers for redundancy and load balancing. That means 14,000 IOPS per controller or service processor. With the capacity of 28,000 IOPS, and assuming a 4 KByte block size, we can calculate the throughput per storage array controller as follows:

    • 14,000 I/O per second * (4000 Byte block size * 8) bits = 448,000,000 bits per second
    • 448,000,000/1024 = 437,500 Kbits per second
    • 437,500/1024 = ~428 Mbits per second

    Adding more overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, it is clear that a 4 Gbps FC interface is enough to handle the entire capacity of one storage array. Therefore, Cisco recommends putting four FC interfaces between the storage array and storage switch to provide high availability.

    Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details on storage networking, see http://www.cisco.com/en/US/netsol/ns747/networking_solutions_sub_program_home.html.

  • New Features and Changes in vSphere 6.7

    May 23rd, 2018

    VMware just released vSphere 6.7, an incremental upgrade to the 6.x line of code. Although this is a minor release, it has lots of changes and improvements under the hood.

    HTML5 Client

    Most people I talk to hate the VMware Flash web client, preferring the old C# thick client for its ease of use and quick response. The HTML5 client offers a cleaner, more intuitive interface with better performance than the Flash version, but it is not yet feature-complete.

    With the release of 6.7, a number of features have been added that were previously missing.  All storage workflows are now available, and Update Manager is now integrated.  The client is now 90-95% complete, with VMware promising full functionality by fall of 2018:

    https://blogs.vmware.com/vsphere/2018/05/fully-featured-html5-based-vsphere-client-coming-fall-2018.html

    4K Native Drive Support

    Drives with 4k sector size are supported in 6.7 with built-in 512-byte sector emulation for legacy OS support.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/support-for-4kn-hdds/

    Increased Limits

    Device limits continue to increase in 6.7.

    • Max Virtual Disks increased from 60 to 256
    • Max ESXi number of Devices increased from 512 to 1024
    • Max ESXi paths to Devices increased from 2048 to 4096

    VCSA VAMI Improvements

    The Virtual Appliance Management Interface, known as VAMI, is available at https://vcenter:5480.  There have been several major improvements. 

    • Services can now be managed directly from the VAMI.
    • Scheduled backups of the VCSA are now available through the Backup tab in the VAMI.

    Quickboot and Single-Reboot Upgrades

    Quickboot allows the hypervisor to restart without restarting the underlying hardware.  This is useful when applying hypervisor patches and is available only through vSphere Update Manager (VUM).  6.7 also enables single-reboot upgrades, which eliminate the second reboot previously required and greatly speed up the upgrade process.

    DRS initial placement improvements

    vCenter 6.7 retains the improved initial placement introduced in 6.5, but it now also works with HA enabled. Performance is said to be 3x faster.

    Support for RDMA (Remote Direct Memory Access)

    RDMA allows the transfer of memory contents from one system to another, bypassing the kernel.  This delivers high I/O bandwidth with low latency.  The feature requires an HCA (Host Channel Adapter) on both the source and destination systems.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/rdma-support-in-vsphere/

    vSphere Persistent Memory

    Persistent memory, utilizing NVDIMM modules, is supported for hosts and VMs.  This technology enables incredible levels of performance, with speeds 100x faster than SSD storage.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/pmem-persistant-memory-nvdimm-support-in-vsphere/

    Deprecated CPUs

    Since new vSphere 6.7 features depend on specific CPU capabilities, a number of fairly recent CPUs are no longer supported.  Check your hardware for compatibility before planning a 6.7 upgrade.

    Release notes:

    https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html

    Conclusion

    vSphere 6.7 is a solid update that addresses a lot of pain points in prior releases.  It will be a welcome improvement for businesses of all sizes.

     

     

  • Top Tips by CIOs for CIOs

    April 24th, 2018

    Supervising a technology team, no matter the size, is a huge responsibility. From recruiting engineers and technicians to transforming your data center, a Chief Information Officer must be able to handle any situation that comes across their desk while simultaneously maintaining the goals and objectives of their organization.

    We asked some of the region’s top CIOs and CTOs for advice and tips for other technology leaders – here are our findings:

    YOUR TEAM:

    • “Surround yourself with a core group of smart, technically savvy, team oriented, and motivated professionals.  The team will grow from there….similar people find each other.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Listen to everyone’s ideas, turn your ideas into someone else’s, ask probing questions…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Make sure that your team understands mission / objectives and that they have the training / aptitude to be successful.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    • “Manage up and across as much as or more than down; be curious on motives and what makes people tick.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Drive a culture of partnership.  The team is greater than any individual member.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Promote the team and their efforts, not the leaders.”
      • Marty Keane, Chief Technology Officer, Penn Capital

    CHALLENGES:

    • “Don’t major in the minors. Identify the big impact projects, issues, and opportunities and address them long term.  Filter out the noise.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Change is inevitable.  Change is uncomfortable. Drip change, don’t pour change, and allow the change to disperse evenly.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Turn ‘No’ into challenges - be flexible.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust

    LEADERSHIP:

    • “Challenge your staff with additional responsibility and exposure to new opportunities…success and recognition follows.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Practice empathy – understand employee and co-workers experiences; this encourages people to speak up if they are struggling.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products
    • “Learn to stay intrigued, motivated and curious.  Find where you get your inspiration from.  Innovation is forged from such foundations.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Stay humble.  Celebrate your successes, but don’t be complacent.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Know that a leader will hold all to ethical, moral, and performance standards.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products

    PROJECT MANAGEMENT:

    • “Right size technical and business solutions based on the needs of the business.  Biggest, best, newest is not always the answer.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “People, planning and process are critical components for ongoing team success.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    •  “Balance technical solutions with sound financial analysis regarding costs, savings, and impact to financial statements.  Speak the language of those paying the bills.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Make bold decisions, pull the plug on failed projects no matter what the costs are…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Align IT services to business needs.  Always be able to point IT actions back to a key business driver.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Provide focus on strategic, tactical and operational needs as required to reach goals.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
  • Why HCI is the Datacenter Architecture of the Future

    April 9th, 2018

    So why talk about data center architecture? As part of the IT community, it can be fun to talk and geek out about the latest technologies or trends. We can appreciate how things used to be done and are excited about where things are headed. But, at some point it must become practical to the business side of the house. Most people in your company and your customers don’t care what you are doing in your datacenter. They could not care less if you are 100% virtualized or still bare metal. They don’t know if you are running public, private, or hybrid cloud.  They might even laugh at how seriously we can use the term cloud.

    There is something to be gained by thinking about the business results of our datacenters and the IT infrastructure contained therein. We look at the results that Google, Facebook, and Amazon are getting, and we study why they made the choices they did. Each one uses an HCI approach in the datacenter.

    Whether a company is virtualized is not really a question we consider anymore; the question is what small percentage is not virtual yet. With very mature virtual operations, there is now a drive to containerize applications and services. We don’t debate whether virtualization is better than bare metal, since we see the flexibility, consolidation, and ease of management inherent in the ability to run multiple workloads on the same physical server. Treating a server as a file that can be manipulated independent of the underlying hardware is a key feature of virtualization.

    Hyperconverged infrastructure picks up where virtualization left off. It takes the software approach to more of the IT stack. Software is consuming everything. What used to be separate products are just part of the stack or cloud. HCI treats storage as a service to be delivered to the VM or Application layer. Networking is delivered in a similar fashion. Just like the public cloud, everything is abstracted behind a management console and administered as one unit.

    HCI delivers the best of the public cloud experience – initial right-sizing, incremental growth, self-service, and programmability/automation – with the benefits of on-premises infrastructure – security, performance, and control. An extra bonus: over a period of years, it is less expensive to own commodity hardware resources in your datacenter than to rent them in a public cloud for deterministic workloads.

    Your IT operation is a major component of the engine that drives your business. The more these processes can be automated in software, the less friction there is and the more productivity and profits will increase. The large cloud organizations automate as much as possible so they can iterate improvements faster. Some call this process DevOps: tying IT processes directly to business outcomes, which then feed back into the IT process. This creates a virtuous cycle that becomes a business enabler instead of a cost center. And the more your datacenter is software-defined, the more flexibility you have in the DevOps process.

    The best HCI vendors will allow you to meld architectures. Whether you call it hybrid cloud or cloud bursting, this approach helps balance cloud costs and avoids a whole new type of vendor lock-in. You should be able to mix mode 1 legacy apps with mode 2 cloud-native apps, so you can migrate as the business is able. It sounds like the data center of the future, except it is possible today.

    In closing, I ask again, why talk about data center architecture? The simple answer is: if you are reading this, it is probably part of your job. Every major infrastructure vendor has some type of HCI offering. It could be software, hardware, or appliance based. With the industry rapidly evolving, it is important to have a trusted technology advisor like HA, Inc. who can help you look at all the options out there and assist in your due diligence. We can help you make an informed decision, so your data center and IT practices are evolving with the industry into a bright future.

  • Configuring AnyConnect Remote Access VPN on Cisco FTD

    April 6th, 2018

    Cisco ASAs have been part of Cisco’s security product lineup since 2005, replacing the older PIX firewalls.  Over recent years, Cisco has focused a great deal on security, adding more and more solutions for different portions of the network.  One of the newer security solutions came with the acquisition of Sourcefire back in 2013.  Sourcefire, at the time of the acquisition, was one of the leading intrusion prevention solutions on the market.  Shortly after that acquisition, what was previously known as Sourcefire received a name change to Cisco FirePOWER, then FirePower, and more recently, Firepower.  Yes, the name has changed quite a bit over the past few years.

     

    Firepower added the Next-Generation Firewall (NGFW) capabilities that are now pretty much required in networks of all sizes.  The NGFW feature-sets add visibility into application networking, user traffic, content filtering, vulnerability monitoring, and much more, providing the security that’s needed.

     

    Cisco first added their NGFW solution to the Cisco ASA 5500-X products by adding a Firepower module (SFR) to the firewall appliance.  This SFR module is essentially a hard drive that runs as a Firepower sensor.  Policies are pushed to this module, and traffic is redirected from the ASA to the sensor for inspection, then sent back to the ASA for processing.

     

    In addition to offering the Cisco ASA as a firewall security solution, Cisco added a newer Firepower Threat Defense (FTD) appliance.  The Cisco FTD appliance consolidates some of the ASA functionality and the NGFW features into a single appliance.  This allows for easier management of the security solutions, since there is one single management interface, as opposed to having to manage the ASA configuration separately from the NGFW features, which are typically managed from Firepower Management Center (FMC).

     

    The Cisco FTD appliance carries most (but not all) of the features that an ASA supports.  One feature that was brought over from the ASA is remote access VPN connectivity, though some of the ASA’s remote access features did not make it over to FTD.  The most notable features missing from the Remote Access VPN solution on FTD as of v6.2 are:

     

    • local user authentication
    • 2-factor authentication
    • ISE Posturing
    • LDAP attribute mapping
    • AnyConnect modules

     

    *Reference the link below for a full list of limitations.

     

    https://www.cisco.com/c/en/us/support/docs/network-management/remote-access/212424-anyconnect-remote-access-vpn-configurati.html#anc13

     

     

    In this article, I will provide a sample of how to configure a remote access VPN solution on Cisco FTD.

     

    This article assumes that the FTD appliance is already registered, licensing has been acquired, and the appliance is being managed by FMC.

     

    To start the remote access VPN configuration, we first need to apply the AnyConnect licensing to the FTD appliance.  Navigate to System > Licenses > Smart Licenses.

     

     

    Select the “Edit Licenses” button on the upper right.

    Select the licensing that was purchased and move your FTD appliance into the right window to assign the license to the appliance.  In this case, “AnyConnect Apex” licensing was selected, and the appliance named “FTD” was moved to the right.  When complete, select “Apply” at the bottom right.

     

     

    Now that the licensing has been assigned, we can continue with the building blocks required for the RA VPN connectivity.  The next step is to create the various objects (software package, profile, IP pool, etc.).  These objects will all tie together during the RA VPN configuration wizard.

     

    The first object we will create is the software package object.  Navigate to Objects > Object Management > VPN > AnyConnect File

     

    Here, we will add the VPN client software packages for each operating system that will be used in the environment.

     

     

    Select “Add AnyConnect File” at the top-right.

     

     

    Enter a name, browse to the AnyConnect client package file, which can be downloaded using the link below (valid Cisco contract required), and select “AnyConnect Client Image” as the file type.  When complete, select the “Save” button.  Repeat this process for each client type that will be connecting (Windows, Mac, Linux).

     

    https://software.cisco.com/download/release.html?mdfid=286281283&flowid=72322&softwareid=282364313&release=4.5.04029&relind=AVAILABLE&rellifecycle=&reltype=latest

     

     

    Within this same location, we will add the AnyConnect profile.

     

    Select “Add AnyConnect File” at the top-right once again.

     

    Enter a name, browse to the profile, select “AnyConnect Client Profile” as the File Type, and select “Save” when complete.

     

    • This XML profile can be created using the AnyConnect Profile Editor tool on a Windows machine.  The Profile Editor tool can be downloaded using the same link that was provided above.
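
    For reference, a minimal client profile produced by the Profile Editor looks roughly like the sketch below (the display name and headend address are placeholders, not values from this lab):

    <?xml version="1.0" encoding="UTF-8"?>
    <AnyConnectProfile xmlns="http://schemas.xmlsoap.org/encoding/">
        <!-- List of VPN headends presented in the AnyConnect drop-down -->
        <ServerList>
            <HostEntry>
                <HostName>Example RA VPN</HostName>
                <HostAddress>vpn.example.com</HostAddress>
            </HostEntry>
        </ServerList>
    </AnyConnectProfile>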

     

    We will now move on to creating the IP pool object.  This IP pool will be used as the address pool assigned to remote access clients as they connect to the FTD appliance using AnyConnect.

     

    In FMC, open Objects > Object Management > Address Pools > IPv4 Pools

     

     

    Select “Add IPv4 Pools” at the top-right

     

     

    Provide a name, enter the pool range and subnet mask, then select “Save”

     

     

    We will now configure an object-group that references this VPN IP Pool

     

    Open Objects > Object Management > Network

     

    Select “Add Network > Add Group” at the top-right

    Provide an object name, then manually enter the IP subnet for the VPN pool that was previously created.  Select “Save” when complete.

     

     

    An optional configuration that can be added is a split-tunnel list.  Split tunneling allows VPN connectivity to a remote network across the secure tunnel while still allowing local LAN access.  There are a few security concerns with allowing split-tunneling, but it is an option.  To configure a split-tunnel list, we will create an Extended Access Control List.

     

    Navigate to Objects > Object Management > Access List > Extended

     

     

    Select “Add Extended Access List” at the top-right

     

     

    Provide a name for this new Access-List.

    Select “Add” at the top-right.

    Enter the inside IP space object as the source address.  Leaving all other options at their defaults, select “Add”, then “Save”.

      

    The next object that is needed is a certificate that will be referenced later.

     

    To create a self-signed certificate, select Objects > Object Management > PKI > Cert Enrollment

     

     

    Select “Add Cert Enrollment” at the top-right

     

     

    Provide a name for this new certificate and a type of PKCS12, then save.

     

    The next object to create would be for authentication. 

     

    Cisco ASAs offer an option to authenticate Remote Access VPNs directly against the ASA using local authentication, with users created directly on the ASA.  With v6.2, FTD only supports external authentication using either RADIUS or LDAP authentication servers.  In this lab, authentication will go against a single RADIUS server running Cisco ISE (Identity Services Engine).  Of course, in a production environment, having redundant servers would be the recommended approach.  In that instance, this step would be performed twice in order to configure both authentication servers.

     

    To create the authentication server, open Objects > Object Management > RADIUS Server Group

     

     

     

    Select “Add RADIUS Server Group” at the top-right

     

     

    Provide a name (typically enter the server name here).

     

    Select the “plus” sign to add a server

     

    Enter the IP address of the RADIUS authentication server, along with the key and then save.

     

    If adding a second RADIUS server, repeat the process to add the redundant server. 

     

     

    Once all RADIUS servers have been added, save changes for the group.

     

    The final object that will be created will be the VPN Group Policy.  This Group Policy will provide various connectivity attributes for the VPN client.

     

    Open Objects > Object Management > VPN > Group Policy

     

     

    Select “Add Group Policy” at the top-right

     

    Provide a name for this Group Policy

     

    Next, a DNS server is defined.  General > DNS/WINS > Primary DNS Server > Add

    Enter a name and the network address of the DNS server.

     

    Also, on the General tab under Split Tunneling, select “Tunnel networks specified below” for IPv4, select the radio button next to “Extended Access List”, then in the drop-down, select the split-tunnel list object previously created, named “SPLIT_TUNNEL”

     

     

    Finally, on the Group Policy, select the AnyConnect tab, select the AnyConnect Profile object previously created, then save.

     

    At this point, all objects are created and are now ready to run the VPN wizard.

     

    Navigate to Devices > VPN > Remote Access > Add

     

     

    Provide a name, then move the FTD appliance from the available devices into the selected device column.  Then click Next.


    Select AAA Only for the Authentication Method

    Select the ISE object previously created as the Authentication Server

    Select the VPN_POOL IP Pool

    Select the ANYCONNECT object for the Group Policy

    Then click Next

     

     

    Check the boxes next to each client image and verify the OS selected.  Then click Next.

    Select the outside interface as the Interface group/Security Zone

    Select the ANYCONNECT_CERT object for the Certificate Enrollment

    Click Next

     

     

     

     

     

    Review the summary of the changes being made and click Finish

     

     

    The next step is to add a publicly signed certificate that will be associated with the outside interface.

     

    Open Devices > Certificates

     

     

    At the top-right, select Add > PKCS12 File

     

     

     

    Select the FTD device

     

    For the Cert Enrollment, select the ANYCONNECT_CERT object

     

    For the PKCS12 File, select the pfx certificate and enter the passphrase.

     

    Click Add

     

    The final steps would now be to create a security policy rule as well as a NAT rule.

     

    Select Policies > Access Control > select the Access Control Policy that is deployed to the FTD appliance.

     

    Add a new rule

     

    Name the new policy

    Insert this policy “into Default”

    On the Zones tab, add the “outside” zone as the source and “inside” as the destination zones

     

    On the “Networks” tab, add the VPN object as the source network and rfc1918 as the destination network

     

    Click “Add” when complete.  Then Save at the top right.

     

     

    For the NAT exemption rule, open Devices > NAT

     

    Modify the existing NAT policy that’s applied to the FTD appliance and add a new rule

     

    In the Interface Objects tab, add the inside zone as the source and the outside zone as the destination.

     

    On the Translation tab add:

                - Original Source = internal networks (RFC1918)

                - Original Destination = Address / VPN_POOL

                - Translated Source = Address / internal networks (RFC1918)

                - Translated Destination = VPN_POOL

     

    Select the Advanced tab and choose the “Do not proxy ARP on Destination Interface” checkbox.  Then click “OK” then “Save” at the top right.
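
    For readers more familiar with the classic ASA CLI, this NAT-exemption rule is roughly equivalent to the following twice-NAT statement (a sketch only - “RFC1918” and “VPN_POOL” stand in for the object names used above):

    nat (inside,outside) source static RFC1918 RFC1918 destination static VPN_POOL VPN_POOL no-proxy-arp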

     

     

    At this point, the Remote Access VPN solution has been configured and is ready to be deployed to the FTD appliance.  At the top right of FMC, select “Deploy”.  Choose the FTD appliance that you are enabling remote access VPN on and Deploy the policy.  Deploying this policy takes time but can be monitored from the “Tasks” section next to the Deploy button in the menu bar.

     

     

    When the policy has been deployed successfully, remote access VPN can be tested.

     

    From a machine on the outside network, use a web browser to navigate to the outside IP or URL of the FTD appliance.  You should be prompted to enter user credentials.  Enter the username and password and select “Logon”

     

     

    Once successfully logged in, you may be prompted to install the AnyConnect client.  If the client is already installed, the VPN will connect automatically.  When connected, the AnyConnect client icon in the PC’s task bar will show the session as connected.

     

     

    To verify connectivity from within FTD, similar to an ASA, you can check status using the “show vpn-sessiondb detail anyconnect” command.

     

    To disconnect from the VPN, right-click on the AnyConnect client and select “Disconnect”

     

    As you can see, configuring a remote access VPN on FTD has its limitations and takes a bit of configuration to get working, but it is a rock-solid solution.

     

    Important caution: Any commands shown in this post are for demonstration purposes only and should always be modified accordingly and used carefully.  Do not run any of these procedures without thorough testing or if you do not fully understand the consequences.  Please contact a representative at H.A. Inc. if you need assistance with components of your infrastructure as it relates to this posting.

  • How to Become a NetApp ONTAP CLI Ninja

    April 3rd, 2018

    Over the past decade or so I’ve spent quite a few hours of my life staring at the NetApp ONTAP command line interface. 3840 hours, to be exact.  I did the math…8 hours per week, 48 weeks per year, over the last 10 years comes to 3840 hours (OK, not exact, but I’d argue a very conservative estimate).  I’ve worked on countless different versions of ONTAP containing multiple revisions of various commands.  As with anything you put 3840 hours into, I picked up a few shortcuts and tricks along the way.  My goal with this post is to share some of this knowledge to help you become more efficient in your role.  The tips below will only be relevant to those of you running Clustered Data ONTAP (aka CDOT, or just “ONTAP” now).  If you’re still running a NetApp system with a 7-mode (7m) version of ONTAP, it’s time to upgrade (*cough* we can help *cough*).

    Almost to the fun stuff -  but first a few disclaimers.  Depending on your skill level, some of these may seem basic, but they get more advanced as you go through, I promise.  Nomenclature:

    • CLI commands will be italicized (volume would represent typing the word volume at the CLI)
    • Specific key presses will be bold (enter would represent pressing the enter key)
    • Variables will be indicated with <> brackets (<volume-name> would be a placeholder for an actual volume name)
    • Some commands require the use of special characters (! and { } and * for example).  Type these exactly as you see them displayed here.
    • All commands are case sensitive, so pay close attention to upper/lower-case letters.

    Important caution: Any commands shown in the following post are for demonstration purposes only and should always be modified accordingly and used carefully.  Do not run any of the procedures below without thorough testing and if you do not fully understand the consequences.  Please contact a representative at H.A. Inc. if you need assistance with components of your infrastructure as it relates to this posting.

    Tip 1 – Use the tab key to autocomplete commands

    It’s somewhat difficult to show this via static screenshots and text, but the feature is mostly self-explanatory.  Not only will autocomplete fill commands in, it will (in most cases) fill in variables for you too.  For example, if you type the following:

    ne tab in tab sh tab -vs tab Cl tab enter

    you get the completed command and its associated output.
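
    Assuming the cluster has a vserver whose name begins with “Cl” (“Cluster1” here is a placeholder), the keystrokes expand to:

    network interface show -vserver Cluster1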

     

    Tip 2 – Consolidate output with rows 0

    You will find that some commands have a large output with some line wrapping that will require you to hit space or enter to continue.  This is inconvenient and makes a mess of the log if you’re trying to capture the output.  The default number of rows displayed can be set to infinite by entering the following command.  This will be reset back to default every time you login.

    rows 0

    Output of a command with rows set to 10:

    Output of the same command with rows set to 0:

    Tip 3 – Use the -fields operator to show precisely what you need

    As shown by the network interface show outputs above, you can get a lot of good basic info about the LIFs using the basic show command.  But what if I wanted to show just the IP addresses and also add the allowed data protocol for each LIF? Add -fields then a comma separated list of fields you’d like to show to the command as shown below to customize the output:

    network interface show -fields address,data-protocol

    Tip 4 – Make your output CSV friendly with set -showseparator ","

    The output of tip 3 above is nice but not very friendly for copying out for use elsewhere (a script, a report, etc.) due to inconsistent spacing and line wrapping.  Formatting the output with commas as separators is a huge help.  The default separator character can be set to a comma by entering the following command.  This will be reset back to default every time you log in.

    set -showseparator ","

    Tip 5 – Use the wildcard * operator

    Use the asterisk as a wild card character at the beginning, end, or anywhere in the middle of the item you’re searching for (results will differ depending on location).  Let’s say I have a bunch of volumes but I’d like to get a report of volumes with “nfs” in the name:

    Or, I only want to see the volumes that are located on all aggregates that have “SSD” in the name:

    Or, how about I want to search the event log for all entries with “Dump” in the event column that occurred on “3/21/2018”
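
    The commands behind these three searches would look roughly like the following (the wildcard patterns are illustrative - adjust them to your own naming):

    volume show -volume *nfs*

    volume show -aggregate *SSD*

    event log show -event *Dump*

    For the date restriction, add a -time query to the event log command; the exact time-range syntax can vary slightly between ONTAP releases.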

    Tip 6 – Use the NOT ! operator

    Use the exclamation point as a NOT operator for searches.  Let’s say I want to show all volumes that are not currently online:

     

    We can also combine ! with * to show all volumes that do not have “root” anywhere in the name of the volumes:
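
    In command form, those two searches are:

    volume show -state !online

    volume show -volume !*root*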

    Tip 7 – Use the greater/less than operators

    Use the greater than and less than operators during searches that involve sizes.  Let’s say I want to show all volumes with a size of more than 1GB:
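
    That search is simply:

    volume show -size >1GB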

    Tip 8 – Use curly brackets for extended queries

    Use curly brackets to perform a query in order to determine items to perform further action on.  For example, let’s say I want to modify only LIFs that meet the following specific criteria:

    • auto-revert set to false
    • not iscsi or fcp data-protocol
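
    The braces select only the LIFs matching both criteria, and the modification is then applied to all of them in one command.  A sketch (the multi-value NOT syntax can vary slightly by release):

    network interface modify {-auto-revert false -data-protocol !iscsi,!fcp} -auto-revert true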

    Tip 9 – Use set -confirmations off to auto-confirm all end-user prompts

    Certain commands require the user to confirm each occurrence.  When running a handful of commands in a row (as is often the case for repetitive/scriptable tasks), it is cumbersome to have to acknowledge after every single line.  Using set -confirmations off disables these confirmations.  This is potentially risky, as the confirmations are often provided as a safety mechanism to ensure the action you are about to take is really what you intend, so use this one carefully.

    Deleting a volume with set -confirmations on:

    Deleting a volume with set -confirmations off, the user is not prompted to confirm the action:
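
    The sequence looks roughly like this (the vserver and volume names are placeholders, and a volume must be taken offline before it can be deleted):

    set -confirmations off

    volume offline -vserver svm1 -volume vol_demo

    volume delete -vserver svm1 -volume vol_demo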

    Tip 10 – Naming naming naming

    As shown in tips 1-9, the powerful capabilities of the CLI really reinforce the importance of good, consistent naming conventions.  Develop good naming conventions, document them well, share them with your team, and stick to them.

  • The Network Engineer’s Favorite Dummy Address, 1.1.1.1, is Now Live on the Internet

    March 29th, 2018

    Most network engineers have, at some point in their career, used a “dummy” IP address of 1.1.1.1 for some reason or another. Maybe it was a placeholder in a configuration template, or maybe it was used for some sort of loopback or private connection that should never see the light of day such as a firewall failover link. Most of us have made this IP faux pas at least once.

     

    In fact, way back in 2010, when IANA allocated the 1.0.0.0/8 prefix to APNIC (the Asia-Pacific regional Internet number authority), work was done by APNIC’s research and development group to assess the feasibility of using various prefixes within the 1.0.0.0/8 block. Their study found that a substantial amount of random, garbage Internet traffic constantly flowed toward several of the prefixes within that range. Notably, IP addresses 1.1.1.1, 1.0.0.1, and 1.2.3.4 bore the brunt of this traffic barrage. The recommendation from APNIC’s R&D group at that time was not to allocate these prefixes to any Internet users, due to the massive volume of background junk traffic always being directed at them.

     

    Now, fast forward to 2018. April Fool’s Day 2018, to be exact, and everyone’s favorite dummy IP addresses 1.1.1.1 and 1.0.0.1 are now live on the Internet providing a new free, privacy-oriented public DNS service. This is no prank, though, so how did it happen? Both Cloudflare (the company behind the new DNS service) and APNIC have put up blog posts discussing this joint project. The gist is that Cloudflare wanted to provide a public DNS service that, unlike Google’s 8.8.8.8 service, was not tracking/logging user activity. Because Cloudflare is already a web-scale service that provides content delivery network (CDN) services, Distributed Denial-of-Service (DDoS) attack mitigation, and other web services, they were well-suited to both deal with, and study, the flood of background traffic going to 1.1.1.1 and 1.0.0.1. APNIC agreed to provide these addresses to Cloudflare (for an initial period of 5 years) to home their DNS service on, so that APNIC and Cloudflare could study and gain insight from both the aggregated DNS query data and the rest of the background traffic.

     

    Should you start using 1.1.1.1 for your DNS resolution? The answer is that you could, but just like Google’s DNS service or Level3’s old standby of 4.2.2.2 there’s really no insight or control to be gained from the IT administrator’s perspective when using these upstream resolvers. There’s also no SLA guaranteeing uptime or performance. If you’d like to use reliable SLA-protected DNS services to provide an additional layer of security or content control for your users, H.A. strongly recommends looking at Cisco’s Umbrella DNS security service. Cisco Umbrella offers IT administrators the ability to provide security and web content filtering policy as well as protection against DNS-leakage for on-premises and remote users at the site, AD group, or user level, while providing statistics and activity reporting for the IT admin. For more information about Cisco Umbrella, contact your H.A. account manager.
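
    If you simply want to try the new resolver, a one-off query with dig from any machine that has it installed looks like this (the queried name is just an example):

    dig @1.1.1.1 www.example.com

    The answer section should come back from the 1.1.1.1 resolver with the A record for the queried name.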

     

    While this new 1.1.1.1 DNS service is interesting and may provide valuable Internet traffic research, there are a couple of take-aways that network engineers should keep in mind:

     

    1. We really shouldn’t ever use addresses that could be allocated on the Internet, but were not issued to us, for any reason. For private links such as firewall failover clustering, a piece of the RFC1918 IP space is best, or addresses in the RFC3927 range (169.254.0.0/16, the link-local “auto-configuration” range) should also be acceptable.
       
    2. Now that 1.1.1.1 and 1.0.0.1 are live, we should avoid using them for documentation or examples. Actually, we never should have done this because the addresses were always valid as “real” Internet addresses, but there’s even more reason to avoid it now.
      If you need IP addresses to use in templates, or for examples or documentation, it’s best to use one of the three “TEST-NET” blocks defined in RFC5735. These include 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
       
    3. Be aware that some commercial products may have used 1.1.1.1 for some purpose, and that functionality could conflict with using the actual 1.1.1.1 DNS service now. An example of this is Cisco wireless LAN controllers which use 1.1.1.1 as the “virtual” interface address for serving guest portal splash pages. As a result of this, it’s not uncommon to see companies with DNS entries in their public DNS zones for 1.1.1.1 mapping to “guest.company.com” or something similar.

      Additionally, if using a Cisco Wireless LAN Controller for DHCP on a guest wireless network, the end client’s DHCP server will appear to be 1.1.1.1. In this case you would want to avoid assigning 1.1.1.1 for the guests’ DNS server since the WLC will intercept requests to 1.1.1.1 thinking it’s intended for it.
       
    4. We continue to scrape the bottom of the barrel for IPv4 address space and the activation of certain IP blocks that were always assumed to be de facto off-limits continues to prove this out. If you have not yet begun investigating IPv6 readiness in your network, it’s time to start that process.
