Blog

  • New Features and Changes in vSphere 6.7

    May 23rd, 2018

    VMware just released vSphere 6.7, an incremental upgrade to the 6.x line of code. Although this is a minor release, it has lots of changes and improvements under the hood.

    HTML5 Client

    Most people I talk to hate the VMware Flash web client, preferring the old C# thick client for its ease of use and quick response. The HTML5 client offers a cleaner, more intuitive interface with better performance than the Flash version, but it is not yet feature-complete.

    With the release of 6.7, a number of previously missing features have been added.  All storage workflows are now available, and Update Manager is now integrated.  The client is now 90-95% complete, with VMware promising full functionality by fall of 2018:

    https://blogs.vmware.com/vsphere/2018/05/fully-featured-html5-based-vsphere-client-coming-fall-2018.html

    4K Native Drive Support

    Drives with a 4K native sector size are supported in 6.7, with built-in 512-byte sector emulation for legacy guest OS support.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/support-for-4kn-hdds/

    Increased Limits

    Device limits continue to increase in 6.7:

    • Maximum virtual disks per VM: increased from 60 to 256
    • Maximum devices per ESXi host: increased from 512 to 1024
    • Maximum paths to devices per ESXi host: increased from 2048 to 4096

    VCSA VAMI Improvements

    The Virtual Appliance Management Interface, known as VAMI, is available at https://vcenter:5480.  There have been several major improvements. 

    • Services can now be managed directly from the VAMI.
    • Scheduled backups of the VCSA are now available through the Backup tab.

    Quickboot and Single-Reboot Upgrades

    Quickboot allows the hypervisor to restart without restarting the underlying hardware.  This is useful when applying hypervisor patches and is currently available only through VUM.  6.7 also introduces single-reboot upgrades, which eliminate the second reboot previously required for major version upgrades and greatly speed up the upgrade process.

    DRS Initial Placement Improvements

    vCenter 6.7 carries forward the improved DRS initial placement introduced in 6.5, and it now works with HA enabled. VMware reports initial placement is up to 3x faster.

    Support for RDMA (Remote Direct Memory Access)

    RDMA allows transfer of memory contents from one system to another, bypassing the kernel.  This delivers high I/O bandwidth with low latency.  The feature requires an HCA (Host Channel Adapter) on both the source and destination systems.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/rdma-support-in-vsphere/

    vSphere Persistent Memory

    Persistent memory, utilizing NVDIMM modules, is supported for hosts and VMs.  This technology enables incredible levels of performance, with speeds up to 100x faster than SSD storage.

    https://storagehub.vmware.com/t/vsphere-storage/vsphere-6-7-core-storage-1/pmem-persistant-memory-nvdimm-support-in-vsphere/

    Deprecated CPUs

    Since new vSphere 6.7 features depend on specific CPU capabilities, a number of relatively recent CPUs are no longer supported.  Check your hardware for compatibility before planning a 6.7 upgrade.

    Release notes:

    https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html

    Conclusion

    vSphere 6.7 is a solid update that addresses a lot of pain points in prior releases.  It will be a welcome improvement for businesses of all sizes.


  • Top Tips by CIOs for CIOs

    April 24th, 2018

    Supervising a technology team, no matter the size, is a huge responsibility. From recruiting engineers and technicians to transforming your data center, a Chief Information Officer must be able to handle any situation that comes across their desk while simultaneously advancing the goals and objectives of their organization.

    We asked some of the region’s top CIOs and CTOs for advice and tips for other technology leaders. Here are our findings:

    YOUR TEAM:

    • “Surround yourself with a core group of smart, technically savvy, team oriented, and motivated professionals.  The team will grow from there….similar people find each other.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Listen to everyone’s ideas, turn your ideas into someone else’s, ask probing questions…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Make sure that your team understands mission / objectives and that they have the training / aptitude to be successful.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    • “Manage up and across as much as or more than down; be curious about motives and what makes people tick.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Drive a culture of partnership.  The team is greater than any individual member.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Promote the team and their efforts, not the leaders.”
      • Marty Keane, Chief Technology Officer, Penn Capital

    CHALLENGES:

    • “Don’t major in the minors. Identify the big impact projects, issues, and opportunities and address them long term.  Filter out the noise.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Change is inevitable.  Change is uncomfortable. Drip change, don’t pour change, and allow the change to disperse evenly.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Turn ‘No’ into challenges - be flexible.”
      • John S. Supplee, Chief Operating Officer, Haverford Trust

    LEADERSHIP:

    • “Challenge your staff with additional responsibility and exposure to new opportunities…success and recognition follows.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Practice empathy – understand employee and co-workers experiences; this encourages people to speak up if they are struggling.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products
    • “Learn to stay intrigued, motivated and curious.  Find where you get your inspiration from.  Innovation is forged from such foundations.”
      • Marty Keane, Chief Technology Officer, Penn Capital
    • “Stay humble.  Celebrate your successes, but don’t be complacent.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Know that a leader will hold all to ethical, moral, and performance standards.”
      • Ronn Temple, Director – Enterprise Technology, Dorman Products

    PROJECT MANAGEMENT:

    • “Right size technical and business solutions based on the needs of the business.  Biggest, best, newest is not always the answer.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “People, planning and process are critical components for ongoing team success.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
    •  “Balance technical solutions with sound financial analysis regarding costs, savings, and impact to financial statements.  Speak the language of those paying the bills.”
      • John Ciccarone, Chief Technology Officer, Janney Montgomery Scott
    • “Make bold decisions, pull the plug on failed projects no matter what the costs are…”
      • John S. Supplee, Chief Operating Officer, Haverford Trust
    • “Align IT services to business needs.  Always be able to point IT actions back to a key business driver.”
      • Geoff Pieninck, Global IT Director, Colorcon
    • “Provide focus on strategic, tactical and operational needs as required to reach goals.”
      • Barry Steinberg, Chief Information Officer, Young Conaway Stargatt and Taylor
  • Why HCI is the Datacenter Architecture of the Future

    April 9th, 2018

    So why talk about data center architecture? As part of the IT community, it can be fun to talk and geek out about the latest technologies or trends. We can appreciate how things used to be done and are excited about where things are headed. But, at some point it must become practical to the business side of the house. Most people in your company and your customers don’t care what you are doing in your datacenter. They could not care less if you are 100% virtualized or still bare metal. They don’t know if you are running public, private, or hybrid cloud.  They might even laugh at how seriously we can use the term cloud.

    There is something to be gained by thinking about the business results of our datacenters and the IT infrastructure contained therein. We look at the results that Google, Facebook, and Amazon are getting, and we study why they made the choices they did. Each one uses an HCI approach in the datacenter.

    Whether a company is virtualized is not really a question we consider anymore; the question is what small percentage is not virtual yet. With very mature virtual operations, there is now a drive to containerize applications and services. We don’t debate whether virtualization is better than bare metal, since we see the flexibility, consolidation, and ease of management inherent in the ability to run multiple workloads on the same physical server. Treating a server as a file that can be manipulated independent of the underlying hardware is a key feature of virtualization.

    Hyperconverged infrastructure picks up where virtualization left off. It takes the software approach to more of the IT stack. Software is consuming everything. What used to be separate products are just part of the stack or cloud. HCI treats storage as a service to be delivered to the VM or Application layer. Networking is delivered in a similar fashion. Just like the public cloud, everything is abstracted behind a management console and administered as one unit.

    HCI delivers the best of the public cloud experience (initial right-sizing, incremental growth, self-service, and programmability/automation) along with the benefits of on-premises infrastructure (security, performance, and control). As an extra bonus, measured over a period of years it is less expensive to own commodity hardware resources in your datacenter than to rent them in a public cloud for deterministic workloads.

    Your IT operations are a major component of the engine that drives your business. The more these processes can be automated in software, the less friction there is, and the more productivity and profits will increase. The large cloud organizations automate as much as possible so they can iterate improvements faster. Some call this process DevOps: tying IT processes directly to business outcomes, which then feed back into the IT process. This creates a virtuous cycle that makes IT a business enabler instead of a cost center. And the more your datacenter is software defined, the more flexibility you have in the DevOps process.

    The best HCI vendors will allow you to meld architectures. Whether you call it hybrid cloud or cloud bursting, this approach helps balance cloud costs and avoids a whole new type of vendor lock-in. You should be able to mix mode 1 legacy apps with mode 2 cloud-native apps, so you can migrate as the business is able. It sounds like the data center of the future, except it is possible today.

    In closing, I ask again, why talk about data center architecture? The simple answer is: if you are reading this, it is probably part of your job. Every major infrastructure vendor has some type of HCI offering. It could be software, hardware, or appliance based. With the industry rapidly evolving, it is important to have a trusted technology advisor like HA, Inc. who can help you look at all the options out there and assist in your due diligence. We can help you make an informed decision, so your data center and IT practices are evolving with the industry into a bright future.

  • Configuring AnyConnect Remote Access VPN on Cisco FTD

    April 6th, 2018

    Cisco ASAs have been a part of Cisco’s security product lineup since 2005, replacing the older PIX firewalls.  In recent years, Cisco has focused a great deal on security, adding more and more solutions for different portions of the network.  One of the newer security solutions came with the acquisition of Sourcefire back in 2013.  Sourcefire, at the time of the acquisition, was one of the leading intrusion prevention solutions on the market.  Shortly after that acquisition, what was previously known as Sourcefire was renamed to Cisco FirePOWER, then to FirePower, and more recently, Firepower.  Yes, the name has changed quite a bit over the past few years.

     

    Firepower added the Next-Generation Firewall (NGFW) capabilities that are now pretty much required in networks of all sizes.  The NGFW feature set adds visibility into application networking, user traffic, content filtering, vulnerability monitoring, and much more, providing the security that’s needed.

     

    Cisco first added their NGFW solution to the Cisco ASA5500X products by adding a Firepower module (SFR) into the firewall appliance.  This SFR module is essentially a hard drive that runs as a Firepower sensor.  Policies are pushed to this module, and traffic is redirected from the ASA to the sensor for inspection, then sent back to the ASA for processing.

     

    In addition to offering the Cisco ASA as a firewall security solution, Cisco added a newer Firepower Threat Defense (FTD) appliance.  The Cisco FTD appliance consolidates some of the ASA functionality and the NGFW features into a single appliance.  This allows for easier management of the security solution, with one single management interface, as opposed to having to manage the ASA configuration separately from the NGFW features, which are typically managed from Firepower Management Center (FMC).

     

    The Cisco FTD appliance carries most (not all) of the features that an ASA would support.  One particular feature that was brought over from the ASA is remote access VPN connectivity.  However, some of the ASA’s remote access features did not make it over to FTD.  The most notable features missing from the Remote Access VPN solution on FTD as of v6.2 are:

     

    • local user authentication
    • 2-factor authentication
    • ISE Posturing
    • LDAP attribute mapping
    • AnyConnect modules

     

    *reference the link below for a full list of limitations

     

    https://www.cisco.com/c/en/us/support/docs/network-management/remote-access/212424-anyconnect-remote-access-vpn-configurati.html#anc13


    In this article, I will walk through a sample configuration of a remote access VPN solution on Cisco FTD.

     

    This article is going to assume that the FTD appliance is already registered, licensing is acquired, and that the appliance is being managed by FMC. 

     

    To start the remote access VPN configuration, we first need to apply the AnyConnect licensing to the FTD appliance.  Navigate to System > Licenses > Smart Licenses.


    Select the “Edit Licenses” button on the upper right.

    Select the licensing that was purchased and move your FTD appliance into the right window to assign the license to the appliance.  In this case, “AnyConnect Apex” licensing was selected, and the appliance named “FTD” was moved to the right.  When complete, select “Apply” at the bottom right.


    Now that the licensing has been assigned, we can continue with the building blocks required for the RA VPN connectivity.  The next step would be to create all of the various objects (software package, profile, IP Pool, etc).  These objects will all tie together during the RA VPN config wizard.

     

    The first object we will create is the software package object.  Navigate to Objects > Object Management > VPN > AnyConnect File

     

    Here, we will add the VPN client software packages for the different required Operating Systems that will be used in the environment.


    Select “Add AnyConnect File” at the top-right.


    Enter a name, browse to the AnyConnect client package file, which can be downloaded using the link below (valid Cisco contract required), and select “AnyConnect Client Image” as the file type.  When complete, select the “Save” button.  Repeat this process for each client type that will be connecting (Windows, Mac, Linux).

     

    https://software.cisco.com/download/release.html?mdfid=286281283&flowid=72322&softwareid=282364313&release=4.5.04029&relind=AVAILABLE&rellifecycle=&reltype=latest


    Within this same location, we will add the AnyConnect profile.

     

    Select “Add AnyConnect File” at the top-right once again.

     

    Enter a name, browse to the profile, select “AnyConnect Client Profile” as the File Type, and select “Save” when complete.

     

    • This XML profile can be created using the AnyConnect Profile Editor tool on a Windows machine.  The Profile Editor tool can be downloaded using the same link that was provided above.

     

    We will now move on to creating the IP Pool object.  This IP pool will be used to assign addresses to remote access clients as they connect to the FTD appliance using AnyConnect.

     

    In FMC, open Objects > Object Management > Address Pools > IPv4 Pools


    Select “Add IPv4 Pools” at the top-right


    Provide a name, enter the pool range and subnet mask, then select “Save”.


    We will now configure an object-group that references this VPN IP Pool

     

    Open Objects > Object Management > Network

     

    Select “Add Network > Add Group” at the top-right.

    Provide an object name then manually enter the IP subnet for the VPN Pool that was previously created.  Select “Save” when complete.
     

    An optional configuration that can be added is a split-tunnel list.  Split tunneling allows VPN connectivity to a remote network across the secure tunnel while also allowing local LAN access.  There are a few security concerns with allowing split tunneling, but it is an option.  To configure a split-tunnel list, we will create an Extended Access Control List.

     

    Navigate to Objects > Object Management > Access List > Extended


    Select “Add Extended Access List” at the top-right

     
    Provide a name for this new Access-List.

    Select “Add” at the top-right.

    Enter the inside IP space object as the source address.  Leaving all other options at their defaults, select “Add”, then “Save”.

      

    The next object that is needed is a certificate that will be referenced later.

     

    To create a self-signed certificate, select Objects > Object Management > PKI > Cert Enrollment


    Select “Add Cert Enrollment” at the top-right


    Provide a name for this new certificate and a type of PKCS12, then save.

     

    The next object to create would be for authentication. 

     

    Cisco ASAs offer an option to authenticate remote access VPNs directly against the ASA using local authentication, with users created directly on the ASA.  With v6.2, FTD only supports external authentication using either RADIUS or LDAP authentication servers.  In this lab, authentication will go against a single RADIUS server running Cisco ISE (Identity Services Engine).  Of course, in a production environment, having redundant servers would be the recommended approach.  In that instance, this step would be performed twice in order to configure both authentication servers.

     

    To create the authentication server, open Objects > Object Management > RADIUS Server Group


    Select “Add RADIUS Server Group” at the top-right


    Provide a name (typically enter the server name here).

     

    Select the “plus” sign to add a server

     

    Enter the IP address of the RADIUS authentication server, along with the shared key, and then save.

     

    If adding a second RADIUS server, repeat the process to add the redundant server. 


    Once all RADIUS servers have been added, save changes for the group.

     

    The final object that will be created will be the VPN Group Policy.  This Group Policy will provide various connectivity attributes for the VPN client.

     

    Open Objects > Object Management > VPN > Group Policy


    Select “Add Group Policy” at the top-right

     

    Provide a name for this Group Policy

     

    Next, a DNS server is defined.  General > DNS/WINS > Primary DNS Server > Add

    Enter a name and the network address of the DNS server.

     

    Also, on the General tab under Split Tunneling, select “Tunnel networks specified below” for IPv4, select the radio button next to “Extended Access List”, then in the drop-down, select the previously created split-tunnel list object named “SPLIT_TUNNEL”.


    Finally, on the Group Policy, select the AnyConnect tab, select the AnyConnect Profile object previously created, then save.

     

    At this point, all objects are created and are now ready to run the VPN wizard.

     

    Navigate to Devices > VPN > Remote Access > Add


    Provide a name, then move the FTD appliance from the available devices into the selected device column.  Then click Next.


    Select AAA Only for the Authentication Method

    Select the ISE object previously created as the Authentication Server

    Select the VPN_POOL IP Pool

    Select the ANYCONNECT object for the Group Policy

    Then click Next


    Check the boxes next to each client image and verify the OS selected.  Then click Next.

    Select the outside interface as the Interface group/Security Zone

    Select the ANYCONNECT_CERT object for the Certificate Enrollment

    Click Next


    Review the summary of the changes being made and click Finish


    The next step is to add a publicly signed certificate that will be associated with the outside interface.

     

    Open Devices > Certificates


    At the top-right, select Add > PKCS12 File


    Select the FTD device

     

    For the Cert Enrollment, select the ANYCONNECT_CERT object

     

    For the PKCS12 File, select the pfx certificate and enter the passphrase.

     

    Click Add

     

    The final steps would now be to create a security policy rule as well as a NAT rule.

     

    Select Policies > Access Control > select the Access Control Policy that is deployed to the FTD appliance.

     

    Add a new rule

     

    Name the new policy

    For “Insert”, choose “into Default”

    On the Zones tab, add the “outside” zone as the source and “inside” as the destination zones

     

    On the “Networks” tab, add the VPN object as the source network and rfc1918 as the destination network

     

    Click “Add” when complete.  Then Save at the top right.


    For the NAT exemption rule, open Devices > NAT

     

    Modify the existing NAT policy that’s applied to the FTD appliance and add a new rule

     

    In the Interface Objects tab, add the inside zone as the source and the outside zone as the destination.

     

    On the Translation tab add:

                - Original Source = internal networks (RFC1918)

                - Original Destination = Address / VPN_POOL

                - Translated Source = Address / internal networks (RFC1918)

                - Translated Destination = VPN_POOL

     

    Select the Advanced tab and choose the “Do not proxy ARP on Destination Interface” checkbox.  Then click “OK” then “Save” at the top right.


    At this point, the Remote Access VPN solution has been configured and is ready to be deployed to the FTD appliance.  At the top right of FMC, select “Deploy”.  Choose the FTD appliance that you are enabling remote access VPN on and Deploy the policy.  Deploying this policy takes time but can be monitored from the “Tasks” section next to the Deploy button in the menu bar.


    When the policy has been deployed successfully, remote access VPN can be tested.

     

    From a machine on the outside network, use a web browser to navigate to the outside IP or URL of the FTD appliance.  You should be prompted to enter user credentials.  Enter the username and password and select “Logon”.


    Once successfully logged in, you may be prompted to install the AnyConnect client.  If the client is already installed, the VPN will automatically connect.  When connected, the AnyConnect client icon will appear in the PC’s task bar.


    To verify connectivity from within FTD, similar to an ASA, you can check status using the “show vpn-sessiondb detail anyconnect” command.

     

    To disconnect from the VPN, right-click on the AnyConnect client and select “Disconnect”

     

    As you can see, configuring a remote access VPN on FTD does have its limitations and does take a bit of configuration to get working, but it is a rock-solid solution.

     

    Important caution: Any commands shown in this post are for demonstration purposes only and should always be modified accordingly and used carefully.  Do not run any of these procedures without thorough testing or if you do not fully understand the consequences.  Please contact a representative at H.A. Inc. if you need assistance with components of your infrastructure as it relates to this posting.

  • How to Become a NetApp ONTAP CLI Ninja

    April 3rd, 2018

    Over the past decade or so I’ve spent quite a few hours of my life staring at the NetApp ONTAP command line interface. 3840 hours, to be exact.  I did the math…8 hours per week, 48 weeks per year, over the last 10 years comes to 3840 hours (OK, not exact, but I’d argue a very conservative estimate).  I’ve worked on countless different versions of ONTAP containing multiple revisions of various commands.  As with anything you put 3840 hours into, I picked up a few shortcuts and tricks along the way.  My goal with this post is to share some of this knowledge to help you become more efficient in your role.  The tips below will only be relevant to those of you running Clustered Data ONTAP (aka CDOT, or just “ONTAP” now).  If you’re still running a NetApp system with a 7-mode (7m) version of ONTAP, it’s time to upgrade (*cough* we can help *cough*).

    Almost to the fun stuff, but first a few disclaimers.  Depending on your skill level, some of these may seem basic, but they get more advanced as you go through, I promise.  Nomenclature:

    • CLI commands will be italicized (volume would represent typing the word volume at the CLI)
    • Specific key presses will be bold (enter would represent pressing the enter key)
    • Variables will be indicated with <> brackets (<volume-name> would be a placeholder for an actual volume name)
    • Some commands require the use of special characters (! and { } and * for example).  Type these exactly as you see them displayed here.
    • All commands are case sensitive, so pay close attention to upper- and lower-case letters.

    Important caution: Any commands shown in the following post are for demonstration purposes only and should always be modified accordingly and used carefully.  Do not run any of the procedures below without thorough testing and if you do not fully understand the consequences.  Please contact a representative at H.A. Inc. if you need assistance with components of your infrastructure as it relates to this posting.

    Tip 1 – Use the tab key to autocomplete commands

    It’s difficult for me to show this via static screenshots and text, but I think this feature is somewhat self-explanatory.  Not only will autocomplete fill commands in, it will (in most cases) fill in variables for you too.  For example, if you type the following:

    ne tab in tab sh tab -vs tab Cl tab enter

    you get the fully expanded command and its associated output.
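
    To make that concrete, here is a sketch of the expansion, assuming a hypothetical SVM named “Cluster1” is the only name on the system beginning with “Cl”:

    network interface show -vserver Cluster1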


    Tip 2 – Consolidate output with rows 0

    You will find that some commands have a large output with line wrapping that will require you to hit space or enter to continue.  This is inconvenient and makes a mess of the log if you’re trying to capture the output.  The default number of rows displayed can be set to unlimited by entering the following command.  This will be reset back to the default every time you log in.

    rows 0

    With rows set to 10, the output of a long command pauses after every screenful; with rows set to 0, the same command prints straight through without pausing.

    Tip 3 – Use the -fields operator to show precisely what you need

    As shown by the network interface show outputs above, you can get a lot of good basic info about the LIFs using the basic show command.  But what if I wanted to show just the IP addresses and also add the allowed data protocol for each LIF? Add -fields then a comma separated list of fields you’d like to show to the command as shown below to customize the output:

    network interface show -fields address,data-protocol

    Tip 4 – Make your output CSV friendly with set -showseparator ","

    The output of tip 3 above is nice but not very friendly for copying out for use elsewhere (a script, a report, etc.) due to inconsistent spacing and line wrapping.  Formatting the output with commas as separators is a huge help with this.  The default separator character can be set to a comma by entering the following command.  This will be reset back to the default every time you log in.

    set -showseparator ","

    Tip 5 – Use the wildcard * operator

    Use the asterisk as a wild card character at the beginning, end, or anywhere in the middle of the item you’re searching for (results will differ depending on location).  Let’s say I have a bunch of volumes but I’d like to get a report of volumes with “nfs” in the name:
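
    A sketch of that report (the volume names here are whatever your environment contains):

    volume show -volume *nfs*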

    Or, I only want to see the volumes that are located on aggregates that have “SSD” in the name:
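
    One way to express this, assuming the aggregate names contain “SSD”:

    volume show -aggregate *SSD*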

    Or, how about I want to search the event log for all entries with “Dump” in the event column that occurred on “3/21/2018”:
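
    A sketch of that search (treat the double-dot time range as illustrative; exact time formats can vary by ONTAP release):

    event log show -event *Dump* -time "3/21/2018 00:00:00".."3/21/2018 23:59:59"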

    Tip 6 – Use the NOT ! operator

    Use the exclamation point as a NOT operator for searches.  Let’s say I want to show all volumes that are not currently online:
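
    A minimal sketch, applying the NOT operator to the state field:

    volume show -state !online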


    We can also combine ! with * to show all volumes that do not have “root” anywhere in the name of the volumes:
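
    For example:

    volume show -volume !*root*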

    Tip 7 – Use the greater/less than operators

    Use the greater than and less than operators during searches that involve sizes.  Let’s say I want to show all volumes with a size of more than 1GB:
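
    For example (size comparisons accept the usual unit suffixes):

    volume show -size >1GB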

    Tip 8 – Use curly brackets for extended queries

    Use curly brackets to perform a query in order to determine items to perform further action on.  For example, let’s say I want to modify only LIFs that meet the following specific criteria:

    • auto-revert set to false
    • not iscsi or fcp data-protocol
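
    A sketch of one possible form, expressing “not iscsi or fcp” as “nfs or cifs” and enabling auto-revert on every LIF the query matches (treat the exact query syntax as illustrative):

    network interface modify {-auto-revert false -data-protocol nfs|cifs} -auto-revert true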

    Tip 9 – Use set -confirmations off to auto-confirm all end-user prompts

    Certain commands require the user to confirm for each occurrence.  When running a handful of commands in a row (as is often the case for repetitive/scriptable tasks), it is cumbersome to have to acknowledge after every single line.  Using set -confirmations off disables these confirmations.  This is potentially risky as the confirmations are often provided as a safety mechanism to ensure the action you are about to take is really what you intend so use this one carefully.

    Deleting a volume with set -confirmations on:

    Deleting a volume with set -confirmations off, the user is not prompted to confirm the action:
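
    A sketch of the unprompted sequence (the vserver and volume names are hypothetical; a volume must be offline before it can be deleted):

    set -confirmations off
    volume offline -vserver vs1 -volume vol_old
    volume delete -vserver vs1 -volume vol_old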

    Tip 10 – Naming naming naming

    As shown in the preceding tips, the powerful capabilities of the CLI really reinforce the importance of good, consistent naming conventions.  Develop good naming conventions, document them well, share them with your team, and stick to them.

  • The Network Engineer’s Favorite Dummy Address, 1.1.1.1, is Now Live on the Internet

    March 29th, 2018

    Most network engineers have, at some point in their career, used a “dummy” IP address of 1.1.1.1 for some reason or another. Maybe it was a placeholder in a configuration template, or maybe it was used for some sort of loopback or private connection that should never see the light of day such as a firewall failover link. Most of us have made this IP faux pas at least once.

     

    In fact, way back in 2010, when IANA allocated the 1.0.0.0/8 prefix to APNIC (the Asia-Pacific regional Internet number authority), work was done by APNIC’s research and development group to assess the feasibility of using various prefixes within the 1.0.0.0/8 block. Their study found that there was a substantial amount of random, garbage Internet traffic constantly flowing toward several of the prefixes within that range. Notably, IP addresses 1.1.1.1, 1.0.0.1, and 1.2.3.4 bore the brunt of this traffic barrage. The recommendation from APNIC’s R&D group at that time was not to allocate these prefixes to any Internet users due to the massive volume of background junk traffic always being directed at them.

     

    Now, fast forward to 2018. April Fool’s Day 2018, to be exact, and everyone’s favorite dummy IP addresses 1.1.1.1 and 1.0.0.1 are now live on the Internet providing a new free, privacy-oriented public DNS service. This is no prank, though, so how did it happen? Both Cloudflare (the company behind the new DNS service) and APNIC have put up blog posts discussing this joint project. The gist is that Cloudflare wanted to provide a public DNS service that, unlike Google’s 8.8.8.8 service, was not tracking and logging user activity. Because Cloudflare is already a web-scale service that provides content delivery network (CDN) services, Distributed Denial-of-Service (DDoS) attack mitigation, and other web services, they were well-suited to both deal with, and study, the flood of background traffic going to 1.1.1.1 and 1.0.0.1. APNIC agreed to provide these addresses to Cloudflare (for an initial period of 5 years) to host their DNS service, so that APNIC and Cloudflare could study and gain insight from both the aggregated DNS query data and the rest of the background traffic.

     

    Should you start using 1.1.1.1 for your DNS resolution? The answer is that you could, but just like Google’s DNS service or Level3’s old standby of 4.2.2.2, there’s really no insight or control to be gained from the IT administrator’s perspective when using these upstream resolvers. There’s also no SLA guaranteeing uptime or performance. If you’d like to use reliable, SLA-protected DNS services to provide an additional layer of security or content control for your users, H.A. strongly recommends looking at Cisco’s Umbrella DNS security service. Cisco Umbrella offers IT administrators the ability to apply security and web content filtering policy, as well as protection against DNS leakage, for on-premises and remote users at the site, AD group, or user level, while providing statistics and activity reporting for the IT admin. For more information about Cisco Umbrella, contact your H.A. account manager.

     

    While this new 1.1.1.1 DNS service is interesting and may provide valuable Internet traffic research, there are a couple of take-aways that network engineers should keep in mind:

     

    1. We really shouldn’t ever use addresses that could be allocated on the Internet, but were not issued to us to use, for any reason. For private links such as firewall failover clustering, a piece of the RFC1918 IP space is best, or addresses in the RFC3927 range (the 169.254.0.0/16 link-local “auto-configuration” range) are also acceptable.
       
    2. Now that 1.1.1.1 and 1.0.0.1 are live, we should avoid using them for documentation or examples. Actually, we never should have done this because the addresses were always valid as “real” Internet addresses, but there’s even more reason to avoid it now.
      If you need IP addresses to use in templates, or for examples or documentation, it’s best to use one of the three “TEST-NET” blocks defined in RFC5735. These include 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 (see the brief template sketch after this list).
       
    3. Be aware that some commercial products may have used 1.1.1.1 for some purpose, and that functionality could conflict with using the actual 1.1.1.1 DNS service now. An example of this is Cisco wireless LAN controllers which use 1.1.1.1 as the “virtual” interface address for serving guest portal splash pages. As a result of this, it’s not uncommon to see companies with DNS entries in their public DNS zones for 1.1.1.1 mapping to “guest.company.com” or something similar.

      Additionally, if using a Cisco Wireless LAN Controller for DHCP on a guest wireless network, the end client’s DHCP server will appear to be 1.1.1.1. In this case you would want to avoid assigning 1.1.1.1 for the guests’ DNS server since the WLC will intercept requests to 1.1.1.1 thinking it’s intended for it.
       
    4. We continue to scrape the bottom of the barrel for IPv4 address space and the activation of certain IP blocks that were always assumed to be de facto off-limits continues to prove this out. If you have not yet begun investigating IPv6 readiness in your network, it’s time to start that process.
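
    As a quick illustration of the TEST-NET guidance in item 2 above, a hypothetical Cisco-style configuration template might use placeholder addressing like the following (the interface, description, and addresses are purely illustrative):

    interface GigabitEthernet0/0
     description EXAMPLE-WAN-UPLINK
     ip address 192.0.2.1 255.255.255.0
    !
    ip name-server 198.51.100.53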
  • High Availability, Inc. Named One of 2018 Tech Elite Solution Providers by CRN®

    March 27th, 2018

    High Availability, Inc. announced yesterday that CRN®, a brand of The Channel Company, has named High Availability, Inc. to its 2018 Tech Elite 250 list. This annual list honors an exclusive group of North American IT solution providers that have earned the highest number of advanced technical certifications from leading technology suppliers, scaled to their company size.

    To compile the annual list, The Channel Company’s research group and CRN editors work together to identify the most customer-beneficial technical certifications in the North American IT channel. Companies who have obtained these elite designations—which enable solution providers to deliver premium products, services and customer support—are then selected from a pool of online applicants.

     “Being named to CRN’s Tech Elite 250 list is no small feat,” said Bob Skelley, CEO of The Channel Company. “These companies have distinguished themselves with multiple, top-level IT certifications, specializations and partner program designations from the industry’s most prestigious technology providers. Their pursuit of deep expertise and broader skill sets in a wide range of technologies and IT practices demonstrates an impressive commitment to elevating their businesses—and to providing the best possible customer experience.”  

    “We are thrilled to be included in the 2018 CRN Tech Elite 250,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. “Over the last 10 years we have invested more in engineering than all other departments combined.  We see immense value when it comes to on-going education, trainings and certifications for existing and emerging technology partners.  We are continuously investing in additional engineering talent to support our expansion into new practice areas like Security, Cloud and Managed Services.   We understand the value our engineers bring to our end user community and are committed to doing all we can to deliver top-notch support and service.”

    Coverage of the Tech Elite 250 will be featured in the April issue of CRN, and online at www.crn.com/techelite250.

  • High Availability, Inc. Named Top Workplace in Philadelphia

    March 22nd, 2018

    The Philadelphia Media Network and The Chamber of Commerce for Greater Philadelphia have officially recognized High Availability, Inc. as a Philadelphia Top Workplace for 2018. High Availability, Inc. was among 125 companies honored at an awards ceremony that took place last week. High Availability, Inc. was ranked #22 in the “Small Companies” category, which recognizes organizations with fewer than 150 employees.

    To be eligible for this prestigious designation, all nominated organizations must participate in a company-wide survey facilitated by Energage, a leading research firm focused on company culture and workplace improvement.  With over 47,000 organizations surveyed since 2006, Energage is the most trusted solution for empowering organizations to work better together.

    The Energage survey consists of four main factors: alignment, effectiveness, connection, and management. Alignment focuses on the direction and ethics of the organization, effectiveness assesses overall corporate communication and project execution, connection evaluates employee appreciation and job potential, and, lastly, the manager section reviews the employee’s manager and corporate leadership.

    “The entire team at High Availability, Inc. is honored to be named as one of the top workplaces in Philadelphia,” said Steve Eisenhart, Chief Executive Officer of High Availability, Inc. “Our #1 priority at H.A., Inc. is building and maintaining the right culture.  It is the foundation that allows us to increase productivity, attract talent, and most importantly create a positive and successful work environment.  I am convinced that it is our culture that has fueled our rapid growth over the last decade.  Making this list is a tremendous accomplishment and something that means the world to all of us at H.A., Inc!”

    Click Here for more information about working for High Availability, Inc. and to view open positions.
