Blog

  • Tips for Meraki WLAN Deployment Part 3: Working with Floor Plans

    February 15th, 2018

    This is the 3rd part in our series on Meraki WLAN deployment tips. The first article covered some helpful hints for assigning IP addressing to your Meraki access points. The second post had tips for AP naming, tagging, and the importance of installation photos.

     

    In this final part in the series, we will discuss adding a floor plan to your Meraki WLAN deployment. Why do we need to do this? What value does it provide? What cool things can we do once we have a floor plan in our deployment? Read on to find out!

     

    To start with, let’s cover a basic fact: Adding floor plans to your Meraki WLAN deployment is completely optional. And there are at least a few cases where they are unnecessary. For example, if your deployment is covering a large outdoor space, the default Google Maps satellite view will work fine. Or if you have a small remote office with a single access point, there’s not a ton of value in having a floor plan imported. However, if your deployment involves multiple access points in the interior of a building, then adding a floor plan gives you some nice benefits.

     

    First, a populated floor plan lets you easily see AP locations and some quick stats of those APs. Take, for example, this floor plan display from the H.A. office in Audubon, PA:

     

     

    This screen is found at the Wireless > Map & floor plan menu item. As you can see, we can easily identify where our APs are physically located. Each AP also has a number overlaid on it. This is the current number of wireless clients associated with that access point. This can be useful for gauging client density, although there is a better tool for that which we will explore later.
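    The per-AP client counts overlaid on the floor plan boil down to a simple group-by. As a rough illustration (the client records below are made up, not pulled from Dashboard), counting associated wireless clients per AP looks like this:

```python
# Toy sketch: count associated wireless clients per access point,
# mirroring the numbers the floor plan overlays on each AP icon.
# The client data here is invented for illustration.
from collections import Counter

def clients_per_ap(clients):
    """clients: list of dicts with an 'ap' key naming the associated AP."""
    return Counter(c["ap"] for c in clients)

clients = [
    {"mac": "aa:bb:cc:00:00:01", "ap": "PA-AP03"},
    {"mac": "aa:bb:cc:00:00:02", "ap": "PA-AP03"},
    {"mac": "aa:bb:cc:00:00:03", "ap": "PA-AP07"},
]
print(clients_per_ap(clients))   # PA-AP03 has 2 clients, PA-AP07 has 1
```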

     

    Mousing over one of the APs brings up a link to that AP’s Dashboard page, like this:

     

    Clicking the PA-AP07 link will take me to the details page for that AP.

     

    Likewise, if I want to see where a specific AP is physically deployed, I can go to that AP’s details page in Dashboard, and then go to the Location tab. The AP will be displayed on the supplied floor plan. In this example, we can easily find that PA-AP03 is the one in our reception area. If notes or tags were not added when APs were deployed, this can be very helpful!

     

     

    OK, so we can locate an AP on the map. That’s handy. What else? Well, what if we want to locate a client device within our environment? That’s easy too, once floor plans are deployed. Simply navigate to a client’s details page by going to Network-wide > Clients, and then clicking on a wireless client in the list. The client detail page will be displayed, and the user’s most likely location will be indicated on the floor plan:

     

     

    In this case, the user’s iPhone is believed to be at the location of the blue dot. The light blue circle is the circle of uncertainty: this is the total area the client could be in, but the system thinks the location of the blue dot is most likely. If the system can get a tighter triangulation on the user, the circle of uncertainty will shrink to indicate higher confidence. In this example, I know the indicated location is exactly where this user’s cubicle is, so the system has located this user’s phone with a high degree of accuracy. This can be convenient if you need to hunt down a device/user/guest that isn’t behaving well, or if you’re asked to help locate a missing device.
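    To give a feel for how this kind of estimate works (this is NOT Meraki's actual location algorithm, just a classic rough technique), one simple approach is an RSSI-weighted centroid: weight each AP's known position by how strongly it hears the client, then average.

```python
# Rough illustration of signal-strength-based location estimation
# (not Meraki's algorithm): the stronger an AP hears the client,
# the more it pulls the estimate toward that AP's position.

def estimate_location(observations):
    """observations: list of (ap_x, ap_y, rssi_dbm) tuples.

    Converts each RSSI (in dBm, e.g. -45) to a linear power weight,
    then returns the weighted average (x, y)."""
    total = x_acc = y_acc = 0.0
    for ap_x, ap_y, rssi in observations:
        weight = 10 ** (rssi / 10.0)   # dBm -> linear power (mW)
        x_acc += ap_x * weight
        y_acc += ap_y * weight
        total += weight
    return (x_acc / total, y_acc / total)

# Client heard loudest (-40 dBm) by the AP at (0, 0), faintly by the rest:
print(estimate_location([(0, 0, -40), (30, 0, -70), (0, 30, -75)]))
```

    The estimate lands very close to the AP with the strongest reading, and additional observations tighten it, much like the shrinking circle of uncertainty described above.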

     

    Locating a device in this way isn’t always perfect, but it gives you a good place to start if you’re trying to locate someone/something.

     

    Another useful thing you can do with the floor plan is see your wireless channel settings visually. If you navigate to Wireless > Radio settings, you will see a list of all access points and their assigned channels for the current network. If you hit the “Map” toggle, you will see the APs on the floor plan with their assigned channels as labels, like this:

     

     

     

    This can be helpful for manual channel planning to ensure you’ve got the maximum possible separation between APs on the same channel or adjacent channels, or even to review automatic channel assignments to ensure the system is not making a bad choice in assigning APs to certain channels.
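    If you export AP coordinates and channel assignments, the same review can be scripted. A minimal sketch (AP names, positions, and the 20 m separation threshold are all hypothetical) that flags co-channel APs sitting too close together:

```python
# Hedged sketch of a manual co-channel sanity check: given AP floor-plan
# coordinates (meters) and assigned channels, flag same-channel pairs
# closer than a chosen separation threshold. All data here is invented.
import math

def cochannel_conflicts(aps, min_separation_m=20.0):
    """aps: dict of name -> (x, y, channel). Returns conflicting pairs."""
    conflicts = []
    names = sorted(aps)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay, ach = aps[a]
            bx, by, bch = aps[b]
            if ach == bch and math.hypot(ax - bx, ay - by) < min_separation_m:
                conflicts.append((a, b, ach))
    return conflicts

aps = {
    "PA-AP01": (0, 0, 1),
    "PA-AP02": (10, 0, 6),
    "PA-AP03": (15, 5, 1),    # channel 1 again, only ~15.8 m from AP01
    "PA-AP04": (60, 0, 1),    # channel 1, but far enough away
}
print(cochannel_conflicts(aps))   # [('PA-AP01', 'PA-AP03', 1)]
```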

     

    Now, one of my favorite uses of the floor plan: the location heatmap. This feature, which is found at Wireless > Location heatmap in the Dashboard, shows you a color-coded heatmap of where wireless clients are physically located in your environment.

     

     

    What can we use this for? It’s a nice real-time display of how many devices are in certain areas of your environment. It can be used to see where high-density regions might be, or whether an unusual congregation of devices (and thus people) has popped up. And notice the “Play” button and timeline slider in the upper-left corner of the screen? That lets you play back an hour-by-hour view of the heatmap to see how client density has changed over time. For an office setting, that’s nifty. In a retail or industrial setting, this can be highly valuable business intelligence, letting you see where customers tend to spend time in a retail store, or revealing patterns in the movement of fork trucks in a warehouse during shifts.
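    Conceptually, a heatmap is just estimated client positions binned into grid cells and counted. A toy version (the positions and 5-meter cell size are made-up examples) of that binning step:

```python
# Toy model of heatmap binning: drop each estimated client (x, y)
# position into a grid cell and count clients per cell. The Dashboard
# renders these counts as colors; here we just tally them.
from collections import Counter

def heatmap_bins(client_positions, cell_size=5.0):
    """client_positions: list of (x, y) in meters.
    Returns Counter of (col, row) grid cell -> client count."""
    counts = Counter()
    for x, y in client_positions:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

# Three clients clustered in one corner, one off by itself:
clients = [(2.0, 3.0), (3.5, 4.0), (1.0, 1.0), (22.0, 18.0)]
print(heatmap_bins(clients))   # cell (0, 0) holds 3 clients, (4, 3) holds 1
```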

     

    As you can see, taking advantage of the Dashboard features that can utilize floor plans can give you some nice perks in your Meraki deployment.

     

    Now that you have some idea of why you want floor plans configured in your Meraki WLAN networks, how do you set them up? Fortunately Meraki has already documented that procedure better than I could (with animated GIFs and everything!), so head on over to their site for the how-to.

     

    If you’re interested in a Meraki-based WLAN deployment, or need help tuning an existing deployment to get the most out of it, reach out to your High Availability Account Manager today. We’d be happy to help you!

     

  • Big Data – A Need for Speed – Hello Pure Storage FlashBlade!

    February 9th, 2018

    Fast does not always mean high IOPS. Sometimes hundreds of thousands of IOPS just are not enough. Big data applications such as machine learning, real-time data processing, and backups of big data workloads need high throughput and massive scale-out. Enter Pure Storage’s FlashBlade!

    Scale from 7x 8TB blades all the way up to 75x 52TB blades!

    Have you ever asked yourself any of these questions?

    • How do you back up databases that are hundreds of TB in a timely manner?
    • How do you ingest and index Splunk, Spark, or ELK logs from thousands of sources?
    • How do you ingest and process video from hundreds of video cameras?
    • How do you train machine learning systems like TensorFlow or Caffe with petabytes of source data?
    • How do you easily provide a scalable NFS, SMB, or S3-compatible storage solution to your researchers?

    If so, Pure Storage FlashBlade just may be for you!

     


    Pure Storage FlashBlade is a winning and clever design that achieves what others only hope to: multi-protocol, high-throughput, scale-out storage.

    Combining Intel system-on-a-chip processors with FPGAs and flash chips to store data, Pure Storage has stepped away from the old-fashioned controller-based architecture to provide a solution that can scale from 7 to 75 storage blades.

    Flashblade Chassis

    Flashblade Blade

    • Each 4U chassis provides:
      • 8x 40Gb or 32x 10Gb network ports
      • Space for up to 15 blades (7-blade minimum)
      • ~17GB/sec throughput on a fully populated chassis
      • Under 1500 watts

     

    With the latest FlashBlade PurityOS, up to 75 blades can be tied together in one fabric.  That is over 75GB/sec of read & 25GB/sec of write throughput, at just 6500 watts! 



     

    Rise of the Machines [Learning]…

    You may not have heard of machine learning, but you use it every day.

    • Yelp’s restaurant images are all sorted and categorized automatically
    • Gmail’s “Smart Replies,” which make up 12% of all replies, are created by machine learning from users’ past responses.
    • “Machine Vision” in self-driving cars
    • Email malware and virus detection
    • Automatic Translation on websites
    • Targeted advertising and geolocation

    Machine Learning is all about turning Unstructured Data into Actionable Information

    Machine learning is exactly that. You “train” the system with data so it learns to recognize patterns. On many systems, these are massive data lakes or feeds: hundreds of thousands of images or documents, real-time video from many sources, millions of emails. This demands high throughput. With its ability to store and feed your data over NFS and S3, and its ease of use, Pure Storage’s FlashBlade is an excellent fit for your machine learning needs.

    If you are not integrating one of the machine learning tools such as TensorFlow, Caffe, aiWare, and the many others available into your architecture and design, you may be missing out on utilizing all the data you have to the fullest. Your competitors may just be getting the edge on this.

     



     

    Let’s talk databases and backups… 

    I have had the {mis}pleasure of dealing with Oracle, in one way or another, for over 20 years. It was a great joy when I came across Oracle ACE Ron Ekins’s blog about his performance testing using Oracle, Direct NFS, and FlashBlade. His testing showed 4GB/sec of single-node performance, and 8.8GB/sec of two-node RAC performance, over Direct NFS with Pure Storage FlashBlade. With this much available throughput, not only are processing and table scans amazingly fast, but backups are as well. How fast are your Oracle data loads and backups?
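    To put that throughput in perspective, here is some back-of-the-envelope backup-window math using the 8.8GB/sec figure from Ekins's testing. The dataset sizes are hypothetical examples, not customer data:

```python
# Back-of-the-envelope backup-window arithmetic. The 8.8 GB/sec figure
# comes from Ron Ekins's two-node RAC testing cited above; the dataset
# sizes and the 1 GB/sec comparison array are invented examples.

def backup_hours(dataset_tb, throughput_gb_per_sec):
    """Hours to move a dataset at a sustained throughput (decimal units)."""
    seconds = (dataset_tb * 1000) / throughput_gb_per_sec  # TB -> GB
    return seconds / 3600

# A 100 TB database at 8.8 GB/sec:
print(round(backup_hours(100, 8.8), 1))   # 3.2 (hours)
# The same dataset at a more typical 1 GB/sec array:
print(round(backup_hours(100, 1.0), 1))   # 27.8 (hours)
```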

    RAC + FlashBlade – 8.8GB/sec

     

    Another story:

    A few months ago, we deployed a pair of Pure Storage FlashBlade appliances for an enterprise customer for Splunk archive and replication tiers. Their hot data and indexers were running on Pure Storage FlashArrays. Within a few hours, we had hundreds of TB of scalable storage ready to be used. The customer was trained on the new hardware in minutes, since it is intuitive and works much like Pure1 and their FlashArray. Pure Storage has continued the ease of use that they are known for. The customer was so happy with the performance and ease that they are filling out their chassis with blades, with many more on the horizon!

    Other customers are utilizing Pure Storage FlashBlade for Veeam and Rubrik archives with great success! FlashBlade’s insanely fast throughput makes for insanely fast restore speeds!

     

    If you want to hear success stories about everything from medical imaging to genomics research to drone data collection, let your account manager know or email info@hainc.com.

  • Which Cloud Strategy is Best?

    February 1st, 2018

    Every organization has a cloud initiative.  Advertisements for the cloud can be found everywhere: during the Super Bowl, on KYW, on the banner of your Google news feed.  You cannot escape this onslaught, and neither can your CEO, CFO, or the board.  The cloud initiative is poised to eliminate all your IT woes, offering zero downtime with money left over in your pockets.

    As a member of our regional cloud team (more on this later) I typically find myself at the table discussing cloud initiatives.  The three most popular questions at the beginning of these conversations are:

    • What can be moved?
    • What should be moved?
    • How long will this take?

    Without fail, at the end of our conversation, I am asked off the record which cloud is my favorite, or which cloud is best.  Before I answer these questions, I would like to walk through why you should consider moving your applications to the cloud.  Back in 2002, after the dot-com bubble burst, many IT budgets were slashed.  The recovery process has since skipped over the IT budget, and we are asked to do more with fewer people and a more condensed timeline.

    The dilemma facing IT staffs today is: do you complete the after-hours upgrades this month, deploy the new financial application, merge the latest business acquisition, or check on last week’s backups and review performance statistics of the ESX hosts?  All too often, the tasks that cannot be assigned a monetary value by the business (until there is a problem) are thrown to the cutting room floor, to be picked up next week.

    So how does the cloud help make my life easier?  You gain a standardized way of operating, risk is transferred to the cloud provider, you never have to upgrade your hardware again, and you work within a flexible operational budget.

    The arguments against moving to the cloud are the loss of configuration flexibility across the infrastructure for your applications, losing the ability to “borrow from Peter to pay Paul,” cost (the cloud can cost more), and your CFO’s favorite: capital expense depreciation.

    When I look at the clouds available, I break them down into 4 classes.

    1. Private Clouds – Your Data Center
      • You and only you have access to the resources in a private cloud.  You will pay for resources whether they are in use or not.  You still work with vendors to spec, build, document, manage, and maintain all aspects of the infrastructure, or multiple infrastructures if you require any type of redundancy.
    2. Metro or Regional Clouds – This is High Availability, Inc.
      • You determine the amount of resources your organization needs for your applications and the service provider builds a pool of resources you can carve up and manage as needed.  They are an extension of your IT staff to maintain the infrastructure, network, monitoring and backups while you perfect the applications to meet business needs.
    3. Public/Hyper Clouds – Think Azure or AWS
      • They maintain massive data centers for resource allocations and you pay by the drink and the drink and the drink.   Every component, and every aspect of your infrastructure is billed by the minute it’s turned on.   These are great for variable workloads and business to consumer applications.   If you are looking to experiment with Docker or containers, this is a great way to step into the mix without a large capital expense.   If you require CDN functionality, this is the right platform.   And keep in mind that you are still responsible for all the infrastructure upgrades and backups.  They just take care of the hardware. (by the minute)
    4. SaaS Clouds – Think O365 or Salesforce.com
      • These applications, hosted by the software developers themselves, are great solutions for IT organizations that are already taxed with internal projects, M&A, and working with the business to drive functionality to the end users.
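    The "pay by the drink" model in class 3 is easy to quantify: per-minute billing means cost tracks actual uptime. A quick illustration (the $/minute rate below is made up, not real Azure or AWS pricing):

```python
# Illustrating per-minute cloud billing: cost is simply rate x uptime,
# so shutting workloads down when idle directly cuts the bill.
# The rate below is a made-up example, not any provider's real pricing.

def monthly_cost(rate_per_minute, minutes_on):
    return rate_per_minute * minutes_on

RATE = 0.002  # hypothetical $/minute for one VM instance

always_on  = monthly_cost(RATE, 30 * 24 * 60)   # runs 24x7 all month
work_hours = monthly_cost(RATE, 22 * 9 * 60)    # 22 workdays x 9 hours

print(f"always on:  ${always_on:.2f}")    # $86.40
print(f"work hours: ${work_hours:.2f}")   # $23.76
```

    The same VM left on around the clock costs well over three times what it costs when run only during business hours, which is why variable workloads are such a good fit for this class of cloud.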

    No single cloud (although I admit I am biased) will work for all applications.  So, to answer one of my earlier questions, “Which one is my favorite?”  The answer is simple: the cloud that runs my application the best.  All too often we get caught up in the “religious wars of IT”: I’m an “insert OS/cloud/product line” fanboy or fangirl.

    I challenge you to take a step back and ask the question: Which platform is best for my application?  It could be Google, private cloud, or a regional cloud like High Availability, Inc.

    Understanding the options, and making an informed decision for your business is the end goal.  Find the right cloud for you by making sure you understand what will work best for you.

    Need to discuss your options more? Reach out to your account manager or contact us at sales@hainc.com 

  • Critical Cisco ASA Security Vulnerability

    January 30th, 2018

    On January 29th, 2018 Cisco made public a security vulnerability disclosure for the ASA and Firepower security appliances. This is a pretty severe vulnerability. This blog provides some basic information about the vulnerability and details on how to determine if your environment is at risk.

    What is the vulnerability?

    In short, it potentially allows a remote attacker to execute arbitrary code on an ASA and/or create a denial of service (DoS) condition by rebooting the firewall at their whim. It requires an attacker to send a series of crafted XML packets to the firewall, but the attacker does not need to know any authentication credentials for the firewall to be able to run the attack. Also, if the affected feature, AnyConnect VPN, is configured, the attacker can’t easily be blocked from executing the attack.

    How severe is it?

    The vulnerability is pervasive in ASA code, affecting basically any release that is more than a few days to a few weeks old. It affects AnyConnect VPN, a very common feature that many customers rely on. Additionally, this vulnerability allows a remote, unauthenticated attacker to take complete control of and/or run arbitrary software code on the firewall. This combination of elements makes this a pretty severe threat. Cisco has rated this as a 10 on the CVSS scale, which is the maximum severity rating for a security vulnerability, industry-wide.

    Is this attack in the wild yet?

    It doesn’t appear to be. This disclosure was done based on the normal responsible disclosure process. The researcher that identified the vulnerability informed Cisco some time ago, and Cisco has created and published fixed software releases. On February 2 (that’s this Friday!), the researcher will present his findings at an information security conference in Europe. After that presentation, enough detail will be available that attacks in the wild will likely be possible. This gives administrators a couple days to patch their systems if they are behind on software. If you act now, you should be able to close this exposure before it is likely to hit you.

    Am I at risk?

    There are two factors that determine if your firewalls are at risk:

    1) If you are running software versions prior to the versions named in the “First Fixed Release” column of the table below.

    To find your software version, you can SSH to your firewall and run the “show version | include Version” command as shown in the example below:

    ciscoasa# show version | include Version
    Cisco Adaptive Security Appliance Software Version 9.2(1)

    The firewall in the above example would be vulnerable.
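    If you manage many firewalls, this check is easy to script against collected "show version" output. In the sketch below, the first-fixed version (9, 6, 4) is a placeholder consistent with the 9.6(4) example in this post; consult Cisco's advisory table for the real fixed release on your code train:

```python
# Sketch of automating the ASA version check across many firewalls.
# The first_fixed default (9, 6, 4) is a placeholder based on the
# 9.6(4) example in this post -- use Cisco's advisory table for the
# actual "First Fixed Release" on your code train.
import re

def parse_asa_version(show_version_output):
    """Pull e.g. (9, 2, 1) out of 'show version | include Version' text."""
    m = re.search(r"Software Version (\d+)\.(\d+)\((\d+)\)", show_version_output)
    return tuple(int(g) for g in m.groups()) if m else None

def is_vulnerable(version, first_fixed=(9, 6, 4)):
    return version is not None and version < first_fixed

output = "Cisco Adaptive Security Appliance Software Version 9.2(1)"
print(parse_asa_version(output))                 # (9, 2, 1)
print(is_vulnerable(parse_asa_version(output)))  # True
```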

    Alternately, you can log into the ASDM GUI and look in the upper-left pane of the Device Dashboard Home tab. You’ll see your device version like this:

    In the example shown above, this firewall is running 9.6(4), which is not vulnerable.

    If you are running the Firepower Threat Defense (FTD) software on your ASA or Firepower appliance, this table shows the version info.

    You can check the version by running the “show version” command:

    > show version
    ---------------------[ ftd ]---------------------
    Model : Cisco ASA5525-X Threat Defense (75) Version 6.2.2 (Build 362)
    UUID : 2849ba3c-ecb8-11e6-98ca-b9fc2975893c
    Rules update version : 2017-03-15-001-vrt
    VDB version : 279
    ----------------------------------------------------

    2) If you are running an affected software version (as described above), and the Cisco AnyConnect SSLVPN is configured, then you are vulnerable. To determine if AnyConnect is configured, you can run the “show run webvpn” command from the command line.

    This is an example of a firewall that does not have AnyConnect enabled, and thus is not vulnerable:

    FW-5512-1# show run webvpn

    FW-5512-1#

    This is an example of a firewall that does have AnyConnect enabled, and thus is vulnerable:

    PA-FW-5512-1# show run webvpn
    webvpn

    enable OUTSIDE

    anyconnect image disk0:/anyconnect-win-4.3.02039-k9.pkg 1 regex "Windows NT"

    anyconnect enable

    tunnel-group-list enable

    PA-FW-5512-1#

    If any output is shown under the “webvpn” configuration stanza, AnyConnect is configured and the firewall is exposed if running a vulnerable version of software.
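    That check is also simple to script across saved "show run webvpn" output, mirroring the two sample firewalls above:

```python
# Scripted version of the webvpn check: if "show run webvpn" returns
# any configured lines, AnyConnect is enabled. The sample outputs
# mirror the two firewall examples shown in this post.

def anyconnect_enabled(show_run_webvpn_output):
    """True if the webvpn stanza contains any configuration."""
    lines = [l.strip() for l in show_run_webvpn_output.splitlines() if l.strip()]
    return len(lines) > 0

empty_output = "\n"
configured = """webvpn
 enable OUTSIDE
 anyconnect enable
 tunnel-group-list enable"""

print(anyconnect_enabled(empty_output))   # False -> not exposed
print(anyconnect_enabled(configured))     # True  -> exposed if unpatched
```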

    How can I mitigate this vulnerability?

    The best way to mitigate this vulnerability is to upgrade your ASA software (or patch your FTD software) to a fixed version. ASA code upgrades are straightforward as long as you are already on a code train that has a fixed release. If you are running a very old ASA version (like 8.x or 9.0), some analysis to determine the impact of an upgrade may be necessary, and H.A. can assist with that.

    If you are running FTD, you can apply the hotfix patches listed in the FTD software table above.

    Alternately, if your firewall is vulnerable and has AnyConnect (the “webvpn” command) configured, but you are absolutely sure you are not using AnyConnect VPN, you can simply disable AnyConnect by entering the “no webvpn” configuration command from configuration mode. This is only an option if you are not using AnyConnect VPN (either client-based or clientless) on your firewall. If you rely on AnyConnect VPN for your business functions and you execute the “no webvpn” command, you will probably have a bad day. If you’re unsure, please let me know and we can have an H.A. engineer help assess your situation.

    There are no other known workarounds to secure a vulnerable software release that is running the AnyConnect feature.

    Where can I find more information?

    Here is the link to Cisco’s public PSIRT alert:

    https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180129-asa1

    Here is the link to Cisco’s BugDB report (you need a Cisco login to view this):

    https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvg35618

    This is a link to the upcoming researcher’s presentation session disclosing the hack:

    https://recon.cx/2018/brussels/talks/cisco.html

    Need more assistance? Reach out to your H.A. account manager or sales@hainc.com to schedule a potentially necessary ASA/Firepower upgrade.

  • Tips For Meraki WLAN Deployment Part 2: Naming, Tagging, and Installation Photos

    January 25th, 2018

    In our last installment, we covered IP addressing for Meraki access points. In this article, we will discuss a few of the finer details of AP deployment, specifically: AP naming, AP tagging, and installation photos. Taking these elements into consideration during deployment can really help make management of the network easier down the road.

    Before deploying APs, you should have already added the access points to your Dashboard Inventory and then deployed them to a Network in your Meraki Dashboard. Now, we’re ready to go hang access points. If you just go around, hang up your APs, and power them on, you’re going to have an AP list that looks like this (visible in Wireless > Access points):

    While this network will work fine once SSIDs and other configuration settings are defined, it’s very challenging to live with a network where the APs have no identifiers other than their MAC address. Some of you reading this may think no one leaves their APs unnamed, but I encounter it all the time in customers’ Meraki (and other vendors’) wireless environments.

    Now, one way to deal with this is to take note of the Serial number or MAC address of each AP at the time of installation and then later, go through the Dashboard and re-name each AP. That works, but it requires additional tracking and paperwork. It’s far easier to take care of the name and other configuration details at install time. And the easiest way to do that is with the Meraki Mobile App.

    To use the Mobile App for deployment, download it from your respective mobile App store. It is available for both Apple and Android devices.

    Once logged into the app, you’ll see a list of your APs. What I like to do is find each AP in the list as we’re installing it. Just take a look at the last 2 bytes of the MAC address for the AP you’re hanging, like 0c:f0 in the example above.
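    That suffix-matching step is also easy to automate if you export your AP inventory. A small sketch (the MAC addresses and names below are invented examples):

```python
# Matching a physical AP to its Dashboard entry by the last two bytes
# of the MAC address, as described above. The inventory MACs and names
# here are invented examples.

def find_by_mac_suffix(aps, suffix):
    """aps: dict of MAC -> AP name; suffix like '0c:f0'."""
    return [name for mac, name in aps.items()
            if mac.lower().endswith(suffix.lower())]

inventory = {
    "e0:55:3d:10:0c:f0": "(unnamed)",
    "e0:55:3d:10:aa:12": "PA-AP03",
}
print(find_by_mac_suffix(inventory, "0c:f0"))   # ['(unnamed)']
```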

    Now, in the app, tap that AP and look at the AP details. It will look like this (note these are iPhone screenshots, the Android flavor might look slightly different):

    As you are putting up the AP and powering it up, go ahead and click the pencil icon in the upper right corner of the screen:

    Name the access point however you’d like. In general, I recommend at least a unique identifier for the AP, like “AP04” or “AP-11” and some sort of text location key like “Reception.” In other WLAN platforms, I’ll usually recommend a big string of identifiers like a site code and floor designation, like “PHL-FL4-AP04-Reception” or something, but due to some unique features in the Meraki Dashboard you don’t have to do this if you don’t want to. Once you’ve entered the name, it will be reflected in the app (and the Dashboard):

    Why don’t we need site codes and floor designators and everything in the AP name? Well, see the next few fields in the AP details? Like Tags and Location? These help us out a ton. The location field is easy. Key in the street address of the AP (or use the “Use my Current Location” button which attempts to resolve your address via the phone’s GPS).

    When it comes to tags, clicking that field will enter a screen to search for and define tags:

    We might assign several tags, such as:

    Why apply tags this way? Well, we can use tags for multiple uses in the Meraki Dashboard. For one thing, we can search the AP list based on the tags. So perhaps we want to find all the APs on the main floor of our office in the AP list? We can just search for that tag:

    So, these tags can take the place of trying to encode floor into the device name, keeping the device names more concise and shorter. Why tag the mounting type? Well, perhaps we want to send a tech over to take some action on the AP. It might be nice to be able to easily see whether it’s mounted to the wall or ceiling. Tagging the AP to indicate it’s in an area guests may be present can be used in the Wireless > SSID Availability menu to limit the visibility of certain SSIDs to certain areas of the building, like this:

    We might choose to tag APs as being in a VIP area, or in a high-density region, or visible vs. hidden, etc. There is no limit (that I know of) to the number of tags you can assign, so go crazy! Generally, I say if more than one AP will share some attribute, it’s worth assigning a tag for it.
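    A minimal model of how that tag-based search behaves: each AP carries a set of tags, and a query returns every AP matching all of the requested tags. The tag and AP names below are examples, not a real Dashboard export:

```python
# Minimal model of Dashboard-style tag filtering: return every AP that
# carries all of the requested tags. AP names and tags are examples.

def filter_by_tags(aps, *wanted):
    """aps: dict of AP name -> set of tags. Returns matching names, sorted."""
    return sorted(name for name, tags in aps.items() if set(wanted) <= tags)

aps = {
    "PA-AP03": {"main-floor", "wall-mount", "guest-area"},
    "PA-AP07": {"main-floor", "ceiling-mount"},
    "PA-AP11": {"second-floor", "ceiling-mount", "guest-area"},
}
print(filter_by_tags(aps, "main-floor"))                # ['PA-AP03', 'PA-AP07']
print(filter_by_tags(aps, "guest-area", "wall-mount"))  # ['PA-AP03']
```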

    Finally, while we are still at the installation point of the AP, it’s a great time to use another handy feature of the mobile app. In the AP details page is a “Photo” box. If you click that, it will open your phone’s camera so you can take a picture of the AP. Here’s a tip: take the photo from a wide enough vantage point that you can see some detail to clue you in to where the AP is mounted. A picture of an AP on a single 2x2’ ceiling tile isn’t very helpful. One that shows that the AP is behind the reception desk or at an intersection of hallways is very helpful. The key is that a close-up of the AP helps nothing, but a picture of the area the AP is in is very helpful. Once the photo is taken, it will appear in the AP details page of the App:

    More importantly, it’s also visible in the Location tab of the AP details page in the Meraki Dashboard:

    Again, this is exceptionally helpful for easily finding the AP when doing any maintenance in the future, especially when managing remote sites where the administrator may not frequently visit. Imagine being able to send an on-site tech resource a photo of the AP you need replaced to help them identify the right one!

    Once these details have been collected, you can move on to the next AP. Of course, a team of installers can be doing this at multiple locations at once to speed installation.

    As you can see, taking advantage of Meraki’s built-in tools for installation and AP management can be very helpful in future operations. A little legwork up front when deploying can pay big dividends down the road a bit.

    Next time, we will cover the importance of adding floorplans to your Meraki WLAN configuration, and the value you can derive by doing so.

  • Intel Vulnerabilities: What We Want Our Cisco Customers to Know

    January 23rd, 2018

    Earlier this month, security researcher Jann Horn from Google’s Project Zero reported the discovery of some serious vulnerabilities in most modern CPUs, most notably Intel’s. According to the researchers, “virtually every user of a personal computer” is at risk.

    The vulnerabilities, which are now collectively being referred to as Meltdown and Spectre, exploit performance enhancement features. The vulnerabilities could give hackers access to passwords, photos, and other sensitive data if exposed. Intel CPUs, which drive most computer platforms in the market today, are most heavily affected because they have a few tweaks that AMD/ARM CPUs do not. ARM CPUs are more common in mobile devices, and they are apparently not affected by the Meltdown attack, but are mostly vulnerable to the Spectre attacks.

    These attacks allow one process to access data, including memory page contents, that would otherwise be restricted. According to Google’s research, this could even extend to the point of breaking the VM isolation boundary, e.g. a VM running malicious code for this attack may be able to access memory of another VM on the same host. This is detrimental for data centers and even worse for cloud platforms.

    Cisco was one of the first networking vendors to come forward with comments on the situation. They assured end-users that there is no attack vector on their appliance platforms unless another vulnerability to execute arbitrary code is first exploited. In other words, these attacks require local code to run on the target machine. Since routers, switches, firewalls, and load balancers generally do not allow just anyone to run a piece of software, an attacker would first need to use another exploit to run a piece of arbitrary code, and only then could theoretically execute one of these new attacks. Such arbitrary code execution vulnerabilities are found from time to time, but keeping up with software updates on networking appliances should prevent most attacks. This also goes for virtual appliances (such as ASAv, CSR1000V, NGFWv, etc.). Cisco does acknowledge, though, that if the virtual server platform these systems are running on is compromised, the virtual appliances could be impacted.

    One exception to the limited impact on routers and switches is those that can run user-specified processes in containers, such as the Open Agent container feature on some Cisco Nexus switches or Open Service containers on Cisco ASR routers. Always make sure you run software from trusted sources in containers on your network equipment.

    That leads us to where the real impact is: server platforms.

    Attackers leveraging Spectre and Meltdown will primarily be targeting servers. Even though they are relatively difficult to execute, these attacks still have the potential to surface. Since server OSes can, by nature, execute arbitrary code, these exploits can be run on any system that has had the malicious code placed on it in some way. As of right now, no viruses/worms/malware are known to use these attacks, but it’s probably just a matter of time. These exploits cannot be run just by visiting a website, viewing an email, etc.; they need code to run locally on the target machine. But typical malware insertion methods (whether a worm or phony email attachments) could be used to get the code onto a target machine when combined with other attacks.

    Click Here to view the Cisco PSIRT page, which has details on the exposure of their products.

    UCS servers are vulnerable. This is because, like most servers today, they have Intel CPUs. This is not Cisco’s fault; they simply use the impacted CPU components, like nearly every server shipped since around 1995. Per the Cisco PSIRT linked above, microcode updates will be available on February 18th for most Cisco server platforms, which, when applied in concert with applicable OS updates, should mitigate the vulnerabilities. The UCS update will be a BIOS update that includes the new CPU microcode.

    Mitigations and Mitigation Impacts:

    For Meltdown, the easiest mitigation is an OS patch. Apple, Microsoft, and Linux all have patches available. Operating systems derived from Linux will need to be updated once the upstream updates are incorporated into their builds. This is the case for many embedded platforms (including some Cisco routers, switches, etc.).

    There is a CPU performance impact since the mitigation is to disable performance features. The extent of that performance hit is highly dependent on workload. Apparently, a single-workload system or one which does limited context switching (e.g., an end-user laptop/desktop) will not see a significant impact on the performance front. Our engineers suspect that dedicated network appliances (routers, switches) will not see much impact even when they are eventually patched at the OS level for the same reason.

    Where it gets concerning is in the server/data center/virtualization environment. Here, context switching is very frequent due to the multiple workloads running on each CPU. These mitigation patches require not only disabling some performance enhancements in the CPUs, but also doing MORE work to secure memory contents before switching to a different task. The numbers generally floating around for impact are 5-30% - that’s a lot. Again, it is hard to predict the impact on a specific workload.

    Here is a graph showing the impact on two server instances before and after OS patching:

    (Source: https://twitter.com/ApsOps/status/949251143363899392)

    You can see a 10-20% jump in CPU. Ouch.

    Click Here to see more examples of impacted workloads. Some have a negligible impact, and others are significant.

    In the link above, Redis and PostgreSQL (both database platforms) saw the most impact.

    Mitigating the Spectre attacks is more difficult, and no specific mitigations have been released yet for most platforms.

    So, the potential here is that everyone effectively loses CPU capacity in their compute environments. We can’t predict how much, but potentially enough to require a larger server farm for a given workload than was previously required. Once patches are applied, end-customers will start to see the impacts on their workloads and will have to adjust future sizing appropriately – or in the worst case, they may need to backfill for “lost” CPU capacity if their workloads were already taxing their CPU resources.
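    To put rough numbers on that sizing exercise, here is a back-of-envelope sketch in Python. The utilization figures, overhead percentage, and 80% ceiling are illustrative assumptions, not measurements - the real overhead depends entirely on how often your workloads context-switch.

```python
import math

def servers_needed(current_servers, avg_cpu_util, overhead, ceiling=0.80):
    """Estimate the server count required once patching adds CPU overhead.

    current_servers: hosts in the farm today
    avg_cpu_util:    average CPU utilization before patching (0-1)
    overhead:        assumed mitigation overhead (0-1), e.g. 0.25 for 25%
    ceiling:         the utilization level you are willing to run at
    """
    post_patch_util = avg_cpu_util * (1 + overhead)   # each host works harder
    total_demand = post_patch_util * current_servers  # farm-wide demand
    return max(current_servers, math.ceil(total_demand / ceiling))

# Example: a 20-host farm averaging 65% CPU, assuming a 25% mitigation hit.
# Demand becomes 0.65 * 1.25 * 20 = 16.25 "hosts" of work; to stay under an
# 80% ceiling you now need ceil(16.25 / 0.80) = 21 hosts instead of 20.
print(servers_needed(20, 0.65, 0.25))
```

    Plug in your own utilization numbers; the point is simply that farms already running hot have little headroom left to absorb the mitigation overhead.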

    Tips going forward:

    • Keep an eye on the PSIRT page for Cisco details
    • Patch operating systems ASAP (despite the potential performance hit)
    • Reach out to your H.A. account manager to set up any upcoming UCS updates

     

  • A Charitable Culture

    A Charitable Culture

    January 11th, 2018
    Read More

    “If you haven't any charity in your heart, you have the worst kind of heart trouble.”

    - Bob Hope

    One of the main reasons I joined the High Availability, Inc. team was the company culture.  Having worked closely with High Availability, Inc. during my years at NetApp, I was able to see first-hand how strong that culture was.  In my opinion, company culture encompasses many aspects - from positive employee morale, to a readiness to support your teammates, to, ultimately, employees who love going to work every day.  However, beyond this, another factor important to me personally is a company’s willingness to give back charitably to both its community and worthy causes.

    Charity is a huge part of my life outside of work.  Since 1989 my family and close friends have run a charity supporting brain tumor research in memory of my brother called the JAG Fund. Because of this, I’ve always valued the charitable efforts in my professional life, and High Availability, Inc. is certainly no different. After joining the team, I quickly realized that giving back was just as important to High Availability, Inc.

    Here are a few examples that prove this:

    As mentioned previously, my entire family is deeply involved in the JAG Fund. Every year, we host a Black-Tie Gala to raise both support and awareness for brain tumor research. I was thrilled when my teammates at High Availability, Inc. pledged their support for the JAG Fund through a corporate sponsorship of our Black-Tie Gala! Not only did H.A. support the JAG Fund financially, but they also attended the event. In fact, my boss, Steve Eisenhart, was rated as one of the top dancers that evening, and the High Availability, Inc. table was by far the most supportive during our silent auction. They even set a record for their winning bids!

    A second example was High Availability, Inc.’s partnership with Tech Impact, a non-profit organization headquartered in Philadelphia that provides cost-effective IT support for other non-profits.  Tech Impact also helps young urban adults move into IT careers through its ITWorks program.  Having supported Tech Impact prior to coming to High Availability, Inc., I knew how worthy an organization it was, so I was thrilled when the executive team at H.A. approved a sponsorship for the annual Tech Impact Luncheon. I was fortunate enough to attend the event with several other members of our team and even a few customers.  It was amazing to see first-hand the positive impact of our contribution!

    Lastly, but most impressive in my opinion, was High Availability, Inc.’s support of Hurricane Harvey relief efforts. H.A. specifically contributed to the J.J. Watt Foundation.  The aftermath of Hurricane Harvey was absolutely devastating for the state of Texas. The Category 4 storm dumped over 27 trillion gallons of rain over Texas, and forced over 30,000 people to seek temporary shelter.

    After the storm passed, High Availability, Inc. acted immediately. The team set up a relief fund program for those affected by Hurricane Harvey, and even matched each contribution dollar-for-dollar. Within just a few minutes, the H.A. team donated over $3,000!

    “Everyone stepped up and was happy to help in any way they could,” said Greg Robertson, CFO and Founder of High Availability, Inc. “I feel very fortunate to be working with individuals who see the importance of helping those in need.”

    Within three days, High Availability, Inc. had raised exactly $10,000!

    High Availability, Inc. plans to double our charitable efforts in 2018. In fact, Steve Eisenhart, CEO and Founder of High Availability, Inc., has even implemented the first ever “H.A. Challenge”, which encourages employees to give back a full working day to a local charity through volunteering.

    Of course, these aren’t the only examples of H.A. giving back, but they are the standouts I’ve seen since joining the organization just over 18 months ago.  Pretty impressive, and a true example of a company embracing a charitable culture!

  • Tips For Meraki WLAN Deployment Part 1: AP Addressing

    Tips For Meraki WLAN Deployment Part 1: AP Addressing

    December 29th, 2017
    Read More

    Meraki cloud-managed network infrastructure has brought a new level of manageability to the network, and many of High Availability’s customers have found out just how easy it is to operate a Meraki-based network. For this series of articles, I wanted to recap a few tips for making the deployment itself go smoothly. Listed below are five often-overlooked topics essential for a complete and trouble-free Meraki installation.

    1. AP Addressing (Static or DHCP)
    2. Naming
    3. Tagging
    4. Installation Photos
    5. Floorplans

    We will be discussing each of these topics in the next few blog articles.

    AP Addressing

    By default, Meraki access points will request an IP address through the Dynamic Host Configuration Protocol (DHCP). If you are putting your APs on a client-serving network (which can be OK in a small office environment), that’s usually all they need to get started. However, larger, more complex network designs often dictate that access points’ management interfaces live on a dedicated AP VLAN or maybe a common infrastructure management VLAN. In those cases, DHCP may not be enabled by default. There are a few options in this situation.

    First, DHCP can be enabled for the VLAN. If this is done, the approach I like to take is to set the DHCP scope up on the router or layer 3 switch serving the VLAN, rather than on a Windows AD server or similar. Why? The simple fact is that VLANs used for infrastructure are easy to ignore, and sometime in the future the DHCP scopes might be migrated or the server decommissioned, with little or no attention paid to a scope defined for a non-user VLAN. In this case, the stability of a layer 3 switch providing DHCP may be desirable. Also, unlike client workstations, there is no strong need to register reverse-DNS PTR records for the APs, so putting the APs’ DHCP scope on a network device keeps all the configuration needed for the APs “in the network.”
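    As a sketch of what that looks like on a Cisco IOS layer 3 switch, a DHCP pool for a hypothetical AP management VLAN might be configured like this (the subnet, addresses, and pool name are all illustrative, not taken from any real deployment):

```
! Reserve the low addresses for the gateway and any static infrastructure
ip dhcp excluded-address 10.10.100.1 10.10.100.9
!
! Pool for the hypothetical AP management VLAN (10.10.100.0/24)
ip dhcp pool AP-MGMT
 network 10.10.100.0 255.255.255.0
 default-router 10.10.100.1
 dns-server 10.10.100.53
```

    With the pool living on the switch itself, the APs keep getting addresses no matter what happens to the server environment.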

    Now, there may be times when DHCP addressing for your access points is not feasible or desirable. One support issue I’ve encountered more than once occurs when the DHCP scope that supplies IP addresses to the APs is exhausted (this usually happens when a client-facing subnet is used for the access points). Eventually the AP is unable to renew its IP address, and it stops working. If a crowded subnet like that is the only addressing option, DHCP may not be a good choice. Perhaps security policy prohibits the management network from providing DHCP service. Or maybe the administrator simply prefers that all infrastructure devices have fixed IP addresses. In either case, there are two ways to assign static IPs to your Meraki APs.

    If you can initially provide an IP address via DHCP, the AP will check into the Meraki Dashboard, and assigning a static IP is simple.

    First, go into the access point details page under Wireless > Access points, and click the access point in question. To the left of the main browser pane, you will see the IP settings. Click the pencil icon to edit them:

    A small box will appear, in which you can select the addressing type. Here, you can select “Static IP:”

    The box will expand and you can enter the AP addressing details, like this:

    After filling in the appropriate details, click the Save button. The AP will reboot, and should come up at its new, static IP address. Rinse and repeat for the other APs. Note that the “VLAN” field only needs to be populated if the AP’s management VLAN is not the untagged/native VLAN for the connected switch port.
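    For larger deployments, the same change can also be scripted against the Meraki Dashboard API rather than clicked through per AP. Below is a hedged Python sketch using the public v1 management-interface endpoint; the API key, serial number, and addresses are placeholders, and you should confirm the endpoint and field names against the current Meraki API documentation before relying on this.

```python
import json
import urllib.request

API_KEY = "your-dashboard-api-key"  # placeholder - generate one in Dashboard
SERIAL = "Q2XX-XXXX-XXXX"           # placeholder AP serial number

def static_ip_payload(ip, mask, gateway, dns, vlan=None):
    """Build the management-interface request body for a static wan1 address."""
    wan1 = {
        "usingStaticIp": True,
        "staticIp": ip,
        "staticSubnetMask": mask,
        "staticGatewayIp": gateway,
        "staticDns": dns,
    }
    if vlan is not None:  # only needed when the mgmt VLAN is tagged
        wan1["vlan"] = vlan
    return {"wan1": wan1}

def set_static_ip(serial, payload):
    """PUT the payload to the Dashboard API (endpoint per Meraki's v1 docs)."""
    req = urllib.request.Request(
        f"https://api.meraki.com/api/v1/devices/{serial}/managementInterface",
        data=json.dumps(payload).encode(),
        headers={
            "X-Cisco-Meraki-API-Key": API_KEY,
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    return urllib.request.urlopen(req)

payload = static_ip_payload(
    "10.10.100.21", "255.255.255.0", "10.10.100.1", ["10.10.100.53"]
)
# set_static_ip(SERIAL, payload)  # uncomment to apply against a real org
```

    Looping such a call over a list of serials turns the per-AP rinse-and-repeat into a single script run.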

    What about an instance where you cannot bring the AP up on DHCP initially? If the AP must be statically addressed from the start, before it can even reach the Meraki Dashboard, you need to locally connect to the AP.

    In this case, power up the factory-fresh AP and it should begin broadcasting an SSID of “Meraki Setup.” Connect to this SSID, and then open your browser and go to “ap.meraki.com.” You should see a status page like this:

    Switch to the “Configure” tab at the top, and when prompted for a Username and Password, enter the serial number of the AP under the Username and click OK, like this:

    The Configure screen will allow you to choose the Uplink addressing mode:

    As before, select Static, then fill in the addressing details and click Save at the bottom of the screen. After restarting, your AP should be able to connect to the Internet and the Meraki Dashboard via the statically-programmed IP address.

    Hopefully, this blog article has been helpful in getting your Meraki access points addressed and connected to the Meraki cloud in a reliable manner. As always, the networking experts at High Availability are available to assist you with your networking project.

    Next time, we will cover tips for deploying your Meraki hardware using the Meraki mobile app and taking advantage of some special Meraki features!

Join the High Availability, Inc. Mailing List

Subscribe