• Veeam 9.5 Update 4 - The one we’ve been waiting for


    February 18th, 2019

    A few weeks ago, Veeam released Update 4 for version 9.5 of the Backup & Replication product.  There has been much discussion around this release, and its release date had been anticipated for quite a while.  Last week I finally had a chance to dig into the details and kick it around in the lab.  Some of the features that were lacking in previous versions are now part of the install, and I wanted to share some of the more important items from my viewpoint in the field.


    Native Object Storage Support (Veeam Cloud Tier)


    Veeam Cloud Tier is now an integral part of the scale-out backup repository and is referred to in the GUI as the Capacity Tier.  It can use native object storage integration with Amazon S3, Azure Blob, IBM Cloud, and other S3-compatible technologies.  A policy can be created to tier backup files from a local repository (or Performance Tier) to cost-effective, long-term object storage.  This has been one of the most requested features I hear about from customers.


    Restore to AWS and Azure Stack


    The direct restore to Azure process has been expanded to include the ability to restore backups to Microsoft Azure Stack as an IaaS VM.  More importantly, there is now a Direct Restore to Amazon EC2.  Both of these features are a great way to move VMs to, or test VM recovery in, the cloud.


    Improved Security and Compliance (GDPR)


    Veeam Virtual Labs still exist in Update 4 but have been renamed to Data Labs.  Both Staged Restore and Secure Restore build on the proven success of Data Labs.  A Staged Restore allows scripts or applications to be run against a VM for GDPR compliance, while a Secure Restore enables a guest file system to be scanned for malware.


    Application Plug-Ins


    Veeam has lacked a plug-in for Oracle RMAN since the beginning.  In Update 4, DBAs can now use RMAN to perform backups and restores to a Veeam repository.




    Tape Enhancements


    Update 4 has several VERY important updates to tape.  First and foremost, we now have the ability to do native NDMP backups.  This has by far been the most requested feature I have come across in the field over the past few years.  Customers who have had to maintain legacy backup solutions to handle larger NAS shares can now use Veeam for NDMP.  The other items worth mentioning are support for parallel processing of GFS media pools and the ability to schedule specific start times for GFS tape backup jobs.



    Overall, Update 4 delivers some great features and much-needed functionality.  The list above is by no means complete coverage of everything included in the latest update, just what I consider the top items.  As Veeam continues to provide customers with a solid and scalable data availability solution, I'm certain there will be much more to come.

  • How to update AnyConnect & Compliance Modules on Cisco Identity Services Engine (ISE)


    January 11th, 2019



    I've recently had the pleasure of deploying Cisco's Identity Services Engine (ISE) as an integrated security solution for a customer.  Part of the ISE deployment involved determining the security posture of VPN-connected clients before allowing the client node access to the corporate network.


    In order for VPN posturing to work on the ASA firewall, an additional compliance module must be installed on the ASA.  The Compliance Module (aka ISE Posture Module) is part of the AnyConnect Secure Mobility Client and gives the client the ability to assess an endpoint's compliance for things like antivirus, antispyware, and firewall software installed on the endpoint.


    In our lab environment, we deployed the Windows version of the compliance module on our Cisco ASA firewall.  See diagram below:




    It is crucial that the Client Provisioning Policy within ISE reference the appropriate versions of both the AnyConnect and Compliance packages deployed on the ASA firewall.  I've seen instances where, due to a version mismatch between ISE and the ASA firewall, the VPN posture module does not work correctly and the posture check never kicks off while the client endpoint is attempting to connect via VPN to the corporate network.


    Unfortunately, while the ISE administrator can edit the Compliance Module version under the AnyConnect Agent Configuration, the AnyConnect Package CANNOT be edited.  To align the AnyConnect Agent Configuration versioning name with the AnyConnect Package, I highly recommend creating a new AnyConnect Agent Configuration.


    As far as compatibility between the AnyConnect and Compliance Module is concerned, a quick check of the compatibility matrix indicates that the AnyConnect Secure Mobility Client needs to be 4.x.  This support documentation also lists the supported versions of patch-management, anti-virus, anti-malware, etc.


    The following steps detail the procedure for updating both the AnyConnect and Compliance Module on the Cisco ISE Policy Administration Node (PAN).


    1. Update AnyConnect and Compliance Module Packages on Cisco ASA firewall
      1. AnyConnect and Compliance Module Packages are downloaded from Cisco Online
      2. Move the firmware to the ASA flash
    2. Download and install AnyConnect Package on Cisco ISE
      1. Policy > Policy Elements > Results > Client Provisioning > Resources
      2. Click Add > Agent resources from local disk:



      3. Select "Cisco Provided Packages" and click on the "Browse" button to upload the package to ISE.  Click on the Submit button.  Another window will then prompt the ISE administrator to confirm the MD5 hash; click on OK.



    3. Download and install the AnyConnect Compliance Module (.pkg) on ISE:
      1. Policy > Policy Elements > Results > Client Provisioning > Resources
      2. Click Add > Agent resources from local disk:



      3. Select "Cisco Provided Packages" and click on the "Browse" button to upload the package to ISE.  Click on the Submit button.  Another window will then prompt the ISE administrator to confirm the MD5 hash; click on OK.
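    Before clicking OK, it's worth independently verifying the MD5 hash of the package you downloaded from Cisco against the value ISE displays.  A minimal sketch in Python (the package filename shown in the comment is an example, not a specific release):

```python
import hashlib

def md5_of(path, chunk_size=65536):
    """Compute the MD5 digest of a file in chunks (packages can be large)."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage - replace the filename with the actual .pkg downloaded from Cisco:
#   md5_of("anyconnect-win-compliance-module.pkg")
# Compare the result with the hash shown in the ISE prompt before clicking OK.
```

    If the hashes do not match, the upload was corrupted and should be repeated.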




    Once the new AnyConnect and Compliance Modules have been uploaded, a new Posture Profile will need to be created.


    4. Create a new Posture Profile
      1. Policy > Policy Elements > Results > Client Provisioning > Resources
      2. Click Add > AnyConnect Configuration



      3. Select the new AnyConnect Package under the dropdown



      4. Enter the configuration name.  Include the version number in the name - e.g. "AnyConnect Configuration 4.5.4029.0"
      5. Select the new compliance module that was added to ISE in Step #3.
      6. Under Profile Selection, select "POSTURE_PROFILE"
      7. Leave everything else at default and click on the "Save" button.




    The final step is to modify the Client Provisioning Policy to include the new AnyConnect Agent Configuration in ISE.


    5. Modify the Client Provisioning Policies
      1. Policy > Client Provisioning
      2. Edit the Windows rule to include the new AnyConnect Agent Configuration



      3. Under Results, under Agent, select the new AnyConnect agent that was just created.




      4. Click on Save and you should be good to go.
  • Grey-Market IT Equipment: A Cautionary Tale


    December 31st, 2018



    These days, every IT organization is under scrutiny to spend less. It is the responsibility of every IT practitioner to ensure they’re getting the best value for the money their organization spends on IT products and services. With shrinking budgets and increasing needs, some purchasers look for ways to get creative and find deals that may be just a little too good to be true.


    Recently I had an experience with a customer who some time ago needed to refresh some network components. We proposed a Meraki cloud-managed solution for them as it was a good fit for the customer profile, but the customer decided to cut costs and corners by purchasing some of their equipment via eBay. Their intent was to get the gear cheaper off eBay (supposedly new in box) and buy new Meraki licenses from H.A.


    We strongly cautioned against this approach for a variety of reasons, but in the end, Meraki does support the purchase of second-hand equipment and publishes a policy on the topic.


    So, the customer bought their gear, and everything seemed OK during the deployment.  Fast forward about nine months: their firewall shut down and disappeared from their Meraki dashboard.  The customer contacted H.A., and our engineer opened a case with Meraki support.  Meraki support said:


    “After additional investigation, I found that the firewall was part of a trial program, and the device was indeed shutdown after the trial period. Please confirm with your Admin network and with your  Meraki Representative for confirmation and guidance, as it is possible that the device will be removed again as it is reported as part of a trial.”


    In other words, Meraki remotely killed the device. Why? Our account manager reached out to the Meraki rep covering the customer. The response from the Meraki rep was:


    “So I don’t have good news, the gear was in fact part of a trial that was not returned or paid for. Support just did a mass shutdown of all of those somewhat “stolen" units which is why they are experiencing issues. If this unit had been properly unclaimed and resold they wouldn’t be experiencing this issue but that isn’t the case which is why grey market is always a risk.”


    In other words, someone ordered this firewall as part of a free trial, did not purchase the device in the end (i.e., ended their trial), and then, either unknowingly or deceitfully, did not return the device and instead sold it on eBay.  Eventually, Meraki, as part of a batch cleanup, remotely killed the device since it was, in effect, stolen.


    This is, admittedly, a rather extreme example, since the manufacturer was able to remotely disable the "stolen" equipment.  However, I have seen similar situations many times before, where a device purchased through a private transaction or a less-than-reputable online seller turns out to be ineligible for a support contract/warranty, a subscription renewal, or a software upgrade because it has never officially been transferred to the party that now possesses the physical device.


    Purchasing IT equipment in this way - second-hand, through private transactions or online auction sites - is usually called the "grey market" because, while it is legal to purchase a physical object that way, it is often a violation of the manufacturer's licensing terms and conditions for someone other than the original purchaser to use the software that runs on that device.  In short, the physical device itself is not the question; the non-transferrable software license or subscription is.


    To make such a purchase legitimate, most manufacturers have a process whereby the device in question can be "recertified" or "requalified" to make the transfer completely official and to transfer/assign all applicable software licenses to the new owner.  The problem is, most buyers don't even realize this is required, and even if they do, they rarely bother.  Worse yet, I've encountered many recently hired administrators who have inherited an environment only to discover that their equipment inventory consists of grey-market gear that they can't get manufacturer support on or obtain a much-needed software update for.  A decision to turn to the grey market may become a major headache for someone down the road, even if the risk is understood at the time of purchase.


    Here at High Availability, we always try to make sure that our customers get a fair price and a great value on equipment they purchase from us.  And our customers get the reassurance of working with an authorized reseller to purchase new equipment that is ready to run without hassles.  We always recommend working with an authorized reseller to get your equipment, but if you do decide to buy from the grey market, be sure to research the policies and requirements of the manufacturer in question, understand the risks, and caveat emptor.

  • Veeam Services You May Not Know We Can Offer


    December 10th, 2018

    In the IT universe, managing backups can be a sore subject: tedious, time-consuming, and annoying to say the least!

    Our mission, as your partner, is to become an extension of your IT team, so that you can focus on what's most important: your business.

    Wouldn't it be nice if backups were someone else's problem?  Send your backup failure emails to us!  We aim to fix problems you didn't even know you had.


    Sound great?  As both a Veeam Gold Service Provider and Gold Reseller, here are a few highlights of some of the services we offer:


    1. Veeam Cloud Connect

    This is a cost-effective option for our clients that would like to immediately start copying their backup data off-site.

    There is no need for a VPN, or for any additional resource consumption in the H.A. cloud, making for a simplified billing process.

    Adding this service could not be simpler.  Go to your Backup Infrastructure tab, click the Service Provider tab, and click Add Service Provider.

    From here, you can point your backup or backup copy jobs into your brand new cloud repository. 

    Not much internet bandwidth available?  We also offer a seeding service where we send a NAS on-site and assist with the initial copy. Then, ship the device back to us and we take care of copying the data into your cloud repository.


    2. Veeam backup management as a service

    Over the years, we have grown from managing tens, to hundreds, to thousands of VMs protected by Veeam.  Our managed backup as a service business is in high demand for good reason. 

    We offer two flexible options:

    The first is simple: we take over management of all aspects of your existing on-site Veeam environment.

    Or, let us architect a new Veeam infrastructure and manage it for you, whether hosted on-premises or in our private cloud.


    Our value-added service is managing the data life cycle: defining the path your data takes from production to backup, to backup copies, and to replication.

    In both cases, we are always working on testing the integrity of your backups with routine test restores.

    Self-service restores are an option with Veeam Enterprise Manager, or feel free to submit a support ticket and have us take care of it for you.


    3. Veeam disaster recovery as a service

    We live in an always-on world and your business expects nothing less.  This offering rolls the best of our other Veeam services into the ultimate protection plan.

    Our on-demand disaster recovery service allows you to replicate your VMs to our private cloud, backed by VMware vCloud Director for self-service console access.  We can deploy Veeam from scratch or use your existing Veeam environment.  You can choose from different datacenters around the country and in different power grids for geographic diversity.  We can also easily add long term archiving pulled from the replicas so that your production environment is only taxed once.

    An approved disaster recovery process will be developed with an infrastructure recovery strategy and prioritized restoration instructions. 

  • 2-Node vSAN for ROBO Deployments


    November 27th, 2018


    Deploying servers for remote offices or branch offices (ROBO) always leads to compromises.  A remote office normally has only a few servers, so a typical setup of physical servers running VMware ESXi and a shared storage array for high availability is overkill for this scenario.

    This is where VMware vSAN comes into play.  Although vSAN normally requires 3 nodes for cluster quorum, it offers a 2-node ROBO configuration that uses a witness appliance, running in a central datacenter, as the 3rd node.  This allows for minimal infrastructure at the remote site.

    The vCenter for the ROBO cluster, along with the witness appliance, runs at the central datacenter. 

    Minimum requirements

    • 2 hosts at remote office
    • Servers must be vSAN Ready nodes or have hardware on vSAN compatibility guide
    • vCenter at central datacenter
    • Witness appliance at central datacenter (cannot be on the remote site vSAN)
    • One or more flash disks per host for the cache tier (can also use PCI-E devices)
    • One or more disks per host for the capacity tier (can be flash or HDD)
    • Disks must be in JBOD or RAID passthrough mode

    Create the ROBO cluster

    In vCenter at the primary site, create the two-node cluster for the remote site and add the hosts

    Enable vSAN traffic on VMKernel ports

    Create vmkernel ports on the two hosts and enable vSAN traffic.  To use a direct connection between vSAN nodes, you must configure witness traffic separation in order to put the witness traffic on an interface other than the vSAN vmkernel.  We are going to enable witness traffic on the management vmkernel, vmk0, which must be done from the ESXi CLI.  The command to do this is:

    esxcli vsan network ip add -i vmk0 -T=witness

    For more information, see VMware's documentation on witness traffic separation.



    Deploy the Witness appliance

    The witness appliance is delivered as an OVF package.  Deploy this at the central datacenter.

    Browse to the location of the OVF file you downloaded

    Give the witness VM a name and location

    Select the host or cluster that will run the witness VM.


    Accept the license agreement

    Select the configuration.  Choices here are

    • Tiny – 10 VMs or fewer
    • Medium – up to 500 VMs
    • Large – more than 500 VMs

    Choose the storage location for the witness VM and whether you want it thick or thin provisioned.

    Select the witness and management networks

    Set the password for the VM management



    Once the VM is deployed, power it on and configure the networking


    After the networking is configured, we need to add this VM as a host at the primary datacenter.  I have created a datacenter object here called “vSAN witnesses” and I’m adding the host there.  The host will have a light blue appearance in vCenter, signifying that it is a witness appliance and not an actual physical host.

    Enter the hostname or IP address of the witness VM.


    Configure vSAN storage

    Begin the vSAN configuration.

    Select "Configure two host vSAN cluster".  If your vSAN is all-flash, you will also want to enable deduplication and compression.  Optional encryption is also available.

    Verify that vSAN is enabled on the vmkernel adapters.  If not, you will need to go back and do this on the ESXi hosts.

    Next you will claim the disks that vSAN will use for cache and capacity.  The cache device is required to be flash storage, while the capacity drives can be either flash or HDD.

    Select the witness host.  Note that one witness host is required per 2-node vSAN.  Witness hosts can only be part of one cluster at a time, so you’ll need to deploy one per remote site.

    Claim disks on the witness host just as you did on the physical hosts.

    Click Finish to complete the vSAN configuration.  This will take a few minutes to complete.

    vSAN is now up and running.





    vSAN offers a low-cost option for high availability storage for remote offices.  Consider using this if you need to deploy a small number of virtual servers for a remote location.

  • Now Available: Pumpkin Spice Machine Learning


    October 30th, 2018

    Recently, High Availability was accepted as an Nvidia East Coast partner to sell and manage the Nvidia DGX-1 GPU platform, as well as a reseller of PureStorage's new AIRI (AI-Ready Infrastructure) and the Cisco UCS C480 ML M5 GPU server.

    When you think about PureStorage orange, wintertime, and High Availability, what comes to mind?  Pumpkin Spice Lattes!

    There have been few trends as prevalent as the Pumpkin Spice craze of the last decade.  From Pumpkin Spice Lattes to Latkes, Pumpkin Spice PopTarts to PopRocks, Pumpkin Spice Beard Oil to Soaps, for 3 months of the year it's difficult to avoid. 

    When we think back upon the history of Pumpkin Pie and Pumpkin Spice, we see through the trend.  The first recorded recipes were from 1651 by François Pierre La Varenne; in 1796, Amelia Simmons put her spin on it by including a recipe in America's first printed cookbook; and in 1934, McCormick & Company made it mainstream by introducing their prepackaged "Pumpkin Pie Spice Blend".

    What we are now seeing as a "trend" in AI, Machine Learning, and Deep Learning is certainly not new.  As illustrated below, the foundational math and scientific building blocks for what we now call Machine Learning and Deep Learning have existed for as long as the original "Pumpkin Pie Spice Blend".

    For those readers who have never heard of Machine Learning or Deep Learning, or never thought about what AI really is, let's take a step back and look at them at a high level.


    • AI/Artificial Intelligence:  Give a computer the ability to follow a decision tree to make choices on input.  This can be as simple as IF-THEN statements or use cases.  A basic rules engine: if a caller presses 2#, transfer the call to Bob.
    • ML/Machine Learning:  ML is part of AI, but AI is not necessarily ML.  With machine learning, a computer or program learns over time and can be trained.  With daily use, machine learning becomes better at keeping your mail inbox free of spam, as you classify what spam is to you.
    • DL/Deep Learning:  If you labeled a million pictures of dogs with their breed, the program could learn the difference between a Poodle and a Pekingese, or a Dachshund and a hot dog with mustard.  The more input, the better it gets.  Deep learning does not stop there, though: deep learning algorithms may reprocess aspects they lack confidence in to get a higher "probability vector".  OpenAI's Five learns as it plays, constantly trying to best human players in Dota 2 and learning from new techniques that the humans throw at it.
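    The rule-based vs. learned distinction above can be made concrete in a few lines of code.  The sketch below (a toy illustration with invented keywords and examples, not any vendor's algorithm) contrasts a hard-coded IF-THEN rule with a tiny perceptron that learns keyword weights from labeled examples:

```python
# Toy contrast: a hard-coded rule (AI) vs. a model that learns weights (ML).
# The keywords and training examples are invented for illustration.

def rules_engine(message):
    """AI as a basic rules engine: one fixed IF-THEN decision."""
    return "spam" if "free money" in message else "ok"

def train_perceptron(examples, keywords, epochs=20, lr=1.0):
    """Learn one weight per keyword from labeled (message, is_spam) pairs."""
    weights = {k: 0.0 for k in keywords}
    bias = 0.0
    for _ in range(epochs):
        for message, is_spam in examples:
            features = {k: float(k in message) for k in keywords}
            score = bias + sum(weights[k] * features[k] for k in keywords)
            predicted = score > 0
            # Classic perceptron update: only move the weights on a mistake.
            error = (1.0 if is_spam else -1.0) if predicted != is_spam else 0.0
            for k in keywords:
                weights[k] += lr * error * features[k]
            bias += lr * error
    return weights, bias

def classify(message, weights, bias):
    score = bias + sum(w for k, w in weights.items() if k in message)
    return "spam" if score > 0 else "ok"

examples = [
    ("free money now", True),
    ("claim your prize", True),
    ("meeting at noon", False),
    ("lunch tomorrow?", False),
]
weights, bias = train_perceptron(examples, keywords=["free", "prize", "meeting", "lunch"])
```

    After a few passes over the examples, the learned weights push "free" and "prize" toward spam and "meeting" and "lunch" toward not-spam: behavior that came from the data, not from hand-written rules.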


    How does this help your Pumpkin Spice cravings?

    Farmers and Transportation companies have been early adopters of GPU accelerated machine learning.  Nearly every aspect of planting, harvest, and distribution of our food is a miracle of technology.

    • Seed Designers are using machine learning to predict preferred traits for new seed hybrids for different regions and soils
    • AI and Drones are utilized to monitor crop needs, and peak harvest times
    • Automated visual inspection of crops determines crop health
    • John Deere and Blue River have solutions to reduce pesticide and fertilizer use
    • GPS and Satellite enabled driverless tractors automatically harvest crops based on crop types and data
    • Trucking companies utilize machine learning to determine best routes and maximize transport load distribution
    • Self-driving trucks by Tesla, Volvo, Waymo, and Daimler are beginning to be delivered to trucking companies, since there is a shortage of truck drivers.  An HA customer is already testing these trucks on the road.
    • Maersk and other sea carriers utilize machine learning to route shipments and track shipping containers
    • Coffee shops are using ML for big data analytics, customer preference, and targeted advertising.

    In the last few years, we have advanced to a level of technology where the hardware to utilize such approaches is affordable to every business and individual.   Simultaneously, the software to make best use of this hardware is much more easily integrated into business workflows. 

    The advancement of the last decade has come from Nvidia's investment of billions of dollars to scale their GPUs from gaming-only cards to the de facto choice for computational processing.  Nvidia has created software and libraries that allow customers to integrate with and expand their existing environments.  While Virtual Desktops may be the first thing legacy IT organizations think about with Nvidia GPUs in the datacenter, that is just a small piece of the pie.

    Today, Nvidia has a catalog of 550 third-party applications that are GPU accelerated.  This does not include the tens of thousands of home-grown applications and systems in use, which may use Nvidia libraries such as cuBLAS to accelerate anything involving linear algebra, or one-off algorithms from applications such as R.

    The big winner is you.  Whether you are looking to use off-the-shelf analytical software or to develop your own algorithms via a team of Data Scientists, we have solutions for it.  From individual GPUs to the Nvidia DGX-1, PureStorage AIRI, or Cisco UCS C480 ML M5, we can help.  If you do not have your own Data Scientists, High Availability has partnered with some of the best in the country.  Contact your local H.A. Sales Rep for more information.

  • Everything You Need to Know About Palo Alto


    October 23rd, 2018

    As many organizations realize, the application and threat landscape, user behavior, and network infrastructure are changing.  The security that traditional port-based firewalls once provided is often no longer enough.  Users are accessing all types of applications from a range of device types these days.  Datacenter expansion, virtualization, mobility, and cloud initiatives are forcing us to rethink how to protect networks.

    Traditional thinking typically includes an attempt to lock down traffic through an increasing list of point technologies in addition to the firewall, which may hinder your business.  Other organizations allow all applications, which results in increased business and security risks.  The challenge is that a traditional port-based firewall, even with bolt-on application blocking, does not provide an alternative to either approach.  To balance allowing everything against denying everything, you need to safely enable applications by using essentials such as the application identity, who is using the application, and the type of content as key firewall security policy criteria.

    A solid starting strategy is to identify applications, not ports.  Classify traffic as soon as it hits the firewall to determine the application identity, irrespective of protocol, encryption, or evasive tactic.  Then use that identity as the basis for all security policies.

    Customers of Palo Alto can also link application usage to user identity, not IP address, regardless of location or device.  Employ user and group information from enterprise directories and other user stores to deploy consistent enablement policies for all your users.

    Another huge factor is the ability to protect against both known and unknown threats.  Preventing known vulnerability exploits, malware, spyware, and malicious URLs, while analyzing traffic for, and automatically delivering protection against, highly targeted and previously unknown malware, is essential to a viable, long-term firewall project.

    Many customers ask us how they can simplify policy management.  With Palo Alto you can safely enable applications and reduce administrative effort with easy-to-use graphical tools, a unified policy editor, templates, and device groups.  Safe application enablement policies can help you improve your security posture, regardless of the deployment location.  At the perimeter, you can reduce your threat footprint by blocking a wide range of unwanted applications and then inspecting the allowed applications for threats, both known and unknown.  In the datacenter, traditional or virtualized, application enablement ensures that only datacenter applications are in use by authorized users, protecting the content from threats and addressing the security challenges presented by virtual infrastructure.  Your enterprise branch offices and remote users are protected by the same set of enablement policies deployed at the headquarters location, ensuring policy consistency.

    Businesses can enable applications with Palo Alto Networks next-generation firewalls that help address business and security risks associated with a growing number of applications in your network.

    Deployment and Management

    Application enablement functionality is available on purpose-built hardware platforms or in a virtualized form factor.  When you deploy multiple Palo Alto Networks firewalls, in either hardware or virtual form factors, you can use Panorama, an optional centralized management offering, to gain visibility into traffic patterns, deploy policies, generate reports, and deliver content updates from a central location.

    Securing your network and growing your business begins with in-depth knowledge of the applications on your network: who the user is, regardless of platform or location, and what content, if any, the application is carrying.  With more complete knowledge of network activity, you can create more meaningful security policies based on the elements of application, user, and content that are relevant to your business.  The user's location, their platform, and where the policy is deployed (perimeter, traditional or virtualized datacenter, branch office, or remote user) make little or no difference to how the policy is created.  You can now safely enable any application, any user, and any content.

    Complete Knowledge Means Tighter Security Policies

    Security best practices dictate that more complete knowledge of what's on your network is beneficial to implementing tighter security policies.

    Enabling Applications and Reducing Risk

    Safe application enablement uses policy criteria that include application/application function, users and groups, and content as the means of determining the right option.  At the perimeter, including branch offices and mobile and remote users, policies focus on identifying all the traffic, then selectively allowing it based on user identity, and then scanning it for threats.

    Protecting Enabled Applications

    Safe application enablement means allowing access to certain applications, then applying specific policies to block known exploits, malware, and spyware, known or unknown; controlling file or data transfers; and controlling web surfing activity.  Common threat evasion tactics such as port-hopping and tunneling are addressed by executing threat prevention policies using the application and protocol context generated by the decoders in App-ID.  In contrast, UTM solutions take a silo-based approach to threat prevention, with each function (firewall, IPS, AV, URL filtering) scanning traffic without sharing any context, making them more susceptible to evasive behavior.

    Block Known Threats: IPS and Network Antivirus/Anti-spyware

    A uniform signature format and a stream-based scanning engine enable you to protect your network from a broad range of threats.  Intrusion prevention system (IPS) features block network and application-layer vulnerability exploits, buffer overflows, DoS attacks, and port scans.  Antivirus/anti-spyware protection blocks millions of malware variants, as well as any malware-generated command-and-control traffic, PDF viruses, and malware hidden within compressed files or web traffic (compressed HTTP/HTTPS).  Policy-based SSL decryption across any application on any port protects you against malware moving across SSL-encrypted applications.

    Block Unknown, Targeted Malware: WildFire

    Unknown or targeted malware is identified and analyzed by WildFire, which directly executes and observes unknown files in a cloud-based, virtualized sandbox environment.  WildFire monitors for more than 100 malicious behaviors, and the result is delivered immediately to the administrator in the form of an alert.

    Data filtering features also enable your administrators to implement policies that reduce the risks associated with unauthorized file and data transfers.  File transfers can be controlled by looking inside the file to determine whether the transfer action should be allowed.  Executable files, typically found in downloads, can be blocked, thereby protecting your network from unseen malware.  Data filtering features can also detect and control the flow of sensitive data patterns (credit card or social security numbers).
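    The sensitive-data pattern matching described above can be sketched with regular expressions.  A minimal illustration in Python (simplified patterns of my own, not Palo Alto's actual detection logic; real data filtering engines add validation such as Luhn checks for card numbers):

```python
import re

# Simplified illustrations of sensitive-data patterns.  Real data filtering
# engines use stricter validation and cover many more formats.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_sensitive_data(text):
    """Return (pattern_name, matched_text) pairs found in the text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

    A policy engine would then block or log a transfer whenever find_sensitive_data() returns any hits.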

    Ongoing Management and Analysis

    Security best practices say that your administrators should balance proactively managing the firewall, whether a single device or many hundreds, with being reactive: analyzing, investigating, and reporting on security incidents.

    Each Palo Alto Networks platform can be managed individually via a command line interface (CLI) or a full-featured browser-based interface.  For larger deployments, Panorama can be licensed and deployed as a centralized management solution that enables you to balance global, centralized control with local policy flexibility.  Role-based management is supported across all channels, allowing you to assign features and functions to specific individuals.  Predefined reports can be used as-is, customized, or grouped together as one report to suit specific requirements.  All reports can be exported to CSV or PDF format and can be executed and emailed on a scheduled basis.

    Real-time log filtering facilitates rapid forensic investigation into every session traversing your network. Log filter results can be exported to a CSV file or sent to a syslog server for offline archival or additional analysis.

    Palo Alto Networks offers a full line of purpose-built hardware and virtualized platforms that range from the PA-200, designed for remote offices, to the PA-5060, designed for high-speed datacenters.  All platforms are based on the same software engine and use dedicated processing for networking, security, threat prevention, and management to deliver predictable performance.  Please consider HA Inc. as your enterprise-level networking solution provider as you approach future projects or if you have interest in learning more about what Palo Alto has to offer!
