Cisco Unified Communications (UC) Server - Hardware Options

May 23rd, 2018

There are several options when it comes to deploying Cisco Unified Communications (UC) applications.  I have summarized these options below, with the pros and cons of each:

Business Edition (BE) 7000 (BE7K) or Business Edition (BE) 6000 (BE6K)

  • What is it?
    • The Cisco BE6K/BE7K is built on a virtualized UCS server that ships ready to use with a pre-installed virtualization hypervisor and application installation files. The BE6K/BE7K is a UCS Tested Reference Configuration (TRC) in that the UC applications have been explicitly tested on its specific UCS configuration.
  • Pros
    • Easy to order: a single SKU that includes everything, including the VMware license
    • All OVA templates and ISO images come preloaded on the server
  • Cons
    • No flexibility in choosing hardware and software

UC on Cisco Tested Reference Configuration (TRC) servers

  • What is it?
    • UCS TRCs are specific hardware configurations of UCS server components. These components include CPU, memory, hard disks (in the case of local storage), RAID controllers, and power supplies
  • Pros
    • The ordering process involves more steps than the BE7K, but it is simpler than a spec-based solution: check the TRC specification against the actual hardware, including CPU, memory, hard drives, VMware, etc.
    • Provides more flexibility than the BE6K and BE7K in choosing hardware and software
  • Cons
    • There are still relatively few hardware and software options to choose from
    • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
    • Requires ordering a VMware Foundation or VMware Standard license

UC on Spec Based servers

  • What is it?
    • Specifications-based UCS hardware configurations are not explicitly validated with UC applications. Therefore, no prediction or assurance of UC application virtual machine performance is made when the applications are installed on UCS spec-based hardware. In those cases, Cisco provides guidance only, and ensuring that the pre-sales hardware design delivers the performance required by the UC applications is the responsibility of the customer.
  • Pros
    • Can leverage existing compute infrastructure, including 3rd party hardware
    • Provides the most flexibility in terms of hardware and software options
  • Cons
    • Ordering equipment for a spec-based deployment requires more upfront planning, validation, and potentially pre-testing
    • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
    • Requires VMware vCenter

UC on Cisco HyperFlex

  • What is it?
    • UC on Cisco HyperFlex is available as a TRC
  • Pros
    • Same as TRC servers
    • Provides a more robust and scalable solution
  • Cons
    • This could be an expensive solution unless it is part of a larger HyperFlex deployment
    • The client or partner still has to manually obtain and upload the OVA template and ISO image for each UC application
    • Requires VMware vCenter

Who should be looking for UC on HCI (HyperFlex)?

  1. Server team with incumbent 3rd-party compute looking for alternative storage
  2. Voice/video team seeking HCI alternative to BE6K or BE7K appliance for UC
  3. Non-BE6K/BE7K UCS shops where one team is in charge of everything and wants HCI instead of other approaches
  4. Non-BE6K/BE7K UCS shops with separation of duties, where the server team owns VMware/compute/storage and is looking for HCI

High Level Solution

The HyperFlex bundle comes with four (4) HX240c nodes and a pair of Cisco 6248 Fabric Interconnects.  The system is managed by HyperFlex (HX) software running on the Cisco 6248s.

The following applications are supported by the TRC (Tested Reference Configuration) on HyperFlex:

  • (CUCM) Unified Communications Manager
  • (IMP) Unified Communications Manager – IM & Presence
  • Expressway C & Expressway E
  • (CER) Emergency Responder
  • (PCP) Prime Collaboration Provisioning
  • (PCA) Prime Collaboration Assurance
  • (PCD) Prime Collaboration Deployment
  • (PLM) Prime License Manager (standalone)
  • (CUC) Unity Connection
  • (UCCX) Unified Contact Center Express
  • (TMS) Telepresence Management Suite

Sample Design

Assumptions

  • Minimum system using HX240c M4SX TRC#1, HX 1.8.1.
  • 4x HX nodes, each with VMware vSphere ESXi 6.0
  • 2x 6200 FI switches
  • VMware vCenter 6.0 for management

SAN/NAS Best Practices

General Guidelines

  • Adapters for storage access must follow supported hardware rules
  • Cisco UC apps use a 4-kilobyte block size to determine bandwidth needs.
  • Design your deployment in accordance with the UCS High Availability guidelines  
  • 10GbE networks for NFS, FCoE, or iSCSI storage access should be configured to use Cisco Platinum-class QoS for the storage traffic.
  • Ethernet ports for LAN access and Ethernet ports for storage access may be separate or shared. Separate ports may be desired for redundancy purposes. It is the customer's responsibility to ensure that the external LAN and storage access networks meet UC application latency, performance, and capacity requirements.
  • In the absence of a UCS 6100/6200, normal QoS (L3 and L2 marking) can be used starting from the first upstream switch to the storage array.
  • With UCS 6100/6200:
    • FC or FCoE: no additional requirements; this is handled automatically by the Fabric Interconnect switch.
    • iSCSI or NFS: follow these best practices:
      • Use an L2 CoS between the chassis and the upstream switch.
      • For the storage traffic, a Platinum class of QoS (CoS = 5, no drop, the Fibre Channel equivalent) is recommended.
      • L3 DSCP is optional between the chassis and the first upstream switch.
      • From the first upstream switch to the storage array, use normal QoS (L3 and L2 marking). Note that iSCSI or NFS traffic is typically assigned a separate VLAN.
      • Ensure that the traffic is prioritized to provide the right IOPS. For a configuration example, see the FlexPod Secure Multi-Tenant (SMT) documentation (http://www.imaginevirtuallyanything.com/us/).
  • The storage array vendor may have additional best practices as well.
  • If disk oversubscription or storage thin provisioning is used, note that UC apps are designed to use 100% of their allocated vDisk, either for UC features (such as the Unity Connection message store or Contact Center reporting databases) or for critical operations (such as spikes during upgrades, backups, or statistics writes). While thin provisioning does not introduce a performance penalty, not having physical disk space available when the app needs it can have the following harmful effects (see the sketch after this list):
    • Degraded UC app performance, crashes of the UC app, and/or corruption of the vDisk contents
    • Lockup of all UC VMs on the same LUN in a SAN
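To make the thin-provisioning point above concrete, here is a minimal sketch in Python (all names and sizes are hypothetical, not taken from any Cisco sizing tool) that checks whether the physical capacity behind a datastore could still cover every UC vDisk if each one filled to 100% of its allocation:

# Hypothetical sketch: confirm that physical capacity covers every UC vDisk
# at full size, since UC apps are designed to eventually consume 100% of
# their allocated vDisk (message stores, reporting databases, upgrade and
# backup spikes). All sizes below are illustrative only.

def thin_provisioning_headroom(physical_capacity_gb, vdisk_allocations_gb):
    """Spare physical capacity (GB) if every vDisk fills completely."""
    return physical_capacity_gb - sum(vdisk_allocations_gb)

# Example: a 2 TB LUN backing four UC virtual machines
allocations_gb = [110, 80, 200, 200]      # per-VM vDisk allocations in GB
spare_gb = thin_provisioning_headroom(2048, allocations_gb)

if spare_gb < 0:
    print(f"Oversubscribed by {-spare_gb} GB: risk of app crashes or vDisk corruption")
else:
    print(f"{spare_gb} GB of physical headroom remains")

If the result is negative, reduce the oversubscription or grow the physical capacity before the UC apps reach their full allocations.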


Link Provisioning and High Availability

Consider the following example to determine the number of physical Fibre Channel (FC) or 10 Gigabit Ethernet links required between your storage array (such as the EMC CLARiiON CX4 series or the NetApp FAS 3000 series) and your SAN switch (for example, a Nexus or MDS series SAN switch), and between your SAN switch and the UCS Fabric Interconnect switch. This example is presented to give a general idea of the design considerations involved; contact your storage vendor to determine the exact requirements.

Assume that the storage array has a total capacity of 28,000 input/output operations per second (IOPS). Enterprise-grade SAN storage arrays have at least two service processors (SPs) or controllers for redundancy and load balancing, which means 14,000 IOPS per controller or service processor. With a capacity of 28,000 IOPS, and assuming a 4 KB block size, we can calculate the throughput per storage array controller as follows:

  • 14,000 I/O per second * (4000 Byte block size * 8) bits = 448,000,000 bits per second
  • 448,000,000/1024 = 437,500 Kbits per second
  • 437,500/1024 = ~428 Mbits per second

Allowing for additional overhead, one controller can support a throughput rate of roughly 600 Mbps. Based on this calculation, a single 4 Gbps FC interface is enough to handle the entire capacity of one storage array; Cisco nevertheless recommends putting four FC interfaces between the storage array and the storage switch to provide high availability.
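As a sanity check on the arithmetic, the short Python sketch below simply restates the example's assumptions (28,000 IOPS split across two controllers, a 4 KB block size, the rounded-up 600 Mbps planning figure, and a 4 Gbps FC link); the figures are the example's, not measured values:

# Re-run of the sizing example above; all inputs are the example's assumptions.

def controller_throughput_mbps(iops, block_bytes=4000):
    """Raw throughput per controller in Mbit/s for a given IOPS budget."""
    return iops * block_bytes * 8 / 1024 / 1024

per_controller_iops = 28000 // 2        # two service processors / controllers
raw_mbps = controller_throughput_mbps(per_controller_iops)
planning_mbps = 600                     # rounded up to allow for overhead
fc_link_mbps = 4 * 1024                 # one 4 Gbps FC interface

print(f"~{raw_mbps:.0f} Mbps raw per controller")              # roughly 427-428 Mbps, as above
print(f"Single 4 Gbps link sufficient: {planning_mbps < fc_link_mbps}")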

Note: Cisco provides storage networking and switching products that are based on industry standards and that work with storage array providers such as EMC, NetApp, and so forth. Virtualized Unified Communications is supported on any storage access and storage array products that are supported by Cisco UCS and VMware. For more details on storage networking, see http://www.cisco.com/en/US/netsol/ns747/networking_solutions_sub_program_home.html.
