Cisco Nexus Windows NLB Multicast

NLB

Windows Network Load Balancing (NLB) is a popular (free!) solution for quickly setting up load balancing. You must choose either unicast or multicast operational mode. Unicast – each NLB cluster node replaces its real (burned-in) MAC address with a new one generated by the NLB software, and every node in the cluster uses the same virtual MAC. Multicast – each node keeps its real MAC address, and the cluster's virtual IP (VIP) resolves to a shared multicast MAC (03bf.xxxx.xxxx), which typically requires static ARP and MAC entries on the upstream switch.
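As an aside, the cluster MAC in both modes is derived from the cluster VIP itself: a 02-bf prefix (unicast) or 03-bf prefix (multicast) followed by the VIP's four octets. A small sketch of that derivation (the VIP below is a hypothetical example):

```python
def nlb_cluster_mac(vip: str, mode: str = "multicast") -> str:
    """Derive the Windows NLB cluster MAC from the cluster VIP.

    Unicast mode uses the 02-bf prefix, multicast mode 03-bf; the
    remaining four octets are the VIP's own octets.
    """
    prefix = {"unicast": "02bf", "multicast": "03bf"}[mode]
    octets = [int(o) for o in vip.split(".")]
    raw = prefix + "".join(f"{o:02x}" for o in octets)
    # Render in Cisco dotted-hex notation: xxxx.xxxx.xxxx
    return ".".join(raw[i:i + 4] for i in range(0, 12, 4))

print(nlb_cluster_mac("10.1.1.100"))  # -> 03bf.0a01.0164
```

This is why the static entries on the switch all start with 03bf in multicast mode: the rest of the MAC is just the VIP in hex.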

  • Symptom: Microsoft NLB traffic in multicast mode is punted to the CPU and may be subject to CoPP. Conditions: Nexus 3000 running Microsoft NLB in multicast mode, with the following configured on the Nexus: a static MAC entry (`mac address-table static 03bf.xxxx.xxxx vlan ... interface ...`) and, under the SVI (`interface Vlan...`), a static ARP entry (`ip arp ... 03bf.xxxx.xxxx`).
  • Hello Plamena, I've asked Cisco Partner Help Line this question before, and below is their answer: 'ACI mode does NOT support MS Windows NLB, and it is not on the roadmap. Sorry that I can't give you better news. Support for this feature was initially planned and in development, but the development was later cancelled because of development issues.'
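For reference, the static entries mentioned above follow this general NX-OS shape (a sketch only; the VLAN, interface, VIP, and MAC below are placeholder values, and real ones depend on your deployment):

```
! Static ARP entry binding the NLB VIP to the cluster multicast MAC
interface Vlan200
  ip arp 10.1.1.100 03bf.0a01.0164

! Static MAC entry pinning the cluster MAC to the server-facing port
mac address-table static 03bf.0a01.0164 vlan 200 interface Ethernet1/1
```

Without both entries, the switch has no way to learn a multicast source MAC and would otherwise flood or drop traffic destined to the VIP.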


Today we added the standard static ARP entry to our Cisco switch to support four MS NLB virtual machines on ESXi hosts. We have done this several times before without a problem, but not today!

There are some differences from our existing configuration, so I wonder if you can help, reddit?

Multicast
  • Firstly, this VMware environment is not ours and I don't have access to it (long story). However, I am not aware of any configuration required at the VMware level either, with a standard or distributed vSwitch or the Nexus 1000V (at least we didn't need to do anything at this level on our production network, and the VMware/Cisco docs don't mention any required changes for multicast at this level)

  • This environment is on a different VLAN from those previously configured, although again the online docs I have found do not mention any additional changes other than adding the static ARP entry

  • This environment physically connects to a Layer 2 Cisco switch (a 4948), which is connected by fiber to our Nexus 7000s (where the static ARP entry is added). This differs from our production network, where we use HP 6120XGs as the Layer 2 switches (no changes have been made on them to support multicast); they also connect directly to the Nexus 7K via fiber

Normally when you set up an NLB address, even without the static ARP entry, I thought you could still connect to the multicast address from within the same VLAN. On this occasion, however, I observed that you can only connect to it from within the VMware environment (i.e. physical servers on that VLAN cannot ping or connect to any ports on the load-balanced address; only other VMs can). Admittedly, we never tested this before, so it may be normal. Access is obviously not available from anywhere else.

My network colleague has no idea what the problem could be, but did notice we had rendezvous points configured on the VLANs that are working.

Can you help? I've tried Google, of course, without luck, and my expertise is not in networking!

Does the rendezvous-point theory seem sound? Have any of you experienced this before, and do you have any ideas?

Thanks for your help

Update: it looks like we have PIM sparse mode configured on the VLAN, so the rendezvous point is required! Will update again once it is configured and working.
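For anyone following along, a minimal NX-OS sketch of the sparse-mode plus rendezvous-point setup described in the update (the RP address, group range, and VLAN number are hypothetical placeholders; whether PIM is actually what gates NLB reachability here is the poster's theory, not a confirmed diagnosis):

```
feature pim
ip pim rp-address 10.255.255.1 group-list 224.0.0.0/4

interface Vlan200
  ip pim sparse-mode
```

In sparse mode, receivers and sources are stitched together via the RP, so a VLAN with sparse mode enabled but no reachable RP can behave exactly as described: multicast works only between hosts that see the traffic locally.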

Thanks


Hello,

I'm dealing with the following situation: I have two servers running Windows Server 2012 R2 and one server running Windows Server 2008 R2. I need to build a Microsoft Network Load Balancing cluster using the three servers. The servers are connected to a Cisco Nexus switch and use both Intel and Broadcom NIC cards. The servers are in the same subnet, and the NLB VIP of the newly formed cluster is also in that subnet.

As long as Wireshark is running on the servers (which sets the NIC interfaces to promiscuous mode), the cluster works: members detect and see each other and pass the primary/master role between them when configured via the Network Load Balancing UI. How can this be possible?

Another strange thing: if I take one member offline from the cluster and run a Wireshark capture, I can see the Microsoft NLB heartbeat broadcast messages being received from that host. Nevertheless, as soon as I add the host back into the cluster and restart Wireshark, the NLB cluster works, but the capture no longer shows any NLB heartbeat messages. Moreover, each time I take any host offline from the NLB cluster or add it back, Wireshark stops the ongoing capture and displays a notification stating that the capture cannot continue because the NIC formerly in use is no longer available.

I'm kind of puzzled ....


Thanks for any help.