Category Archives: Virtualization

Yes! Intel NUC 8i5BEH accepts 64 GB RAM!

So my 4-year-old Intel NUC 5I5MYHE, which I had been using as an ESXi server, finally decided to give up the ghost. While on the lookout for a replacement, I came across William Lam’s excellent post at https://www.virtuallyghetto.com/2019/03/64gb-memory-on-the-intel-nucs.html, where he tested 2x 32GB SODIMMs on his Hades Canyon NUC (supported) and found that it was also possible to run 64GB of RAM on his older 6th Gen NUC (not technically supported). He speculated that later generations of NUCs would be capable of running 64GB of RAM too.

After a bit more research, I wound up choosing the NUC 8i5BEH because it had 4 physical cores, and with hyper-threading could present up to 8 vCPUs on ESXi. God knows I’ve been needing at least 6 vCPUs for the longest time to run some lab VMs. The only unknown was whether the NUC 8i5 would support the Samsung DDR4 32GB DIMMs (P/N M471A4G43MB1), and whether it could finally support 2x 32GB for 64GB RAM. More RAM is always a good thing, right?

I bought the NUC locally in Singapore, but had to get the RAM module from Amazon US. It simply wasn’t available anywhere else here. Finally, when everything arrived, it was time to unbox and start assembling.

Fresh from the store
Unboxed NUC
Post install: The single 32GB DIMM is installed together with an mSATA SSD, which was recovered from the late NUC5I5.

Assembly done, I tried booting up the NUC and immediately ran into issues. I didn’t manage to capture a screenshot, but booting ESXi 6.7u1 would always fail when it was loading some drivers. With nothing left to lose, I thought a BIOS upgrade might help. I downloaded version 0066 for NUC8i5, and proceeded to run the upgrade.

Flashing the BIOS from 0051 to 0066

After that, ESXi booted up without issues, and went straight to work with 32GB RAM installed. No fuss!

NUC8i5 running ESXi with 32GB of RAM

In any case, the first gamble of using the 32GB DIMM paid off. I immediately ordered another 32GB DIMM off Amazon, which took an agonizing 9 days to arrive. That was partly my fault; I wasn’t around for the first few delivery attempts.

Ta-da! NUC shown with second 32GB SODIMM before installation
Both 32GB DDR4 SODIMMs installed

So, the moment of truth: Does the Intel NUC8i5BEH support 64GB of RAM? Happily, the answer was “Yes”.

That’s 64 JeeBees of goodness right there, folks.

I’ve been running this for a few days with multiple VMs powered on, and this baby has been rock solid so far. Definitely a very viable home lab solution!

vSphere Metro Storage Cluster Networking: Part 3

This post has been much delayed for a number of reasons: some feasible solutions went End of Sale, while others, based on field experience, were rarely seen or deployed in practice. In the meantime, newer solutions that can address some of the issues we discussed earlier have become available, so here is Part 3.

So back in Part 1, I blogged about considerations for the L2 DCI link for a vSphere Metro Cluster. In Part 2, I covered the potential routing pitfalls of stretching L2 networks across sites.

In Part 3, I’m going to discuss the methods that can be used to work around some of the issues we talked about in Part 2. Just to recap, the issues with stretched networks were:

  • Asymmetrical traffic flow across DC sites
  • Inability of network services (e.g. firewalls) to handle asymmetric traffic flow
  • Lack of VM site-awareness for optimized routing
  • Inefficient use of the DCI

VMware NSX Distributed Firewall with Asymmetrical Traffic Flows

In Part 2, I mentioned that it is possible for a VM to move between sites, with the result that traffic to the VM (ingress traffic) could come in via, say, DC1, while traffic from the VM (egress traffic) exits via DC2. Such a situation causes issues for traditional firewalls, since they need to see traffic flows in both directions in order to allow or deny traffic correctly.

vMSC Invalid Firewall State

Perimeter Firewalls do not see consistent flow state

In the diagram above, the firewall at DC1 sees the “in” state of the flow from both User 1 and User 2 to VM1, which happens to have been vMotioned to DC2. Assuming we’ve tweaked the setup for local egress, the VM will send traffic out via the DC2 router. As a consequence, the firewall at DC2 sees only the “out” state of the flow. This means that firewalls at both sites would observe any or all of the following issues and start dropping traffic because of state inconsistencies (see the sketch after this list):

  • Incomplete TCP handshake / termination
  • Inconsistent sequence numbers
  • Unidirectional traffic flow
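To make the problem concrete, here’s a deliberately simplified Python sketch of connection tracking at the two perimeter firewalls. It’s a toy model (not how any real firewall is built) and the names are made up, but it shows why the DC2 firewall drops the return traffic when it has never seen the inbound half of the flow:

```python
# Toy model of stateful connection tracking at two perimeter firewalls.
# Illustrative sketch only, not any real firewall's implementation.

class StatefulFirewall:
    def __init__(self, name):
        self.name = name
        self.conn_table = set()   # flows for which we've seen the initiating packet

    def inbound(self, src, dst):
        """Client -> VM direction. Record state for a new, permitted flow."""
        self.conn_table.add((src, dst))
        print(f"{self.name}: inbound {src}->{dst} permitted, state recorded")

    def outbound(self, src, dst):
        """VM -> client direction. Only allowed if the matching inbound packet was seen."""
        if (dst, src) in self.conn_table:
            print(f"{self.name}: outbound {src}->{dst} matches known flow, permitted")
        else:
            print(f"{self.name}: outbound {src}->{dst} has NO matching state, dropped")

fw_dc1 = StatefulFirewall("FW-DC1")
fw_dc2 = StatefulFirewall("FW-DC2")

# User 1 reaches VM1 via DC1 (ingress), so only FW-DC1 records the flow...
fw_dc1.inbound("user1", "vm1")

# ...but VM1 has vMotioned to DC2 and, with local egress, replies via FW-DC2,
# which has never seen the inbound half of the conversation.
fw_dc2.outbound("vm1", "user1")   # dropped: asymmetric flow
```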

With NSX for vSphere, it’s actually possible to deploy a stateful firewall at the VM level using the Distributed Firewall (DFW) feature. The NSX DFW works by defining security policy centrally in NSX and then pushing it down to the corresponding VMs for enforcement at the micro level. In effect, this brings the firewall closer to the VM itself by enforcing policy at the vNIC level.

NSX DFW sees flow state

NSX Distributed Firewall sees full flow state

Looking at the diagram above, the network ingress and egress paths of traffic to the VM are still inconsistent. However, the firewall enforcement point is now at the vNIC level, which is tied to the VM. At the vNIC, the DFW always observes all traffic entering and exiting the VM, so it has full information on the VM’s network flows and can apply stateful firewall policies correctly, regardless of where the VM is or moves to, or how traffic arrives at and departs from it. By moving the firewall to the VM’s vNIC, we’ve effectively resolved the problem of stateful perimeter firewalls failing because they never see the full traffic flow.
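Since the DFW policy lives centrally on the NSX Manager, it can also be read back programmatically. Below is a rough Python sketch against the NSX for vSphere REST API; the endpoint path and XML element names are from my recollection of the NSX-v 6.x API guide, and the manager address and credentials are placeholders, so treat all of them as assumptions to verify against your own environment:

```python
# Rough sketch: list DFW sections and rules from NSX Manager (NSX for vSphere).
# Endpoint and XML element names are assumptions from memory of the NSX-v 6.x
# API guide; verify against your NSX version before relying on this.
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
AUTH = ("admin", "password")                # placeholder credentials

resp = requests.get(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Walk the returned XML and print each section's rules and actions.
root = ET.fromstring(resp.text)
for section in root.iter("section"):
    print("Section:", section.get("name"))
    for rule in section.iter("rule"):
        name = rule.findtext("name", default="(unnamed)")
        action = rule.findtext("action", default="?")
        print(f"  rule: {name} -> {action}")
```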

Other Methods

It bears mentioning that there are (or were) other methods of addressing some of the other network considerations that come with stretching networks. When writing Parts 1 and 2, I considered covering these methods in more depth; however, they turned out not to be quite feasible in the real world. Here is a summary of what might have been.

Locator ID Separation Protocol (LISP): As you may have realized, none of the solutions so far has VM site awareness, so there is no way to optimize ingress routing to VMs according to which site they are located on (which would also reduce DCI traffic). LISP was supposed to address this by inserting granular routes to VMs depending on where they resided. The biggest challenge with using LISP to optimize ingress routing to the VM is that it requires ISPs to support LISP within their infrastructure, and it is quite rare to come across such ISPs in the real world. LISP also plays a lot with the insertion of host routes, which is its own set of network black magic.

DNS Optimization with Cisco ACE Load Balancers: Cisco also developed an orchestration solution utilizing its global and local load balancers to dynamically update DNS A records to point to wherever a VM was vMotioned to. This would enable new connections to reach the VM directly at its new location, thus ensuring that new connections do not have to traverse the DCI. It’s really quite a creative hack, though unfortunately the Cisco ACE product line was EoS’ed not long after the solution was published.

vSphere Distributed Virtual Switch: Packet analysis using ERSPAN

Packet analysis is invaluable for troubleshooting network issues and for network monitoring. While packet analysis used to be confined to the domain of physical networks, that is no longer the case.

The vSphere Distributed Virtual Switch can now capture specific virtual network traffic and transport it via ERSPAN to packet monitoring consoles. Yes, that’s right: using the Distributed Virtual Switch, you can monitor network traffic in the virtual realm even if the traffic never actually hits the physical wire.

I haven’t seen much material covering this so far, so I thought I’d show how it works. For this blog post, I used the following:

  • Distributed Virtual Switch (vSphere Enterprise Plus)
  • Wireshark installed in a monitoring console (my personal laptop)
  • A VM which we want to monitor (a Windows 7 jump box VM)

Let’s start with setting up Wireshark for packet capturing on the monitoring console. Opening Wireshark, go to Capture -> Interfaces.

That should open up a list of interfaces we can capture from. I’d like to capture using the “Local Area Connection”, but first it’s a good idea to find out the IP address of that interface, since we’ll need to set it as the receiver for the ERSPAN-captured traffic. Click on “Options”.

Look again for the “Local Area Connection” and note the IP address associated with the chosen receiving interface. In this case, it’s 10.2.1.110. Tick the checkbox for the interface, and then click on “Start”.

Just like that, Wireshark will start dumping out all the traffic it sees on the interface. In this case, we only want to monitor traffic captured via ERSPAN on the Distributed Virtual Switch. Since ERSPAN encapsulates traffic in GRE, that’s what we’ll filter for. Type “gre” into the filter field and click “Apply”, which should immediately filter out all the “noise” packets.
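As a side note, if you prefer capturing from a script rather than the Wireshark GUI, the same filtering can be done with scapy. This is a rough sketch under a few assumptions of mine: the interface name is a placeholder, it needs admin rights, and depending on the scapy version the ERSPAN header inside the GRE payload may not be fully decoded.

```python
# Rough scapy equivalent of the "gre" display filter: capture only
# GRE-encapsulated (ERSPAN) traffic arriving at the monitoring console.
from scapy.all import sniff, GRE, IP

MONITOR_IFACE = "Local Area Connection"   # placeholder: the interface with IP 10.2.1.110

def show_mirrored(pkt):
    # Anything without a GRE header is local "noise" and gets ignored.
    if GRE in pkt and IP in pkt:
        outer = pkt[IP]
        print(f"GRE {outer.src} -> {outer.dst}: {pkt.summary()}")

# BPF capture filter "ip proto 47" (47 = GRE) keeps the capture itself lightweight.
sniff(iface=MONITOR_IFACE, filter="ip proto 47", prn=show_mirrored, store=False)
```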

Now that Wireshark is set up correctly, let’s turn our attention to setting up ERSPAN. Select the Distributed Virtual Switch, which in my example is “dvSwitch”. Go to Settings -> Port Mirroring and click on “New”.

In the pop-up dialog, select “Encapsulated Remote Mirroring (L3) Source”. This enables us to send the dumped traffic to a remote monitoring console over IP. Click “Next”.

Give the session a name, and change its status to “Enabled” to activate ERSPAN port mirroring. Click “Next”.

Add a network traffic source by clicking on the icon below. A source is the object which we’d like to capture network traffic from.

A list of ports will pop up. You can choose one or more VMs, or one or more VMkernel NICs, to monitor and capture from. In this case, let’s choose the Windows 7 desktop VM. Click “OK”.

You can see that the VM we’ve selected is now shown as one of the sources of traffic for packet capture. Click “Next”.

Now, we’ll need to tell the Distributed Virtual Switch where to send the captured traffic. Click on the “+” sign to add a destination.

This is where we type in the IP address of the monitoring console which has Wireshark enabled. If you remember, we noted back when we were setting up Wireshark that the IP address for the receiving interface was 10.2.1.110, so that’s what we type in and click “OK”.

That done, we’ll review the settings one last time, and make sure everything is in order. Click “Finish” when everything is good to go.
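Incidentally, the same session can be created programmatically instead of through the Web Client. Here’s a rough pyVmomi sketch for an “Encapsulated Remote Mirroring (L3) Source” session; the vCenter address, credentials and the monitored VM’s dvPort key are placeholders of mine, and the property names follow my reading of the vSphere API (vim.dvs.VmwareDistributedVirtualSwitch.VspanSession), so double-check them against your vSphere version.

```python
# Rough pyVmomi sketch: create an "Encapsulated Remote Mirroring (L3) Source"
# (ERSPAN) port mirror session on a Distributed Virtual Switch.
# Placeholder values throughout; verify property names for your vSphere version.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Find the Distributed Virtual Switch by name ("dvSwitch" in this example).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch")

session = vim.dvs.VmwareDistributedVirtualSwitch.VspanSession(
    name="ERSPAN-to-laptop",
    enabled=True,
    sessionType="encapsulatedRemoteMirrorSource",
    # Mirror both directions of the monitored VM's dvPort ("123" is a placeholder key).
    sourcePortTransmitted=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(portKey=["123"]),
    sourcePortReceived=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(portKey=["123"]),
    # Send the encapsulated copies to the Wireshark console noted earlier.
    destinationPort=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(ipAddress=["10.2.1.110"]),
)

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    vspanConfigSpec=[vim.dvs.VmwareDistributedVirtualSwitch.VspanConfigSpec(
        vspanSession=session, operation="add")],
)
task = dvs.ReconfigureDvs_Task(spec)
print("Submitted port mirror reconfigure task:", task)

Disconnect(si)
```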

Jumping back to the Wireshark console, we should immediately see traffic being dumped over the network by the Distributed Virtual Switch and showing up on screen. On the Windows 7 desktop, I used a browser to access http://kacangisnuts.com/, and the result is quite a bit of captured HTTP traffic. Just for fun, I selected an HTTP GET request and followed its TCP stream to see what was being communicated.

The captured traffic stream is pieced together and shown below; it’s the HTTP conversation between the Windows 7 desktop’s browser and my blog server.

So there you have it: a quick blog entry I whipped up to show how to use ERSPAN for packet capture and analysis. It shows how, with the right tools, you can still use good ol’ conventional network analysis and troubleshooting methodologies to monitor and solve problems for VMs, just like you would for systems in the physical network world.