SECONDARY NIC WITH 2016 NUC SKULL CANYON
When I first discovered the 2016 NUC Skull Canyon, my primary hesitation was that this NUC has only one Gigabit Ethernet Network Interface Card (NIC). In my ESXi lab, I have been using several Late 2012 Mac Minis with two Gigabit Ethernet NICs each: the primary (on-board) NIC for management and VM traffic and the secondary (Thunderbolt-based) NIC for iSCSI traffic to the SAN. Unfortunately, the NUC Skull Canyon has only one on-board NIC, which in my opinion is a significant oversight on Intel’s part. Granted, the Late 2012 Mac Mini also comes with only one on-board NIC, so it has the same “design flaw” as the 2016 Skull Canyon NUC.
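For reference, here is roughly what that split looks like from the ESXi command line once the secondary NIC is visible to the host. This is only a minimal sketch of my layout: the names vmnic1, vSwitch1, the iSCSI port group, vmk1, the IP addressing, and the vmhba33 software iSCSI adapter are placeholders and will differ on your host.

# Assumes the secondary NIC shows up as vmnic1 (check with: esxcli network nic list)
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI
esxcli network ip interface add -i vmk1 -p iSCSI
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.2.10 -N 255.255.255.0 -t static
# Enable the software iSCSI initiator and bind the new VMkernel port to it
esxcli iscsi software set -e true
esxcli iscsi networkportal add -A vmhba33 -n vmk1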
Fortunately, the 2016 NUC Skull Canyon has an on-board Thunderbolt 3 controller, so I hoped that I could add a Thunderbolt-based Gigabit Ethernet NIC to the NUC and that such a NIC would be supported in VMware ESXi.
With my Mac Minis used as VMware ESXi hosts, I was able to use the Apple Thunderbolt to Ethernet (Gigabit Ethernet) adapter as a secondary NIC for iSCSI traffic to the SAN thanks to William Lam’s efforts. For years, William compiled custom images of ESXi 5.0 and ESXi 5.5 for the Mac Mini that supported the Apple Thunderbolt to Ethernet adapter as a secondary NIC. It was from one of William’s posts that I learned that ESXi 6.0 could run natively on the 2012 Mac Mini without a custom ESXi image.
Because ESXi 6.0 natively supports the Apple Thunderbolt to Ethernet adapter, my hope was that this adapter would also work with the 2016 NUC Skull Canyon. The only problem that remained to be solved was the lack of a Thunderbolt/Thunderbolt 2 port on the 2016 NUC Skull Canyon, which instead provides a combined USB 3.1/Thunderbolt 3 connection through a single USB-C port.
After additional research, I discovered that there were two Thunderbolt 3 to Thunderbolt adapters (made by StarTech and Kanex) about to be released in May/June 2016. I placed an order for a 2016 NUC Skull Canyon and a StarTech Thunderbolt 3 to Thunderbolt adapter and received both by late May 2016.
In the future, a direct Thunderbolt 3 to Ethernet adapter will most likely be released by Apple as well as other manufacturers, such as StarTech and Kanex. Hopefully, at least one of those adapters will be based on a chipset supported natively in ESXi (or perhaps someone will compile an ESXi driver for that chipset), so there will no longer be a need to daisy chain two adapters in order to gain a secondary NIC with the 2016 Skull Canyon NUC.
APPLE THUNDERBOLT TO ETHERNET ADAPTER WITH 2016 NUC SKULL CANYON
In order to use the Apple Thunderbolt to Ethernet Adapter with the 2016 NUC Skull Canyon, another adapter is required: a Thunderbolt 3 to Thunderbolt adapter. At the time of this writing (June 2016), at least two companies manufactured such adapters: StarTech and Kanex. The following was observed when connecting the Apple Thunderbolt to Ethernet adapter to the 2016 Skull Canyon NUC via the StarTech Thunderbolt 3 to Thunderbolt adapter.
1. Cold Boot (power disconnected from the NUC)
When power is applied to the NUC, the NUC automatically powers on but fails to discover the connected Thunderbolt adapter. To get the NUC to detect the connected Thunderbolt to Gigabit Ethernet adapter from a cold boot, the NUC must be immediately turned off with the power button. After a 5-second wait (make it 10 seconds to be safe), the NUC must be turned back on using the power button.
The NUC Skull Canyon will now detect the Apple Thunderbolt to Ethernet Adapter, and a message to that effect will be displayed in the upper left corner of the NUC’s POST screen. This behavior seems to be caused by the NUC failing to supply power through the Thunderbolt port after a cold boot. However, once the power button is pressed to turn the NUC off, the NUC appears to apply power to the Thunderbolt adapter within a few seconds, which is evident from the link LED on the switch port turning on at exactly that point.
Note: I have raised this issue with both Intel and StarTech. Intel replied that this was something StarTech must fix in their Thunderbolt 3 to Thunderbolt adapter. StarTech replied that their Thunderbolt 3 to Thunderbolt adapter had passed Intel certification, but that they had not specifically tested it with the 2016 Skull Canyon NUC.
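Once ESXi boots after this power-cycle procedure, it is easy to confirm from the ESXi shell whether the adapter came up. To my knowledge, the Apple Thunderbolt to Ethernet adapter is based on a Broadcom chipset driven by the tg3 driver, so it should appear as an additional vmnic; the exact vmnic number will vary, so treat vmnic1 as an assumption.

# The Apple adapter should appear as an extra vmnic (driver tg3)
esxcli network nic list
# Optionally, look for the Broadcom device on the PCI bus
lspci | grep -i broadcom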
2. Hot Unplug
If the Apple Thunderbolt to Ethernet adapter is unplugged while the 2016 NUC Skull Canyon is powered up, plugging it back in does not result in the adapter being detected again. For the adapter to be detected again, a warm boot must be performed. In ESXi, this can be achieved by rebooting the NUC. Alternatively, you can power off the NUC with the power button and power it back on.
3. Hot Plug
If the Apple Thunderbolt to Gigabit Ethernet adapter is plugged in while the NUC is running, the Thunderbolt adapter will not be detected. For the Thunderbolt adapter to be detected, a warm boot must be performed. In ESXi, this can be achieved by rebooting the NUC (a command-line sketch follows below). Alternatively, you can power off the NUC with the power button and power it back on.
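For completeness, the warm reboot can also be triggered from the ESXi shell instead of the vSphere client. A minimal sketch, keeping in mind that esxcli expects the host to be in maintenance mode, so running VMs must be shut down or migrated first:

# Enter maintenance mode, then reboot with a 10-second delay (the reason string is required)
esxcli system maintenanceMode set -e true
esxcli system shutdown reboot -d 10 -r "Re-detect Thunderbolt adapter"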
4. BIOS Power Settings Affecting Detection of the Apple Thunderbolt to Ethernet Adapter on the 2016 Skull Canyon NUC
In the Skull Canyon NUC BIOS, navigate to Advanced > Power.
- Deep S4/S5: With this setting enabled, when the NUC is shut down from ESXi, it loses the Thunderbolt adapter on the subsequent boot. Therefore, this setting should be disabled if you want to be able to shut down ESXi and then power up the NUC without losing the Apple Thunderbolt to Gigabit Ethernet Adapter. If you keep Deep S4/S5 enabled, you will have to follow the same procedure to get the NUC to detect the Thunderbolt adapter as in the cold-boot scenario described above.
- Native ACPI OS PCIe Support: This setting can be enabled. When ESXi is shut down with this setting enabled, the NUC powers off and the fan no longer runs. The NUC can then be turned on with the power button, and the Thunderbolt adapter is detected properly.
- PCIe ASPM Support: This setting should be disabled. When it is enabled, shutting down the NUC from ESXi is inconsistent: sometimes it works fine, and other times the NUC’s power button LED turns off but the fan keeps spinning and the connected display continues to show the ESXi screen image. (A clean shutdown from the ESXi command line, useful when testing these settings, is sketched after this list.)
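When experimenting with these BIOS settings, it helps to shut the NUC down cleanly from ESXi each time. A sketch of doing so from the ESXi shell, assuming the host has been placed in maintenance mode as in the reboot example earlier:

# Cleanly power off the host after a 10-second delay
esxcli system shutdown poweroff -d 10 -r "Testing BIOS power settings"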
QUIETING DOWN THE NUC SKULL CANYON
The network rack with my SOHO lab is located directly behind my desk, so I sit about two feet away from the rack. Over the years, I have developed a low tolerance for loud fans in my lab equipment, which led me to replace all of it either with fanless models or with models that have extremely quiet fans. I sit at my desk for at least eight hours every day, so a lot of fan noise right behind my ears is something I am not willing to accept.
Unfortunately, as configured out of the box, the 2016 NUC Skull Canyon is too loud for my taste. When I started testing it with ESXi 6.0 U2, I could hear the fan from two feet away even when the NUC was idle with no VMs running. As I powered up several VMs, the fan noise became loud enough that I considered returning the NUC and abandoning the entire idea of using it as an ESXi host in my lab. However, after some experimentation, I found the BIOS settings that helped me quiet down the 2016 NUC Skull Canyon.
In my case, the NUC’s CPU utilization stays under 30% with 12 VMs running concurrently, and the settings listed below allowed me to quiet the NUC’s fan to a level that I cannot hear from two feet away. However, when I power up a VM or shut down several VMs at a time, the CPU utilization rises well above 30%, and the chassis fan spins up and becomes rather loud. Once the CPU utilization settles back under 30%, the chassis fan spins down and the NUC becomes quiet again.
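If you want to watch these utilization numbers from the ESXi shell rather than the vSphere client, esxtop is the simplest option:

# Interactive performance monitor: 'c' shows the CPU view, 'p' should show the power/P-state view on recent builds, 'q' quits
esxtop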
Adjusting Cooling Settings in BIOS
I changed three settings in the 2016 NUC Skull Canyon BIOS Cooling tab as follows:
- Minimum Duty Cycle: 15%
- Minimum Temperature: 76°C
- Duty Cycle Increment: 5%
These settings mean that the fan will always spin at no less than 15% of its maximum RPM, and as long as the CPU temperature does not exceed 76°C, the fan will stay at 15% of its maximum RPM. As the CPU temperature rises above 76°C, each additional degree Celsius adds 5% (of maximum RPM) to the fan speed. The thermal throttling threshold for this CPU is 100°C, so the goal is to have the chassis fan spinning at 100% well before the CPU temperature reaches 100°C, to preclude any possibility of the CPU reaching its thermal throttling threshold. With the settings listed above, the chassis fan in the 2016 NUC Skull Canyon reaches 100% RPM at 93°C, which in my opinion leaves enough “room” for the fan to prevent the CPU temperature from reaching 100°C.
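Expressed as a simple formula (my reading of how these three settings interact):

duty(T) = 15% + 5% × (T − 76°C) for T above 76°C, otherwise 15%

Setting duty(T) = 100% gives 5 × (T − 76) = 85, so T = 93°C, which matches the figure above.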
I have been using the 2016 NUC Skull Canyon with the Cooling settings set to the above values for several weeks, with 12 VMs running concurrently, and so far the chassis fan has been spinning at an RPM that allows me to sit two feet away from the NUC and not hear it. Occasionally, the NUC spins the fan up for a few seconds, loudly enough that I can hear it, and then spins it back down. So, every now and then, I hear 5-10 seconds of low-level fan noise coming from the NUC, but most of the time the NUC is quiet. I can live with this solution until a fanless case for the 2016 NUC Skull Canyon is released. I will then transfer the internals of the NUC to the fanless case and will no longer have to worry about fan noise or the Cooling settings in BIOS.
Adjusting Power Efficiency Policy (PL1/PL2 Levels) in BIOS
Because I had changed the default Cooling settings in BIOS, I decided to play it safe and lower the CPU’s PL1/PL2 levels in BIOS as well. These settings can either be changed manually or applied via a pre-defined macro. I decided to use the macro pre-defined by Intel and simply changed the Processor Power Efficiency Policy on the Power tab of the 2016 NUC Skull Canyon’s BIOS to Low Power Enabled. I believe this macro lowers the PL1/PL2 levels to 35 W/45 W.
My reasoning was that, because I am using the 2016 NUC Skull Canyon as an ESXi host, the ability of the CPU to burst to higher speeds at the expense of higher power consumption (and significantly higher CPU temperatures) is not as important to me as a quiet work environment. The VMs that I run on the NUC are for lab purposes only, and most of the time they sit idle. Even the occasional rise in CPU load from a few VMs at a time should not put enough demand on the CPU to make it burst above 2.6 GHz and run much hotter, causing the chassis fan to speed up. So, to keep the NUC cooler at lower fan RPMs, I decided to lower the PL1/PL2 values.
So far, I am completely satisfied with the performance I am getting from the 2016 NUC Skull Canyon with the Power settings in BIOS set to Low Power Enabled. The VMs that I run on the NUC (various Cisco Unified Communications VMs) are very responsive, both in their functionality and in the speed of their web-based GUI management interfaces. I see a tremendous improvement in the performance of these VMs versus the same VMs running on the 2012 quad-core 2.6 GHz Mac Mini boxes. Even with 12 VMs running concurrently on the 2016 Skull Canyon NUC, each VM performs better than it did on the 2012 Mac Minis, which could not run more than 6 VMs each. Therefore, in my opinion, lowering the CPU PL1/PL2 levels to help bring down the CPU temperatures was well worth it in this virtualization environment. Whether changing the PL1/PL2 levels actually had any effect on lowering the CPU temperatures is hard to tell; proving it would require more testing with CPU temperature monitoring software rather than relying on subjective impressions.
COMMENTS
Hi Telecastle, thank you kindly for committing your “work-in-progress” details to your blog. I am currently going down a similar path with ESXi 6.5 (although I may be forced to return to 6.0) on my NUC i7 Skull Canyon and am pleased to report the following progress:
1. I am able to pass through the Iris Pro Graphics to a Windows 10 64-bit VM with the latest Intel driver installed. While I have yet to connect a monitor to any of the ports, I can confirm that the Win10 VM can make use of the iGPU’s Quick Sync capabilities (which was not available in ESXi 6.0).
2. Initially I had trouble exposing the on-board Bluetooth capability of the NUC, but this was soon remedied by disabling the new ESXi 6.5 USB driver (courtesy of a fellow NUC user) using esxcli system module set -m=vmkusb -e=FALSE.
3. I have made a number of attempts to expose the Thunderbolt port via the Kanex Thunderbolt 3 to Thunderbolt adapter attached to the Apple Thunderbolt to Ethernet adapter, without any success so far. Unlike your experience with the StarTech adapter, the Kanex passes power to the Apple adapter as soon as power is available on the NUC, and the port lights on my switch immediately come to life.
As I am keen to get Thunderbolt working (if possible) on ESXi 6.5, I have ordered the StarTech version of the Thunderbolt adapter, which should arrive any day now. It will be interesting to see if there are any differences between the two adapters.
Of course, the problem could well be ESXi 6.5; however, I am reluctant to start going backwards until I have explored all possible options. I will be sure to get back to you with any further progress.
Awesome stuff, thanks for writing this up so nicely! Do you by chance know if anyone has tested using the Apple Thunderbolt 3 to Thunderbolt 2 adapter instead of the StarTech one (in series with the Apple Thunderbolt to Gigabit Ethernet adapter)? It’s about half the price, and easier to come by… I reckon it should work; it’s just a “dumb” driverless component, isn’t it?
Yes, the Apple Thunderbolt 3 to Thunderbolt 2 adapter works fine in place of the StarTech adapter.