To install Hyper-V as a nested hypervisor on ESXi 5.5, you need to configure a number of virtual machine settings once the guest operating system has been deployed.
Firstly, with the virtual machine powered off, we need to add two items to its existing configuration (.vmx) file. Prior to making these changes we will take a backup of the current virtual machine configuration in case we need to roll back:
cp /vmfs/volumes/<datastore>/<virtual machine>.vmx /vmfs/volumes/<datastore>/<virtual machine>.vmx.backup
To enable nested virtualization and run 64-bit virtual machines, the following needs to be added to the configuration file using a text editor:
vhv.enabled = "TRUE"
Next, in order to run a hypervisor inside a virtual machine, we will add the following item to override the default setting. This prevents the error message “Hyper-V cannot be installed: A hypervisor is already running” when you attempt to install the Hyper-V server role in the guest operating system:
hypervisor.cpuid.v0 = "FALSE"
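Taken together, the backup and the two additions can also be made from the ESXi shell rather than a text editor. A minimal sketch, shown here against a stand-in file in /tmp (on a host the path would be the .vmx under /vmfs/volumes, as in the cp command above):

```shell
# Stand-in for /vmfs/volumes/<datastore>/<virtual machine>.vmx
VMX=/tmp/example.vmx
: > "$VMX"                       # pre-existing configuration file

cp "$VMX" "$VMX.backup"          # back up before editing

# Append the two nested-virtualization settings (VM must be powered off)
printf '%s\n' 'vhv.enabled = "TRUE"' 'hypervisor.cpuid.v0 = "FALSE"' >> "$VMX"

cat "$VMX"
```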
Finally, we need to expose hardware virtualization to the guest operating system so that the virtual processors report the required virtualization capabilities. If you do not expose hardware virtualization, you will receive the error message “The processor does not have the required virtualization capabilities” when installing the Hyper-V server role.
This can be performed in the vSphere Web Client by editing the virtual machine CPU settings, as below:
Once the above configuration changes have been applied to the virtual machine, you should be able to install the Hyper-V server role.
I was recently configuring Lockdown Mode in my lab environment when I discovered an issue on a single ESXi host system where the state was shown as Disabled in the vSphere Web Client and the option was greyed out in the DCUI, so the status could not be changed from either.
I discovered that the symptom was caused by the permissions for the DCUI user having been removed from the ESXi host system. To resolve the issue I performed the following on the ESXi host system:
1) Connect to the ESXi host system using the vSphere Client.
2) Select the ‘Permissions’ tab.
3) Right click and select ‘Add Permission’.
4) Select the DCUI user and assign the Administrator role to the user account.
5) Select ‘OK’.
Following the above change I was able to modify the Lockdown Mode from the DCUI and then manage the ESXi host system from the vSphere Web Client.
ESXi hosts use Likewise agents to join the Active Directory domain and to service user authentication requests. In order to troubleshoot Active Directory integration you will need to enable logging for the agents, as by default they do not generate a log file.
This can be performed by enabling logging for the netlogond, lwiod and lsassd daemons.
1) Modify the file ‘/etc/init.d/netlogond’ to change the line ‘PROG_ARGS="--start-as-daemon --syslog"’ to one of the below; if you are using a scratch partition use the second option:
PROG_ARGS="--start-as-daemon --logfile /var/log/netlogond.log --loglevel debug"
PROG_ARGS="--start-as-daemon --logfile /scratch/log/netlogond.log --loglevel debug"
2) Modify the file ‘/etc/init.d/lwiod’ to change the line ‘PROG_ARGS="--start-as-daemon --syslog"’ to one of the below; if you are using a scratch partition use the second option:
PROG_ARGS="--start-as-daemon --logfile /var/log/lwiod.log --loglevel trace"
PROG_ARGS="--start-as-daemon --logfile /scratch/log/lwiod.log --loglevel trace"
3) Modify the file ‘/etc/init.d/lsassd’ to change the line ‘PROG_ARGS="--start-as-daemon --syslog"’ to one of the below; if you are using a scratch partition use the second option:
PROG_ARGS="--start-as-daemon --logfile /var/log/lsassd.log --loglevel trace"
PROG_ARGS="--start-as-daemon --logfile /scratch/log/lsassd.log --loglevel trace"
4) Restart each of the services to apply the changes:
/etc/init.d/netlogond restart
/etc/init.d/lwiod restart
/etc/init.d/lsassd restart
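The manual edits in steps 1 to 3 can also be scripted with sed. A minimal sketch, run here against stand-in copies in /tmp (on a host the files live in /etc/init.d, and each daemon must then be restarted as in step 4):

```shell
for svc in netlogond lwiod lsassd; do
  f=/tmp/$svc                                   # stand-in for /etc/init.d/$svc
  echo 'PROG_ARGS="--start-as-daemon --syslog"' > "$f"

  # netlogond uses the debug level, the other two daemons use trace
  lvl=trace; [ "$svc" = netlogond ] && lvl=debug

  # Swap syslog output for a log file (use /scratch/log with a scratch partition)
  sed -i "s|--syslog|--logfile /var/log/$svc.log --loglevel $lvl|" "$f"
  cat "$f"
done
```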
I was recently joining ESXi (5.5.0, build 1892794) hosts to a domain when the task failed with the status ‘Errors in Active Directory operations’. On further investigation of the Likewise agent log on the impacted ESXi host, the following was being written to the log file:
ERROR:[SMBSocketReaderMain() /build/mts/release/bora-1471401/likewise/esxi-esxi/src/linux/lwio/server/rdr/socket.c:660] Error when handling SMB socket
This issue is caused by the size of the Kerberos Ticket Granting Service (TGS) ticket being very large. A network capture taken during the SMB errors seen in the Likewise agent logs showed ‘Security Blob Length’ and ‘Byte Count’ values greater than the Max Buffer Size of the domain controller with which the ESXi host was setting up an SMB session; by default this is 16644 bytes, or 4356 bytes if the server’s total memory is 512 MB or less.
Below is an example of the above values in an SMB network capture:
Security Blob Length: 19314
Byte Count (BCC): 19371
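Both captured values exceed the 16644-byte default, which a quick check confirms:

```shell
# Compare the captured sizes against the default Max Buffer Size
MAX=16644
for n in 19314 19371; do
  [ "$n" -gt "$MAX" ] && echo "$n exceeds default MaxBufferSize ($MAX)"
done
```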
To resolve this issue, I had to add a DWORD value named ‘SizeReqBuf’ to the registry key ‘HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters’, where the value data (Decimal) is greater than the values returned in the network capture, and then restart the domain controller(s).
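For reference, the change can be expressed as a .reg fragment. The value below (dword:00005000, i.e. 20480 decimal) is only an example chosen to exceed the sizes captured above; pick a value larger than whatever your own capture shows:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"SizeReqBuf"=dword:00005000
```

As noted above, the domain controller(s) must be restarted for the value to take effect.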
To change the root password of an ESXi host using PowerCLI, first connect to the host in question as the root user:
Connect-VIServer esxi1.domain.local -User root -Password assdKIUU1235
Now, by invoking the ‘Set-VMHostAccount’ cmdlet, we can change the root password as follows:
Set-VMHostAccount -UserAccount root -Password 123GHYJ!rys