Saturday, February 6, 2016

How to fix Sleep Mode issue on Windows 10

Last week I updated my laptop's OS from Windows 7 to Windows 10 (64-bit) and installed all available patches.
Today while working on my laptop, I left it for some time, and when I returned I noticed a Sleep Mode issue: the system didn't wake up from sleep, and I had to press and hold the power button to power off/reboot the laptop. Just to cross-check, I closed the laptop's lid to put the OS in sleep mode, and again it didn't wake up (and yes, there was no issue in the power options configuration).

This is when I started looking for a fix and found a related discussion on the HP Support forums;
although the fix there is listed for Windows 8.1, it worked for my Windows 10 64-bit OS too.

This Sleep Mode issue on Windows 10 is related to the Intel Management Engine Interface (MEI) driver, and to resolve it do the following:
  • Download version 9 or 10 of the Intel Management Engine Interface (MEI) driver from your respective system vendor's site or directly from Intel itself (in my case, it's a Lenovo 40-70 model laptop).
Look for the download in the "Driver-Chipset" category. If an Intel Management Engine Interface driver with a version number starting in 9 or 10 is listed, download it (if you don't find the Intel MEI driver for Windows 10, then, like me, download the one listed for Windows 8.1).

  • If an MEI driver with a version number starting in 9 or 10 is NOT listed on the system vendor's site, then download the appropriate driver directly from Intel. [Version 9.5.24.1790, 1.5M]
  • Install version 9 or 10 of the Intel Management Engine Interface (MEI) Driver.
If you receive a dialog warning about replacing a newer version of the software, accept it. 
NOTE: You do NOT need to uninstall version 11 before installing version 9 or 10. The presence of some version of the driver is required in order to "upgrade" (or in this case downgrade) it.

If you already have a newer version of the Intel MEI driver installed, the setup will prompt for overwrite permission; click Yes.
  • Once you have installed Intel MEI driver version 9 or 10, Sleep Mode/Hibernate will start working again.
To avoid this issue in the future, you may need to prevent the Intel MEI driver from updating to version 11 via Windows Update. To do so,
  • Download the Windows 10 "Show or Hide Updates" Troubleshooter Package.
  • Change the Windows Update service setting from Automatic to Manual.
  • Run the Windows 10 "Show or Hide Updates" Troubleshooter Package, click Next, and hide the updates to the Intel Management Engine Interface (MEI) driver. (Doing so will block your system from automatically reinstalling or showing updates for version 11 of the driver.)
  • Now change the Windows Update service setting from Manual back to Automatic and you are done. If you prefer the command line for the service change, see the sketch below.
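For reference, the Windows Update service start type can also be toggled from an elevated Command Prompt; this is only a sketch, assuming the standard Windows Update service name wuauserv:
sc config wuauserv start= demand    (set the start type to Manual)
sc config wuauserv start= auto      (set it back to Automatic)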
That's it... :)


Friday, February 5, 2016

VMware vExpert 2016 Announced

VMware has announced the list of vExpert 2016. I am very honored to be named a VMware vExpert again. Congrats to all those named!


Here is the full list of vExpert 2016... http://blogs.vmware.com/vmtn/2016/02/vexpert-2016-award-announcement.html#comment-9968 

That's it... :)


Wednesday, February 3, 2016

Some useful storage-related VMware ESXi commands

In this post I will talk about some useful storage-related commands, but before that I would like to mention how to get help in the ESXi local/remote shell or vCLI.

At any time, to get help about any command or to list all the available namespaces and options for a particular command, we can use the --help option; it works the same way for any esxcli command, as shown below.
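For example, to list everything available under the esxcli storage namespace:
#esxcli storage --help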

Note: You can run these commands in the local/remote shell or using vCLI. When using vCLI, you will also need to provide connection parameters, and the command syntax would be as follows:
#esxcli --server ESXi_host_IP_or_Name --username noor --password noor2122 [<namespace> ...] <cmd> [cmd options]
or
#esxcli -s ESXi_host_IP_or_Name -u noor -p noor2122 [<namespace> ...] <cmd> [cmd options]
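For instance, a device listing over vCLI would look like this (the host address and credentials here are placeholders for your own):
#esxcli -s 192.168.1.50 -u root -p MyPassword storage core device list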

Now let's talk about some useful storage-related commands.

Command to view all LUNs presented to a host:
#esxcfg-scsidevs -c
And to check a specific LUN:
#esxcfg-scsidevs -c | grep naa.id

To find the unique identifier of a LUN, you may run this command:
#esxcfg-scsidevs -m
To find the associated datastore using a LUN id:
#esxcfg-scsidevs -m | grep naa.id

To get a list of RDM disks, you may run the following command; it will save the list of all RDM disks to a text file, rdmluns.txt, on Datastore123:
#find /vmfs/volumes/ -type f -name '*.vmdk' -size -1024k -exec grep -l '^createType=.*RawDeviceMap' {} \; > /vmfs/volumes/Datastore123/rdmluns.txt

Now run the following command to find the associated LUNs; it will give you the vml.id of the RDM LUNs:
#for i in `cat /vmfs/volumes/Datastore123/rdmluns.txt`; do vmkfstools -q $i; done

Now use the following command to map a vml.id to its naa.id; in its output you will get the LUN id/naa.id:
#ls -luth /dev/disks/ | grep vml.id

To mark an RDM device as perennially reserved:
#esxcli storage core device setconfig -d naa.id --perennially-reserved=true
You may create a script to mark all RDMs as perennially reserved in one go, as sketched below.
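For example, assuming you have collected the naa.id of each RDM into a file /tmp/rdm_naa_ids.txt (a hypothetical file, one id per line), a loop in the same style as the one above would do it:
#for i in `cat /tmp/rdm_naa_ids.txt`; do esxcli storage core device setconfig -d $i --perennially-reserved=true; done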

Confirm that the correct devices are marked as perennially reserved by running this command on the host:
#esxcli storage core device list |less

To verify a specific LUN/device, run this command:
#esxcli storage core device list -d naa.id

The configuration is permanently stored with the ESXi host and persists across restarts.

To remove the perennially reserved flag, run this command
#esxcli storage core device setconfig -d naa.id --perennially-reserved=false

To obtain LUN multipathing information from the ESXi host command line:

To get detailed information regarding the paths:
#esxcli storage core path list
or, to list the detailed information of the corresponding paths for a specific device:
#esxcli storage core path list -d naa.id

To figure out if the device is managed by VMware's native multipathing plugin (NMP) or by a third-party plugin:
#esxcli storage nmp device list -d naa.id
This command not only confirms that the device is managed by NMP, but also displays the Storage Array Type Plugin (SATP) used for path failover and the Path Selection Policy (PSP) used for load balancing.

To list LUN multipathing information,
#esxcli storage nmp device list

To list the available SATPs and their default path selection policies:
#esxcli storage nmp satp list

To change the multipathing policy:
#esxcli storage nmp device set --device naa.id --psp path_policy

(where path_policy is one of VMW_PSP_MRU, VMW_PSP_FIXED, or VMW_PSP_RR)
Note: These pathing policies apply to VMware's Native Multipathing (NMP) Path Selection Plug-ins (PSPs). Third-party PSPs have their own restrictions.
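For example, to set a specific device to Round Robin (the naa identifier below is just a placeholder; use your own device's id):
#esxcli storage nmp device set --device naa.600508b4000f57fa0000400002270000 --psp VMW_PSP_RR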

To generate a list of all LUN paths currently connected to the ESXi host:
#esxcli storage core path list

For the detailed path information of a specific device:
#esxcli storage core path list -d naa.id

To generate a list of extents for each volume and the mapping from device name to UUID:
#esxcli storage vmfs extent list

or, to generate a compact list of the LUNs currently connected to the ESXi host, including the VMFS version:
#esxcli storage filesystem list

To list the possible targets for certain storage operations,
#ls -alh /vmfs/devices/disks

To rescan all HBA Adapters,
#esxcli storage core adapter rescan --all

To rescan a specific HBA:
#esxcli storage core adapter rescan --adapter <vmkernel SCSI adapter name>
Where <vmkernel SCSI adapter name> is the vmhba# to be rescanned, as in the example below.
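For instance, assuming the adapter to be rescanned is vmhba1:
#esxcli storage core adapter rescan --adapter vmhba1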

To get a list of all HBA adapters:
#esxcli storage core adapter list
Note: The rescan commands above may not produce any output if there are no changes.

To search for new VMFS datastores, run this command,
#vmkfstools -V

To check which VAAI primitives are supported.
#esxcli storage core device vaai status get -d naa.id

The esxcli storage san namespace has some very useful commands. In the case of Fibre Channel, you can see which adapters are used for FC and display the WWNN (node name) and WWPN (port name) information, speed, and port state:
#esxcli storage san fc list

To display FC event information:
# esxcli storage san fc events get

VML ID
For example: vml.02000b0000600508b4000f57fa0000400002270000485356333630
Breaking apart the VML ID for a closer understanding: the first 4 digits are VMware-specific, and the next 2 digits are the LUN identifier in hexadecimal.

In the preceding example, the LUN is mapped to LUN ID 11 (hex 0b).

NAA id
NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN.

The NAA or EUI identifier is the preferred method of identifying LUNs and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.

Path Identifier: vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>
This identifier is used exclusively to identify a path to the LUN. When ESXi detects the paths associated with a LUN, each path is assigned this Path Identifier. The LUN also inherits the same name as its first path, but it is then used as a Runtime Name and not as readily as the above-mentioned identifiers, as it may differ depending on the host you are using. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:C0:T0:L0 = Adapter 1, Channel 0, Target 0, and LUN 0.


That's it... :)


Monday, February 1, 2016

Some useful VMware ESXi commands

In this two-part series I will summarize some useful ESXi commands that we use from time to time. We can run any of these commands from a local or remote console.
To run any of these commands, first connect to the host over SSH using PuTTY (I assume SSH is already enabled on the host); once connected, you can run the given command to complete the respective task:

To restart management agents:
#/etc/init.d/hostd restart
#/etc/init.d/vpxa restart
or
To restart all agents at once:
#services.sh restart
Note: this command restarts all agents, so it could take a long time to complete; that is why it's preferable to restart the individual agents.

To power off / reboot an unresponsive VM:
Using vim-cmd:
#vim-cmd vmsvc/getallvms
This command will give you the VM name and the respective VMID.

Now, to see the power state of VM,
#vim-cmd vmsvc/power.getstate VMID

To shutdown the VM,
#vim-cmd vmsvc/shutdown VMID

If the above doesn't work, then use the following command to power off the VM:
#vim-cmd vmsvc/poweroff VMID

Using esxcli:
To get the VM name and respective world number:
#esxcli vm process list

To power off a VM: there are three power-off methods available with esxcli. Soft is the most graceful, hard performs an immediate shutdown, and force should be used as a last resort.
#esxcli vm process kill -t soft|hard|force -w WorldNumber
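For example, to gracefully power off the VM whose world number is 12345 (a placeholder; take the real value from the esxcli vm process list output):
#esxcli vm process kill -t soft -w 12345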

To reload a .vmx file without removing the virtual machine from inventory: first get the VMID as described above, then:
#vim-cmd vmsvc/reload VMID

To get a list of running tasks on the host:
#vim-cmd vimsvc/task_list

To get a list of tasks associated with a specific VM,
#vim-cmd vmsvc/get.tasklist VMID

To get information about the status of a particular task, run the command
#vim-cmd vimsvc/task_info task_identifier 

We can also use the ps command to find VM-related running processes; this command will give you the process id:
#ps | grep VM_name
If you want to kill the process, then:
#kill process_id
Wait for some time; if the process is still there, then use:
#kill -9 process_id

To ping any host/VM, etc.:
#vmkping ip_or_hostname

To enter maintenance mode using the command line interface:
#vim-cmd /hostsvc/maintenance_mode_enter
or
#vimsh -n -e /hostsvc/maintenance_mode_enter
or
#esxcli system maintenanceMode set --enable true

To check if the host is in maintenance mode:
#vim-cmd /hostsvc/hostsummary | grep inMaintenanceMode
or
#vimsh -n -e /hostsvc/hostsummary | grep inMaintenanceMode
or
#esxcli system maintenanceMode get

To exit maintenance mode:
#vim-cmd /hostsvc/maintenance_mode_exit
or
#vimsh -n -e /hostsvc/maintenance_mode_exit
or
#esxcli system maintenanceMode set --enable false

To install an update or any third-party VIB file stored on a datastore (use -v for a single .vib file and -d for a .zip offline bundle):
#esxcli software vib install -v /vmfs/volumes/..../downloaded.vib
or
#esxcli software vib install -d /vmfs/volumes/..../downloaded_vib_bundle.zip

To list the software and drivers currently installed on the ESXi host: 
#esxcli software vib list
 
To avoid a slow host boot:
1. Mark RDM devices as perennially reserved: first get the naa.id of the respective RDM LUN and use the command below to mark it perennially reserved, so the host won't probe it during boot:
#esxcli storage core device setconfig -d naa.id --perennially-reserved=true

To verify that the device is perennially reserved, run this command:
#esxcli storage core device list -d naa.id
2. If you do not intend to use USB, you might like to stop the usbarbitrator service, as it slows down host startup:
#chkconfig usbarbitrator off
To check the status of the USB arbitrator, run the following command:
#chkconfig --list | grep -i usb

And if you want to start the usbarbitrator service again, replace the off with on:
#chkconfig usbarbitrator on

Note: For ESXi 5.1 and 5.5, a restart is not required; restarting the management agents on the system will allow access to the devices.

To check the status of physical NIC connectivity, run this command:
#esxcfg-nics -l

To check the installed ESXi version:
#esxcli system version get
or
#vmware -vl

I will add more commands to this list in the future.

That's it... :)