You’d be surprised how many times I see a datastore that’s simply been unpresented from its hosts rather than decommissioned correctly – in one notable case a distributed switch was crippled for a whole cluster because the datastore in question was being used to store the VDS configuration. This is the process I follow to ensure datastores are decommissioned without any issues – they need to comply with these requirements:
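As a hedged sketch of the kind of pre-checks involved (the datastore name is a placeholder, and this is illustrative PowerCLI rather than the exact script from the full post):

```powershell
# Hedged sketch: confirm a datastore is genuinely empty before unmounting.
# "OldLUN01" is a placeholder name.
$ds = Get-Datastore -Name "OldLUN01"

# No registered VMs or templates should remain on the datastore
Get-VM -Datastore $ds
Get-Template -Datastore $ds

# It must also not be in use for HA heartbeating or - per the VDS anecdote
# above - holding the distributed switch configuration; check those in the
# vSphere Client before proceeding.
```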
It’s been a really great year so far and incredibly busy (no complaints, though!). VMware products have featured very high on my to-do list this year, with new hosting and DR solutions either completed or well underway. The simplicity, resilience and strength of vSphere never gets old! I have also had the privilege of attending several London VMUG meetings, all of which have been excellent! They have been superb opportunities to meet new people, put faces to Twitter names and learn more about current and forthcoming technologies oriented around virtualization.
The vSphere UMDS provides a way to download patches for VMware servers that are air-gapped, or for some other reason aren’t allowed to go out to the internet themselves – in my case a security policy prevented a DMZ vCenter Server from connecting to the internet directly. The solution is to use UMDS to download the updates to a second server hosted in the DMZ, and then update the vCenter Server from there.
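The UMDS side of that workflow can be sketched from the command line (flags as documented for the 5.x `vmware-umds` client; the export path is a placeholder):

```
rem Hedged sketch of the UMDS workflow; E:\UMDS\export is a placeholder path.
rem 1. Configure UMDS to fetch host patches only:
vmware-umds -S --enable-host --disable-va
rem 2. Download the patch metadata and payloads:
vmware-umds -D
rem 3. Export the store to a location the downstream Update Manager can reach:
vmware-umds -E --export-store E:\UMDS\export
```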
It’s no secret that installing certificates from an internal CA is a pain in the…vCenter, but having just gone through the process of updating three vCenter installations with the 5–7 certificates required for each server, I was asked: “just why is it we need to do this again?” Why does vCenter require multiple certificates? In short, each service requires its own certificate because it could feasibly run on a server (or servers) of its own. Take this hypothetical design: each role is hosted on its own VM, and there are seven certificates required – SSO, Inventory Service, vCenter Server, Orchestrator, Web Client, Log Browser and Update Manager.
Updating vCenter Server certificates has always been a pain – and it has only got worse with the sheer number of services running under vSphere 5.1, each requiring a unique certificate installed through a series of complex steps. Fortunately, with the release of the SSL Certificate Automation Tool, VMware have gone some way to reducing the headache. Gather all the components you need. OpenSSL installer: http://slproweb.
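One of the building blocks the tool leans on is OpenSSL for key and CSR generation. As a hedged, standalone illustration (the subject fields and hostname below are placeholders, not the exact values the tool or VMware KBs prescribe):

```shell
# Hedged sketch: generate a 2048-bit key and a CSR for one vCenter service,
# ready to be signed by the internal CA. All subject values are placeholders.
openssl req -new -nodes -newkey rsa:2048 \
  -keyout rui.key -out rui.csr \
  -subj "/C=GB/ST=London/L=London/O=Example/OU=vCenterServer/CN=vcenter.example.local"

# Inspect the CSR before submitting it to the CA
openssl req -in rui.csr -noout -subject
```

Repeat per service, adjusting the OU/CN, then feed the signed certificates to the automation tool.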
As some of you read previously, I had been experiencing disk latency issues on our SAN and tried many initial methods to troubleshoot and understand the root cause. Due to other more pressing issues this was set aside, until we started to experience VMs occasionally being restarted by vSphere HA because the lock had been lost on a given VMDK file. (NOT GOOD!!) The environment: 3x vSphere 5.1 hosts
PowerCLI Script to set RDM LUNs to Perennially Reserved – Fixes Slow Boot of ESXi 5.1 with MSCS RDMs
I’ve previously posted around this topic as part of another problem, but having had to figure out the process again I think it’s worth re-posting a proper script for this. VMware KB 1016106 is snappily titled “ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may take a long time to boot or during LUN rescan” and describes the situation where an ESXi host (5.1 in my case) takes a huge amount of time to boot because it’s attempting to gain a SCSI reservation on an RDM disk used by Microsoft Cluster Service.
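A minimal PowerCLI sketch of the approach from that KB – flag each MSCS RDM LUN as perennially reserved on every host so ESXi skips the boot-time reservation attempt (the NAA ID below is a placeholder, and this is an outline rather than the full script):

```powershell
# Hedged sketch of the KB 1016106 fix. The NAA ID is a placeholder.
$rdmNaaIds = @("naa.600000000000000000000000000000001")

foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost
    foreach ($naa in $rdmNaaIds) {
        # setconfig arguments: detached, device, perennially-reserved
        $esxcli.storage.core.device.setconfig($false, $naa, $true)
    }
    # Verify the flag stuck on each device
    $esxcli.storage.core.device.list() |
        Where-Object { $rdmNaaIds -contains $_.Device } |
        Select-Object Device, IsPerenniallyReserved
}
```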
This article originally started off life as a record of how I managed to get this working, as a lot of my posts do, but this time it appears I am foiled. Last week, I had three vCenter Servers that appeared to be happily talking to each other in Linked Mode, sharing a single Multi-site SSO domain without any real issues. I had a single-pane-of-glass view of all three and I could manage them all from the one client.
Today, while creating new VMs from a template, I got the error “the server fault invalidargument had no message” when editing the VM settings. The settings were modified successfully, but the error appeared whether or not a change had actually been made to the VM’s settings. A quick search of the web suggested removing the VM from the inventory and re-adding it from the datastore; for many this fixed the issue, but not for me.
Had a strange one after deploying an XP VM from a template today – the VM would not power on and threw the following error: “An error was received from the ESX host while powering on VM [VM name]. cpuid.coresPerSocket must be a number between 1 and 8”. Digging around on Google, the error seemed to be related to over-allocating vCPUs (e.g. assigning 8 vCPUs on a host with 4 physical CPU cores).
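One way to bring the setting back into line is via PowerCLI’s advanced-setting cmdlets – a hedged sketch, where the VM name and target value are placeholders for illustration:

```powershell
# Hedged sketch: with the VM powered off, adjust cpuid.coresPerSocket so
# that vCPU count / coresPerSocket divides evenly and fits the host's
# physical cores. The VM name and value of 1 are placeholders.
$vm = Get-VM -Name "XP-FromTemplate"
Get-AdvancedSetting -Entity $vm -Name "cpuid.coresPerSocket" |
    Set-AdvancedSetting -Value 1 -Confirm:$false
```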