Veeam Management Pack 8.0 Update 5 for System Center
Veeam MP for VMware Reference

vCenter Events

Each entry lists the event ID, severity, and group, followed by the message catalog text and the vSphere release in which the event was introduced (Since).
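
The event IDs in the first column can be queried straight from vCenter. Below is a minimal sketch assuming pyVmomi; the host name and credentials are placeholders, and the snippet is an illustration only, not part of the Veeam MP.

```python
# Minimal sketch (assumptions: pyVmomi installed, reachable vCenter,
# placeholder host name and credentials -- replace with your own).
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ctx)
try:
    # EventFilterSpec.eventTypeId accepts the IDs from the first column,
    # e.g. the vSphere HA host-failure event listed below.
    spec = vim.event.EventFilterSpec(
        eventTypeId=["com.vmware.vc.HA.DasHostFailedEvent"])
    for event in si.content.eventManager.QueryEvents(spec):
        # fullFormattedMessage is the catalog text with placeholders resolved.
        print(event.createdTime, event.fullFormattedMessage)
finally:
    Disconnect(si)
```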

com.vmware.cl.CopyLibraryItemEvent

info

VC

com.vmware.cl.CopyLibraryItemEvent|Copied Library Item {targetLibraryItemName} to Library {targetLibraryName}({targetLibraryId}). Source Library Item {sourceLibraryItemName}({sourceLibraryItemId}), source Library {sourceLibraryName}({sourceLibraryId}).

Since 6.0

com.vmware.cl.CopyLibraryItemFailEvent

error

VC

com.vmware.cl.CopyLibraryItemFailEvent|Failed to copy Library Item {targetLibraryItemName}.

Since 6.0

com.vmware.cl.CreateLibraryEvent

info

VC

com.vmware.cl.CreateLibraryEvent|Created Library {libraryName}

Since 6.0 

com.vmware.cl.CreateLibraryFailEvent

error

VC

com.vmware.cl.CreateLibraryFailEvent|Failed to create Library {libraryName}

Since 6.0 

com.vmware.cl.CreateLibraryItemEvent

info

VC

com.vmware.cl.CreateLibraryItemEvent|Created Library Item {libraryItemName} in Library {libraryName}({libraryId}).

Since 6.0 

com.vmware.cl.CreateLibraryItemFailEvent

error

VC

com.vmware.cl.CreateLibraryItemFailEvent|Failed to create Library Item {libraryItemName}.

Since 6.0 

com.vmware.cl.DeleteLibraryEvent

info

VC

com.vmware.cl.DeleteLibraryEvent|Deleted Library {libraryName}

Since 6.0 

com.vmware.cl.DeleteLibraryFailEvent

error

VC

com.vmware.cl.DeleteLibraryFailEvent|Failed to delete Library

Since 6.0 

com.vmware.cl.DeleteLibraryItemEvent

info

VC

com.vmware.cl.DeleteLibraryItemEvent|Deleted Library Item {libraryItemName} in Library {libraryName}({libraryId}).

Since 6.0 

com.vmware.cl.DeleteLibraryItemFailEvent

error

VC

com.vmware.cl.DeleteLibraryItemFailEvent|Failed to delete Library Item.

Since 6.0

com.vmware.cl.UpdateLibraryEvent

info

VC

com.vmware.cl.UpdateLibraryEvent|Updated Library {libraryName}

Since 6.0 

com.vmware.cl.UpdateLibraryFailEvent

error

VC

com.vmware.cl.UpdateLibraryFailEvent|Failed to update Library

Since 6.0 

com.vmware.cl.UpdateLibraryItemEvent

info

VC

com.vmware.cl.UpdateLibraryItemEvent|Updated Library Item {libraryItemName} in Library {libraryName}({libraryId}).

Since 6.0

com.vmware.cl.UpdateLibraryItemFailEvent

error

VC

com.vmware.cl.UpdateLibraryItemFailEvent|Failed to update Library Item.

Since 6.0 

com.vmware.license.HostLicenseExpiredEvent

warning

VC

com.vmware.license.HostLicenseExpiredEvent|Expired host license or evaluation period.

Since 6.0 

com.vmware.license.HostSubscriptionLicenseExpiredEvent

warning

VC

com.vmware.license.HostSubscriptionLicenseExpiredEvent|Expired host time-limited license.

Since 6.0

com.vmware.license.VcLicenseExpiredEvent

warning

VC

com.vmware.license.VcLicenseExpiredEvent|Expired vCenter Server license or evaluation period.

Since 6.0

com.vmware.license.VcSubscriptionLicenseExpiredEvent

warning

VC

com.vmware.license.VcSubscriptionLicenseExpiredEvent|Expired vCenter Server time-limited license.

Since 6.0 

com.vmware.license.vsan.HostSsdOverUsageEvent

warning

VC

com.vmware.license.vsan.HostSsdOverUsageEvent|The capacity of the flash disks on the host exceeds the limit of the Virtual SAN license.

Since 6.0

com.vmware.license.vsan.LicenseExpiryEvent

warning

VC

com.vmware.license.vsan.LicenseExpiryEvent|Expired Virtual SAN license or evaluation period.

Since 6.0 

com.vmware.license.vsan.SubscriptionLicenseExpiredEvent

warning

VC

com.vmware.license.vsan.SubscriptionLicenseExpiredEvent|Expired Virtual SAN time-limited license.

Since 6.0 

com.vmware.pbm.profile.associate

info

VC

com.vmware.pbm.profile.associate|Associated storage policy: {ProfileId} with entity: {EntityId}

Since 6.0 

com.vmware.pbm.profile.delete

info

VC

com.vmware.pbm.profile.delete|Deleted storage policy: {ProfileId}

Since 6.0 

com.vmware.pbm.profile.dissociate

info

VC

com.vmware.pbm.profile.dissociate|Dissociated storage policy: {ProfileId} from entity: {EntityId}

Since 6.0 

com.vmware.pbm.profile.updateName

info

VC

com.vmware.pbm.profile.updateName|Storage policy name updated for {ProfileId}. New name: {NewProfileName}

Since 6.0 

com.vmware.rbd.activateRuleSet

info

VC

com.vmware.rbd.activateRuleSet|Activate Rule Set

Since 6.0

com.vmware.rbd.fdmPackageMissing

warning

VC

com.vmware.rbd.fdmPackageMissing|A host in a HA cluster does not have the 'vmware-fdm' package in its image profile

Since 6.0 

com.vmware.rbd.hostProfileRuleAssocEvent

warning

VC

com.vmware.rbd.hostProfileRuleAssocEvent|A host profile associated with one or more active rules was deleted.

Since 6.0 

com.vmware.rbd.ignoreMachineIdentity

warning

VC

com.vmware.rbd.ignoreMachineIdentity|Ignoring the AutoDeploy.MachineIdentity event, since the host is already provisioned through Auto Deploy

Since 6.0

com.vmware.rbd.pxeBootNoImageRule

info

VC

com.vmware.rbd.pxeBootNoImageRule|Unable to PXE boot host since it does not match any rules

Since 6.0 

com.vmware.rbd.pxeBootUnknownHost

info

VC

com.vmware.rbd.pxeBootUnknownHost|PXE Booting unknown host

Since 6.0

com.vmware.rbd.pxeProfileAssoc

info

VC

com.vmware.rbd.pxeProfileAssoc|Attach PXE Profile

Since 6.0 

com.vmware.rbd.vmcaCertGenerationFailureEvent

error

VC

com.vmware.rbd.vmcaCertGenerationFailureEvent|Failed to generate host certificates using VMCA

Since 6.0 

com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent

info

VC

com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent|CA Certificates were updated on {hostname}

Since 6.0 

com.vmware.vc.certmgr.HostCertExpirationImminentEvent

warning

VC

com.vmware.vc.certmgr.HostCertExpirationImminentEvent|Host Certificate expiration is imminent on {hostname}. Expiration Date: {expiryDate}

Since 6.0 

com.vmware.vc.certmgr.HostCertExpiringEvent

warning

VC

com.vmware.vc.certmgr.HostCertExpiringEvent|Host Certificate on {hostname} is nearing expiration. Expiration Date: {expiryDate}

Since 6.0 

com.vmware.vc.certmgr.HostCertExpiringShortlyEvent

warning

VC

com.vmware.vc.certmgr.HostCertExpiringShortlyEvent|Host Certificate on {hostname} will expire soon. Expiration Date: {expiryDate}

Since 6.0 

com.vmware.vc.certmgr.HostCertManagementModeChangedEvent

info

VC

com.vmware.vc.certmgr.HostCertManagementModeChangedEvent|Host Certificate Management Mode changed from {previousMode} to {presentMode}

Since 6.0 

com.vmware.vc.certmgr.HostCertMetadataChangedEvent

info

VC

com.vmware.vc.certmgr.HostCertMetadataChangedEvent|Host Certificate Management Metadata changed

Since 6.0 

com.vmware.vc.certmgr.HostCertRevokedEvent

warning

VC

com.vmware.vc.certmgr.HostCertRevokedEvent|Host Certificate on {hostname} is revoked.

Since 6.0 

com.vmware.vc.certmgr.HostCertUpdatedEvent

info

VC

com.vmware.vc.certmgr.HostCertUpdatedEvent|Host Certificate was updated on {hostname}, new thumbprint: {thumbprint}

Since 6.0

com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent

info

VC

com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent|Management Agents were restarted on {hostname}

Since 6.0 

com.vmware.vc.HA.ClusterFailoverInProgressEvent

warning

VC

com.vmware.vc.HA.ClusterFailoverInProgressEvent|vSphere HA failover operation in progress in cluster {computeResource.name} in datacenter {datacenter.name}: {numBeingPlaced} VMs being restarted, {numToBePlaced} VMs waiting for a retry, {numAwaitingResource} VMs waiting for resources, {numAwaitingVsanVmChange} inaccessible Virtual SAN VMs

Since 6.0 

com.vmware.vc.HA.ConnectedToMaster

info

VC

com.vmware.vc.HA.ConnectedToMaster|vSphere HA agent on host {host.name} connected to the vSphere HA master on host {masterHostName} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 6.0

com.vmware.vc.HA.CreateConfigVvolFailedEvent

error

VC

com.vmware.vc.HA.CreateConfigVvolFailedEvent|vSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: {fault}

Since 6.0 

com.vmware.vc.HA.CreateConfigVvolSucceededEvent

info

VC

com.vmware.vc.HA.CreateConfigVvolSucceededEvent|vSphere HA successfully created a configuration vVol after the previous failure

Since 6.0 

com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent

warning

VC

com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent|vCenter Server cannot communicate with the master vSphere HA agent on {hostname} in cluster {computeResource.name} in {datacenter.name}

Since 6.0

com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore

warning

VC

com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore|vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore}

Since 6.0 

com.vmware.vc.HA.VmcpStorageFailureCleared

info

VC

com.vmware.vc.HA.VmcpStorageFailureCleared|Datastore {ds.name} mounted on host {host.name} was inaccessible. The condition was cleared and the datastore is now accessible

Since 6.0 

com.vmware.vc.HA.VmcpStorageFailureDetectedForVm

warning

VC

com.vmware.vc.HA.VmcpStorageFailureDetectedForVm|vSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} in {datacenter.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore

Since 6.0 

com.vmware.vc.HA.VmcpTerminateVmAborted

error

VC

com.vmware.vc.HA.VmcpTerminateVmAborted|vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries

Since 6.0 

com.vmware.vc.HA.VmcpTerminatingVm

warning

VC

com.vmware.vc.HA.VmcpTerminatingVm|vSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM was affected by an inaccessible datastore

Since 6.0 

com.vmware.vc.HA.VmDasResetAbortedEvent

error

VC

com.vmware.vc.HA.VmDasResetAbortedEvent|vSphere HA was unable to reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries

Since 6.0 

com.vmware.vc.host.problem.DeprecatedVMFSVolumeFound

warning

VC

com.vmware.vc.host.problem.DeprecatedVMFSVolumeFound|Deprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version.

Since 6.0 

com.vmware.vc.iofilter.FilterInstallationFailedEvent

error

VC

com.vmware.vc.iofilter.FilterInstallationFailedEvent|vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed

Since 6.0 

com.vmware.vc.iofilter.FilterInstallationSuccessEvent

info

VC

com.vmware.vc.iofilter.FilterInstallationSuccessEvent|vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful

Since 6.0 

com.vmware.vc.iofilter.FilterUninstallationFailedEvent

error

VC

com.vmware.vc.iofilter.FilterUninstallationFailedEvent|vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed

Since 6.0

com.vmware.vc.iofilter.FilterUninstallationSuccessEvent

info

VC

com.vmware.vc.iofilter.FilterUninstallationSuccessEvent|vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful

Since 6.0 

com.vmware.vc.iofilter.FilterUpgradeFailedEvent

error

VC

com.vmware.vc.iofilter.FilterUpgradeFailedEvent|vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed

Since 6.0 

com.vmware.vc.iofilter.FilterUpgradeSuccessEvent

info

VC

com.vmware.vc.iofilter.FilterUpgradeSuccessEvent|vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has succeeded

Since 6.0

com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent

error

VC

com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} registration has failed. Reason : {fault.msg}.

Since 6.0 

com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEvent

info

VC

com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully registered

Since 6.0 

com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent

error

VC

com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent|Failed to unregister vSphere APIs for I/O Filters (VAIO) vendor provider {host.name}. Reason : {fault.msg}.

Since 6.0 

com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEvent

info

VC

com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully unregistered

Since 6.0

com.vmware.vc.sms.ObjectTypeAlarmClearedEvent

info

VC

com.vmware.vc.sms.ObjectTypeAlarmClearedEvent|Storage provider [{providerName}] cleared a Storage Alarm of type 'Object' on {eventSubjectId} : {msgTxt}

Since 6.0 

com.vmware.vc.sms.ObjectTypeAlarmErrorEvent

error

VC

com.vmware.vc.sms.ObjectTypeAlarmErrorEvent|Storage provider [{providerName}] raised an alert type 'Object' on {eventSubjectId} : {msgTxt}

Since 6.0 

com.vmware.vc.sms.ObjectTypeAlarmWarningEvent

warning

VC

com.vmware.vc.sms.ObjectTypeAlarmWarningEvent|Storage provider [{providerName}] raised a warning of type 'Object' on {eventSubjectId} : {msgTxt}

Since 6.0 

com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent

error

VC

com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent|Certificate for storage provider {providerName} will expire very shortly. Expiration date : {expiryDate}

Since 6.0 

com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent

warning

VC

com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent|Certificate for storage provider {providerName} will expire soon. Expiration date : {expiryDate}

Since 6.0 

com.vmware.vc.sms.VasaProviderCertificateValidEvent

info

VC

com.vmware.vc.sms.VasaProviderCertificateValidEvent|Certificate for storage provider {providerName} is valid

Since 6.0 

com.vmware.vc.sms.VasaProviderConnectedEvent

info

VC

com.vmware.vc.sms.VasaProviderConnectedEvent|Storage provider {providerName} is connected

Since 6.0 

com.vmware.vc.sms.VasaProviderDisconnectedEvent

error

VC

com.vmware.vc.sms.VasaProviderDisconnectedEvent|Storage provider {providerName} is disconnected

Since 6.0 

com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure

error

VC

com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure|Refreshing CA certificates and CRLs failed for VASA providers with url : {providerUrls}

Since 6.0 

com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccess

info

VC

com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccess|Refreshing CA certificates and CRLs succeeded for all registered VASA providers.

Since 6.0 

com.vmware.vc.spbm.ServiceErrorEvent

error

VC

com.vmware.vc.spbm.ServiceErrorEvent|Configuring storage policy failed for VM {entityName}. Verify that SPBM service is healthy. Fault Reason : {errorMessage}

Since 6.0 

com.vmware.vc.vm.DstVmMigratedEvent

info

VC

com.vmware.vc.vm.DstVmMigratedEvent|Virtual machine {vm.name} {newMoRef} in {computeResource.name} in {datacenter.name} was migrated from {oldMoRef}

Since 6.0 

com.vmware.vc.vm.PowerOnAfterCloneErrorEvent

error

VC

com.vmware.vc.vm.PowerOnAfterCloneErrorEvent|Virtual machine {vm.name} failed to power on after cloning on host {host.name} in datacenter {datacenter.name}

Since 6.0 

com.vmware.vc.vm.SrcVmMigratedEvent

info

VC

com.vmware.vc.vm.SrcVmMigratedEvent|Virtual machine {vm.name} {oldMoRef} in {computeResource.name} in {datacenter.name} was migrated to {newMoRef}

Since 6.0 

com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent

error

VC

com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent|Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is not satisfied

Since 6.0 

com.vmware.vc.vm.VmAdapterResvSatisfiedEvent

info

VC

com.vmware.vc.vm.VmAdapterResvSatisfiedEvent|Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is satisfied

Since 6.0 

com.vmware.vc.vsan.ChecksumDisabledHostFoundEvent

error

VC

com.vmware.vc.vsan.ChecksumDisabledHostFoundEvent|Found a checksum disabled host {host.name} in a checksum protected vCenter Server cluster {computeResource.name} in datacenter {datacenter.name}

Since 6.0 

com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent

error

VC

com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent|Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum

Since 6.0

com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent

error

VC

com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent|Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg}

Since 6.0 

com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent

error

VC

com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent|Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg}

Since 6.0

com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent

warning

VC

com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent|Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation

Since 6.0 

DrsSoftRuleViolationEvent

info

VC

{vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host soft affinity rule

Since 6.0 

esx.audit.account.locked

warning

VC

esx.audit.account.locked|Remote access for ESXi local user account '{1}' has been locked for {2} seconds after {3} failed login attempts.

Since 6.0 

esx.audit.account.loginfailures

warning

VC

esx.audit.account.loginfailures|Multiple remote login failures detected for ESXi local user account '{1}'.

Since 6.0 

esx.audit.lockdownmode.exceptions.changed

info

VC

esx.audit.lockdownmode.exceptions.changed|List of lockdown exception users has been changed.

Since 6.0

esx.audit.vsan.net.vnic.added

info

VC

esx.audit.vsan.net.vnic.added|Virtual SAN virtual NIC has been added.

Since 6.0

esx.clear.coredump.configured2

info

VC

esx.clear.coredump.configured2|At least one coredump target has been configured. Host core dumps will be saved.

Since 6.0

esx.clear.vob.vsan.pdl.online

info

ESXHostStorage

Virtual SAN device {1} has come online.

Since 6.0 

esx.clear.vsan.vsan.network.available

info

ESXHostStorage

Virtual SAN now has a usable network configuration. Earlier reported connectivity problems, if any, can now be ignored because they are resolved.

Since 6.0 

esx.clear.vsan.vsan.vmknic.ready

info

ESXHostStorage

vmknic {1} now has an IP address. Earlier reported connectivity problems, if any, can now be ignored because they are resolved.

Since 6.0 

esx.problem.coredump.capacity.insufficient

warning

VC

esx.problem.coredump.capacity.insufficient|The storage capacity of the coredump targets is insufficient to capture a complete coredump. Recommended coredump capacity is {1} MiB.

Since 6.0 

esx.problem.coredump.copyspace

warning

VC

esx.problem.coredump.copyspace|The free space available in default coredump copy location is insufficient to copy new coredumps. Recommended free space is {1} MiB.

Since 6.0 

esx.problem.coredump.extraction.failed.nospace

warning

VC

esx.problem.coredump.extraction.failed.nospace|The given partition has insufficient amount of free space to extract the coredump. At least {1} MiB is required.

Since 6.0

esx.problem.coredump.unconfigured2

warning

VC

esx.problem.coredump.unconfigured2|No coredump target has been configured. Host core dumps cannot be saved.

Since 6.0 

esx.problem.scratch.partition.size.small

warning

VC

esx.problem.scratch.partition.size.small|Size of scratch partition {1} is too small. Recommended scratch partition size is {2} MiB.

Since 6.0 

esx.problem.scratch.partition.unconfigured

warning

VC

esx.problem.scratch.partition.unconfigured|No scratch partition has been configured. Recommended scratch partition size is {} MiB.

Since 6.0 

esx.problem.scsi.scsipath.badpath.unreachpe

error

VC

esx.problem.scsi.scsipath.badpath.unreachpe|Sanity check failed for path {1}. The path is to a vVol PE, but it goes out of adapter {2} which is not PE capable. Path dropped.

Since 6.0 

esx.problem.scsi.scsipath.badpath.unsafepe

error

VC

esx.problem.scsi.scsipath.badpath.unsafepe|Sanity check failed for path {1}. Could not safely determine if the path is to a vVol PE. Path dropped.

Since 6.0 

esx.problem.vmfs.ats.incompatibility.detected

error

VC

esx.problem.vmfs.ats.incompatibility.detected|Multi-extent ATS-only volume '{1}' ({2}) is unable to use ATS because HardwareAcceleratedLocking is disabled on this host: potential for introducing filesystem corruption. Volume should not be used from other hosts.

Since 6.0 

esx.problem.vmfs.lockmode.inconsistency.detected

error

VC

esx.problem.vmfs.lockmode.inconsistency.detected|Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Since 6.0 

esx.problem.vmfs.spanned.lockmode.inconsistency.detected

error

VC

esx.problem.vmfs.spanned.lockmode.inconsistency.detected|Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume.

Since 6.0

esx.problem.vmfs.spanstate.incompatibility.detected

error

VC

esx.problem.vmfs.spanstate.incompatibility.detected|Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume.

Since 6.0 

esx.problem.vob.vsan.lsom.componentthreshold

warning

ESXHostStorage

Virtual SAN Node: {1} reached threshold of {2} %% opened components ({3} of {4}).

Since 6.0 

esx.problem.vob.vsan.lsom.diskerror

error

ESXHostStorage

Virtual SAN device {1} is under permanent failure.

Since 6.0 

esx.problem.vob.vsan.lsom.diskgrouplimit

error

ESXHostStorage

Failed to create new disk group {1}. The system has reached the maximum amount of disk groups allowed {2} for the current amount of memory {3}. Add more memory.

Since 6.0 

esx.problem.vob.vsan.lsom.disklimit

error

ESXHostStorage

Failed to add disk {1} to disk group. The system has reached the maximum amount of disks allowed {2} for the current amount of memory {3} GB. Add more memory.

Since 6.0 

esx.problem.vob.vsan.pdl.offline

error

ESXHostStorage

Virtual SAN device {1} has gone offline.

Since 6.0

esx.problem.vsan.lsom.congestionthreshold

info

ESXHostStorage

LSOM {1} Congestion State: {2}. Congestion Threshold: {3} Current Congestion: {4}.

Since 6.0

hbr.primary.RpoOkForServerEvent

info

VC

hbr.primary.RpoOkForServerEvent|VR Server is compatible with the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 6.0

hbr.primary.RpoTooLowForServerEvent

warning

VC

hbr.primary.RpoTooLowForServerEvent|VR Server does not support the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 6.0

NetCompressionNotOkForServerEvent

error

VC

NetCompressionNotOkForServerEvent|event.NetCompressionNotOkForServerEvent.fullFormat

Since 6.0 

NetCompressionOkForServerEvent

info

VC

NetCompressionOkForServerEvent|event.NetCompressionOkForServerEvent.fullFormat

Since 6.0 

vim.event.SubscriptionLicenseExpiredEvent

warning

VC

vim.event.SubscriptionLicenseExpiredEvent|The time-limited license on host {host.name} has expired. To comply with the EULA, renew the license at http://my.vmware.com

Since 6.0 

VmGuestOSCrashedEvent

error

VC

{vm.name} on {host.name}: Guest operating system has crashed.

Since 6.0 

AccountCreatedEvent

info

VC

An account was created on host {host.name}

Since 2.0

AccountRemovedEvent

info

VC

Account {account} was removed on host {host.name}

Since 2.0

AccountUpdatedEvent

info

VC

An account was updated on host {host.name}

Since 2.0

ad.event.ImportCertEvent

info

VC

ad.event.ImportCertEvent| Import certificate succeeded.

Since 5.0

ad.event.ImportCertFailedEvent

error

VC

ad.event.ImportCertFailedEvent| Import certificate failed.

Since 5.0

ad.event.JoinDomainEvent

info

VC

ad.event.JoinDomainEvent| Join domain succeeded.

Since 5.0

ad.event.JoinDomainFailedEvent

error

VC

ad.event.JoinDomainFailedEvent| Join domain failed.

Since 5.0

ad.event.LeaveDomainEvent

info

VC

ad.event.LeaveDomainEvent| Leave domain succeeded.

Since 5.0

ad.event.LeaveDomainFailedEvent

error

VC

ad.event.LeaveDomainFailedEvent| Leave domain failed.

Since 5.0

AdminPasswordNotChangedEvent

info

VC

The default password for the root user on the host {host.name} has not been changed

Since 2.5

AlarmAcknowledgedEvent

info

VC

Acknowledged alarm '{alarm.name}' on {entity.name}

Since 5.0

AlarmActionTriggeredEvent

info

VC

Alarm '{alarm.name}' on {entity.name} triggered an action

Since 2.0

AlarmClearedEvent

info

VC

Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status}

Since 5.0

AlarmCreatedEvent

info

VC

Created alarm '{alarm.name}' on {entity.name}

Since 2.0

AlarmEmailCompletedEvent

info

VC

Alarm '{alarm.name}' on {entity.name} sent email to {to}

Since 2.0

AlarmEmailFailedEvent

error

VC

Alarm '{alarm.name}' on {entity.name} cannot send email to {to}

Since 2.0

AlarmReconfiguredEvent

info

VC

Reconfigured alarm '{alarm.name}' on {entity.name}

Since 2.0

AlarmRemovedEvent

info

VC

Removed alarm '{alarm.name}' on {entity.name}

Since 2.0

AlarmScriptCompleteEvent

info

VC

Alarm '{alarm.name}' on {entity.name} ran script {script}

Since 2.0

AlarmScriptFailedEvent

error

VC

Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg}

Since 2.0

AlarmSnmpCompletedEvent

info

VC

Alarm '{alarm.name}' on entity {entity.name} sent SNMP trap

Since 2.0

AlarmSnmpFailedEvent

error

VC

Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg}

Since 2.0

AlarmStatusChangedEvent

info

VC

Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status}

Since 2.0

AllVirtualMachinesLicensedEvent

info

VC

All running virtual machines are licensed

Since 2.5

AlreadyAuthenticatedSessionEvent

info

VC

User cannot logon since the user is already logged on

Since 2.0

BadUsernameSessionEvent

warning

VC

Cannot login {userName}@{ipAddress}

Since 2.0

CanceledHostOperationEvent

info

VC

The operation performed on host {host.name} in {datacenter.name} was canceled

Since 2.0

ChangeOwnerOfFileEvent

info

VC

Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}.

Since 5.1

ChangeOwnerOfFileFailedEvent

error

VC

Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}.

Since 5.1

ClusterComplianceCheckedEvent

info

VC

Checked cluster for compliance

Since 4.0

ClusterCreatedEvent

info

VC

Created cluster {computeResource.name} in {datacenter.name}

Since 2.0

ClusterDestroyedEvent

info

VC

Removed cluster {computeResource.name} in datacenter {datacenter.name}

Since 2.0

ClusterOvercommittedEvent

warning

Cluster

Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name}

Since 2.0

ClusterReconfiguredEvent

info

VC

Reconfigured cluster {computeResource.name} in datacenter {datacenter.name}

Since 2.0

ClusterStatusChangedEvent

info

VC

Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name}

Since 2.0

com.vmware.license.AddLicenseEvent

info

VC

com.vmware.license.AddLicenseEvent| License {licenseKey} added to VirtualCenter

Since 4.0

com.vmware.license.AssignLicenseEvent

info

VC

com.vmware.license.AssignLicenseEvent| License {licenseKey} assigned to asset {entityName}

Since 4.0

com.vmware.license.DLFDownloadFailedEvent

warning

VC

com.vmware.license.DLFDownloadFailedEvent| Failed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason}

Since 4.1

com.vmware.license.LicenseAssignFailedEvent

error

VC

com.vmware.license.LicenseAssignFailedEvent| License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}.

Since 4.0

com.vmware.license.LicenseCapacityExceededEvent

warning

VC

com.vmware.license.LicenseCapacityExceededEvent| The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText})

Since 5.0

com.vmware.license.LicenseExpiryEvent

error

VC

com.vmware.license.LicenseExpiryEvent| Your host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires.

Since 4.0

com.vmware.license.LicenseUserThresholdExceededEvent

warning

VC

com.vmware.license.LicenseUserThresholdExceededEvent| Current license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText})

Since 4.1

com.vmware.license.RemoveLicenseEvent

info

VC

com.vmware.license.RemoveLicenseEvent| License {licenseKey} removed from VirtualCenter

Since 4.0

com.vmware.license.UnassignLicenseEvent

info

VC

com.vmware.license.UnassignLicenseEvent| License unassigned from asset {entityName}

Since 4.0

com.vmware.vc.cim.CIMGroupHealthStateChanged

info

VC

com.vmware.vc.cim.CIMGroupHealthStateChanged| Health of [data.group] changed from [data.oldState] to [data.newState].

Since 4.0

com.vmware.vc.datastore.UpdatedVmFilesEvent

info

VC

com.vmware.vc.datastore.UpdatedVmFilesEvent| Updated VM files on datastore {ds.name} using host {hostName}

Since 4.1

com.vmware.vc.datastore.UpdateVmFilesFailedEvent

error

VC

com.vmware.vc.datastore.UpdateVmFilesFailedEvent| Failed to update VM files on datastore {ds.name} using host {hostName}

Since 4.1

com.vmware.vc.datastore.UpdatingVmFilesEvent

info

VC

com.vmware.vc.datastore.UpdatingVmFilesEvent| Updating VM files on datastore {ds.name} using host {hostName}

Since 4.1

com.vmware.vc.dvs.LacpConfigInconsistentEvent

info

VC

com.vmware.vc.dvs.LacpConfigInconsistentEvent| Single Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled.

Since 5.5

com.vmware.vc.ft.VmAffectedByDasDisabledEvent

warning

VirtualMachine

com.vmware.vc.ft.VmAffectedByDasDisabledEvent| VMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure.

Since 4.1

com.vmware.vc.guestOperations.GuestOperation

info

VC

com.vmware.vc.guestOperations.GuestOperation| Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}.

Since 5.0

com.vmware.vc.guestOperations.GuestOperationAuthFailure

warning

VirtualMachine

com.vmware.vc.guestOperations.GuestOperationAuthFailure| Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}.

Since 5.0

com.vmware.vc.HA.AllHostAddrsPingable

info

VC

com.vmware.vc.HA.AllHostAddrsPingable| All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.AllIsoAddrsPingable

info

VC

com.vmware.vc.HA.AllIsoAddrsPingable| All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent

warning

VirtualMachine

com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent| Lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} was answered by vSphere HA

Since 5.0

com.vmware.vc.HA.AnsweredVmTerminatePDLEvent

warning

VirtualMachine

com.vmware.vc.HA.AnsweredVmTerminatePDLEvent| vSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name}

Since 5.1

com.vmware.vc.HA.AutoStartDisabled

info

VC

com.vmware.vc.HA.AutoStartDisabled| The automatic Virtual Machine Startup/Shutdown feature has been disabled on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with vSphere HA when reacting to a host failure.

Since 5.0

com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore

warning

Cluster

com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore| vSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s)

Since 5.5

com.vmware.vc.HA.ClusterContainsIncompatibleHosts

warning

Cluster

com.vmware.vc.HA.ClusterContainsIncompatibleHosts| vSphere HA Cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts and more recent host versions, which isn't fully supported.

Since 5.0

com.vmware.vc.HA.ClusterFailoverActionCompletedEvent

info

VC

com.vmware.vc.HA.ClusterFailoverActionCompletedEvent| HA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1

com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent

warning

Cluster

com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent| HA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1

com.vmware.vc.HA.DasAgentRunningEvent

info

VC

com.vmware.vc.HA.DasAgentRunningEvent| HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running

Since 4.1

com.vmware.vc.HA.DasFailoverHostFailedEvent

error

Cluster

com.vmware.vc.HA.DasFailoverHostFailedEvent| HA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failed

Since 4.1

com.vmware.vc.HA.DasFailoverHostIsolatedEvent

warning

Cluster

com.vmware.vc.HA.DasFailoverHostIsolatedEvent| Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.DasFailoverHostPartitionedEvent

warning

Cluster

com.vmware.vc.HA.DasFailoverHostPartitionedEvent| Failover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master

Since 5.0

com.vmware.vc.HA.DasFailoverHostUnreachableEvent

warning

Cluster

com.vmware.vc.HA.DasFailoverHostUnreachableEvent| The vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable from vCenter Server

Since 5.0

com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent

error

Cluster

com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent| All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}

Since 4.1

com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent

error

Cluster

com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent| All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}

Since 4.1

com.vmware.vc.HA.DasHostFailedEvent

error

Cluster

com.vmware.vc.HA.DasHostFailedEvent| A possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1

com.vmware.vc.HA.DasHostIsolatedEvent

warning

Cluster

com.vmware.vc.HA.DasHostIsolatedEvent| Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.DasHostMonitoringDisabledEvent

warning

Cluster

com.vmware.vc.HA.DasHostMonitoringDisabledEvent| No virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name}

Since 4.1

com.vmware.vc.HA.DasTotalClusterFailureEvent

error

Cluster

com.vmware.vc.HA.DasTotalClusterFailureEvent| HA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1

com.vmware.vc.HA.FailedRestartAfterIsolationEvent

error

VirtualMachine

com.vmware.vc.HA.FailedRestartAfterIsolationEvent| vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on.

Since 5.0

com.vmware.vc.HA.HeartbeatDatastoreChanged

info

VC

com.vmware.vc.HA.HeartbeatDatastoreChanged| Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.HeartbeatDatastoreNotSufficient

warning

Cluster

com.vmware.vc.HA.HeartbeatDatastoreNotSufficient| The number of heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum}

Since 5.0

com.vmware.vc.HA.HostAgentErrorEvent

warning

Cluster

com.vmware.vc.HA.HostAgentErrorEvent| vSphere HA Agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason}

Since 5.0

com.vmware.vc.HA.HostDasAgentHealthyEvent

info

VC

com.vmware.vc.HA.HostDasAgentHealthyEvent| HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy

Since 4.1

com.vmware.vc.HA.HostDasErrorEvent

warning

Cluster

com.vmware.vc.HA.HostDasErrorEvent| vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason}

Since 5.0

com.vmware.vc.HA.HostDoesNotSupportVsan

error

VC

com.vmware.vc.HA.HostDoesNotSupportVsan| vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature

Since 5.5

com.vmware.vc.HA.HostHasNoIsolationAddrsDefined

warning

Cluster

com.vmware.vc.HA.HostHasNoIsolationAddrsDefined| Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA.

Since 5.0

com.vmware.vc.HA.HostHasNoMountedDatastores

error

Cluster

com.vmware.vc.HA.HostHasNoMountedDatastores| vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores.

Since 5.1

com.vmware.vc.HA.HostHasNoSslThumbprint

error

Cluster

com.vmware.vc.HA.HostHasNoSslThumbprint| vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified.

Since 5.0

com.vmware.vc.HA.HostIncompatibleWithHA

error

Cluster

com.vmware.vc.HA.HostIncompatibleWithHA| The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with HA.

Since 5.0

com.vmware.vc.HA.HostPartitionedFromMasterEvent

warning

Cluster

com.vmware.vc.HA.HostPartitionedFromMasterEvent| Host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.HostStateChangedEvent

info

VC

com.vmware.vc.HA.HostStateChangedEvent| The vSphere HA availability state of the host {host.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState} in {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.HostUnconfiguredWithProtectedVms

warning

Cluster

com.vmware.vc.HA.HostUnconfiguredWithProtectedVms| Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected, but contains {protectedVmCount} protected virtual machine(s)

Since 5.0

com.vmware.vc.HA.HostUnconfigureError

warning

Cluster

com.vmware.vc.HA.HostUnconfigureError| There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, connect the host to a vCenter Server of version 5.0 or later.

Since 5.0

com.vmware.vc.HA.InvalidMaster

warning

Cluster

com.vmware.vc.HA.InvalidMaster| vSphere HA Agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised.

Since 5.0

com.vmware.vc.HA.NotAllHostAddrsPingable

warning

Cluster

com.vmware.vc.HA.NotAllHostAddrsPingable| The vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus vSphere HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs}

Since 5.0

com.vmware.vc.HA.StartFTSecondaryFailedEvent

info

VirtualMachine

com.vmware.vc.HA.StartFTSecondaryFailedEvent| vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason : {fault.msg}. vSphere HA agent will retry until it times out.

Since 5.0

com.vmware.vc.HA.StartFTSecondarySucceededEvent

info

VC

com.vmware.vc.HA.StartFTSecondarySucceededEvent| vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}.

Since 5.0

com.vmware.vc.HA.UserHeartbeatDatastoreRemoved

warning

Cluster

com.vmware.vc.HA.UserHeartbeatDatastoreRemoved| Datastore {dsName} is removed from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory

Since 5.0

com.vmware.vc.HA.VcCannotFindMasterEvent

warning

Cluster

com.vmware.vc.HA.VcCannotFindMasterEvent| vCenter Server is unable to find a master vSphere HA Agent in {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.VcConnectedToMasterEvent

warning

VC

com.vmware.vc.HA.VcConnectedToMasterEvent| vCenter Server is connected to the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.VcDisconnectedFromMasterEvent

warning

VC

com.vmware.vc.HA.VcDisconnectedFromMasterEvent| vCenter Server is disconnected from the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name}

Since 5.0

com.vmware.vc.HA.VMIsHADisabledIsolationEvent

info

VC

com.vmware.vc.HA.VMIsHADisabledIsolationEvent| vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled

Since 5.1

com.vmware.vc.HA.VMIsHADisabledRestartEvent

info

VC

com.vmware.vc.HA.VMIsHADisabledRestartEvent| vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled

Since 5.1

com.vmware.vc.HA.VmNotProtectedEvent

warning

VirtualMachine

com.vmware.vc.HA.VmNotProtectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and vSphere HA may not attempt to restart it after a failure.

Since 5.0

com.vmware.vc.HA.VmProtectedEvent

info

VC

com.vmware.vc.HA.VmProtectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and vSphere HA will attempt to restart it after a failure.

Since 5.0

com.vmware.vc.ha.VmRestartedByHAEvent

warning

VirtualMachine

com.vmware.vc.ha.VmRestartedByHAEvent| Virtual machine {vm.name} was restarted on host {host.name} in cluster {computeResource.name} by vSphere HA

Since 5.0

com.vmware.vc.HA.VmUnprotectedEvent

warning

VirtualMachine

com.vmware.vc.HA.VmUnprotectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected.

Since 5.0

com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull

info

VC

com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull| vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space

Since 5.1

com.vmware.vc.host.AutoStartReconfigureFailedEvent

error

VC

com.vmware.vc.host.AutoStartReconfigureFailedEvent| Reconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed

Since 5.0

com.vmware.vc.host.clear.vFlashResource.inaccessible

info

VC

com.vmware.vc.host.clear.vFlashResource.inaccessible| Host's vSphere Flash resource is restored to be accessible.

Since 5.5

com.vmware.vc.host.clear.vFlashResource.reachthreshold

info

VC

com.vmware.vc.host.clear.vFlashResource.reachthreshold| Host's vSphere Flash resource usage dropped below {1}%.

Since 5.5

com.vmware.vc.host.problem.vFlashResource.inaccessible

warning

VC

com.vmware.vc.host.problem.vFlashResource.inaccessible| Host's vSphere Flash resource is inaccessible.

Since 5.5

com.vmware.vc.host.problem.vFlashResource.reachthreshold

warning

VC

com.vmware.vc.host.problem.vFlashResource.reachthreshold| Host's vSphere Flash resource usage is more than {1}%.

Since 5.5

com.vmware.vc.host.vFlash.defaultModuleChangedEvent

info

VC

com.vmware.vc.host.vFlash.defaultModuleChangedEvent| Any new vFlash cache configuration request will use {vFlashModule} as default vSphere Flash module. All existing vFlash cache configurations remain unchanged.

Since 5.5

com.vmware.vc.host.vFlash.modulesLoadedEvent

info

VC

com.vmware.vc.host.vFlash.modulesLoadedEvent| vSphere Flash modules are loaded or reloaded on the host

Since 5.5

com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent

error

ESXHostStorage

com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent| {1} on disk '{2}' failed due to {3}

Since 5.5

com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent

info

VC

com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent| vSphere Flash resource capacity is extended

Since 5.5

com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent

info

VC

com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent| vSphere Flash resource is configured on the host

Since 5.5

com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent

info

VC

com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent| vSphere Flash resource is removed from the host

Since 5.5

com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent

info

VC

com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent| Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}

Since 4.1

com.vmware.vc.npt.VmAdapterExitedPassthroughEvent

info

VC

com.vmware.vc.npt.VmAdapterExitedPassthroughEvent| Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}

Since 4.1

com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent| Failed to clone state for the entity '{entityName}' on extension {extensionName}

Since 5.0

com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent| Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName}

Since 5.0

com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent| Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description}

Since 5.0

com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent| Failed to register entity '{entityName}' on extension {extensionName}

Since 5.0

com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent| Failed to unregister entities on extension {extensionName}

Since 5.0

com.vmware.vc.ovfconsumers.ValidateOstErrorEvent

warning

VC

com.vmware.vc.ovfconsumers.ValidateOstErrorEvent| Failed to validate OVF descriptor on extension {extensionName}

Since 5.0

com.vmware.vc.profile.AnswerFileExportedEvent

info

VC

com.vmware.vc.profile.AnswerFileExportedEvent| Answer file for host {host.name} in datacenter {datacenter.name} has been exported

Since 5.0

com.vmware.vc.profile.AnswerFileUpdatedEvent

info

VC

com.vmware.vc.profile.AnswerFileUpdatedEvent| Answer file for host {host.name} in datacenter {datacenter.name} has been updated

Since 5.0

com.vmware.vc.rp.ResourcePoolRenamedEvent

info

VC

com.vmware.vc.rp.ResourcePoolRenamedEvent| Resource pool '{oldName}' has been renamed to '{newName}'

Since 5.1

com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent

info

VC

com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent| The datastore maintenance mode operation has been canceled

Since 5.0

com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent

info

VC

com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent| Configured storage DRS on datastore cluster {objectName}

Since 5.0

com.vmware.vc.sdrs.ConsistencyGroupViolationEvent

warning

VC

com.vmware.vc.sdrs.ConsistencyGroupViolationEvent| Datastore cluster {objectName} has datastores that belong to different SRM Consistency Groups

Since 5.1

com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent

info

VC

com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent| Datastore {ds.name} has entered maintenance mode

Since 5.0

com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent

info

VC

com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent| Datastore {ds.name} is entering maintenance mode

Since 5.0

com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent

info

VC

com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent| Datastore {ds.name} has exited maintenance mode

Since 5.0

com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent

warning

VC

com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent| Datastore cluster {objectName} has one or more datastores: {datastore} shared across multiple datacenters

Since 5.0

com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent

error

VC

com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent| Datastore {ds.name} encountered errors while entering maintenance mode

Since 5.0

com.vmware.vc.sdrs.StorageDrsDisabledEvent

info

VC

com.vmware.vc.sdrs.StorageDrsDisabledEvent| Disabled storage DRS on datastore cluster {objectName}

Since 5.0

com.vmware.vc.sdrs.StorageDrsEnabledEvent

info

VC

com.vmware.vc.sdrs.StorageDrsEnabledEvent| Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior}

Since 5.0

com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent

error

VC

com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent| Storage DRS invocation failed on datastore cluster {objectName}

Since 5.0

com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent

info

VC

com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent| A new storage DRS recommendation has been generated on datastore cluster {objectName}

Since 5.0

com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent

warning

VC

com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent| Datastore cluster {objectName} is connected to one or more hosts: {host} that do not support storage DRS

Since 5.0

com.vmware.vc.sdrs.StorageDrsRecommendationApplied

info

VC

com.vmware.vc.sdrs.StorageDrsRecommendationApplied| All pending recommendations on datastore cluster {objectName} were applied

Since 5.5

com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent

info

VC

com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent| Storage DRS migrated disks of VM {vm.name} to datastore {ds.name}

Since 5.0

com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent

info

VC

com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent| Storage DRS placed disks of VM {vm.name} on datastore {ds.name}

Since 5.0

com.vmware.vc.sdrs.StoragePodCreatedEvent

info

VC

com.vmware.vc.sdrs.StoragePodCreatedEvent| Created datastore cluster {objectName}

Since 5.0

com.vmware.vc.sdrs.StoragePodDestroyedEvent

info

VC

com.vmware.vc.sdrs.StoragePodDestroyedEvent| Removed datastore cluster {objectName}

Since 5.0

com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent

warning

VC

com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent| SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration.

Since 5.0

com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent

info

VC

com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent| Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}.

Since 5.5

com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent

error

VirtualMachine

com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent| Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}.

Since 5.5

com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent

warning

VC

com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent| Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}.

Since 5.5

com.vmware.vc.sms.LunCapabilityInitEvent

info

VC

com.vmware.vc.sms.LunCapabilityInitEvent| Storage provider system default capability event

Since 5.0

com.vmware.vc.sms.LunCapabilityMetEvent

info

VC

com.vmware.vc.sms.LunCapabilityMetEvent| Storage provider system capability requirements met

Since 5.0 Reference

com.vmware.vc.sms.LunCapabilityNotMetEvent

info

VC

com.vmware.vc.sms.LunCapabilityNotMetEvent| Storage provider system capability requirements not met

Since 5.0 Reference

com.vmware.vc.sms.provider.health.event

info

VC

com.vmware.vc.sms.provider.health.event| {msgTxt}

Since 5.0 Reference

com.vmware.vc.sms.provider.system.event

info

VC

com.vmware.vc.sms.provider.system.event| {msgTxt}

Since 5.0 Reference

com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent

info

VC

com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent| Storage provider thin provisioning capacity threshold reached

Since 5.0 Reference

com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent

info

VC

com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent| Storage provider thin provisioning capacity threshold crossed

Since 5.0 Reference

com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent

info

VC

com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent| Storage provider thin provisioning default capacity event

Since 5.0 Reference

com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent

info

VC

com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}.

Since 5.5 Reference

com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent

error

VC

com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}.

Since 5.5 Reference

com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent

warning

VC

com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}.

Since 5.5 Reference

com.vmware.vc.spbm.ProfileAssociationFailedEvent

error

VC

com.vmware.vc.spbm.ProfileAssociationFailedEvent| Profile association/dissociation failed for {entityName}

Since 5.5 Reference

com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent

info

VC

com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent| Quick stats on {host.name} in {computeResource.name} in {datacenter.name} are not up-to-date

Since 5.0 Reference

com.vmware.vc.VCHealthStateChangedEvent

info

VC

com.vmware.vc.VCHealthStateChangedEvent| vCenter Service overall health changed from '{oldState}' to '{newState}'

Since 4.1 Reference
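
The values in the ID column are what vCenter records as an event's eventTypeId, so any entry in this table can also be looked up outside the management pack. Below is a minimal pyVmomi sketch, assuming a reachable vCenter and valid credentials (the host name, user, and password are placeholders, and SSL setup is omitted); for the older typed IDs later in this table, such as DatastoreRenamedEvent, the same filter matches on the event's type name.

    # Minimal sketch: pull recent occurrences of one event ID via the EventManager.
    # Assumptions: pyVmomi is installed; host/user/pwd below are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local',
                      pwd='secret')  # placeholder credentials
    try:
        spec = vim.event.EventFilterSpec(
            eventTypeId=['com.vmware.vc.VCHealthStateChangedEvent'])
        for event in si.content.eventManager.QueryEvents(spec):
            print(event.createdTime, event.fullFormattedMessage)
    finally:
        Disconnect(si)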

com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent

info

VC

com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent| HA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled

Since 4.1 Reference

com.vmware.vc.vcp.FtFailoverEvent

info

VC

com.vmware.vc.vcp.FtFailoverEvent| FT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure

Since 4.1 Reference

com.vmware.vc.vcp.FtFailoverFailedEvent

error

VirtualMachine

com.vmware.vc.vcp.FtFailoverFailedEvent| FT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary

Since 4.1 Reference

com.vmware.vc.vcp.FtSecondaryRestartEvent

info

VC

com.vmware.vc.vcp.FtSecondaryRestartEvent| HA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure

Since 4.1 Reference

com.vmware.vc.vcp.FtSecondaryRestartFailedEvent

error

VirtualMachine

com.vmware.vc.vcp.FtSecondaryRestartFailedEvent| FT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart

Since 4.1 Reference

com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent

info

VC

com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent| HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long

Since 4.1 Reference

com.vmware.vc.vcp.TestEndEvent

info

VC

com.vmware.vc.vcp.TestEndEvent| VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1 Reference

com.vmware.vc.vcp.TestStartEvent

info

VC

com.vmware.vc.vcp.TestStartEvent| VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1 Reference

com.vmware.vc.vcp.VcpNoActionEvent

info

VC

com.vmware.vc.vcp.VcpNoActionEvent| HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting

Since 4.1 Reference

com.vmware.vc.vcp.VmDatastoreFailedEvent

error

VirtualMachine

com.vmware.vc.vcp.VmDatastoreFailedEvent| Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore}

Since 4.1 Reference

com.vmware.vc.vcp.VmNetworkFailedEvent

error

VirtualMachine

com.vmware.vc.vcp.VmNetworkFailedEvent| Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network}

Since 4.1 Reference

com.vmware.vc.vcp.VmPowerOffHangEvent

error

VirtualMachine

com.vmware.vc.vcp.VmPowerOffHangEvent| HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying

Since 4.1 Reference

com.vmware.vc.vcp.VmRestartEvent

info

VC

com.vmware.vc.vcp.VmRestartEvent| HA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 4.1 Reference

com.vmware.vc.vcp.VmRestartFailedEvent

error

VirtualMachine

com.vmware.vc.vcp.VmRestartFailedEvent| Virtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart

Since 4.1 Reference

com.vmware.vc.vcp.VmWaitForCandidateHostEvent

error

VirtualMachine

com.vmware.vc.vcp.VmWaitForCandidateHostEvent| HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying

Since 4.1 Reference

com.vmware.vc.vm.VmRegisterFailedEvent

error

VC

com.vmware.vc.vm.VmRegisterFailedEvent| Virtual machine {vm.name} registration on {host.name} in datacenter {datacenter.name} failed

Since 5.0 Reference

com.vmware.vc.vm.VmStateFailedToRevertToSnapshot

error

VirtualMachine

com.vmware.vc.vm.VmStateFailedToRevertToSnapshot| Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId}

Since 5.0 Reference

com.vmware.vc.vm.VmStateRevertedToSnapshot

info

VC

com.vmware.vc.vm.VmStateRevertedToSnapshot| The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId}

Since 5.0 Reference

com.vmware.vc.vmam.AppMonitoringNotSupported

warning

VC

com.vmware.vc.vmam.AppMonitoringNotSupported| Application monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 4.1 Reference

com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent

warning

VC

com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent| Application heartbeat status changed to {status} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 4.1 Reference

com.vmware.vc.vmam.VmAppHealthStateChangedEvent

warning

VirtualMachine

com.vmware.vc.vmam.VmAppHealthStateChangedEvent| vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 5.5 Reference

com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent

warning

VirtualMachine

com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent| Application heartbeat failed for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 4.1 Reference

com.vmware.vc.VmCloneFailedInvalidDestinationEvent

error

VC

com.vmware.vc.VmCloneFailedInvalidDestinationEvent| Cannot clone {vm.name} as {destVmName} to invalid or non-existent destination with ID {invalidMoRef}: {fault}

Since 5.0 Reference

com.vmware.vc.VmCloneToResourcePoolFailedEvent

error

VC

com.vmware.vc.VmCloneToResourcePoolFailedEvent| Cannot clone {vm.name} as {destVmName} to resource pool {destResourcePool}: {fault}

Since 5.0 Reference

com.vmware.vc.VmDiskConsolidatedEvent

info

VC

com.vmware.vc.VmDiskConsolidatedEvent| Virtual machine {vm.name} disks consolidated successfully on {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

com.vmware.vc.VmDiskConsolidationNeeded

info

VC

com.vmware.vc.VmDiskConsolidationNeeded| Virtual machine {vm.name} disks consolidation is needed on {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

com.vmware.vc.VmDiskConsolidationNoLongerNeeded

info

VC

com.vmware.vc.VmDiskConsolidationNoLongerNeeded| Virtual machine {vm.name} disks consolidation is no longer needed on {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.1 Reference

com.vmware.vc.VmDiskFailedToConsolidateEvent

error

VirtualMachine

com.vmware.vc.VmDiskFailedToConsolidateEvent| Virtual machine {vm.name} disks consolidation failed on {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

com.vmware.vc.vsan.DatastoreNoCapacityEvent

error

VC

com.vmware.vc.vsan.DatastoreNoCapacityEvent| VSAN datastore {datastoreName} in cluster {computeResource.name} in datacenter {datacenter.name} does not have capacity

Since 5.5 Reference

com.vmware.vc.vsan.HostCommunicationErrorEvent

error

ESXHost

com.vmware.vc.vsan.HostCommunicationErrorEvent| event.com.vmware.vc.vsan.HostCommunicationErrorEvent.fullFormat

Since 5.5 Reference

com.vmware.vc.vsan.HostNotInClusterEvent

error

VC

com.vmware.vc.vsan.HostNotInClusterEvent| {host.name} with the VSAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name}

Since 5.5 Reference

com.vmware.vc.vsan.HostNotInVsanClusterEvent

error

VC

com.vmware.vc.vsan.HostNotInVsanClusterEvent| {host.name} is in a VSAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have VSAN service enabled

Since 5.5 Reference

com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent

error

VC

com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent| Vendor provider {host.name} deregistration failed

Since 5.5 Reference

com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent

info

VC

com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent| Vendor provider {host.name} deregistration succeeded

Since 5.5 Reference

com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent

error

VC

com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent| Vendor provider {host.name} registration failed

Since 5.5 Reference

com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent

info

VC

com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent| Vendor provider {host.name} registration succeeded

Since 5.5 Reference

com.vmware.vc.vsan.NetworkMisConfiguredEvent

error

ESXHostNetwork

com.vmware.vc.vsan.NetworkMisConfiguredEvent| VSAN network is not configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}

Since 5.5 Reference

com.vmware.vc.vsan.RogueHostFoundEvent

error

VC

com.vmware.vc.vsan.RogueHostFoundEvent| Found another host participating in the VSAN service in cluster {computeResource.name} in datacenter {datacenter.name} which is not a member of this host's vCenter cluster

Since 5.5 Reference

com.vmware.vim.eam.agency.create

info

VC

com.vmware.vim.eam.agency.create| {agencyName} created by {ownerName}

Since 5.0 Reference

com.vmware.vim.eam.agency.destroyed

info

VC

com.vmware.vim.eam.agency.destroyed| {agencyName} removed from the vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.agency.goalstate

info

VC

com.vmware.vim.eam.agency.goalstate| {agencyName} changed goal state from {oldGoalState} to {newGoalState}

Since 5.0 Reference

com.vmware.vim.eam.agency.statusChanged

info

VC

com.vmware.vim.eam.agency.statusChanged| Agency status changed from {oldStatus} to {newStatus}

Since 5.1 Reference

com.vmware.vim.eam.agency.updated

info

VC

com.vmware.vim.eam.agency.updated| Configuration updated {agencyName}

Since 5.0 Reference

com.vmware.vim.eam.agent.created

info

VC

com.vmware.vim.eam.agent.created| Agent added to host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.destroyed

info

VC

com.vmware.vim.eam.agent.destroyed| Agent removed from host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.destroyedNoHost

info

VC

com.vmware.vim.eam.agent.destroyedNoHost| Agent removed from host ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn

info

VC

com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn| Agent VM {vm.name} has been powered on. Mark agent as available to proceed agent workflow ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning

info

VC

com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning| Agent VM {vm.name} has been provisioned. Mark agent as available to proceed agent workflow ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.statusChanged

info

VC

com.vmware.vim.eam.agent.statusChanged| Agent status changed from {oldStatus} to {newStatus}

Since 5.1 Reference

com.vmware.vim.eam.agent.task.deleteVm

info

VC

com.vmware.vim.eam.agent.task.deleteVm| Agent VM {vmName} is deleted on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.task.deployVm

info

VC

com.vmware.vim.eam.agent.task.deployVm| Agent VM {vm.name} is provisioned on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.task.powerOffVm

info

VC

com.vmware.vim.eam.agent.task.powerOffVm| Agent VM {vm.name} powered off, on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.task.powerOnVm

info

VC

com.vmware.vim.eam.agent.task.powerOnVm| Agent VM {vm.name} powered on, on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.task.vibInstalled

info

VC

com.vmware.vim.eam.agent.task.vibInstalled| Agent installed VIB {vib} on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.agent.task.vibUninstalled

info

VC

com.vmware.vim.eam.agent.task.vibUninstalled| Agent uninstalled VIB {vib} on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.cannotAccessAgentOVF

warning

VC

com.vmware.vim.eam.issue.cannotAccessAgentOVF| Unable to access agent OVF package at {url} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.cannotAccessAgentVib

warning

VC

com.vmware.vim.eam.issue.cannotAccessAgentVib| Unable to access agent VIB module at {url} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.hostInMaintenanceMode

warning

VC

com.vmware.vim.eam.issue.hostInMaintenanceMode| Agent cannot complete an operation since the host {host.name} is in maintenance mode ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.hostInStandbyMode

warning

VC

com.vmware.vim.eam.issue.hostInStandbyMode| Agent cannot complete an operation since the host {host.name} is in standby mode ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.hostPoweredOff

warning

VC

com.vmware.vim.eam.issue.hostPoweredOff| Agent cannot complete an operation since the host {host.name} is powered off ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.incompatibleHostVersion

warning

VC

com.vmware.vim.eam.issue.incompatibleHostVersion| Agent is not deployed due to incompatible host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.insufficientIpAddresses

warning

VC

com.vmware.vim.eam.issue.insufficientIpAddresses| Insufficient IP addresses in IP pool in agent's VM network ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.insufficientResources

warning

VC

com.vmware.vim.eam.issue.insufficientResources| Agent cannot be provisioned due to insufficient resources on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.insufficientSpace

warning

VC

com.vmware.vim.eam.issue.insufficientSpace| Agent on {host.name} cannot be provisioned due to insufficient space on datastore ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.missingAgentIpPool

warning

VC

com.vmware.vim.eam.issue.missingAgentIpPool| No IP pool in agent's VM network ({agencyname})

Since 5.0 Reference

com.vmware.vim.eam.issue.missingDvFilterSwitch

warning

VC

com.vmware.vim.eam.issue.missingDvFilterSwitch| dvFilter switch is not configured on host {host.name} ({agencyname})

Since 5.0 Reference

com.vmware.vim.eam.issue.noAgentVmDatastore

warning

VC

com.vmware.vim.eam.issue.noAgentVmDatastore| No agent datastore configuration on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.noAgentVmNetwork

warning

VC

com.vmware.vim.eam.issue.noAgentVmNetwork| No agent network configuration on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.noCustomAgentVmDatastore

error

VC

com.vmware.vim.eam.issue.noCustomAgentVmDatastore| Agent datastore(s) {customAgentVmDatastoreName} not available on host {host.name} ({agencyName})

Since 5.5 Reference

com.vmware.vim.eam.issue.noCustomAgentVmNetwork

error

VC

com.vmware.vim.eam.issue.noCustomAgentVmNetwork| Agent network(s) {customAgentVmNetworkName} not available on host {host.name} ({agencyName})

Since 5.1 Reference

com.vmware.vim.eam.issue.orphandedDvFilterSwitch

warning

VC

com.vmware.vim.eam.issue.orphandedDvFilterSwitch| Unused dvFilter switch on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.orphanedAgency

warning

VC

com.vmware.vim.eam.issue.orphanedAgency| Orphaned agency found. ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.ovfInvalidFormat

warning

VC

com.vmware.vim.eam.issue.ovfInvalidFormat| OVF used to provision agent on host {host.name} has invalid format ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.ovfInvalidProperty

warning

VC

com.vmware.vim.eam.issue.ovfInvalidProperty| OVF environment used to provision agent on host {host.name} has one or more invalid properties ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.resolved

info

VC

com.vmware.vim.eam.issue.resolved| Issue {type} resolved (key {key})

Since 5.1 Reference

com.vmware.vim.eam.issue.unknownAgentVm

warning

VC

com.vmware.vim.eam.issue.unknownAgentVm| Unknown agent VM {vm.name}

Since 5.0 Reference

com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode

warning

VC

com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode| Cannot put host into maintenance mode ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibInvalidFormat

warning

VC

com.vmware.vim.eam.issue.vibInvalidFormat| Invalid format for VIB module at {url} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibNotInstalled

warning

VC

com.vmware.vim.eam.issue.vibNotInstalled| VIB module for agent is not installed on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode

error

VC

com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode| Host must be put into maintenance mode to complete agent VIB installation ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibRequiresHostReboot

error

VC

com.vmware.vim.eam.issue.vibRequiresHostReboot| Host {host.name} must be rebooted to complete agent VIB installation ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibRequiresManualInstallation

error

VC

com.vmware.vim.eam.issue.vibRequiresManualInstallation| VIB {vib} requires manual installation on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vibRequiresManualUninstallation

error

VC

com.vmware.vim.eam.issue.vibRequiresManualUninstallation| VIB {vib} requires manual uninstallation on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmCorrupted

warning

VC

com.vmware.vim.eam.issue.vmCorrupted| Agent VM {vm.name} on host {host.name} is corrupted ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmDeployed

warning

VC

com.vmware.vim.eam.issue.vmDeployed| Agent VM {vm.name} is provisioned on host {host.name} when it should be removed ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmMarkedAsTemplate

warning

VC

com.vmware.vim.eam.issue.vmMarkedAsTemplate| Agent VM {vm.name} on host {host.name} is marked as template ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmNotDeployed

warning

VC

com.vmware.vim.eam.issue.vmNotDeployed| Agent VM is missing on host {host.name} ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmOrphaned

warning

VC

com.vmware.vim.eam.issue.vmOrphaned| Orphaned agent VM {vm.name} on host {host.name} detected ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmPoweredOff

warning

VC

com.vmware.vim.eam.issue.vmPoweredOff| Agent VM {vm.name} on host {host.name} is expected to be powered on ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmPoweredOn

warning

VC

com.vmware.vim.eam.issue.vmPoweredOn| Agent VM {vm.name} on host {host.name} is expected to be powered off ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmSuspended

warning

VC

com.vmware.vim.eam.issue.vmSuspended| Agent VM {vm.name} on host {host.name} is expected to be powered on but is suspended ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmWrongFolder

warning

VC

com.vmware.vim.eam.issue.vmWrongFolder| Agent VM {vm.name} on host {host.name} is in the wrong VM folder ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.issue.vmWrongResourcePool

warning

VC

com.vmware.vim.eam.issue.vmWrongResourcePool| Agent VM {vm.name} on host {host.name} is in the wrong resource pool ({agencyName})

Since 5.0 Reference

com.vmware.vim.eam.login.invalid

warning

VC

com.vmware.vim.eam.login.invalid| Failed login to vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.login.succeeded

info

VC

com.vmware.vim.eam.login.succeeded| Successful login by {user} into vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.logout

info

VC

com.vmware.vim.eam.logout| User {user} logged out of vSphere ESX Agent Manager by logging out of the vCenter server

Since 5.0 Reference

com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted

info

VC

com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted| Scan for unknown agent VMs completed

Since 5.0 Reference

com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated

info

VC

com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated| Scan for unknown agent VMs initiated

Since 5.0 Reference

com.vmware.vim.eam.task.setupDvFilter

info

VC

com.vmware.vim.eam.task.setupDvFilter| DvFilter switch '{switchName}' is set up on host {host.name}

Since 5.0 Reference

com.vmware.vim.eam.task.tearDownDvFilter

info

VC

com.vmware.vim.eam.task.tearDownDvFilter| DvFilter switch '{switchName}' is torn down on host {host.name}

Since 5.0 Reference

com.vmware.vim.eam.unauthorized.access

warning

VC

com.vmware.vim.eam.unauthorized.access| Unauthorized access by {user} in vSphere ESX Agent Manager

Since 5.0 Reference

com.vmware.vim.eam.vum.failedtouploadvib

error

VC

com.vmware.vim.eam.vum.failedtouploadvib| Failed to upload {vibUrl} to VMware Update Manager ({agencyName})

Since 5.0 Reference

com.vmware.vim.vsm.dependency.bind.vApp

info

VC

com.vmware.vim.vsm.dependency.bind.vApp| event.com.vmware.vim.vsm.dependency.bind.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.bind.vm

info

VC

com.vmware.vim.vsm.dependency.bind.vm| event.com.vmware.vim.vsm.dependency.bind.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.create.vApp

info

VC

com.vmware.vim.vsm.dependency.create.vApp| event.com.vmware.vim.vsm.dependency.create.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.create.vm

info

VC

com.vmware.vim.vsm.dependency.create.vm| event.com.vmware.vim.vsm.dependency.create.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.destroy.vApp

info

VC

com.vmware.vim.vsm.dependency.destroy.vApp| event.com.vmware.vim.vsm.dependency.destroy.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.destroy.vm

info

VC

com.vmware.vim.vsm.dependency.destroy.vm| event.com.vmware.vim.vsm.dependency.destroy.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.reconfigure.vApp

info

VC

com.vmware.vim.vsm.dependency.reconfigure.vApp| event.com.vmware.vim.vsm.dependency.reconfigure.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.reconfigure.vm

info

VC

com.vmware.vim.vsm.dependency.reconfigure.vm| event.com.vmware.vim.vsm.dependency.reconfigure.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.unbind.vApp

info

VC

com.vmware.vim.vsm.dependency.unbind.vApp| event.com.vmware.vim.vsm.dependency.unbind.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.unbind.vm

info

VC

com.vmware.vim.vsm.dependency.unbind.vm| event.com.vmware.vim.vsm.dependency.unbind.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.update.vApp

info

VC

com.vmware.vim.vsm.dependency.update.vApp| event.com.vmware.vim.vsm.dependency.update.vApp.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.dependency.update.vm

info

VC

com.vmware.vim.vsm.dependency.update.vm| event.com.vmware.vim.vsm.dependency.update.vm.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.register

info

VC

com.vmware.vim.vsm.provider.register| event.com.vmware.vim.vsm.provider.register.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.unregister

info

VC

com.vmware.vim.vsm.provider.unregister| event.com.vmware.vim.vsm.provider.unregister.fullFormat

Since 5.0 Reference

com.vmware.vim.vsm.provider.update

info

VC

com.vmware.vim.vsm.provider.update| event.com.vmware.vim.vsm.provider.update.fullFormat

Since 5.0 Reference

CustomFieldDefAddedEvent

info

VC

Created new custom field definition {name}

Since 2.0 Reference

CustomFieldDefEvent

info

VC

This event records a custom field definition event.

Since 2.0 Reference

CustomFieldDefRemovedEvent

info

VC

Removed field definition {name}

Since 2.0 Reference

CustomFieldDefRenamedEvent

info

VC

Renamed field definition from {name} to {newName}

Since 2.0 Reference

CustomFieldValueChangedEvent

info

VC

Changed custom field {name} on {entity.name} in {datacenter.name} to {value}

Since 2.0 Reference

CustomizationFailed

warning

VC

Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details.

Since 2.5 Reference

CustomizationLinuxIdentityFailed

warning

VC

An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details.

Since 2.5 Reference

CustomizationNetworkSetupFailed

warning

VC

An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details.

Since 2.5 Reference

CustomizationStartedEvent

info

VC

Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS.

Since 2.5 Reference

CustomizationSucceeded

info

VC

Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS.

Since 2.5 Reference

CustomizationSysprepFailed

warning

VC

The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information.

Since 2.5 Reference

CustomizationUnknownFailure

warning

VC

An error occurred while customizing VM {vm.name}. For details reference the log file {logLocation} in the guest OS.

Since 2.5 Reference

DasAdmissionControlDisabledEvent

info

VC

HA admission control disabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasAdmissionControlEnabledEvent

info

VC

HA admission control enabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasAgentFoundEvent

info

VC

Re-established contact with a primary host in this HA cluster

Since 2.0 Reference

DasAgentUnavailableEvent

error

Cluster

Unable to contact a primary HA agent in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasClusterIsolatedEvent

error

Cluster

All hosts in the HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network.

Since 4.0 Reference

DasDisabledEvent

info

VC

HA disabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasEnabledEvent

info

VC

HA enabled on cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasHostFailedEvent

error

Cluster

A possible host failure has been detected by HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DasHostIsolatedEvent

warning

Cluster

Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DatacenterCreatedEvent

info

VC

Created datacenter {datacenter.name} in folder {parent.name}

Since 2.5 Reference

DatacenterRenamedEvent

info

VC

Renamed datacenter from {oldName} to {newName}

Since 2.5 Reference

DatastoreCapacityIncreasedEvent

info

VC

Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name}

Since 4.0 Reference

DatastoreDestroyedEvent

info

VC

Removed unconfigured datastore {datastore.name}

Since 2.0 Reference

DatastoreDiscoveredEvent

info

VC

Discovered datastore {datastore.name} on {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreDuplicatedEvent

error

VC

Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreFileCopiedEvent

info

VC

File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile}

Since 4.0 Reference

DatastoreFileDeletedEvent

info

VC

File or directory {targetFile} deleted from {datastore.name}

Since 4.0 Reference

DatastoreFileMovedEvent

info

VC

File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile}

Since 4.0 Reference

DatastoreIORMReconfiguredEvent

info

VC

Reconfigured Storage I/O Control on datastore {datastore.name}

Since 4.1 Reference

DatastorePrincipalConfigured

info

VC

Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreRemovedOnHostEvent

info

VC

Removed datastore {datastore.name} from {host.name} in {datacenter.name}

Since 2.0 Reference

DatastoreRenamedEvent

info

VC

Renamed datastore from {oldName} to {newName} in {datacenter.name}

Since 2.0 Reference

DatastoreRenamedOnHostEvent

info

VC

Renamed datastore from {oldName} to {newName} in {datacenter.name}

Since 2.0 Reference

DrsDisabledEvent

info

VC

Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name}

Since 2.0 Reference

DrsEnabledEvent

info

VC

Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name}

Since 2.0 Reference

DrsEnteredStandbyModeEvent

info

VC

DRS put {host.name} into standby mode

Since 2.5 Reference

DrsEnteringStandbyModeEvent

info

VC

DRS is putting {host.name} into standby mode

Since 4.0 Reference

DrsExitedStandbyModeEvent

info

VC

DRS moved {host.name} out of standby mode

Since 2.5 Reference

DrsExitingStandbyModeEvent

info

VC

DRS is moving {host.name} out of standby mode

Since 4.0 Reference

DrsExitStandbyModeFailedEvent

error

ESXHost

DRS cannot move {host.name} out of standby mode

Since 4.0 Reference

DrsInvocationFailedEvent

error

Cluster

DRS invocation not completed

Since 4.0 Reference

DrsRecoveredFromFailureEvent

info

VC

DRS has recovered from the failure

Since 4.0 Reference

DrsResourceConfigureFailedEvent

error

Cluster

Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS.

Since 2.0 Reference

DrsResourceConfigureSyncedEvent

info

VC

Resource configuration specification returns to synchronization from previous failure on host '{host.name}' in {datacenter.name}

Since 2.0 Reference

DrsRuleComplianceEvent

info

VC

{vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules

Since 4.1 Reference

DrsRuleViolationEvent

warning

VirtualMachine

{vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule

Since 4.1 Reference

DrsVmMigratedEvent

info

VC

DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

DrsVmPoweredOnEvent

info

VC

DRS powered on {vm.name} on {host.name} in {datacenter.name}

Since 2.5 Reference

DuplicateIpDetectedEvent

warning

ESXHostNetwork

Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP}

Since 2.5 Reference

DvpgImportEvent

info

VC

Import operation with type {importType} was performed on {net.name}

Since 5.1 Reference

DvpgRestoreEvent

info

VC

Restore operation was performed on {net.name}

Since 5.1 Reference

DVPortgroupCreatedEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was added to switch {dvs.name}.

Since 4.0 Reference

DVPortgroupDestroyedEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was deleted.

Since 4.0 Reference

DVPortgroupReconfiguredEvent

info

VC

Distributed virtual port group {net.name} in {datacenter.name} was reconfigured.

Since 4.0 Reference

DVPortgroupRenamedEvent

info

VC

Distributed virtual port group {oldName} in {datacenter.name} was renamed to {newName}

Since 4.0 Reference

DvsCreatedEvent

info

VC

A Distributed Virtual Switch {dvs.name} was created in {datacenter.name}.

Since 4.0 Reference

DvsDestroyedEvent

info

VC

Distributed Virtual Switch {dvs.name} in {datacenter.name} was deleted.

Since 4.0 Reference

DvsEvent

info

VC

Distributed Virtual Switch event

Since 4.0 Reference

DvsHealthStatusChangeEvent

info

VC

Health check status was changed in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}

Since 5.1 Reference

DvsHostBackInSyncEvent

info

VC

The Distributed Virtual Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server.

Since 4.0 Reference

DvsHostJoinedEvent

info

VC

The host {hostJoined.name} joined the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsHostLeftEvent

info

VC

The host {hostLeft.name} left the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsHostStatusUpdated

info

VC

The host {hostMember.name} changed status on the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsHostWentOutOfSyncEvent

warning

ESXHostNetwork

The Distributed Virtual Switch {dvs.name} configuration on the host differed from that of the vCenter Server.

Since 4.0 Reference

DvsImportEvent

info

VC

Import operation with type {importType} was performed on {dvs.name}

Since 5.1 Reference

DvsMergedEvent

info

VC

Distributed Virtual Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortBlockedEvent

info

VC

Port {portKey} was blocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortConnectedEvent

info

VC

The port {portKey} was connected in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortCreatedEvent

info

VC

New ports were created in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortDeletedEvent

info

VC

Deleted ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortDisconnectedEvent

info

VC

The port {portKey} was disconnected in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortEnteredPassthruEvent

info

VC

dvPort {portKey} entered passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsPortExitedPassthruEvent

info

VC

dvPort {portKey} exited passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}

Since 4.1 Reference

DvsPortJoinPortgroupEvent

info

VC

Port {portKey} was moved into the distributed virtual port group {portgroupName} in {datacenter.name}.

Since 4.0 Reference

DvsPortLeavePortgroupEvent

info

VC

Port {portKey} was moved out of the distributed virtual port group {portgroupName} in {datacenter.name}.

Since 4.0 Reference

DvsPortLinkDownEvent

warning

VC

The port {portKey} link was down in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortLinkUpEvent

info

VC

The port {portKey} link was up in the Distributed Virtual Switch {dvs.name} in {datacenter.name}

Since 4.0 Reference

DvsPortReconfiguredEvent

info

VC

Reconfigured ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortRuntimeChangeEvent

info

VC

The dvPort {portKey} runtime information changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.

Since 5.0 Reference

DvsPortUnblockedEvent

info

VC

Port {portKey} was unblocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}.

Since 4.0 Reference

DvsPortVendorSpecificStateChangeEvent

info

VC

The dvPort {portKey} vendor specific state changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}.

Since 5.0 Reference

DvsReconfiguredEvent

info

VC

The Distributed Virtual Switch {dvs.name} in {datacenter.name} was reconfigured.

Since 4.0 Reference

DvsRenamedEvent

info

VC

The Distributed Virtual Switch {oldName} in {datacenter.name} was renamed to {newName}.

Since 4.0 Reference

DvsRestoreEvent

info

VC

Restore operation was performed on {dvs.name}

Since 5.1 Reference

DvsUpgradeAvailableEvent

info

VC

An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is available.

Since 4.0 Reference

DvsUpgradedEvent

info

VC

Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} was upgraded.

Since 4.0 Reference

DvsUpgradeInProgressEvent

info

VC

An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is in progress.

Since 4.0 Reference

DvsUpgradeRejectedEvent

info

VC

Cannot complete an upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name}

Since 4.0 Reference

EnteredMaintenanceModeEvent

info

VC

Host {host.name} in {datacenter.name} has entered maintenance mode

Since 2.0 Reference

EnteredStandbyModeEvent

info

VC

The host {host.name} is in standby mode

Since 2.5 Reference

EnteringMaintenanceModeEvent

info

VC

Host {host.name} in {datacenter.name} has started to enter maintenance mode

Since 2.0 Reference

EnteringStandbyModeEvent

info

VC

The host {host.name} is entering standby mode

Since 2.5 Reference

ErrorUpgradeEvent

error

VC

{message}

Since 2.0 Reference

esx.audit.dcui.defaults.factoryrestore

warning

VC

esx.audit.dcui.defaults.factoryrestore| The host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.disabled

info

VC

esx.audit.dcui.disabled| The DCUI has been disabled.

Since 5.0 Reference

esx.audit.dcui.enabled

info

VC

esx.audit.dcui.enabled| The DCUI has been enabled.

Since 5.0 Reference

esx.audit.dcui.host.reboot

warning

VC

esx.audit.dcui.host.reboot| The host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.host.shutdown

warning

VC

esx.audit.dcui.host.shutdown| The host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.hostagents.restart

info

VC

esx.audit.dcui.hostagents.restart| The management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.login.failed

error

VC

esx.audit.dcui.login.failed| Authentication of user {1} has failed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference
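
Note that the esx.* messages in this part of the table substitute positional placeholders ({1}, {2}, ...), while the vCenter-side events earlier in the table use named placeholders such as {vm.name}. A small illustrative sketch of how both styles expand once event arguments arrive (the expand helper below is hypothetical, not part of any VMware API):

    # Illustrative sketch only: 'expand' is a hypothetical helper, not a VMware API.
    # It shows how the two placeholder styles in this table are filled in.
    import re

    def expand(catalog_text, args_by_name=None, args_by_pos=None):
        """Replace {name} and 1-based {1}-style placeholders with supplied values."""
        def sub(match):
            key = match.group(1)
            if key.isdigit():                      # esx.* events: positional, 1-based
                pos = args_by_pos or []
                i = int(key) - 1
                return str(pos[i]) if i < len(pos) else match.group(0)
            return str((args_by_name or {}).get(key, match.group(0)))
        return re.sub(r'\{([^}]+)\}', sub, catalog_text)

    print(expand('Authentication of user {1} has failed.', args_by_pos=['root']))
    print(expand('Renamed datastore from {oldName} to {newName} in {datacenter.name}',
                 args_by_name={'oldName': 'ds01', 'newName': 'ds02',
                               'datacenter.name': 'DC-Main'}))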

esx.audit.dcui.login.passwd.changed

info

VC

esx.audit.dcui.login.passwd.changed| Login password for user {1} has been changed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.network.factoryrestore

warning

VC

esx.audit.dcui.network.factoryrestore| The host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.dcui.network.restart

info

VC

esx.audit.dcui.network.restart| A management interface {1} has been restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information.

Since 5.0 Reference

esx.audit.esxcli.host.poweroff

warning

ESXHost

esx.audit.esxcli.host.poweroff| The host is being powered off through esxcli. Reason for powering off: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information.

Since 5.1 Reference

esx.audit.esxcli.host.restart

info

ESXHost

esx.audit.esxcli.host.restart| event.esx.audit.esxcli.host.restart.fullFormat

Since 5.1 Reference

esx.audit.esximage.hostacceptance.changed

info

VC

esx.audit.esximage.hostacceptance.changed| Host acceptance level changed from {1} to {2}

Since 5.0 Reference

esx.audit.esximage.install.novalidation

warning

VC

esx.audit.esximage.install.novalidation| Attempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations.

Since 5.0 Reference

esx.audit.esximage.install.securityalert

warning

VC

esx.audit.esximage.install.securityalert| SECURITY ALERT: Installing image profile '{1}' with {2}.

Since 5.0 Reference

esx.audit.esximage.profile.install.successful

info

VC

esx.audit.esximage.profile.install.successful| Successfully installed image profile '{1}'. Installed VIBs {2}, removed VIBs {3}

Since 5.0 Reference

esx.audit.esximage.profile.update.successful

info

VC

esx.audit.esximage.profile.update.successful| Successfully updated host to image profile '{1}'. Installed VIBs {2}, removed VIBs {3}

Since 5.0 Reference

esx.audit.esximage.vib.install.successful

info

VC

esx.audit.esximage.vib.install.successful| Successfully installed VIBs {1}, removed VIBs {2}

Since 5.0 Reference

esx.audit.esximage.vib.remove.successful

info

VC

esx.audit.esximage.vib.remove.successful| Successfully removed VIBs {1}

Since 5.0 Reference

esx.audit.host.boot

info

VC

esx.audit.host.boot| Host has booted.

Since 5.0 Reference

esx.audit.host.maxRegisteredVMsExceeded

warning

ESXHost

esx.audit.host.maxRegisteredVMsExceeded| The number of virtual machines registered on host {host.name} in cluster {computeResource.name} in {datacenter.name} exceeded limit: {current} registered, {limit} is the maximum supported.

Since 5.1 Reference

esx.audit.host.stop.reboot

info

VC

esx.audit.host.stop.reboot| Host is rebooting.

Since 5.0 Reference

esx.audit.host.stop.shutdown

info

VC

esx.audit.host.stop.shutdown| Host is shutting down.

Since 5.0 Reference

esx.audit.lockdownmode.disabled

info

VC

esx.audit.lockdownmode.disabled| Administrator access to the host has been enabled.

Since 5.0 Reference

esx.audit.lockdownmode.enabled

info

VC

esx.audit.lockdownmode.enabled| Administrator access to the host has been disabled.

Since 5.0 Reference

esx.audit.maintenancemode.canceled

info

VC

esx.audit.maintenancemode.canceled| The host has canceled entering maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.entered

info

VC

esx.audit.maintenancemode.entered| The host has entered maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.entering

info

VC

esx.audit.maintenancemode.entering| The host has begun entering maintenance mode.

Since 5.0 Reference

esx.audit.maintenancemode.exited

info

VC

esx.audit.maintenancemode.exited| The host has exited maintenance mode.

Since 5.0 Reference

esx.audit.net.firewall.config.changed

info

VC

esx.audit.net.firewall.config.changed| Firewall configuration has changed. Operation '{1}' for rule set {2} succeeded.

Since 5.0 Reference

esx.audit.net.firewall.disabled

warning

VC

esx.audit.net.firewall.disabled| Firewall has been disabled.

Since 5.0 Reference

esx.audit.net.firewall.enabled

info

VC

esx.audit.net.firewall.enabled| Firewall has been enabled for port {1}.

Since 5.0 Reference

esx.audit.net.firewall.port.hooked

info

VC

esx.audit.net.firewall.port.hooked| Port {1} is now protected by Firewall.

Since 5.0 Reference

esx.audit.net.firewall.port.removed

warning

VC

esx.audit.net.firewall.port.removed| Port {1} is no longer protected with Firewall.

Since 5.0 Reference

esx.audit.net.lacp.disable

info

VC

esx.audit.net.lacp.disable| LACP for VDS {1} is disabled.

Since 5.1 Reference

esx.audit.net.lacp.enable

info

VC

esx.audit.net.lacp.enable| LACP for VDS {1} is enabled.

Since 5.1 Reference

esx.audit.net.lacp.uplink.connected

info

VC

esx.audit.net.lacp.uplink.connected| Lacp info: uplink {1} on VDS {2} got connected.

Since 5.1 Reference

esx.audit.net.vdl2.ip.change

warning

ESXHostNetwork

esx.audit.net.vdl2.ip.change| VDL2 IP changed on vmknic {1}, port {2}, DVS {3}, VLAN {4}.

Since 5.0 Reference

esx.audit.net.vdl2.mappingtable.full

warning

ESXHostNetwork

esx.audit.net.vdl2.mappingtable.full| Mapping table entries of VDL2 network {1} on DVS {2} exhausted. This network might suffer low performance.

Since 5.0 Reference

esx.audit.net.vdl2.route.change

warning

ESXHostNetwork

esx.audit.net.vdl2.route.change| VDL2 IP interface on vmknic: {1}, DVS: {2}, VLAN: {3} default route changed.

Since 5.0 Reference

esx.audit.shell.disabled

info

VC

esx.audit.shell.disabled| The ESX command line shell has been disabled.

Since 5.0 Reference

esx.audit.shell.enabled

info

VC

esx.audit.shell.enabled| The ESX command line shell has been enabled.

Since 5.0 Reference

esx.audit.ssh.disabled

info

VC

esx.audit.ssh.disabled| SSH access has been disabled.

Since 5.0 Reference

esx.audit.ssh.enabled

info

VC

esx.audit.ssh.enabled| SSH access has been enabled.

Since 5.0 Reference

esx.audit.usb.config.changed

info

VC

esx.audit.usb.config.changed| USB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

esx.audit.uw.secpolicy.alldomains.level.changed

warning

VC

esx.audit.uw.secpolicy.alldomains.level.changed| The enforcement level for all security domains has been changed to {1}. The enforcement level must always be set to enforcing.

Since 5.0 Reference

esx.audit.uw.secpolicy.domain.level.changed

warning

VC

esx.audit.uw.secpolicy.domain.level.changed| The enforcement level for security domain {1} has been changed to {2}. The enforcement level must always be set to enforcing.

Since 5.0 Reference

esx.audit.vmfs.lvm.device.discovered

info

VC

esx.audit.vmfs.lvm.device.discovered| One or more LVM devices have been discovered on this host.

Since 5.0 Reference

esx.audit.vmfs.volume.mounted

info

VC

esx.audit.vmfs.volume.mounted| File system {1} on volume {2} has been mounted in {3} mode on this host.

Since 5.0 Reference

esx.audit.vmfs.volume.umounted

info

VC

esx.audit.vmfs.volume.umounted| The volume {1} has been safely un-mounted. The datastore is no longer accessible on this host.

Since 5.0 Reference

esx.audit.vsan.clustering.enabled

info

VC

esx.audit.vsan.clustering.enabled| VSAN clustering and directory services have been enabled.

Since 5.5 Reference

esx.clear.coredump.configured

info

VC

esx.clear.coredump.configured| A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved.

Since 5.1 Reference

esx.clear.net.connectivity.restored

info

ESXHostNetwork

esx.clear.net.connectivity.restored| Network connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.

Since 4.1 Reference

esx.clear.net.dvport.connectivity.restored

info

ESXHostNetwork

esx.clear.net.dvport.connectivity.restored| Network connectivity restored on DVPorts: {1}. Physical NIC {2} is up.

Since 4.1 Reference

esx.clear.net.dvport.redundancy.restored

info

ESXHostNetwork

esx.clear.net.dvport.redundancy.restored| Uplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up.

Since 4.1 Reference

esx.clear.net.lacp.lag.transition.up

info

VC

esx.clear.net.lacp.lag.transition.up| LACP info: LAG {1} on VDS {2} is up.

Since 5.5 Reference

esx.clear.net.lacp.uplink.transition.up

info

ESXHostNetwork

esx.clear.net.lacp.uplink.transition.up| Lacp info: uplink {1} on VDS {2} is moved into link aggregation group.

Since 5.1 Reference

esx.clear.net.lacp.uplink.unblocked

info

ESXHostNetwork

esx.clear.net.lacp.uplink.unblocked| Lacp error: uplink {1} on VDS {2} is unblocked.

Since 5.1 Reference

esx.clear.net.redundancy.restored

info

ESXHostNetwork

esx.clear.net.redundancy.restored| Uplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.

Since 4.1 Reference

esx.clear.net.vmnic.linkstate.up

info

ESXHostNetwork

esx.clear.net.vmnic.linkstate.up| Physical NIC {1} linkstate is up.

Since 4.1 Reference

esx.clear.scsi.device.io.latency.improved

info

ESXHostStorage

esx.clear.scsi.device.io.latency.improved| Device {1} performance has improved. I/O latency reduced from {2} microseconds to {3} microseconds.

Since 5.0 Reference

esx.clear.scsi.device.state.on

info

ESXHostStorage

esx.clear.scsi.device.state.on| Device {1} has been turned on administratively.

Since 5.0 Reference

esx.clear.scsi.device.state.permanentloss.deviceonline

info

ESXHostStorage

esx.clear.scsi.device.state.permanentloss.deviceonline| Device {1}, which was permanently inaccessible, is now online. No data consistency guarantees.

Since 5.0 Reference

esx.clear.storage.apd.exit

info

ESXHostStorage

esx.clear.storage.apd.exit| Device or filesystem with identifier [{1}] has exited the All Paths Down state.

Since 5.1 Reference

esx.clear.storage.connectivity.restored

info

ESXHostStorage

esx.clear.storage.connectivity.restored| Connectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again.

Since 4.1 Reference

esx.clear.storage.redundancy.restored

info

ESXHostStorage

esx.clear.storage.redundancy.restored| Path redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again.

Since 4.1 Reference

esx.clear.vsan.clustering.enabled

info

VC

esx.clear.vsan.clustering.enabled| VSAN clustering and directory services have now been enabled.

Since 5.5 Reference

esx.clear.vsan.network.available

info

VC

esx.clear.vsan.network.available| event.esx.clear.vsan.network.available.fullFormat

Since 5.5 Reference

esx.clear.vsan.vmknic.ready

info

VC

esx.clear.vsan.vmknic.ready| event.esx.clear.vsan.vmknic.ready.fullFormat

Since 5.5 Reference

esx.problem.3rdParty.error

error

VC

esx.problem.3rdParty.error| A 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

Since 5.0 Reference

esx.problem.3rdParty.info

info

VC

esx.problem.3rdParty.info| event.esx.problem.3rdParty.info.fullFormat

Since 5.0 Reference

esx.problem.3rdParty.warning

warning

VC

esx.problem.3rdParty.warning| A 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}.

Since 5.0 Reference

esx.problem.apei.bert.memory.error.corrected

error

ESXHostHardware

esx.problem.apei.bert.memory.error.corrected| A corrected memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.memory.error.fatal

error

ESXHostHardware

esx.problem.apei.bert.memory.error.fatal| A fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.memory.error.recoverable

error

ESXHostHardware

esx.problem.apei.bert.memory.error.recoverable| A recoverable memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.corrected

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.corrected| A corrected PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.fatal

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.fatal| Platform encountered a fatal PCIe error in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.apei.bert.pcie.error.recoverable

error

ESXHostHardware

esx.problem.apei.bert.pcie.error.recoverable| A recoverable PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.

Since 4.1 Reference

esx.problem.application.core.dumped

warning

ESXHost

esx.problem.application.core.dumped| An application ({1}) running on ESXi host has crashed ({2} time(s) so far). A core file might have been created at {3}.

Since 5.0 Reference

esx.problem.coredump.unconfigured

warning

ESXHost

esx.problem.coredump.unconfigured| No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved.

Since 5.0 Reference

esx.problem.cpu.amd.mce.dram.disabled

error

ESXHostHardware

esx.problem.cpu.amd.mce.dram.disabled| DRAM ECC not enabled. Please enable it in BIOS.

Since 5.0 Reference

esx.problem.cpu.intel.ioapic.listing.error

error

ESXHostHardware

esx.problem.cpu.intel.ioapic.listing.error| Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform.

Since 5.0 Reference

esx.problem.cpu.mce.invalid

error

ESXHostHardware

esx.problem.cpu.mce.invalid| MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware.

Since 5.0 Reference

esx.problem.cpu.smp.ht.invalid

error

ESXHostHardware

esx.problem.cpu.smp.ht.invalid| Disabling HyperThreading due to invalid configuration: Number of threads: {1}, Number of PCPUs: {2}.

Since 5.0 Reference

esx.problem.cpu.smp.ht.numpcpus.max

error

ESXHostHardware

esx.problem.cpu.smp.ht.numpcpus.max| Found {1} PCPUs, but only using {2} of them due to specified limit.

Since 5.0 Reference

esx.problem.cpu.smp.ht.partner.missing

warning

ESXHostHardware

esx.problem.cpu.smp.ht.partner.missing| Disabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}.

Since 5.0 Reference

esx.problem.dhclient.lease.none

error

ESXHostNetwork

esx.problem.dhclient.lease.none| Unable to obtain a DHCP lease on interface {1}.

Since 5.0 Reference

esx.problem.dhclient.lease.offered.error

warning

ESXHostNetwork

esx.problem.dhclient.lease.offered.error| event.esx.problem.dhclient.lease.offered.error.fullFormat

Since 5.0 Reference

esx.problem.dhclient.lease.persistent.none

warning

ESXHostNetwork

esx.problem.dhclient.lease.persistent.none| No working DHCP leases in persistent database.

Since 5.0 Reference

esx.problem.esximage.install.error

warning

VC

esx.problem.esximage.install.error| Could not install image profile: {1}

Since 5.0 Reference

esx.problem.esximage.install.invalidhardware

warning

VC

esx.problem.esximage.install.invalidhardware| Host doesn't meet image profile '{1}' hardware requirements: {2}

Since 5.0 Reference

esx.problem.esximage.install.stage.error

warning

VC

esx.problem.esximage.install.stage.error| Could not stage image profile '{1}': {2}

Since 5.0 Reference

esx.problem.hardware.acpi.interrupt.routing.device.invalid

warning

ESXHostHardware

esx.problem.hardware.acpi.interrupt.routing.device.invalid| Skipping interrupt routing entry with bad device number: {1}. This is a BIOS bug.

Since 5.0 Reference

esx.problem.hardware.acpi.interrupt.routing.pin.invalid

warning

ESXHostHardware

esx.problem.hardware.acpi.interrupt.routing.pin.invalid| Skipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug.

Since 5.0 Reference

esx.problem.hardware.ioapic.missing

warning

ESXHostHardware

esx.problem.hardware.ioapic.missing| IOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC.

Since 5.0 Reference

esx.problem.host.coredump

warning

ESXHost

esx.problem.host.coredump| An unread host kernel core dump has been found.

Since 5.0 Reference

esx.problem.hostd.core.dumped

warning

ESXHost

esx.problem.hostd.core.dumped| {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.

Since 5.0 Reference

esx.problem.iorm.badversion

warning

ESXHostStorage

esx.problem.iorm.badversion| Host {1} cannot participate in Storage I/O Control(SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with number {4} of its counterparts on other hosts connected to this datastore.

Since 5.0 Reference

esx.problem.iorm.nonviworkload

warning

ESXHostStorage

esx.problem.iorm.nonviworkload| An external I/O activity is detected on datastore {1}, this is an unsupported configuration. Consult the Resource Management Guide or follow the Ask VMware link for more information.

Since 4.1 Reference

esx.problem.migrate.vmotion.default.heap.create.failed

error

Cluster

esx.problem.migrate.vmotion.default.heap.create.failed| Failed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure.

Since 5.0 Reference

esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown

warning

Cluster

esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown| The ESXi host's vMotion network server encountered an error while monitoring incoming network connections. Shutting down listener socket. vMotion might not be possible with this host until vMotion is manually re-enabled. Failure status: {1}

Since 5.0 Reference

esx.problem.net.connectivity.lost

error

ESXHostNetwork

esx.problem.net.connectivity.lost| Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.

Since 4.1 Reference

esx.problem.net.dvport.connectivity.lost

error

ESXHostNetwork

esx.problem.net.dvport.connectivity.lost| Lost network connectivity on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.dvport.redundancy.degraded

warning

ESXHostNetwork

esx.problem.net.dvport.redundancy.degraded| Uplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.dvport.redundancy.lost

warning

ESXHostNetwork

esx.problem.net.dvport.redundancy.lost| Lost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down.

Since 4.1 Reference

esx.problem.net.e1000.tso6.notsupported

error

ESXHostNetwork

esx.problem.net.e1000.tso6.notsupported| Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.

Since 4.1 Reference

esx.problem.net.fence.port.badfenceid

warning

ESXHostNetwork

esx.problem.net.fence.port.badfenceid| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: invalid fenceId.

Since 5.0 Reference

esx.problem.net.fence.resource.limited

warning

ESXHostNetwork

esx.problem.net.fence.resource.limited| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: the maximum number of fence networks or ports has been reached.

Since 5.0 Reference

esx.problem.net.fence.switch.unavailable

warning

ESXHostNetwork

esx.problem.net.fence.switch.unavailable| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: dvSwitch fence property is not set.

Since 5.0 Reference

esx.problem.net.firewall.config.failed

error

ESXHostNetwork

esx.problem.net.firewall.config.failed| Firewall configuration operation '{1}' failed. The changes were not applied to rule set {2}.

Since 5.0 Reference

esx.problem.net.firewall.port.hookfailed

error

ESXHostNetwork

esx.problem.net.firewall.port.hookfailed| Adding port {1} to Firewall failed.

Since 5.0 Reference

esx.problem.net.gateway.set.failed

error

ESXHostNetwork

esx.problem.net.gateway.set.failed| Cannot connect to the specified gateway {1}. Failed to set it.

Since 5.0 Reference

esx.problem.net.heap.belowthreshold

warning

ESXHostNetwork

esx.problem.net.heap.belowthreshold| {1} heap free size dropped below {2} percent.

Since 5.0 Reference

esx.problem.net.lacp.lag.transition.down

warning

VC

esx.problem.net.lacp.lag.transition.down| LACP warning: LAG {1} on VDS {2} is down.

Since 5.5 Reference

esx.problem.net.lacp.peer.noresponse

error

ESXHostNetwork

esx.problem.net.lacp.peer.noresponse| Lacp error: No peer response on uplink {1} for VDS {2}.

Since 5.1 Reference

esx.problem.net.lacp.policy.incompatible

error

ESXHostNetwork

esx.problem.net.lacp.policy.incompatible| Lacp error: Current teaming policy on VDS {1} is incompatible, supported is IP hash only.

Since 5.1 Reference

esx.problem.net.lacp.policy.linkstatus

error

ESXHostNetwork

esx.problem.net.lacp.policy.linkstatus| Lacp error: Current teaming policy on VDS {1} is incompatible, supported link failover detection is link status only.

Since 5.1 Reference

esx.problem.net.lacp.uplink.blocked

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.blocked| Lacp warning: uplink {1} on VDS {2} is blocked.

Since 5.1 Reference

esx.problem.net.lacp.uplink.disconnected

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.disconnected| Lacp warning: uplink {1} on VDS {2} got disconnected.

Since 5.1 Reference

esx.problem.net.lacp.uplink.fail.duplex

error

ESXHostNetwork

esx.problem.net.lacp.uplink.fail.duplex| Lacp error: Duplex mode across all uplink ports must be full, VDS {1} uplink {2} has different mode.

Since 5.1 Reference

esx.problem.net.lacp.uplink.fail.speed

error

ESXHostNetwork

esx.problem.net.lacp.uplink.fail.speed| Lacp error: Speed across all uplink ports must be same, VDS {1} uplink {2} has different speed.

Since 5.1 Reference

esx.problem.net.lacp.uplink.inactive

error

ESXHostNetwork

esx.problem.net.lacp.uplink.inactive| Lacp error: All uplinks on VDS {1} must be active.

Since 5.1 Reference

esx.problem.net.lacp.uplink.transition.down

warning

ESXHostNetwork

esx.problem.net.lacp.uplink.transition.down| Lacp warning: uplink {1} on VDS {2} is moved out of link aggregation group.

Since 5.1 Reference
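
The esx.problem.net.lacp.* entries above form one family, and an event query can combine their IDs with a severity filter. A sketch, assuming an already-connected ServiceInstance si as in the earlier example:

    from pyVmomi import vim

    # All LACP event IDs from the entries above; the category filter then
    # narrows the result to the error-severity members.
    LACP_EVENT_IDS = [
        "esx.problem.net.lacp.lag.transition.down",
        "esx.problem.net.lacp.peer.noresponse",
        "esx.problem.net.lacp.policy.incompatible",
        "esx.problem.net.lacp.policy.linkstatus",
        "esx.problem.net.lacp.uplink.blocked",
        "esx.problem.net.lacp.uplink.disconnected",
        "esx.problem.net.lacp.uplink.fail.duplex",
        "esx.problem.net.lacp.uplink.fail.speed",
        "esx.problem.net.lacp.uplink.inactive",
        "esx.problem.net.lacp.uplink.transition.down",
    ]

    def lacp_errors(si):
        # `si` is an already-connected pyVmomi ServiceInstance.
        spec = vim.event.EventFilterSpec(eventTypeId=LACP_EVENT_IDS,
                                         category=["error"])
        return si.content.eventManager.QueryEvents(spec)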

esx.problem.net.migrate.bindtovmk

warning

ESXHostNetwork

esx.problem.net.migrate.bindtovmk| The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.

Since 4.1 Reference

esx.problem.net.migrate.unsupported.latency

warning

ESXHostNetwork

esx.problem.net.migrate.unsupported.latency| ESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance.

Since 5.0 Reference

esx.problem.net.portset.port.full

warning

ESXHostNetwork

esx.problem.net.portset.port.full| Portset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports.

Since 5.0 Reference

esx.problem.net.portset.port.vlan.invalidid

warning

ESXHostNetwork

esx.problem.net.portset.port.vlan.invalidid| {1} VLANID {2} is invalid. VLAN ID must be between 0 and 4095.

Since 5.0 Reference

esx.problem.net.proxyswitch.port.unavailable

warning

ESXHostNetwork

esx.problem.net.proxyswitch.port.unavailable| Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.

Since 4.1 Reference

esx.problem.net.redundancy.degraded

warning

ESXHostNetwork

esx.problem.net.redundancy.degraded| Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.

Since 4.1 Reference

esx.problem.net.redundancy.lost

warning

ESXHostNetwork

esx.problem.net.redundancy.lost| Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.

Since 4.1 Reference

esx.problem.net.uplink.mtu.failed

warning

ESXHostNetwork

esx.problem.net.uplink.mtu.failed| VMkernel failed to set the MTU value {1} on the uplink {2}.

Since 4.1 Reference

esx.problem.net.vdl2.instance.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.instance.initialization.fail| VDL2 instance on DVS {1} initialization failed.

Since 5.0 Reference

esx.problem.net.vdl2.instance.notexist

error

ESXHostNetwork

esx.problem.net.vdl2.instance.notexist| VDL2 overlay instance is not created on DVS {1} before initializing VDL2 port or VDL2 IP interface.

Since 5.0 Reference

esx.problem.net.vdl2.mcastgroup.fail

error

ESXHostNetwork

esx.problem.net.vdl2.mcastgroup.fail| VDL2 IP interface on vmknic: {1}, DVS: {2}, VLAN: {3} failed to join multicast group: {4}.

Since 5.0 Reference

esx.problem.net.vdl2.network.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.network.initialization.fail| VDL2 network {1} on DVS {2} initialization failed.

Since 5.0 Reference

esx.problem.net.vdl2.port.initialization.fail

error

ESXHostNetwork

esx.problem.net.vdl2.port.initialization.fail| VDL2 port {1} on VDL2 network {2}, DVS {3} initialization failed.

Since 5.0 Reference

esx.problem.net.vdl2.vmknic.fail

error

ESXHostNetwork

esx.problem.net.vdl2.vmknic.fail| VDL2 IP interface failed on vmknic {1}, port {2}, DVS {3}, VLAN {4}.

Since 5.0 Reference

esx.problem.net.vdl2.vmknic.notexist

error

ESXHostNetwork

esx.problem.net.vdl2.vmknic.notexist| VDL2 IP interface does not exist on DVS {1}, VLAN {2}.

Since 5.0 Reference

esx.problem.net.vmknic.ip.duplicate

warning

ESXHostNetwork

esx.problem.net.vmknic.ip.duplicate| A duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}.

Since 4.1 Reference

esx.problem.net.vmnic.linkstate.down

warning

ESXHostNetwork

esx.problem.net.vmnic.linkstate.down| Physical NIC {1} linkstate is down.

Since 4.1 Reference

esx.problem.net.vmnic.linkstate.flapping

warning

ESXHostNetwork

esx.problem.net.vmnic.linkstate.flapping| Taking down physical NIC {1} because the link is unstable.

Since 5.0 Reference

esx.problem.net.vmnic.watchdog.reset

warning

ESXHostNetwork

esx.problem.net.vmnic.watchdog.reset| Uplink {1} has recovered from a transient failure due to watchdog timeout

Since 4.1 Reference

esx.problem.ntpd.clock.correction.error

warning

ESXHost

esx.problem.ntpd.clock.correction.error| NTP daemon stopped. Time correction {1} > {2} seconds. Manually set the time and restart ntpd.

Since 5.0 Reference

esx.problem.pageretire.platform.retire.request

info

VC

esx.problem.pageretire.platform.retire.request| Memory page retirement requested by platform firmware. FRU ID: {1}. Refer to System Hardware Log: {2}

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.host.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.host.exceeded| Number of host physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded| Number of kernel physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded| Number of physical memory pages belonging to (user) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded| Number of private user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded| Number of shared user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}).

Since 5.0 Reference

esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded

warning

ESXHost

esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded| Number of physical memory pages belonging to (vmm) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}).

Since 5.0 Reference

esx.problem.scsi.apd.event.descriptor.alloc.failed

error

ESXHostStorage

esx.problem.scsi.apd.event.descriptor.alloc.failed| No memory to allocate APD (All Paths Down) event subsystem.

Since 5.0 Reference

esx.problem.scsi.device.close.failed

warning

ESXHostStorage

esx.problem.scsi.device.close.failed| Failed to close the device {1} properly, plugin {2}.

Since 5.0 Reference

esx.problem.scsi.device.detach.failed

warning

ESXHostStorage

esx.problem.scsi.device.detach.failed| Detach failed for device: {1}. Exceeded the number of devices that can be detached; please clean up stale detach entries.

Since 5.0 Reference

esx.problem.scsi.device.filter.attach.failed

warning

ESXHostStorage

esx.problem.scsi.device.filter.attach.failed| Failed to attach filters to device '%s' during registration. Plugin load failed or the filter rules are incorrect.

Since 5.0 Reference

esx.problem.scsi.device.io.bad.plugin.type

warning

ESXHostStorage

esx.problem.scsi.device.io.bad.plugin.type| Bad plugin type for device {1}, plugin {2}

Since 5.0 Reference

esx.problem.scsi.device.io.inquiry.failed

warning

ESXHostStorage

esx.problem.scsi.device.io.inquiry.failed| Failed to get standard inquiry for device {1} from Plugin {2}.

Since 5.0 Reference

esx.problem.scsi.device.io.invalid.disk.qfull.value

warning

ESXHostStorage

esx.problem.scsi.device.io.invalid.disk.qfull.value| QFullSampleSize should be bigger than QFullThreshold. LUN queue depth throttling algorithm will not function as expected. Please set the QFullSampleSize and QFullThreshold disk configuration values in ESX correctly.

Since 5.0 Reference

esx.problem.scsi.device.io.latency.high

warning

ESXHostStorage

esx.problem.scsi.device.io.latency.high| Device {1} performance has deteriorated. I/O latency increased from average value of {2} microseconds to {3} microseconds.

Since 5.0 Reference
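
High-latency events are usually only interesting within a recent window, and EventFilterSpec accepts a ByTime range for exactly that. A sketch, again assuming an existing connection si; the 24-hour default is an arbitrary choice:

    import datetime
    from pyVmomi import vim

    def recent_latency_events(si, hours=24):
        # `si` is an already-connected ServiceInstance; the window length
        # is just a default for this sketch.
        now = datetime.datetime.now(datetime.timezone.utc)
        spec = vim.event.EventFilterSpec(
            eventTypeId=["esx.problem.scsi.device.io.latency.high"],
            time=vim.event.EventFilterSpec.ByTime(
                beginTime=now - datetime.timedelta(hours=hours),
                endTime=now))
        return si.content.eventManager.QueryEvents(spec)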

esx.problem.scsi.device.io.qerr.change.config

warning

ESXHostStorage

esx.problem.scsi.device.io.qerr.change.config| QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The system is not configured to change the QErr setting of device. The QErr value supported by system is 0x{3}. Please check the SCSI ChangeQErrSetting configuration value for ESX.

Since 5.0 Reference

esx.problem.scsi.device.io.qerr.changed

warning

ESXHostStorage

esx.problem.scsi.device.io.qerr.changed| QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The device was originally configured to the supported QErr setting of 0x{3}, but this has been changed and could not be changed back.

Since 5.0 Reference

esx.problem.scsi.device.is.local.failed

warning

ESXHostStorage

esx.problem.scsi.device.is.local.failed| Failed to verify if the device {1} from plugin {2} is a local - not shared - device

Since 5.0 Reference

esx.problem.scsi.device.is.pseudo.failed

warning

ESXHostStorage

esx.problem.scsi.device.is.pseudo.failed| Failed to verify if the device {1} from plugin {2} is a pseudo device

Since 5.0 Reference

esx.problem.scsi.device.is.ssd.failed

warning

ESXHostStorage

esx.problem.scsi.device.is.ssd.failed| Failed to verify if the device {1} from plugin {2} is a Solid State Disk device

Since 5.0 Reference

esx.problem.scsi.device.limitreached

error

ESXHostStorage

esx.problem.scsi.device.limitreached| The maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created.

Since 4.1 Reference

esx.problem.scsi.device.state.off

info

VC

esx.problem.scsi.device.state.off| Device {1} has been turned off administratively.

Since 5.0 Reference

esx.problem.scsi.device.state.permanentloss

warning

ESXHostStorage

esx.problem.scsi.device.state.permanentloss| Device {1} has been removed or is permanently inaccessible. Affected datastores (if any): {2}.

Since 5.0 Reference

esx.problem.scsi.device.state.permanentloss.noopens

info

VC

esx.problem.scsi.device.state.permanentloss.noopens| Permanently inaccessible device {1} has no more opens. It is now safe to unmount datastores (if any) {2} and delete the device.

Since 5.0 Reference

esx.problem.scsi.device.state.permanentloss.pluggedback

warning

ESXHostStorage

esx.problem.scsi.device.state.permanentloss.pluggedback| Device {1} has been plugged back in after being marked permanently inaccessible. No data consistency guarantees.

Since 5.0 Reference

esx.problem.scsi.device.state.permanentloss.withreservationheld

error

ESXHostStorage

esx.problem.scsi.device.state.permanentloss.withreservationheld| Device {1} has been removed or is permanently inaccessible, while holding a reservation. Affected datastores (if any): {2}.

Since 5.0 Reference

esx.problem.scsi.device.thinprov.atquota

warning

ESXHostStorage

esx.problem.scsi.device.thinprov.atquota| Space utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}.

Since 4.1 Reference

esx.problem.scsi.scsipath.limitreached

error

ESXHostStorage

esx.problem.scsi.scsipath.limitreached| The maximum number of supported paths of {1} has been reached. Path {2} could not be added.

Since 4.1 Reference

esx.problem.scsi.unsupported.plugin.type

warning

ESXHostStorage

esx.problem.scsi.unsupported.plugin.type| Scsi Device Allocation not supported for plugin type {1}

Since 5.0 Reference

esx.problem.storage.apd.start

warning

ESXHostStorage

esx.problem.storage.apd.start| Device or filesystem with identifier [{1}] has entered the All Paths Down state.

Since 5.1 Reference

esx.problem.storage.apd.timeout

warning

ESXHostStorage

esx.problem.storage.apd.timeout| Device or filesystem with identifier [{1}] has entered the All Paths Down Timeout state after being in the All Paths Down state for {2} seconds. I/Os will be fast failed.

Since 5.1 Reference
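
A path outage can emit APD events faster than a single query comfortably returns, so paging through an EventHistoryCollector is the safer pattern. A sketch, assuming an existing connection si; the function name and page size are illustrative:

    from pyVmomi import vim

    def iter_apd_events(si, page_size=100):
        # Pages through APD events with an EventHistoryCollector instead of
        # a single QueryEvents call, which caps the number of returned events.
        spec = vim.event.EventFilterSpec(eventTypeId=[
            "esx.problem.storage.apd.start",
            "esx.problem.storage.apd.timeout"])
        collector = si.content.eventManager.CreateCollectorForEvents(spec)
        try:
            while True:
                page = collector.ReadNextEvents(maxCount=page_size)
                if not page:
                    break
                for ev in page:
                    yield ev
        finally:
            collector.DestroyCollector()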

esx.problem.storage.connectivity.devicepor

warning

ESXHostStorage

esx.problem.storage.connectivity.devicepor| Frequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2}

Since 4.1 Reference

esx.problem.storage.connectivity.lost

error

ESXHostStorage

esx.problem.storage.connectivity.lost| Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}.

Since 4.1 Reference

esx.problem.storage.connectivity.pathpor

warning

ESXHostStorage

esx.problem.storage.connectivity.pathpor| Frequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}

Since 4.1 Reference

esx.problem.storage.connectivity.pathstatechanges

warning

ESXHostStorage

esx.problem.storage.connectivity.pathstatechanges| Frequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}

Since 4.1 Reference

esx.problem.storage.iscsi.discovery.connect.error

warning

ESXHostStorage

esx.problem.storage.iscsi.discovery.connect.error| iSCSI discovery to {1} on {2} failed. The iSCSI Initiator could not establish a network connection to the discovery address.

Since 5.0 Reference

esx.problem.storage.iscsi.discovery.login.error

warning

ESXHostStorage

esx.problem.storage.iscsi.discovery.login.error| iSCSI discovery to {1} on {2} failed. The Discovery target returned a login error of: {3}.

Since 5.0 Reference

esx.problem.storage.iscsi.target.connect.error

warning

ESXHostStorage

esx.problem.storage.iscsi.target.connect.error| Login to iSCSI target {1} on {2} failed. The iSCSI initiator could not establish a network connection to the target.

Since 5.0 Reference

esx.problem.storage.iscsi.target.login.error

warning

ESXHostStorage

esx.problem.storage.iscsi.target.login.error| Login to iSCSI target {1} on {2} failed. Target returned login error of: {3}.

Since 5.0 Reference

esx.problem.storage.iscsi.target.permanently.lost

error

ESXHostStorage

esx.problem.storage.iscsi.target.permanently.lost| The iSCSI target {2} was permanently removed from {1}.

Since 5.1 Reference

esx.problem.storage.redundancy.degraded

warning

ESXHostStorage

esx.problem.storage.redundancy.degraded| Path redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}.

Since 4.1 Reference

esx.problem.storage.redundancy.lost

warning

ESXHostStorage

esx.problem.storage.redundancy.lost| Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}.

Since 4.1 Reference

esx.problem.syslog.config

warning

ESXHost

esx.problem.syslog.config| System logging is not configured on host {host.name}. Please check Syslog options for the host under Configuration -> Software -> Advanced Settings in vSphere client.

Since 5.0 Reference

esx.problem.syslog.nonpersistent

warning

ESXHost

esx.problem.syslog.nonpersistent| System logs on host {host.name} are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

Since 5.1 Reference
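
Both syslog events above point at the host's logging configuration, which can be inspected through the host's advanced options. A sketch, where syslog_target is an illustrative helper and Syslog.global.logHost is the standard ESXi remote-syslog option:

    from pyVmomi import vim

    def syslog_target(host):
        # `host` is a vim.HostSystem. Returns the configured remote syslog
        # target, or None if the option is absent on this ESXi version.
        opt_mgr = host.configManager.advancedOption
        try:
            values = opt_mgr.QueryOptions("Syslog.global.logHost")
        except vim.fault.InvalidName:
            return None
        return values[0].value if values else None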

esx.problem.vfat.filesystem.full.other

warning

ESXHostStorage

esx.problem.vfat.filesystem.full.other| The VFAT filesystem {1} (UUID {2}) is full.

Since 5.0 Reference

esx.problem.vfat.filesystem.full.scratch

warning

ESXHostStorage

esx.problem.vfat.filesystem.full.scratch| The host's scratch partition, which is the VFAT filesystem {1} (UUID {2}), is full.

Since 5.0 Reference

esx.problem.visorfs.failure

error

ESXHostStorage

esx.problem.visorfs.failure| An operation on the root filesystem has failed.

Since 5.0 Reference

esx.problem.visorfs.inodetable.full

warning

ESXHostStorage

esx.problem.visorfs.inodetable.full| The root filesystem's file table is full. As a result, the file {1} could not be created by the application '{2}'.

Since 5.0 Reference

esx.problem.visorfs.ramdisk.full

warning

ESXHostStorage

esx.problem.visorfs.ramdisk.full| The ramdisk '{1}' is full. As a result, the file {2} could not be written.

Since 5.0 Reference

esx.problem.visorfs.ramdisk.inodetable.full

error

ESXHostStorage

esx.problem.visorfs.ramdisk.inodetable.full| The file table of the ramdisk '{1}' is full. As a result, the file {2} could not be created by the application '{3}'.

Since 5.1 Reference

esx.problem.vm.kill.unexpected.fault.failure

error

ESXHost

esx.problem.vm.kill.unexpected.fault.failure| The VM using the config file {1} could not fault in a guest physical page from the hypervisor level swap file at {2}. The VM is terminated as further progress is impossible.

Since 5.1 Reference

esx.problem.vm.kill.unexpected.forcefulPageRetire

error

ESXHost

esx.problem.vm.kill.unexpected.forcefulPageRetire| The VM using the config file {1} contains the host physical page {2} which was scheduled for immediate retirement. To avoid system instability the VM is forcefully powered off.

Since 5.0 Reference

esx.problem.vm.kill.unexpected.noSwapResponse

error

ESXHost

esx.problem.vm.kill.unexpected.noSwapResponse| The VM using the config file {1} did not respond to {2} swap actions in {3} seconds and is forcefully powered off to prevent system instability.

Since 5.0 Reference

esx.problem.vm.kill.unexpected.vmtrack

error

ESXHost

esx.problem.vm.kill.unexpected.vmtrack| The VM using the config file {1} is allocating too many pages while system is critically low in free memory. It is forcefully terminated to prevent system instability.

Since 5.1 Reference

esx.problem.vmfs.ats.support.lost

error

ESXHostStorage

esx.problem.vmfs.ats.support.lost| event.esx.problem.vmfs.ats.support.lost.fullFormat

Since 5.1 Reference

esx.problem.vmfs.error.volume.is.locked

error

ESXHostStorage

esx.problem.vmfs.error.volume.is.locked| Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover.

Since 5.0 Reference

esx.problem.vmfs.extent.offline

warning

ESXHostStorage

esx.problem.vmfs.extent.offline| An attached device {1} may be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible.

Since 5.0 Reference

esx.problem.vmfs.extent.online

info

ESXHostStorage

esx.problem.vmfs.extent.online| Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available.

Since 5.0 Reference

esx.problem.vmfs.heartbeat.recovered

info

ESXHostStorage

esx.problem.vmfs.heartbeat.recovered| Successfully restored access to volume {1} ({2}) following connectivity issues.

Since 4.1 Reference

esx.problem.vmfs.heartbeat.timedout

warning

ESXHostStorage

esx.problem.vmfs.heartbeat.timedout| Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

Since 4.1 Reference

esx.problem.vmfs.heartbeat.unrecoverable

error

ESXHostStorage

esx.problem.vmfs.heartbeat.unrecoverable| Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.

Since 4.1 Reference

esx.problem.vmfs.journal.createfailed

warning

ESXHostStorage

esx.problem.vmfs.journal.createfailed| No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support.

Since 4.1 Reference

esx.problem.vmfs.lock.corruptondisk

error

ESXHostStorage

esx.problem.vmfs.lock.corruptondisk| At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

Since 4.1 Reference

esx.problem.vmfs.nfs.mount.connect.failed

error

ESXHostStorage

esx.problem.vmfs.nfs.mount.connect.failed| Failed to mount to the server {1} mount point {2}. {3}

Since 4.1 Reference

esx.problem.vmfs.nfs.mount.limit.exceeded

error

ESXHostStorage

esx.problem.vmfs.nfs.mount.limit.exceeded| Failed to mount to the server {1} mount point {2}. {3}

Since 4.1 Reference

esx.problem.vmfs.nfs.server.disconnect

error

ESXHostStorage

esx.problem.vmfs.nfs.server.disconnect| Lost connection to server {1} mount point {2} mounted as {3} ({4}).

Since 4.1 Reference

esx.problem.vmfs.nfs.server.restored

info

ESXHostStorage

esx.problem.vmfs.nfs.server.restored| Restored connection to server {1} mount point {2} mounted as {3} ({4}).

Since 4.1 Reference

esx.problem.vmfs.resource.corruptondisk

error

ESXHostStorage

esx.problem.vmfs.resource.corruptondisk| At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.

Since 4.1 Reference

esx.problem.vmfs.volume.locked

error

ESXHostStorage

esx.problem.vmfs.volume.locked| Volume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover.

Since 4.1 Reference

esx.problem.vmsyslogd.remote.failure

error

ESXHost

esx.problem.vmsyslogd.remote.failure| The host "{1}" has become unreachable. Remote logging to this host has stopped.

Since 5.0 Reference

esx.problem.vmsyslogd.storage.failure

error

ESXHost

esx.problem.vmsyslogd.storage.failure| Logging to storage has failed. Logs are no longer being stored locally on this host.

Since 5.0 Reference

esx.problem.vmsyslogd.storage.logdir.invalid

error

ESXHost

esx.problem.vmsyslogd.storage.logdir.invalid| The configured log directory {1} cannot be used. The default directory {2} will be used instead.

Since 5.1 Reference

esx.problem.vmsyslogd.unexpected

warning

ESXHost

esx.problem.vmsyslogd.unexpected| Log daemon has failed for an unexpected reason: {1}

Since 5.0 Reference

esx.problem.vpxa.core.dumped

warning

ESXHost

esx.problem.vpxa.core.dumped| {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped.

Since 5.0 Reference

esx.problem.vsan.clustering.disabled

warning

VC

esx.problem.vsan.clustering.disabled| VSAN clustering and directory services have been disabled and are thus no longer available.

Since 5.5 Reference

esx.problem.vsan.net.not.ready

warning

ESXHostNetwork

esx.problem.vsan.net.not.ready| vmknic {1} that is currently configured to be used with VSAN doesn't have an IP address yet. There is no other active network configuration, and therefore the VSAN node doesn't have network connectivity.

Since 5.5 Reference

esx.problem.vsan.net.redundancy.lost

warning

ESXHostNetwork

esx.problem.vsan.net.redundancy.lost| VSAN network configuration doesn't have any redundancy. This might be a problem if further network configuration is removed.

Since 5.5 Reference

esx.problem.vsan.net.redundancy.reduced

warning

ESXHostNetwork

esx.problem.vsan.net.redundancy.reduced| VSAN network configuration redundancy has been reduced. This might be a problem if further network configuration is removed.

Since 5.5 Reference

esx.problem.vsan.no.network.connectivity

error

ESXHostNetwork

esx.problem.vsan.no.network.connectivity| VSAN doesn't have any network configuration. This can severely impact several objects in the VSAN datastore.

Since 5.5 Reference

esx.problem.vsan.vmknic.not.ready

warning

VC

esx.problem.vsan.vmknic.not.ready| vmknic {1} that is currently configured to be used with VSAN doesn't have an IP address yet. However, there are other network configurations which are active. If those configurations are removed, that may cause problems.

Since 5.5 Reference

ExitedStandbyModeEvent

info

VC

The host {host.name} is no longer in standby mode

Since 2.5 Reference

ExitingStandbyModeEvent

info

VC

The host {host.name} is exiting standby mode

Since 4.0 Reference

ExitMaintenanceModeEvent

info

VC

Host {host.name} in {datacenter.name} has exited maintenance mode

Since 2.0 Reference

ExitStandbyModeFailedEvent

error

ESXHost

The host {host.name} could not exit standby mode

Since 4.0 Reference

FailoverLevelRestored

info

VC

Sufficient resources are available to satisfy HA failover level in cluster {computeResource.name} in {datacenter.name}

Since 2.0 Reference

GeneralEvent

info

VC

General event: {message}

Since 2.0 Reference

GeneralHostErrorEvent

error

ESXHost

Error detected on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GeneralHostInfoEvent

info

VC

Issue detected on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GeneralHostWarningEvent

warning

ESXHost

Issue detected on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GeneralUserEvent

user

VC

User logged event: {message}

Since 2.0 Reference
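
GeneralUserEvent entries are not raised by vCenter itself; clients post them through EventManager.LogUserEvent. A sketch, with an illustrative wrapper name:

    from pyVmomi import vim

    def log_user_event(si, entity, message):
        # Posts a GeneralUserEvent ("User logged event: {message}") against a
        # managed entity such as a vim.VirtualMachine or vim.HostSystem.
        si.content.eventManager.LogUserEvent(entity=entity, msg=message)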

GeneralVmErrorEvent

error

VirtualMachine

Error detected for {vm.name} on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GeneralVmInfoEvent

info

VC

Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GeneralVmWarningEvent

warning

VirtualMachine

Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message}

Since 2.0 Reference

GhostDvsProxySwitchDetectedEvent

info

VC

The Distributed Virtual Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter or does not contain this host.

Since 4.0 Reference

GhostDvsProxySwitchRemovedEvent

info

VC

A ghost proxy switch {switchUuid} on the host {host.name} was resolved.

Since 4.0 Reference

GlobalMessageChangedEvent

info

VC

The message changed: {message}

Since 2.0 Reference

hbr.primary.AppQuiescedDeltaCompletedEvent

info

VC

hbr.primary.AppQuiescedDeltaCompletedEvent| Application consistent delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred)

Since 5.0 Reference

hbr.primary.ConnectionRestoredToHbrServerEvent

info

VC

hbr.primary.ConnectionRestoredToHbrServerEvent| Connection to replication server restored for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

hbr.primary.DeltaAbortedEvent

warning

VC

hbr.primary.DeltaAbortedEvent| Delta aborted for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForDeltaAbort}

Since 5.0 Reference

hbr.primary.DeltaCompletedEvent

info

VC

hbr.primary.DeltaCompletedEvent| Delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).

Since 5.0 Reference

hbr.primary.DeltaStartedEvent

info

VC

hbr.primary.DeltaStartedEvent| Delta started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

hbr.primary.FailedToStartDeltaEvent

error

VC

hbr.primary.FailedToStartDeltaEvent| Failed to start delta for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault}

Since 5.0 Reference

hbr.primary.FailedToStartSyncEvent

error

VC

hbr.primary.FailedToStartSyncEvent| Failed to start full sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault}

Since 5.0 Reference

hbr.primary.FSQuiescedDeltaCompletedEvent

warning

VC

hbr.primary.FSQuiescedDeltaCompletedEvent| File system consistent delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred)

Since 5.0 Reference

hbr.primary.InvalidDiskReplicationConfigurationEvent

warning

VC

hbr.primary.InvalidDiskReplicationConfigurationEvent| Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}, disk {diskKey}: {reasonForFault.@enum.fault.ReplicationDiskConfigFault.ReasonForFault}

Since 5.0 Reference

hbr.primary.InvalidVmReplicationConfigurationEvent

warning

VC

hbr.primary.InvalidVmReplicationConfigurationEvent| Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reasonForFault.@enum.fault.ReplicationVmConfigFault.ReasonForFault}

Since 5.0 Reference

hbr.primary.NoConnectionToHbrServerEvent

warning

VC

hbr.primary.NoConnectionToHbrServerEvent| No connection to replication server for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerConnection}

Since 5.0 Reference

hbr.primary.NoProgressWithHbrServerEvent

warning

VC

hbr.primary.NoProgressWithHbrServerEvent| Replication server error for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerProgress}

Since 5.0 Reference

hbr.primary.QuiesceNotSupported

warning

VC

hbr.primary.QuiesceNotSupported| Quiescing is not supported for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

hbr.primary.SyncCompletedEvent

info

VC

hbr.primary.SyncCompletedEvent| Full sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).

Since 5.0 Reference

hbr.primary.SyncStartedEvent

info

VC

hbr.primary.SyncStartedEvent| Full sync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}.

Since 5.0 Reference

hbr.primary.UnquiescedDeltaCompletedEvent

warning

VC

hbr.primary.UnquiescedDeltaCompletedEvent| Delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred).

Since 5.0 Reference

hbr.primary.VmReplicationConfigurationChangedEvent

info

VC

hbr.primary.VmReplicationConfigurationChangedEvent| Replication configuration changed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({numDisks} disks, {rpo} minutes RPO, HBR Server is {hbrServerAddress}).

Since 5.0 Reference
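
The hbr.primary.* entries above are most useful when scoped to one virtual machine, which EventFilterSpec supports via a ByEntity sub-filter. A sketch, assuming an existing connection si and a vim.VirtualMachine reference vm; the selection of four problem IDs is illustrative:

    from pyVmomi import vim

    # Connection and start-failure IDs taken from the hbr.primary entries above.
    HBR_PROBLEM_IDS = [
        "hbr.primary.FailedToStartDeltaEvent",
        "hbr.primary.FailedToStartSyncEvent",
        "hbr.primary.NoConnectionToHbrServerEvent",
        "hbr.primary.NoProgressWithHbrServerEvent",
    ]

    def vm_replication_problems(si, vm):
        # `vm` is a vim.VirtualMachine; the ByEntity filter scopes the query
        # to events raised against that one VM.
        spec = vim.event.EventFilterSpec(
            eventTypeId=HBR_PROBLEM_IDS,
            entity=vim.event.EventFilterSpec.ByEntity(entity=vm,
                                                      recursion="self"))
        return si.content.eventManager.QueryEvents(spec)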

HealthStatusChangedEvent

info

VC

{componentName} status changed from {oldStatus} to {newStatus}

Since 4.0 Reference

HostAddedEvent

info

VC

Added host {host.name} to datacenter {datacenter.name}

Since 2.0 Reference

HostAddFailedEvent

error

VC

Cannot add host {hostname} to datacenter {datacenter.name}

Since 2.0 Reference

HostAdminDisableEvent

warning

VC

Administrator access to the host {host.name} is disabled

Since 2.5 Reference

HostAdminEnableEvent

warning

VC

Administrator access to the host {host.name} has been restored

Since 2.5 Reference

HostCnxFailedAccountFailedEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: cannot configure management account

Since 2.0 Reference

HostCnxFailedAlreadyManagedEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: already managed by {serverName}

Since 2.0 Reference

HostCnxFailedBadCcagentEvent

error

ESXHost

Cannot connect host {host.name} in {datacenter.name}: server agent is not responding

Since 2.0 Reference

HostCnxFailedBadUsernameEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: incorrect user name or password

Since 2.0 Reference

HostCnxFailedBadVersionEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: incompatible version

Since 2.0 Reference

HostCnxFailedCcagentUpgradeEvent

error

ESXHost

Cannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service.

Since 2.0 Reference

HostCnxFailedEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: error connecting to host

Since 2.0 Reference

HostCnxFailedNetworkErrorEvent

error

ESXHost

Cannot connect {host.name} in {datacenter.name}: network error

Since 2.0 Reference

HostCnxFailedNoAccessEvent

error

ESXHost

Cannot connect host {hos