vCenter Events
ID | Severity | Group | Message Catalog Text |
com.vmware.cl.CopyLibraryItemEvent | info | VC | com.vmware.cl.CopyLibraryItemEvent|Copied Library Item {targetLibraryItemName} to Library {targetLibraryName}({targetLibraryId}). Source Library Item {sourceLibraryItemName}({sourceLibraryItemId}), source Library {sourceLibraryName}({sourceLibraryId}). Since 6.0 |
com.vmware.cl.CopyLibraryItemFailEvent | error | VC | com.vmware.cl.CopyLibraryItemFailEvent|Failed to copy Library Item {targetLibraryItemName}. Since 6.0 |
com.vmware.cl.CreateLibraryEvent | info | VC | com.vmware.cl.CreateLibraryEvent|Created Library {libraryName} Since 6.0 |
com.vmware.cl.CreateLibraryFailEvent | error | VC | com.vmware.cl.CreateLibraryFailEvent|Failed to create Library {libraryName} Since 6.0 |
com.vmware.cl.CreateLibraryItemEvent | info | VC | com.vmware.cl.CreateLibraryItemEvent|Created Library Item {libraryItemName} in Library {libraryName}({libraryId}). Since 6.0 |
com.vmware.cl.CreateLibraryItemFailEvent | error | VC | com.vmware.cl.CreateLibraryItemFailEvent|Failed to create Library Item {libraryItemName}. Since 6.0 |
com.vmware.cl.DeleteLibraryEvent | info | VC | com.vmware.cl.DeleteLibraryEvent|Deleted Library {libraryName} Since 6.0 |
com.vmware.cl.DeleteLibraryFailEvent | error | VC | com.vmware.cl.DeleteLibraryFailEvent|Failed to delete Library Since 6.0 |
com.vmware.cl.DeleteLibraryItemEvent | info | VC | com.vmware.cl.DeleteLibraryItemEvent|Deleted Library Item {libraryItemName} in Library {libraryName}({libraryId}). Since 6.0 |
com.vmware.cl.DeleteLibraryItemFailEvent | error | VC | com.vmware.cl.DeleteLibraryItemFailEvent|Failed to delete Library Item. Since 6.0 |
com.vmware.cl.UpdateLibraryEvent | info | VC | com.vmware.cl.UpdateLibraryEvent|Updated Library {libraryName} Since 6.0 |
com.vmware.cl.UpdateLibraryFailEvent | error | VC | com.vmware.cl.UpdateLibraryFailEvent|Failed to update Library Since 6.0 |
com.vmware.cl.UpdateLibraryItemEvent | info | VC | com.vmware.cl.UpdateLibraryItemEvent|Updated Library Item {libraryItemName} in Library {libraryName}({libraryId}). Since 6.0 |
com.vmware.cl.UpdateLibraryItemFailEvent | error | VC | com.vmware.cl.UpdateLibraryItemFailEvent|Failed to update Library Item. Since 6.0 |
com.vmware.license.HostLicenseExpiredEvent | warning | VC | com.vmware.license.HostLicenseExpiredEvent|Expired host license or evaluation period. Since 6.0 |
com.vmware.license.HostSubscriptionLicenseExpiredEvent | warning | VC | com.vmware.license.HostSubscriptionLicenseExpiredEvent|Expired host time-limited license. Since 6.0 |
com.vmware.license.VcLicenseExpiredEvent | warning | VC | com.vmware.license.VcLicenseExpiredEvent|Expired vCenter Server license or evaluation period. Since 6.0 |
com.vmware.license.VcSubscriptionLicenseExpiredEvent | warning | VC | com.vmware.license.VcSubscriptionLicenseExpiredEvent|Expired vCenter Server time-limited license. Since 6.0 |
com.vmware.license.vsan.HostSsdOverUsageEvent | warning | VC | com.vmware.license.vsan.HostSsdOverUsageEvent|The capacity of the flash disks on the host exceeds the limit of the Virtual SAN license. Since 6.0 |
com.vmware.license.vsan.LicenseExpiryEvent | warning | VC | com.vmware.license.vsan.LicenseExpiryEvent|Expired Virtual SAN license or evaluation period. Since 6.0 |
com.vmware.license.vsan.SubscriptionLicenseExpiredEvent | warning | VC | com.vmware.license.vsan.SubscriptionLicenseExpiredEvent|Expired Virtual SAN time-limited license. Since 6.0 |
com.vmware.pbm.profile.associate | info | VC | com.vmware.pbm.profile.associate|Associated storage policy: {ProfileId} with entity: {EntityId} Since 6.0 |
com.vmware.pbm.profile.delete | info | VC | com.vmware.pbm.profile.delete|Deleted storage policy: {ProfileId} Since 6.0 |
com.vmware.pbm.profile.dissociate | info | VC | com.vmware.pbm.profile.dissociate|Dissociated storage policy: {ProfileId} from entity: {EntityId} Since 6.0 |
com.vmware.pbm.profile.updateName | info | VC | com.vmware.pbm.profile.updateName|Storage policy name updated for {ProfileId}. New name: {NewProfileName} Since 6.0 |
com.vmware.rbd.activateRuleSet | info | VC | com.vmware.rbd.activateRuleSet|Activate Rule Set Since 6.0 |
com.vmware.rbd.fdmPackageMissing | warning | VC | com.vmware.rbd.fdmPackageMissing|A host in a HA cluster does not have the 'vmware-fdm' package in its image profile Since 6.0 |
com.vmware.rbd.hostProfileRuleAssocEvent | warning | VC | com.vmware.rbd.hostProfileRuleAssocEvent|A host profile associated with one or more active rules was deleted. Since 6.0 |
com.vmware.rbd.ignoreMachineIdentity | warning | VC | com.vmware.rbd.ignoreMachineIdentity|Ignoring the AutoDeploy.MachineIdentity event, since the host is already provisioned through Auto Deploy Since 6.0 |
com.vmware.rbd.pxeBootNoImageRule | info | VC | com.vmware.rbd.pxeBootNoImageRule|Unable to PXE boot host since it does not match any rules Since 6.0 |
com.vmware.rbd.pxeBootUnknownHost | info | VC | com.vmware.rbd.pxeBootUnknownHost|PXE Booting unknown host Since 6.0 |
com.vmware.rbd.pxeProfileAssoc | info | VC | com.vmware.rbd.pxeProfileAssoc|Attach PXE Profile Since 6.0 |
com.vmware.rbd.vmcaCertGenerationFailureEvent | error | VC | com.vmware.rbd.vmcaCertGenerationFailureEvent|Failed to generate host certificates using VMCA Since 6.0 |
com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent | info | VC | com.vmware.vc.certmgr.HostCaCertsAndCrlsUpdatedEvent|CA Certificates were updated on {hostname} Since 6.0 |
com.vmware.vc.certmgr.HostCertExpirationImminentEvent | warning | VC | com.vmware.vc.certmgr.HostCertExpirationImminentEvent|Host Certificate expiration is imminent on {hostname}. Expiration Date: {expiryDate} Since 6.0 |
com.vmware.vc.certmgr.HostCertExpiringEvent | warning | VC | com.vmware.vc.certmgr.HostCertExpiringEvent|Host Certificate on {hostname} is nearing expiration. Expiration Date: {expiryDate} Since 6.0 |
com.vmware.vc.certmgr.HostCertExpiringShortlyEvent | warning | VC | com.vmware.vc.certmgr.HostCertExpiringShortlyEvent|Host Certificate on {hostname} will expire soon. Expiration Date: {expiryDate} Since 6.0 |
com.vmware.vc.certmgr.HostCertManagementModeChangedEvent | info | VC | com.vmware.vc.certmgr.HostCertManagementModeChangedEvent|Host Certificate Management Mode changed from {previousMode} to {presentMode} Since 6.0 |
com.vmware.vc.certmgr.HostCertMetadataChangedEvent | info | VC | com.vmware.vc.certmgr.HostCertMetadataChangedEvent|Host Certificate Management Metadata changed Since 6.0 |
com.vmware.vc.certmgr.HostCertRevokedEvent | warning | VC | com.vmware.vc.certmgr.HostCertRevokedEvent|Host Certificate on {hostname} is revoked. Since 6.0 |
com.vmware.vc.certmgr.HostCertUpdatedEvent | info | VC | com.vmware.vc.certmgr.HostCertUpdatedEvent|Host Certificate was updated on {hostname}, new thumbprint: {thumbprint} Since 6.0 |
com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent | info | VC | com.vmware.vc.certmgr.HostMgmtAgentsRestartedEvent|Management Agents were restarted on {hostname} Since 6.0 |
com.vmware.vc.HA.ClusterFailoverInProgressEvent | warning | VC | com.vmware.vc.HA.ClusterFailoverInProgressEvent|vSphere HA failover operation in progress in cluster {computeResource.name} in datacenter {datacenter.name}: {numBeingPlaced} VMs being restarted, {numToBePlaced} VMs waiting for a retry, {numAwaitingResource} VMs waiting for resources, {numAwaitingVsanVmChange} inaccessible Virtual SAN VMs Since 6.0 |
com.vmware.vc.HA.ConnectedToMaster | info | VC | com.vmware.vc.HA.ConnectedToMaster|vSphere HA agent on host {host.name} connected to the vSphere HA master on host {masterHostName} in cluster {computeResource.name} in datacenter {datacenter.name} Since 6.0 |
com.vmware.vc.HA.CreateConfigVvolFailedEvent | error | VC | com.vmware.vc.HA.CreateConfigVvolFailedEvent|vSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: {fault} Since 6.0 |
com.vmware.vc.HA.CreateConfigVvolSucceededEvent | info | VC | com.vmware.vc.HA.CreateConfigVvolSucceededEvent|vSphere HA successfully created a configuration vVol after the previous failure Since 6.0 |
com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent | warning | VC | com.vmware.vc.HA.VcCannotCommunicateWithMasterEvent|vCenter Server cannot communicate with the master vSphere HA agent on {hostname} in cluster {computeResource.name} in {datacenter.name} Since 6.0 |
com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore | warning | VC | com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore|vSphere HA did not terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.VmcpNotTerminateVmWithInaccessibleDatastore} Since 6.0 |
com.vmware.vc.HA.VmcpStorageFailureCleared | info | VC | com.vmware.vc.HA.VmcpStorageFailureCleared|Datastore {ds.name} mounted on host {host.name} was inaccessible. The condition was cleared and the datastore is now accessible Since 6.0 |
com.vmware.vc.HA.VmcpStorageFailureDetectedForVm | warning | VC | com.vmware.vc.HA.VmcpStorageFailureDetectedForVm|vSphere HA detected that a datastore mounted on host {host.name} in cluster {computeResource.name} in {datacenter.name} was inaccessible due to {failureType.@enum.com.vmware.vc.HA.VmcpStorageFailureDetectedForVm}. This affected VM {vm.name} with files on the datastore Since 6.0 |
com.vmware.vc.HA.VmcpTerminateVmAborted | error | VC | com.vmware.vc.HA.VmcpTerminateVmAborted|vSphere HA was unable to terminate VM {vm.name} affected by an inaccessible datastore on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries Since 6.0 |
com.vmware.vc.HA.VmcpTerminatingVm | warning | VC | com.vmware.vc.HA.VmcpTerminatingVm|vSphere HA attempted to terminate VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM was affected by an inaccessible datastore Since 6.0 |
com.vmware.vc.HA.VmDasResetAbortedEvent | error | VC | com.vmware.vc.HA.VmDasResetAbortedEvent|vSphere HA was unable to reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} after {retryTimes} retries Since 6.0 |
com.vmware.vc.host.problem.DeprecatedVMFSVolumeFound | warning | VC | com.vmware.vc.host.problem.DeprecatedVMFSVolumeFound|Deprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version. Since 6.0 |
com.vmware.vc.iofilter.FilterInstallationFailedEvent | error | VC | com.vmware.vc.iofilter.FilterInstallationFailedEvent|vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed Since 6.0 |
com.vmware.vc.iofilter.FilterInstallationSuccessEvent | info | VC | com.vmware.vc.iofilter.FilterInstallationSuccessEvent|vSphere APIs for I/O Filters (VAIO) installation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful Since 6.0 |
com.vmware.vc.iofilter.FilterUninstallationFailedEvent | error | VC | com.vmware.vc.iofilter.FilterUninstallationFailedEvent|vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed Since 6.0 |
com.vmware.vc.iofilter.FilterUninstallationSuccessEvent | info | VC | com.vmware.vc.iofilter.FilterUninstallationSuccessEvent|vSphere APIs for I/O Filters (VAIO) uninstallation of filters on cluster {computeResource.name} in datacenter {datacenter.name} is successful Since 6.0 |
com.vmware.vc.iofilter.FilterUpgradeFailedEvent | error | VC | com.vmware.vc.iofilter.FilterUpgradeFailedEvent|vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has failed Since 6.0 |
com.vmware.vc.iofilter.FilterUpgradeSuccessEvent | info | VC | com.vmware.vc.iofilter.FilterUpgradeSuccessEvent|vSphere APIs for I/O Filters (VAIO) upgrade of filters on cluster {computeResource.name} in datacenter {datacenter.name} has succeeded Since 6.0 |
com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent | error | VC | com.vmware.vc.iofilter.HostVendorProviderRegistrationFailedEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} registration has failed. Reason : {fault.msg}. Since 6.0 |
com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEvent | info | VC | com.vmware.vc.iofilter.HostVendorProviderRegistrationSuccessEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully registered Since 6.0 |
com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent | error | VC | com.vmware.vc.iofilter.HostVendorProviderUnregistrationFailedEvent|Failed to unregister vSphere APIs for I/O Filters (VAIO) vendor provider {host.name}. Reason : {fault.msg}. Since 6.0 |
com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEvent | info | VC | com.vmware.vc.iofilter.HostVendorProviderUnregistrationSuccessEvent|vSphere APIs for I/O Filters (VAIO) vendor provider {host.name} has been successfully unregistered Since 6.0 |
com.vmware.vc.sms.ObjectTypeAlarmClearedEvent | info | VC | com.vmware.vc.sms.ObjectTypeAlarmClearedEvent|Storage provider [{providerName}] cleared a Storage Alarm of type 'Object' on {eventSubjectId} : {msgTxt} Since 6.0 |
com.vmware.vc.sms.ObjectTypeAlarmErrorEvent | error | VC | com.vmware.vc.sms.ObjectTypeAlarmErrorEvent|Storage provider [{providerName}] raised an alert type 'Object' on {eventSubjectId} : {msgTxt} Since 6.0 |
com.vmware.vc.sms.ObjectTypeAlarmWarningEvent | warning | VC | com.vmware.vc.sms.ObjectTypeAlarmWarningEvent|Storage provider [{providerName}] raised a warning of type 'Object' on {eventSubjectId} : {msgTxt} Since 6.0 |
com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent | error | VC | com.vmware.vc.sms.VasaProviderCertificateHardLimitReachedEvent|Certificate for storage provider {providerName} will expire very shortly. Expiration date : {expiryDate} Since 6.0 |
com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent | warning | VC | com.vmware.vc.sms.VasaProviderCertificateSoftLimitReachedEvent|Certificate for storage provider {providerName} will expire soon. Expiration date : {expiryDate} Since 6.0 |
com.vmware.vc.sms.VasaProviderCertificateValidEvent | info | VC | com.vmware.vc.sms.VasaProviderCertificateValidEvent|Certificate for storage provider {providerName} is valid Since 6.0 |
com.vmware.vc.sms.VasaProviderConnectedEvent | info | VC | com.vmware.vc.sms.VasaProviderConnectedEvent|Storage provider {providerName} is connected Since 6.0 |
com.vmware.vc.sms.VasaProviderDisconnectedEvent | error | VC | com.vmware.vc.sms.VasaProviderDisconnectedEvent|Storage provider {providerName} is disconnected Since 6.0 |
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure | error | VC | com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsFailure|Refreshing CA certificates and CRLs failed for VASA providers with url : {providerUrls} Since 6.0 |
com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccess | info | VC | com.vmware.vc.sms.VasaProviderRefreshCACertsAndCRLsSuccess|Refreshing CA certificates and CRLs succeeded for all registered VASA providers. Since 6.0 |
com.vmware.vc.spbm.ServiceErrorEvent | error | VC | com.vmware.vc.spbm.ServiceErrorEvent|Configuring storage policy failed for VM {entityName}. Verify that SPBM service is healthy. Fault Reason : {errorMessage} Since 6.0 |
com.vmware.vc.vm.DstVmMigratedEvent | info | VC | com.vmware.vc.vm.DstVmMigratedEvent|Virtual machine {vm.name} {newMoRef} in {computeResource.name} in {datacenter.name} was migrated from {oldMoRef} Since 6.0 |
com.vmware.vc.vm.PowerOnAfterCloneErrorEvent | error | VC | com.vmware.vc.vm.PowerOnAfterCloneErrorEvent|Virtual machine {vm.name} failed to power on after cloning on host {host.name} in datacenter {datacenter.name} Since 6.0 |
com.vmware.vc.vm.SrcVmMigratedEvent | info | VC | com.vmware.vc.vm.SrcVmMigratedEvent|Virtual machine {vm.name} {oldMoRef} in {computeResource.name} in {datacenter.name} was migrated to {newMoRef} Since 6.0 |
com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent | error | VC | com.vmware.vc.vm.VmAdapterResvNotSatisfiedEvent|Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is not satisfied Since 6.0 |
com.vmware.vc.vm.VmAdapterResvSatisfiedEvent | info | VC | com.vmware.vc.vm.VmAdapterResvSatisfiedEvent|Reservation of Virtual NIC {deviceLabel} of machine {vm.name} on host {host.name} in datacenter {datacenter.name} is satisfied Since 6.0 |
com.vmware.vc.vsan.ChecksumDisabledHostFoundEvent | error | VC | com.vmware.vc.vsan.ChecksumDisabledHostFoundEvent|Found a checksum disabled host {host.name} in a checksum protected vCenter Server cluster {computeResource.name} in datacenter {datacenter.name} Since 6.0 |
com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent | error | VC | com.vmware.vc.vsan.ChecksumNotSupportedDiskFoundEvent|Virtual SAN disk {disk} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} does not support checksum Since 6.0 |
com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent | error | VC | com.vmware.vc.vsan.TurnDiskLocatorLedOffFailedEvent|Failed to turn off the locator LED of disk {disk.path}. Reason : {fault.msg} Since 6.0 |
com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent | error | VC | com.vmware.vc.vsan.TurnDiskLocatorLedOnFailedEvent|Failed to turn on the locator LED of disk {disk.path}. Reason : {fault.msg} Since 6.0 |
com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent | warning | VC | com.vmware.vc.vsan.VsanHostNeedsUpgradeEvent|Virtual SAN cluster {computeResource.name} has one or more hosts that need disk format upgrade: {host}. For more detailed information of Virtual SAN upgrade, please see the 'Virtual SAN upgrade procedure' section in the documentation Since 6.0 |
DrsSoftRuleViolationEvent | info | VC | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host soft affinity rule Since 6.0 |
esx.audit.account.locked | warning | VC | esx.audit.account.locked|Remote access for ESXi local user account '{1}' has been locked for {2} seconds after {3} failed login attempts. Since 6.0 |
esx.audit.account.loginfailures | warning | VC | esx.audit.account.loginfailures|Multiple remote login failures detected for ESXi local user account '{1}'. Since 6.0 |
esx.audit.lockdownmode.exceptions.changed | info | VC | esx.audit.lockdownmode.exceptions.changed|List of lockdown exception users has been changed. Since 6.0 |
esx.audit.vsan.net.vnic.added | info | VC | esx.audit.vsan.net.vnic.added|Virtual SAN virtual NIC has been added. Since 6.0 |
esx.clear.coredump.configured2 | info | VC | esx.clear.coredump.configured2|At least one coredump target has been configured. Host core dumps will be saved. Since 6.0 |
esx.clear.vob.vsan.pdl.online | info | ESXHostStorage | Virtual SAN device {1} has come online. Since 6.0 |
esx.clear.vsan.vsan.network.available | info | ESXHostStorage | Virtual SAN now has a usable network configuration. Earlier reported connectivity problems, if any, can now be ignored because they are resolved. Since 6.0 |
esx.clear.vsan.vsan.vmknic.ready | info | ESXHostStorage | vmknic {1} now has an IP address. Earlier reported connectivity problems, if any, can now be ignored because they are resolved. Since 6.0 |
esx.problem.coredump.capacity.insufficient | warning | VC | esx.problem.coredump.capacity.insufficient|The storage capacity of the coredump targets is insufficient to capture a complete coredump. Recommended coredump capacity is {1} MiB. Since 6.0 |
esx.problem.coredump.copyspace | warning | VC | esx.problem.coredump.copyspace|The free space available in default coredump copy location is insufficient to copy new coredumps. Recommended free space is {1} MiB. Since 6.0 |
esx.problem.coredump.extraction.failed.nospace | warning | VC | esx.problem.coredump.extraction.failed.nospace|The given partition has insufficient amount of free space to extract the coredump. At least {1} MiB is required. Since 6.0 |
esx.problem.coredump.unconfigured2 | warning | VC | esx.problem.coredump.unconfigured2|No coredump target has been configured. Host core dumps cannot be saved. Since 6.0 |
esx.problem.scratch.partition.size.small | warning | VC | esx.problem.scratch.partition.size.small|Size of scratch partition {1} is too small. Recommended scratch partition size is {2} MiB. Since 6.0 |
esx.problem.scratch.partition.unconfigured | warning | VC | esx.problem.scratch.partition.unconfigured|No scratch partition has been configured. Recommended scratch partition size is {} MiB. Since 6.0 |
esx.problem.scsi.scsipath.badpath.unreachpe | error | VC | esx.problem.scsi.scsipath.badpath.unreachpe|Sanity check failed for path {1}. The path is to a vVol PE, but it goes out of adapter {2} which is not PE capable. Path dropped. Since 6.0 |
esx.problem.scsi.scsipath.badpath.unsafepe | error | VC | esx.problem.scsi.scsipath.badpath.unsafepe|Sanity check failed for path {1}. Could not safely determine if the path is to a vVol PE. Path dropped. Since 6.0 |
esx.problem.vmfs.ats.incompatibility.detected | error | VC | esx.problem.vmfs.ats.incompatibility.detected|Multi-extent ATS-only volume '{1}' ({2}) is unable to use ATS because HardwareAcceleratedLocking is disabled on this host: potential for introducing filesystem corruption. Volume should not be used from other hosts. Since 6.0 |
esx.problem.vmfs.lockmode.inconsistency.detected | error | VC | esx.problem.vmfs.lockmode.inconsistency.detected|Inconsistent lockmode change detected for VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. Protocol error during ATS transition. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume. Since 6.0 |
esx.problem.vmfs.spanned.lockmode.inconsistency.detected | error | VC | esx.problem.vmfs.spanned.lockmode.inconsistency.detected|Inconsistent lockmode change detected for spanned VMFS volume '{1} ({2})': volume was configured for {3} lockmode at time of open and now it is configured for {4} lockmode but this host is not using {5} lockmode. All operations on this volume will fail until this host unmounts and remounts the volume. Since 6.0 |
esx.problem.vmfs.spanstate.incompatibility.detected | error | VC | esx.problem.vmfs.spanstate.incompatibility.detected|Incompatible span change detected for VMFS volume '{1} ({2})': volume was not spanned at time of open but now it is, and this host is using ATS-only lockmode but the volume is not ATS-only. Volume descriptor refresh operations will fail until this host unmounts and remounts the volume. Since 6.0 |
esx.problem.vob.vsan.lsom.componentthreshold | warning | ESXHostStorage | Virtual SAN Node: {1} reached threshold of {2} %% opened components ({3} of {4}). Since 6.0 |
esx.problem.vob.vsan.lsom.diskerror | error | ESXHostStorage | Virtual SAN device {1} is under permanent failure. Since 6.0 |
esx.problem.vob.vsan.lsom.diskgrouplimit | error | ESXHostStorage | Failed to create new disk group {1}. The system has reached the maximum amount of disk groups allowed {2} for the current amount of memory {3}. Add more memory. Since 6.0 |
esx.problem.vob.vsan.lsom.disklimit | error | ESXHostStorage | Failed to add disk {1} to disk group. The system has reached the maximum amount of disks allowed {2} for the current amount of memory {3} GB. Add more memory. Since 6.0 |
esx.problem.vob.vsan.pdl.offline | error | ESXHostStorage | Virtual SAN device {1} has gone offline. Since 6.0 |
esx.problem.vsan.lsom.congestionthreshold | info | ESXHostStorage | LSOM {1} Congestion State: {2}. Congestion Threshold: {3} Current Congestion: {4}. Since 6.0 |
hbr.primary.RpoOkForServerEvent | info | VC | hbr.primary.RpoOkForServerEvent|VR Server is compatible with the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 6.0 |
hbr.primary.RpoTooLowForServerEvent | warning | VC | hbr.primary.RpoTooLowForServerEvent|VR Server does not support the configured RPO for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 6.0 |
NetCompressionNotOkForServerEvent | error | VC | NetCompressionNotOkForServerEvent|event.NetCompressionNotOkForServerEvent.fullFormat Since 6.0 |
NetCompressionOkForServerEvent | info | VC | NetCompressionOkForServerEvent|event.NetCompressionOkForServerEvent.fullFormat Since 6.0 |
vim.event.SubscriptionLicenseExpiredEvent | warning | VC | vim.event.SubscriptionLicenseExpiredEvent|The time-limited license on host {host.name} has expired. To comply with the EULA, renew the license at http://my.vmware.com Since 6.0 |
VmGuestOSCrashedEvent | error | VC | {vm.name} on {host.name}: Guest operating system has crashed. Since 6.0 |
info | VC | An account was created on host {host.name} Since 2.0 |
info | VC | Account {account} was removed on host {host.name} Since 2.0 |
info | VC | An account was updated on host {host.name} Since 2.0 |
ad.event.ImportCertEvent | info | VC | ad.event.ImportCertEvent| Import certificate succeeded. Since 5.0 |
ad.event.ImportCertFailedEvent | error | VC | ad.event.ImportCertFailedEvent| Import certificate failed. Since 5.0 |
ad.event.JoinDomainEvent | info | VC | ad.event.JoinDomainEvent| Join domain succeeded. Since 5.0 |
ad.event.JoinDomainFailedEvent | error | VC | ad.event.JoinDomainFailedEvent| Join domain failed. Since 5.0 |
ad.event.LeaveDomainEvent | info | VC | ad.event.LeaveDomainEvent| Leave domain succeeded. Since 5.0 |
ad.event.LeaveDomainFailedEvent | error | VC | ad.event.LeaveDomainFailedEvent| Leave domain failed. Since 5.0 |
info | VC | The default password for the root user on the host {host.name} has not been changed Since 2.5 |
info | VC | Acknowledged alarm '{alarm.name}' on {entity.name} Since 5.0 |
info | VC | Alarm '{alarm.name}' on {entity.name} triggered an action Since 2.0 |
info | VC | Manually cleared alarm '{alarm.name}' on {entity.name} from {from.@enum.ManagedEntity.Status} Since 5.0 |
info | VC | Created alarm '{alarm.name}' on {entity.name} Since 2.0 |
info | VC | Alarm '{alarm.name}' on {entity.name} sent email to {to} Since 2.0 |
error | VC | Alarm '{alarm.name}' on {entity.name} cannot send email to {to} Since 2.0 |
info | VC | Reconfigured alarm '{alarm.name}' on {entity.name} Since 2.0 |
info | VC | Removed alarm '{alarm.name}' on {entity.name} Since 2.0 |
info | VC | Alarm '{alarm.name}' on {entity.name} ran script {script} Since 2.0 |
error | VC | Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg} Since 2.0 |
info | VC | Alarm '{alarm.name}' on entity {entity.name} sent SNMP trap Since 2.0 |
error | VC | Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg} Since 2.0 |
info | VC | Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status} Since 2.0 |
info | VC | All running virtual machines are licensed Since 2.5 |
info | VC | User cannot logon since the user is already logged on Since 2.0 |
warning | VC | Cannot login {userName}@{ipAddress} Since 2.0 |
info | VC | The operation performed on host {host.name} in {datacenter.name} was canceled Since 2.0 |
info | VC | Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}. Since 5.1 |
error | VC | Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}. Since 5.1 |
info | VC | Checked cluster for compliance Since 4.0 |
info | VC | Created cluster {computeResource.name} in {datacenter.name} Since 2.0 |
info | VC | Removed cluster {computeResource.name} in datacenter {datacenter.name} Since 2.0 |
warning | Cluster | Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name} Since 2.0 |
info | VC | Reconfigured cluster {computeResource.name} in datacenter {datacenter.name} Since 2.0 |
info | VC | Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} Since 2.0 |
com.vmware.license.AddLicenseEvent | info | VC | com.vmware.license.AddLicenseEvent| License {licenseKey} added to VirtualCenter Since 4.0 |
com.vmware.license.AssignLicenseEvent | info | VC | com.vmware.license.AssignLicenseEvent| License {licenseKey} assigned to asset {entityName} Since 4.0 |
com.vmware.license.DLFDownloadFailedEvent | warning | VC | com.vmware.license.DLFDownloadFailedEvent| Failed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason} Since 4.1 |
com.vmware.license.LicenseAssignFailedEvent | error | VC | com.vmware.license.LicenseAssignFailedEvent| License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}. Since 4.0 |
com.vmware.license.LicenseCapacityExceededEvent | warning | VC | com.vmware.license.LicenseCapacityExceededEvent| The current license usage ({currentUsage} {costUnitText}) for {edition} exceeds the license capacity ({capacity} {costUnitText}) Since 5.0 |
com.vmware.license.LicenseExpiryEvent | error | VC | com.vmware.license.LicenseExpiryEvent| Your host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires. Since 4.0 |
com.vmware.license.LicenseUserThresholdExceededEvent | warning | VC | com.vmware.license.LicenseUserThresholdExceededEvent| Current license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText}) Since 4.1 |
com.vmware.license.RemoveLicenseEvent | info | VC | com.vmware.license.RemoveLicenseEvent| License {licenseKey} removed from VirtualCenter Since 4.0 |
com.vmware.license.UnassignLicenseEvent | info | VC | com.vmware.license.UnassignLicenseEvent| License unassigned from asset {entityName} Since 4.0 |
com.vmware.vc.cim.CIMGroupHealthStateChanged | info | VC | com.vmware.vc.cim.CIMGroupHealthStateChanged| Health of [data.group] changed from [data.oldState] to [data.newState]. Since 4.0 |
com.vmware.vc.datastore.UpdatedVmFilesEvent | info | VC | com.vmware.vc.datastore.UpdatedVmFilesEvent| Updated VM files on datastore {ds.name} using host {hostName} Since 4.1 |
com.vmware.vc.datastore.UpdateVmFilesFailedEvent | error | VC | com.vmware.vc.datastore.UpdateVmFilesFailedEvent| Failed to update VM files on datastore {ds.name} using host {hostName} Since 4.1 |
com.vmware.vc.datastore.UpdatingVmFilesEvent | info | VC | com.vmware.vc.datastore.UpdatingVmFilesEvent| Updating VM files on datastore {ds.name} using host {hostName} Since 4.1 |
com.vmware.vc.dvs.LacpConfigInconsistentEvent | info | VC | com.vmware.vc.dvs.LacpConfigInconsistentEvent| Single Link Aggregation Control Group is enabled on Uplink Port Groups while enhanced LACP support is enabled. Since 5.5 |
com.vmware.vc.ft.VmAffectedByDasDisabledEvent | warning | VirtualMachine | com.vmware.vc.ft.VmAffectedByDasDisabledEvent| VMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure. Since 4.1 |
com.vmware.vc.guestOperations.GuestOperation | info | VC | com.vmware.vc.guestOperations.GuestOperation| Guest operation {operationName.@enum.com.vmware.vc.guestOp} performed on Virtual machine {vm.name}. Since 5.0 |
com.vmware.vc.guestOperations.GuestOperationAuthFailure | warning | VirtualMachine | com.vmware.vc.guestOperations.GuestOperationAuthFailure| Guest operation authentication failed for operation {operationName.@enum.com.vmware.vc.guestOp} on Virtual machine {vm.name}. Since 5.0 |
com.vmware.vc.HA.AllHostAddrsPingable | info | VC | com.vmware.vc.HA.AllHostAddrsPingable| All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.AllIsoAddrsPingable | info | VC | com.vmware.vc.HA.AllIsoAddrsPingable| All vSphere HA isolation addresses are reachable by host {host.name} in cluster {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent | warning | VirtualMachine | com.vmware.vc.HA.AnsweredVmLockLostQuestionEvent| Lock-lost question on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} was answered by vSphere HA Since 5.0 |
com.vmware.vc.HA.AnsweredVmTerminatePDLEvent | warning | VirtualMachine | com.vmware.vc.HA.AnsweredVmTerminatePDLEvent| vSphere HA answered a question from host {host.name} in cluster {computeResource.name} about terminating virtual machine {vm.name} Since 5.1 |
com.vmware.vc.HA.AutoStartDisabled | info | VC | com.vmware.vc.HA.AutoStartDisabled| The automatic Virtual Machine Startup/Shutdown feature has been disabled on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Automatic VM restarts will interfere with vSphere HA when reacting to a host failure. Since 5.0 |
com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore | warning | Cluster | com.vmware.vc.HA.CannotResetVmWithInaccessibleDatastore| vSphere HA did not reset VM {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} because the VM had files on inaccessible datastore(s) Since 5.5 |
com.vmware.vc.HA.ClusterContainsIncompatibleHosts | warning | Cluster | com.vmware.vc.HA.ClusterContainsIncompatibleHosts| vSphere HA Cluster {computeResource.name} in {datacenter.name} contains ESX/ESXi 3.5 hosts and more recent host versions, which isn't fully supported. Since 5.0 |
com.vmware.vc.HA.ClusterFailoverActionCompletedEvent | info | VC | com.vmware.vc.HA.ClusterFailoverActionCompletedEvent| HA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent | warning | Cluster | com.vmware.vc.HA.ClusterFailoverActionInitiatedEvent| HA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.HA.DasAgentRunningEvent | info | VC | com.vmware.vc.HA.DasAgentRunningEvent| HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running Since 4.1 |
com.vmware.vc.HA.DasFailoverHostFailedEvent | error | Cluster | com.vmware.vc.HA.DasFailoverHostFailedEvent| HA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failed Since 4.1 |
com.vmware.vc.HA.DasFailoverHostIsolatedEvent | warning | Cluster | com.vmware.vc.HA.DasFailoverHostIsolatedEvent| Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.DasFailoverHostPartitionedEvent | warning | Cluster | com.vmware.vc.HA.DasFailoverHostPartitionedEvent| Failover Host {host.name} in {computeResource.name} in {datacenter.name} is in a different network partition than the master Since 5.0 |
com.vmware.vc.HA.DasFailoverHostUnreachableEvent | warning | Cluster | com.vmware.vc.HA.DasFailoverHostUnreachableEvent| The vSphere HA agent on the failover host {host.name} in cluster {computeResource.name} in {datacenter.name} is not reachable from vCenter Server Since 5.0 |
com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent | error | Cluster | com.vmware.vc.HA.DasHostCompleteDatastoreFailureEvent| All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent | error | Cluster | com.vmware.vc.HA.DasHostCompleteNetworkFailureEvent| All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.HA.DasHostFailedEvent | error | Cluster | com.vmware.vc.HA.DasHostFailedEvent| A possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.HA.DasHostIsolatedEvent | warning | Cluster | com.vmware.vc.HA.DasHostIsolatedEvent| Host {host.name} has been isolated from cluster {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.DasHostMonitoringDisabledEvent | warning | Cluster | com.vmware.vc.HA.DasHostMonitoringDisabledEvent| No virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.HA.DasTotalClusterFailureEvent | error | Cluster | com.vmware.vc.HA.DasTotalClusterFailureEvent| HA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.HA.FailedRestartAfterIsolationEvent | error | VirtualMachine | com.vmware.vc.HA.FailedRestartAfterIsolationEvent| vSphere HA was unable to restart virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} after it was powered off in response to a network isolation event. The virtual machine should be manually powered back on. Since 5.0 |
com.vmware.vc.HA.HeartbeatDatastoreChanged | info | VC | com.vmware.vc.HA.HeartbeatDatastoreChanged| Datastore {dsName} is {changeType.@enum.com.vmware.vc.HA.HeartbeatDatastoreChange} for storage heartbeating monitored by the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.HeartbeatDatastoreNotSufficient | warning | Cluster | com.vmware.vc.HA.HeartbeatDatastoreNotSufficient| The number of heartbeat datastores for host {host.name} in cluster {computeResource.name} in {datacenter.name} is {selectedNum}, which is less than required: {requiredNum} Since 5.0 |
com.vmware.vc.HA.HostAgentErrorEvent | warning | Cluster | com.vmware.vc.HA.HostAgentErrorEvent| vSphere HA Agent for host {host.name} has an error in {computeResource.name} in {datacenter.name}: {reason.@enum.com.vmware.vc.HA.HostAgentErrorReason} Since 5.0 |
com.vmware.vc.HA.HostDasAgentHealthyEvent | info | VC | com.vmware.vc.HA.HostDasAgentHealthyEvent| HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy Since 4.1 |
com.vmware.vc.HA.HostDasErrorEvent | warning | Cluster | com.vmware.vc.HA.HostDasErrorEvent| vSphere HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} Since 5.0 |
com.vmware.vc.HA.HostDoesNotSupportVsan | error | VC | com.vmware.vc.HA.HostDoesNotSupportVsan| vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in {datacenter.name} because vCloud Distributed Storage is enabled but the host does not support that feature Since 5.5 |
com.vmware.vc.HA.HostHasNoIsolationAddrsDefined | warning | Cluster | com.vmware.vc.HA.HostHasNoIsolationAddrsDefined| Host {host.name} in cluster {computeResource.name} in {datacenter.name} has no isolation addresses defined as required by vSphere HA. Since 5.0 |
com.vmware.vc.HA.HostHasNoMountedDatastores | error | Cluster | com.vmware.vc.HA.HostHasNoMountedDatastores| vSphere HA cannot be configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because there are no mounted datastores. Since 5.1 |
com.vmware.vc.HA.HostHasNoSslThumbprint | error | Cluster | com.vmware.vc.HA.HostHasNoSslThumbprint| vSphere HA cannot be configured on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} because its SSL thumbprint has not been verified. Check that vCenter Server is configured to verify SSL thumbprints and that the thumbprint for {host.name} has been verified. Since 5.0 |
com.vmware.vc.HA.HostIncompatibleWithHA | error | Cluster | com.vmware.vc.HA.HostIncompatibleWithHA| The product version of host {host.name} in cluster {computeResource.name} in {datacenter.name} is incompatible with HA. Since 5.0 |
com.vmware.vc.HA.HostPartitionedFromMasterEvent | warning | Cluster | com.vmware.vc.HA.HostPartitionedFromMasterEvent| Host {host.name} is in a different network partition than the master {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.HostStateChangedEvent | info | VC | com.vmware.vc.HA.HostStateChangedEvent| The vSphere HA availability state of the host {host.name} has changed to {newState.@enum.com.vmware.vc.HA.DasFdmAvailabilityState} in {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.HostUnconfiguredWithProtectedVms | warning | Cluster | com.vmware.vc.HA.HostUnconfiguredWithProtectedVms| Host {host.name} in cluster {computeResource.name} in {datacenter.name} is disconnected, but contains {protectedVmCount} protected virtual machine(s) Since 5.0 |
com.vmware.vc.HA.HostUnconfigureError | warning | Cluster | com.vmware.vc.HA.HostUnconfigureError| There was an error unconfiguring the vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name}. To solve this problem, connect the host to a vCenter Server of version 5.0 or later. Since 5.0 |
com.vmware.vc.HA.InvalidMaster | warning | Cluster | com.vmware.vc.HA.InvalidMaster| vSphere HA Agent on host {remoteHostname} is an invalid master. The host should be examined to determine if it has been compromised. Since 5.0 |
com.vmware.vc.HA.NotAllHostAddrsPingable | warning | Cluster | com.vmware.vc.HA.NotAllHostAddrsPingable| The vSphere HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} cannot reach some of the management network addresses of other hosts, and thus vSphere HA may not be able to restart VMs if a host failure occurs: {unpingableAddrs} Since 5.0 |
com.vmware.vc.HA.StartFTSecondaryFailedEvent | info | VirtualMachine | com.vmware.vc.HA.StartFTSecondaryFailedEvent| vSphere HA agent failed to start Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name} in {datacenter.name}. Reason : {fault.msg}. vSphere HA agent will retry until it times out. Since 5.0 |
com.vmware.vc.HA.StartFTSecondarySucceededEvent | info | VC | com.vmware.vc.HA.StartFTSecondarySucceededEvent| vSphere HA agent successfully started Fault Tolerance secondary VM {secondaryCfgPath} on host {secondaryHost} for primary VM {vm.name} in cluster {computeResource.name}. Since 5.0 |
com.vmware.vc.HA.UserHeartbeatDatastoreRemoved | warning | Cluster | com.vmware.vc.HA.UserHeartbeatDatastoreRemoved| Datastore {dsName} is removed from the set of preferred heartbeat datastores selected for cluster {computeResource.name} in {datacenter.name} because the datastore is removed from inventory Since 5.0 |
com.vmware.vc.HA.VcCannotFindMasterEvent | warning | Cluster | com.vmware.vc.HA.VcCannotFindMasterEvent| vCenter Server is unable to find a master vSphere HA Agent in {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.VcConnectedToMasterEvent | warning | VC | com.vmware.vc.HA.VcConnectedToMasterEvent| vCenter Server is connected to the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.VcDisconnectedFromMasterEvent | warning | VC | com.vmware.vc.HA.VcDisconnectedFromMasterEvent| vCenter Server is disconnected from the master vSphere HA Agent running on host {hostname} in {computeResource.name} in {datacenter.name} Since 5.0 |
com.vmware.vc.HA.VMIsHADisabledIsolationEvent | info | VC | com.vmware.vc.HA.VMIsHADisabledIsolationEvent| vSphere HA did not perform an isolation response for {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled Since 5.1 |
com.vmware.vc.HA.VMIsHADisabledRestartEvent | info | VC | com.vmware.vc.HA.VMIsHADisabledRestartEvent| vSphere HA did not attempt to restart {vm.name} in cluster {computeResource.name} in {datacenter.name} because its VM restart priority is Disabled Since 5.1 |
com.vmware.vc.HA.VmNotProtectedEvent | warning | VirtualMachine | com.vmware.vc.HA.VmNotProtectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} failed to become vSphere HA Protected and vSphere HA may not attempt to restart it after a failure. Since 5.0 |
com.vmware.vc.HA.VmProtectedEvent | info | VC | com.vmware.vc.HA.VmProtectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is vSphere HA Protected and vSphere HA will attempt to restart it after a failure. Since 5.0 |
com.vmware.vc.ha.VmRestartedByHAEvent | warning | VirtualMachine | com.vmware.vc.ha.VmRestartedByHAEvent| Virtual machine {vm.name} was restarted on host {host.name} in cluster {computeResource.name} by vSphere HA Since 5.0 |
com.vmware.vc.HA.VmUnprotectedEvent | warning | VirtualMachine | com.vmware.vc.HA.VmUnprotectedEvent| VM {vm.name} in cluster {computeResource.name} in {datacenter.name} is not vSphere HA Protected. Since 5.0 |
com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull | info | VC | com.vmware.vc.HA.VmUnprotectedOnDiskSpaceFull| vSphere HA has unprotected virtual machine {vm.name} in cluster {computeResource.name} in datacenter {datacenter.name} because it ran out of disk space Since 5.1 |
com.vmware.vc.host.AutoStartReconfigureFailedEvent | error | VC | com.vmware.vc.host.AutoStartReconfigureFailedEvent| Reconfiguring autostart rules for virtual machines on {host.name} in datacenter {datacenter.name} failed Since 5.0 |
com.vmware.vc.host.clear.vFlashResource.inaccessible | info | VC | com.vmware.vc.host.clear.vFlashResource.inaccessible| Host vSphere Flash resource is restored to be accessible. Since 5.5 |
com.vmware.vc.host.clear.vFlashResource.reachthreshold | info | VC | com.vmware.vc.host.clear.vFlashResource.reachthreshold| Host vSphere Flash resource usage dropped below {1}%. Since 5.5 |
com.vmware.vc.host.problem.vFlashResource.inaccessible | warning | VC | com.vmware.vc.host.problem.vFlashResource.inaccessible| Host vSphere Flash resource is inaccessible. Since 5.5 |
com.vmware.vc.host.problem.vFlashResource.reachthreshold | warning | VC | com.vmware.vc.host.problem.vFlashResource.reachthreshold| Host vSphere Flash resource usage is more than {1}%. Since 5.5 |
com.vmware.vc.host.vFlash.defaultModuleChangedEvent | info | VC | com.vmware.vc.host.vFlash.defaultModuleChangedEvent| Any new vFlash cache configuration request will use {vFlashModule} as default vSphere Flash module. All existing vFlash cache configurations remain unchanged. Since 5.5 |
com.vmware.vc.host.vFlash.modulesLoadedEvent | info | VC | com.vmware.vc.host.vFlash.modulesLoadedEvent| vSphere Flash modules are loaded or reloaded on the host Since 5.5 |
com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent | error | ESXHostStorage | com.vmware.vc.host.vFlash.SsdConfigurationFailedEvent| {1} on disk '{2}' failed due to {3} Since 5.5 |
com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent | info | VC | com.vmware.vc.host.vFlash.VFlashResourceCapacityExtendedEvent| vSphere Flash resource capacity is extended Since 5.5 |
com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent | info | VC | com.vmware.vc.host.vFlash.VFlashResourceConfiguredEvent| vSphere Flash resource is configured on the host Since 5.5 |
com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent | info | VC | com.vmware.vc.host.vFlash.VFlashResourceRemovedEvent| vSphere Flash resource is removed from the host Since 5.5 |
com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent | info | VC | com.vmware.vc.npt.VmAdapterEnteredPassthroughEvent| Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.npt.VmAdapterExitedPassthroughEvent | info | VC | com.vmware.vc.npt.VmAdapterExitedPassthroughEvent| Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.CloneOvfConsumerStateErrorEvent| Failed to clone state for the entity '{entityName}' on extension {extensionName} Since 5.0 |
com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.GetOvfEnvironmentSectionsErrorEvent| Failed to retrieve OVF environment sections for VM '{vm.name}' from extension {extensionName} Since 5.0 |
com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.PowerOnAfterCloneErrorEvent| Powering on VM '{vm.name}' after cloning was blocked by an extension. Message: {description} Since 5.0 |
com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.RegisterEntityErrorEvent| Failed to register entity '{entityName}' on extension {extensionName} Since 5.0 |
com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.UnregisterEntitiesErrorEvent| Failed to unregister entities on extension {extensionName} Since 5.0 |
com.vmware.vc.ovfconsumers.ValidateOstErrorEvent | warning | VC | com.vmware.vc.ovfconsumers.ValidateOstErrorEvent| Failed to validate OVF descriptor on extension {extensionName} Since 5.0 |
com.vmware.vc.profile.AnswerFileExportedEvent | info | VC | com.vmware.vc.profile.AnswerFileExportedEvent| Answer file for host {host.name} in datacenter {datacenter.name} has been exported Since 5.0 |
com.vmware.vc.profile.AnswerFileUpdatedEvent | info | VC | com.vmware.vc.profile.AnswerFileUpdatedEvent| Answer file for host {host.name} in datacenter {datacenter.name} has been updated Since 5.0 |
com.vmware.vc.rp.ResourcePoolRenamedEvent | info | VC | com.vmware.vc.rp.ResourcePoolRenamedEvent| Resource pool '{oldName}' has been renamed to '{newName}' Since 5.1 |
com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent | info | VC | com.vmware.vc.sdrs.CanceledDatastoreMaintenanceModeEvent| The datastore maintenance mode operation has been canceled Since 5.0 |
com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent | info | VC | com.vmware.vc.sdrs.ConfiguredStorageDrsOnPodEvent| Configured storage DRS on datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sdrs.ConsistencyGroupViolationEvent | warning | VC | com.vmware.vc.sdrs.ConsistencyGroupViolationEvent| Datastore cluster {objectName} has datastores that belong to different SRM Consistency Groups Since 5.1 |
com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent | info | VC | com.vmware.vc.sdrs.DatastoreEnteredMaintenanceModeEvent| Datastore {ds.name} has entered maintenance mode Since 5.0 |
com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent | info | VC | com.vmware.vc.sdrs.DatastoreEnteringMaintenanceModeEvent| Datastore {ds.name} is entering maintenance mode Since 5.0 |
com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent | info | VC | com.vmware.vc.sdrs.DatastoreExitedMaintenanceModeEvent| Datastore {ds.name} has exited maintenance mode Since 5.0 |
com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent | warning | VC | com.vmware.vc.sdrs.DatastoreInMultipleDatacentersEvent| Datastore cluster {objectName} has one or more datastores: {datastore} shared across multiple datacenters Since 5.0 |
com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent | error | VC | com.vmware.vc.sdrs.DatastoreMaintenanceModeErrorsEvent| Datastore {ds.name} encountered errors while entering maintenance mode Since 5.0 |
com.vmware.vc.sdrs.StorageDrsDisabledEvent | info | VC | com.vmware.vc.sdrs.StorageDrsDisabledEvent| Disabled storage DRS on datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sdrs.StorageDrsEnabledEvent | info | VC | com.vmware.vc.sdrs.StorageDrsEnabledEvent| Enabled storage DRS on datastore cluster {objectName} with automation level {behavior.@enum.storageDrs.PodConfigInfo.Behavior} Since 5.0 |
com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent | error | VC | com.vmware.vc.sdrs.StorageDrsInvocationFailedEvent| Storage DRS invocation failed on datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent | info | VC | com.vmware.vc.sdrs.StorageDrsNewRecommendationPendingEvent| A new storage DRS recommendation has been generated on datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent | warning | VC | com.vmware.vc.sdrs.StorageDrsNotSupportedHostConnectedToPodEvent| Datastore cluster {objectName} is connected to one or more hosts: {host} that do not support storage DRS Since 5.0 |
com.vmware.vc.sdrs.StorageDrsRecommendationApplied | info | VC | com.vmware.vc.sdrs.StorageDrsRecommendationApplied| All pending recommendations on datastore cluster {objectName} were applied Since 5.5 |
com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent | info | VC | com.vmware.vc.sdrs.StorageDrsStorageMigrationEvent| Storage DRS migrated disks of VM {vm.name} to datastore {ds.name} Since 5.0 |
com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent | info | VC | com.vmware.vc.sdrs.StorageDrsStoragePlacementEvent| Storage DRS placed disks of VM {vm.name} on datastore {ds.name} Since 5.0 |
com.vmware.vc.sdrs.StoragePodCreatedEvent | info | VC | com.vmware.vc.sdrs.StoragePodCreatedEvent| Created datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sdrs.StoragePodDestroyedEvent | info | VC | com.vmware.vc.sdrs.StoragePodDestroyedEvent| Removed datastore cluster {objectName} Since 5.0 |
com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent | warning | VC | com.vmware.vc.sioc.NotSupportedHostConnectedToDatastoreEvent| SIOC has detected that a host: {host} connected to a SIOC-enabled datastore: {objectName} is running an older version of ESX that does not support SIOC. This is an unsupported configuration. Since 5.0 |
com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent | info | VC | com.vmware.vc.sms.datastore.ComplianceStatusCompliantEvent| Virtual disk {diskKey} on {vmName} connected to datastore {datastore.name} in {datacenter.name} is compliant from storage provider {providerName}. Since 5.5 |
com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent | error | VirtualMachine | com.vmware.vc.sms.datastore.ComplianceStatusNonCompliantEvent| Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. Since 5.5 |
com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent | warning | VC | com.vmware.vc.sms.datastore.ComplianceStatusUnknownEvent| Virtual disk {diskKey} on {vmName} connected to {datastore.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. Since 5.5 |
com.vmware.vc.sms.LunCapabilityInitEvent | info | VC | com.vmware.vc.sms.LunCapabilityInitEvent| Storage provider system default capability event Since 5.0 |
com.vmware.vc.sms.LunCapabilityMetEvent | info | VC | com.vmware.vc.sms.LunCapabilityMetEvent| Storage provider system capability requirements met Since 5.0 |
com.vmware.vc.sms.LunCapabilityNotMetEvent | info | VC | com.vmware.vc.sms.LunCapabilityNotMetEvent| Storage provider system capability requirements not met Since 5.0 |
com.vmware.vc.sms.provider.health.event | info | VC | com.vmware.vc.sms.provider.health.event| {msgTxt} Since 5.0 |
com.vmware.vc.sms.provider.system.event | info | VC | com.vmware.vc.sms.provider.system.event| {msgTxt} Since 5.0 |
com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent | info | VC | com.vmware.vc.sms.ThinProvisionedLunThresholdClearedEvent| Storage provider thin provisioning capacity threshold reached Since 5.0 |
com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent | info | VC | com.vmware.vc.sms.ThinProvisionedLunThresholdCrossedEvent| Storage provider thin provisioning capacity threshold crossed Since 5.0 |
com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent | info | VC | com.vmware.vc.sms.ThinProvisionedLunThresholdInitEvent| Storage provider thin provisioning default capacity event Since 5.0 |
com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent | info | VC | com.vmware.vc.sms.vm.ComplianceStatusCompliantEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is compliant from storage provider {providerName}. Since 5.5 |
com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent | error | VC | com.vmware.vc.sms.vm.ComplianceStatusNonCompliantEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} is not compliant {operationalStatus} from storage provider {providerName}. Since 5.5 |
com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent | warning | VC | com.vmware.vc.sms.vm.ComplianceStatusUnknownEvent| Virtual disk {diskKey} on {vm.name} on {host.name} and {computeResource.name} in {datacenter.name} compliance status is unknown from storage provider {providerName}. Since 5.5 |
com.vmware.vc.spbm.ProfileAssociationFailedEvent | error | VC | com.vmware.vc.spbm.ProfileAssociationFailedEvent| Profile association/dissociation failed for {entityName} Since 5.5 |
com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent | info | VC | com.vmware.vc.stats.HostQuickStatesNotUpToDateEvent| Quick stats on {host.name} in {computeResource.name} in {datacenter.name} is not up-to-date Since 5.0 |
com.vmware.vc.VCHealthStateChangedEvent | info | VC | com.vmware.vc.VCHealthStateChangedEvent| vCenter Service overall health changed from '{oldState}' to '{newState}' Since 4.1 |
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent | info | VC | com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEvent| HA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled Since 4.1 |
com.vmware.vc.vcp.FtFailoverEvent | info | VC | com.vmware.vc.vcp.FtFailoverEvent| FT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure Since 4.1 |
com.vmware.vc.vcp.FtFailoverFailedEvent | error | VirtualMachine | com.vmware.vc.vcp.FtFailoverFailedEvent| FT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary Since 4.1 |
com.vmware.vc.vcp.FtSecondaryRestartEvent | info | VC | com.vmware.vc.vcp.FtSecondaryRestartEvent| HA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure Since 4.1 |
com.vmware.vc.vcp.FtSecondaryRestartFailedEvent | error | VirtualMachine | com.vmware.vc.vcp.FtSecondaryRestartFailedEvent| FT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart Since 4.1 |
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent | info | VC | com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEvent| HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long Since 4.1 |
com.vmware.vc.vcp.TestEndEvent | info | VC | com.vmware.vc.vcp.TestEndEvent| VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.vcp.TestStartEvent | info | VC | com.vmware.vc.vcp.TestStartEvent| VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.vcp.VcpNoActionEvent | info | VC | com.vmware.vc.vcp.VcpNoActionEvent| HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting Since 4.1 |
com.vmware.vc.vcp.VmDatastoreFailedEvent | error | VirtualMachine | com.vmware.vc.vcp.VmDatastoreFailedEvent| Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore} Since 4.1 |
com.vmware.vc.vcp.VmNetworkFailedEvent | error | VirtualMachine | com.vmware.vc.vcp.VmNetworkFailedEvent| Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network} Since 4.1 |
com.vmware.vc.vcp.VmPowerOffHangEvent | error | VirtualMachine | com.vmware.vc.vcp.VmPowerOffHangEvent| HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying Since 4.1 |
com.vmware.vc.vcp.VmRestartEvent | info | VC | com.vmware.vc.vcp.VmRestartEvent| HA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 4.1 |
com.vmware.vc.vcp.VmRestartFailedEvent | error | VirtualMachine | com.vmware.vc.vcp.VmRestartFailedEvent| Virtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart Since 4.1 |
com.vmware.vc.vcp.VmWaitForCandidateHostEvent | error | VirtualMachine | com.vmware.vc.vcp.VmWaitForCandidateHostEvent| HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying Since 4.1 |
com.vmware.vc.vm.VmRegisterFailedEvent | error | VC | com.vmware.vc.vm.VmRegisterFailedEvent| Virtual machine {vm.name} registration on {host.name} in datacenter {datacenter.name} failed Since 5.0 |
com.vmware.vc.vm.VmStateFailedToRevertToSnapshot | error | VirtualMachine | com.vmware.vc.vm.VmStateFailedToRevertToSnapshot| Failed to revert the execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} to snapshot {snapshotName}, with ID {snapshotId} Since 5.0 |
com.vmware.vc.vm.VmStateRevertedToSnapshot | info | VC | com.vmware.vc.vm.VmStateRevertedToSnapshot| The execution state of the virtual machine {vm.name} on host {host.name}, in compute resource {computeResource.name} has been reverted to the state of snapshot {snapshotName}, with ID {snapshotId} Since 5.0 |
com.vmware.vc.vmam.AppMonitoringNotSupported | warning | VC | com.vmware.vc.vmam.AppMonitoringNotSupported| Application monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent | warning | VC | com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEvent| Application heartbeat status changed to {status} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.vmam.VmAppHealthStateChangedEvent | warning | VirtualMachine | com.vmware.vc.vmam.VmAppHealthStateChangedEvent| vSphere HA detected that the application state changed to {state.@enum.vm.GuestInfo.AppStateType} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 5.5 |
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent | warning | VirtualMachine | com.vmware.vc.vmam.VmDasAppHeartbeatFailedEvent| Application heartbeat failed for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 4.1 |
com.vmware.vc.VmCloneFailedInvalidDestinationEvent | error | VC | com.vmware.vc.VmCloneFailedInvalidDestinationEvent| Cannot clone {vm.name} as {destVmName} to invalid or non-existent destination with ID {invalidMoRef}: {fault} Since 5.0 |
com.vmware.vc.VmCloneToResourcePoolFailedEvent | error | VC | com.vmware.vc.VmCloneToResourcePoolFailedEvent| Cannot clone {vm.name} as {destVmName} to resource pool {destResourcePool}: {fault} Since 5.0 |
com.vmware.vc.VmDiskConsolidatedEvent | info | VC | com.vmware.vc.VmDiskConsolidatedEvent| Virtual machine {vm.name} disks consolidated successfully on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 |
com.vmware.vc.VmDiskConsolidationNeeded | info | VC | com.vmware.vc.VmDiskConsolidationNeeded| Virtual machine {vm.name} disks consolidation is needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 |
com.vmware.vc.VmDiskConsolidationNoLongerNeeded | info | VC | com.vmware.vc.VmDiskConsolidationNoLongerNeeded| Virtual machine {vm.name} disks consolidation is no longer needed on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.1 |
com.vmware.vc.VmDiskFailedToConsolidateEvent | error | VirtualMachine | com.vmware.vc.VmDiskFailedToConsolidateEvent| Virtual machine {vm.name} disks consolidation failed on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 |
error | ESXHost | com.vmware.vc.vsan.HostCommunicationErrorEvent| event.com.vmware.vc.vsan.HostCommunicationErrorEvent.fullFormat Since 5.5 Reference | |
error | VC | com.vmware.vc.vsan.HostNotInClusterEvent| {host.name} with the VSAN service enabled is not in the vCenter cluster {computeResource.name} in datacenter {datacenter.name} Since 5.5 Reference | |
error | VC | com.vmware.vc.vsan.HostNotInVsanClusterEvent| {host.name} is in a VSAN enabled cluster {computeResource.name} in datacenter {datacenter.name} but does not have VSAN service enabled Since 5.5 Reference | |
error | VC | com.vmware.vc.vsan.HostVendorProviderDeregistrationFailedEvent| Vendor provider {host.name} deregistration failed Since 5.5 Reference | |
info | VC | com.vmware.vc.vsan.HostVendorProviderDeregistrationSuccessEvent| Vendor provider {host.name} deregistration succeeded Since 5.5 Reference | |
error | VC | com.vmware.vc.vsan.HostVendorProviderRegistrationFailedEvent| Vendor provider {host.name} registration failed Since 5.5 Reference | |
info | VC | com.vmware.vc.vsan.HostVendorProviderRegistrationSuccessEvent| Vendor provider {host.name} registration succeeded Since 5.5 Reference | |
error | ESXHostNetwork | com.vmware.vc.vsan.NetworkMisConfiguredEvent| VSAN network is not configured on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 5.5 Reference | |
error | VC | com.vmware.vc.vsan.RogueHostFoundEvent| Found another host participating in the VSAN service in cluster {computeResource.name} in datacenter {datacenter.name} which is not a member of this host vCenter cluster Since 5.5 Reference | |
info | VC | com.vmware.vim.eam.agency.create| {agencyName} created by {ownerName} Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agency.destroyed| {agencyName} removed from the vSphere ESX Agent Manager Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agency.goalstate| {agencyName} changed goal state from {oldGoalState} to {newGoalState} Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agency.statusChanged| Agency status changed from {oldStatus} to {newStatus} Since 5.1 Reference | |
info | VC | com.vmware.vim.eam.agency.updated| Configuration updated {agencyName} Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.created| Agent added to host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.destroyed| Agent removed from host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.destroyedNoHost| Agent removed from host ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterPowerOn| Agent VM {vm.name} has been powered on. Mark agent as available to proceed agent workflow ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.markAgentVmAsAvailableAfterProvisioning| Agent VM {vm.name} has been provisioned. Mark agent as available to proceed agent workflow ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.statusChanged| Agent status changed from {oldStatus} to {newStatus} Since 5.1 Reference | |
info | VC | com.vmware.vim.eam.agent.task.deleteVm| Agent VM {vmName} is deleted on host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.task.deployVm| Agent VM {vm.name} is provisioned on host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.task.powerOffVm| Agent VM {vm.name} powered off, on host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.task.powerOnVm| Agent VM {vm.name} powered on, on host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.task.vibInstalled| Agent installed VIB {vib} on host {host.name} ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.agent.task.vibUninstalled| Agent uninstalled VIB {vib} on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.cannotAccessAgentOVF| Unable to access agent OVF package at {url} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.cannotAccessAgentVib| Unable to access agent VIB module at {url} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.hostInMaintenanceMode| Agent cannot complete an operation since the host {host.name} is in maintenance mode ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.hostInStandbyMode| Agent cannot complete an operation since the host {host.name} is in standby mode ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.hostPoweredOff| Agent cannot complete an operation since the host {host.name} is powered off ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.incompatibleHostVersion| Agent is not deployed due to incompatible host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.insufficientIpAddresses| Insufficient IP addresses in IP pool in agent VM network ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.insufficientResources| Agent cannot be provisioned due to insufficient resources on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.insufficientSpace| Agent on {host.name} cannot be provisioned due to insufficient space on datastore ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.missingAgentIpPool| No IP pool in agent VM network ({agencyname}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.missingDvFilterSwitch| dvFilter switch is not configured on host {host.name} ({agencyname}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.noAgentVmDatastore| No agent datastore configuration on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.noAgentVmNetwork| No agent network configuration on host {host.name} ({agencyName}) Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.issue.noCustomAgentVmDatastore| Agent datastore(s) {customAgentVmDatastoreName} not available on host {host.name} ({agencyName}) Since 5.5 Reference | |
error | VC | com.vmware.vim.eam.issue.noCustomAgentVmNetwork| Agent network(s) {customAgentVmNetworkName} not available on host {host.name} ({agencyName}) Since 5.1 Reference | |
warning | VC | com.vmware.vim.eam.issue.orphandedDvFilterSwitch| Unused dvFilter switch on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.orphanedAgency| Orphaned agency found. ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.ovfInvalidFormat| OVF used to provision agent on host {host.name} has invalid format ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.ovfInvalidProperty| OVF environment used to provision agent on host {host.name} has one or more invalid properties ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.issue.resolved| Issue {type} resolved (key {key}) Since 5.1 Reference | |
warning | VC | com.vmware.vim.eam.issue.unknownAgentVm| Unknown agent VM {vm.name} Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vibCannotPutHostInMaintenanceMode| Cannot put host into maintenance mode ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vibInvalidFormat| Invalid format for VIB module at {url} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vibNotInstalled| VIB module for agent is not installed on host {host.name} ({agencyName}) Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.issue.vibRequiresHostInMaintenanceMode| Host must be put into maintenance mode to complete agent VIB installation ({agencyName}) Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.issue.vibRequiresHostReboot| Host {host.name} must be rebooted to complete agent VIB installation ({agencyName}) Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.issue.vibRequiresManualInstallation| VIB {vib} requires manual installation on host {host.name} ({agencyName}) Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.issue.vibRequiresManualUninstallation| VIB {vib} requires manual uninstallation on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmCorrupted| Agent VM {vm.name} on host {host.name} is corrupted ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmDeployed| Agent VM {vm.name} is provisioned on host {host.name} when it should be removed ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmMarkedAsTemplate| Agent VM {vm.name} on host {host.name} is marked as template ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmNotDeployed| Agent VM is missing on host {host.name} ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmOrphaned| Orphaned agent VM {vm.name} on host {host.name} detected ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmPoweredOff| Agent VM {vm.name} on host {host.name} is expected to be powered on ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmPoweredOn| Agent VM {vm.name} on host {host.name} is expected to be powered off ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmSuspended| Agent VM {vm.name} on host {host.name} is expected to be powered on but is suspended ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmWrongFolder| Agent VM {vm.name} on host {host.name} is in the wrong VM folder ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.issue.vmWrongResourcePool| Agent VM {vm.name} on host {host.name} is in the wrong resource pool ({agencyName}) Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.login.invalid| Failed login to vSphere ESX Agent Manager Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.login.succeeded| Successful login by {user} into vSphere ESX Agent Manager Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.logout| User {user} logged out of vSphere ESX Agent Manager by logging out of the vCenter server Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.task.scanForUnknownAgentVmsCompleted| Scan for unknown agent VMs completed Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.task.scanForUnknownAgentVmsInitiated| Scan for unknown agent VMs initiated Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.task.setupDvFilter| DvFilter switch '{switchName}' is setup on host {host.name} Since 5.0 Reference | |
info | VC | com.vmware.vim.eam.task.tearDownDvFilter| DvFilter switch '{switchName}' is torn down on host {host.name} Since 5.0 Reference | |
warning | VC | com.vmware.vim.eam.unauthorized.access| Unauthorized access by {user} in vSphere ESX Agent Manager Since 5.0 Reference | |
error | VC | com.vmware.vim.eam.vum.failedtouploadvib| Failed to upload {vibUrl} to VMware Update Manager ({agencyName}) Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.bind.vApp| event.com.vmware.vim.vsm.dependency.bind.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.bind.vm| event.com.vmware.vim.vsm.dependency.bind.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.create.vApp| event.com.vmware.vim.vsm.dependency.create.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.create.vm| event.com.vmware.vim.vsm.dependency.create.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.destroy.vApp| event.com.vmware.vim.vsm.dependency.destroy.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.destroy.vm| event.com.vmware.vim.vsm.dependency.destroy.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.reconfigure.vApp| event.com.vmware.vim.vsm.dependency.reconfigure.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.reconfigure.vm| event.com.vmware.vim.vsm.dependency.reconfigure.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.unbind.vApp| event.com.vmware.vim.vsm.dependency.unbind.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.unbind.vm| event.com.vmware.vim.vsm.dependency.unbind.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.update.vApp| event.com.vmware.vim.vsm.dependency.update.vApp.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.dependency.update.vm| event.com.vmware.vim.vsm.dependency.update.vm.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.provider.register| event.com.vmware.vim.vsm.provider.register.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.provider.unregister| event.com.vmware.vim.vsm.provider.unregister.fullFormat Since 5.0 Reference | |
info | VC | com.vmware.vim.vsm.provider.update| event.com.vmware.vim.vsm.provider.update.fullFormat Since 5.0 Reference | |
info | VC | Created new custom field definition {name} Since 2.0 Reference | |
info | VC | This event records a custom field definition event. Since 2.0 Reference | |
info | VC | Removed field definition {name} Since 2.0 Reference | |
info | VC | Renamed field definition from {name} to {newName} Since 2.0 Reference | |
info | VC | Changed custom field {name} on {entity.name} in {datacenter.name} to {value} Since 2.0 Reference | |
warning | VC | Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details. Since 2.5 Reference | |
warning | VC | An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details. Since 2.5 Reference | |
warning | VC | An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details. Since 2.5 Reference | |
info | VC | Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS. Since 2.5 Reference | |
info | VC | Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS. Since 2.5 Reference | |
warning | VC | The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information. Since 2.5 Reference | |
warning | VC | An error occurred while customizing VM {vm.name}. For details reference the log file {logLocation} in the guest OS. Since 2.5 Reference | |
info | VC | HA admission control disabled on cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | HA admission control enabled on cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Re-established contact with a primary host in this HA cluster Since 2.0 Reference | |
error | Cluster | Unable to contact a primary HA agent in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | Cluster | All hosts in the HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network. Since 4.0 Reference | |
info | VC | HA disabled on cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | HA enabled on cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | Cluster | A possible host failure has been detected by HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
warning | Cluster | Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Created datacenter {datacenter.name} in folder {parent.name} Since 2.5 Reference | |
info | VC | Renamed datacenter from {oldName} to {newName} Since 2.5 Reference | |
info | VC | Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name} Since 4.0 Reference | |
info | VC | Removed unconfigured datastore {datastore.name} Since 2.0 Reference | |
info | VC | Discovered datastore {datastore.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile} Since 4.0 Reference | |
info | VC | File or directory {targetFile} deleted from {datastore.name} Since 4.0 Reference | |
info | VC | File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile} Since 4.0 Reference | |
info | VC | Reconfigured Storage I/O Control on datastore {datastore.name} Since 4.1 Reference | |
info | VC | Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Removed datastore {datastore.name} from {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Renamed datastore from {oldName} to {newName} in {datacenter.name} Since 2.0 Reference | |
info | VC | Renamed datastore from {oldName} to {newName} in {datacenter.name} Since 2.0 Reference | |
info | VC | Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name} Since 2.0 Reference | |
info | VC | Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name} Since 2.0 Reference | |
info | VC | DRS put {host.name} into standby mode Since 2.5 Reference | |
info | VC | DRS is putting {host.name} into standby mode Since 4.0 Reference | |
info | VC | DRS moved {host.name} out of standby mode Since 2.5 Reference | |
info | VC | DRS is moving {host.name} out of standby mode Since 4.0 Reference | |
error | ESXHost | DRS cannot move {host.name} out of standby mode Since 4.0 Reference | |
error | Cluster | DRS invocation not completed Since 4.0 Reference | |
info | VC | DRS has recovered from the failure Since 4.0 Reference | |
error | Cluster | Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS. Since 2.0 Reference | |
info | VC | Resource configuration specification returns to synchronization from previous failure on host '{host.name}' in {datacenter.name} Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules Since 4.1 Reference | |
warning | VirtualMachine | {vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule Since 4.1 Reference | |
info | VC | DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | DRS powered On {vm.name} on {host.name} in {datacenter.name} Since 2.5 Reference | |
warning | ESXHostNetwork | Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP} Since 2.5 Reference | |
info | VC | Import operation with type {importType} was performed on {net.name} Since 5.1 Reference | |
info | VC | Restore operation was performed on {net.name} Since 5.1 Reference | |
info | VC | Distributed virtual port group {net.name} in {datacenter.name} was added to switch {dvs.name}. Since 4.0 Reference | |
info | VC | Distributed virtual port group {net.name} in {datacenter.name} was deleted. Since 4.0 Reference | |
info | VC | Distributed virtual port group {net.name} in {datacenter.name} was reconfigured. Since 4.0 Reference | |
info | VC | Distributed virtual port group {oldName} in {datacenter.name} was renamed to {newName} Since 4.0 Reference | |
info | VC | A Distributed Virtual Switch {dvs.name} was created in {datacenter.name}. Since 4.0 Reference | |
info | VC | Distributed Virtual Switch {dvs.name} in {datacenter.name} was deleted. Since 4.0 Reference | |
info | VC | Distributed Virtual Switch event Since 4.0 Reference | |
info | VC | Health check status was changed in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} Since 5.1 Reference | |
info | VC | The Distributed Virtual Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server. Since 4.0 Reference | |
info | VC | The host {hostJoined.name} joined the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The host {hostLeft.name} left the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The host {hostMember.name} changed status on the vNetwork Distributed Switch {dvs.name} in {datacenter.name} Since 4.1 Reference | |
warning | ESXHostNetwork | The Distributed Virtual Switch {dvs.name} configuration on the host differed from that of the vCenter Server. Since 4.0 Reference | |
info | VC | Import operation with type {importType} was performed on {dvs.name} Since 5.1 Reference | |
info | VC | Distributed Virtual Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | Port {portKey} was blocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The port {portKey} was connected in the Distributed Virtual Switch {dvs.name} in {datacenter.name} Since 4.0 Reference | |
info | VC | New ports were created in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | Deleted ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The port {portKey} was disconnected in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | dvPort {portKey} entered passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name} Since 4.1 Reference | |
info | VC | dvPort {portKey} exited passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name} Since 4.1 Reference | |
info | VC | Port {portKey} was moved into the distributed virtual port group {portgroupName} in {datacenter.name}. Since 4.0 Reference | |
info | VC | Port {portKey} was moved out of the distributed virtual port group {portgroupName} in {datacenter.name}. Since 4.0 Reference | |
warning | VC | The port {portKey} link was down in the Distributed Virtual Switch {dvs.name} in {datacenter.name} Since 4.0 Reference | |
info | VC | The port {portKey} link was up in the Distributed Virtual Switch {dvs.name} in {datacenter.name} Since 4.0 Reference | |
info | VC | Reconfigured ports in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The dvPort {portKey} runtime information changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. Since 5.0 Reference | |
info | VC | Port {portKey} was unblocked in the Distributed Virtual Switch {dvs.name} in {datacenter.name}. Since 4.0 Reference | |
info | VC | The dvPort {portKey} vendor specific state changed in the vSphere Distributed Switch {dvs.name} in {datacenter.name}. Since 5.0 Reference | |
info | VC | The Distributed Virtual Switch {dvs.name} in {datacenter.name} was reconfigured. Since 4.0 Reference | |
info | VC | The Distributed Virtual Switch {oldName} in {datacenter.name} was renamed to {newName}. Since 4.0 Reference | |
info | VC | Restore operation was performed on {dvs.name} Since 5.1 Reference | |
info | VC | An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is available. Since 4.0 Reference | |
info | VC | Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} was upgraded. Since 4.0 Reference | |
info | VC | An upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} is in progress. Since 4.0 Reference | |
info | VC | Cannot complete an upgrade for the Distributed Virtual Switch {dvs.name} in datacenter {datacenter.name} Since 4.0 Reference | |
info | VC | Host {host.name} in {datacenter.name} has entered maintenance mode Since 2.0 Reference | |
info | VC | The host {host.name} is in standby mode Since 2.5 Reference | |
info | VC | Host {host.name} in {datacenter.name} has started to enter maintenance mode Since 2.0 Reference | |
info | VC | The host {host.name} is entering standby mode Since 2.5 Reference | |
error | VC | {message} Since 2.0 Reference | |
warning | VC | esx.audit.dcui.defaults.factoryrestore| The host has been restored to default factory settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
info | VC | esx.audit.dcui.disabled| The DCUI has been disabled. Since 5.0 Reference | |
info | VC | esx.audit.dcui.enabled| The DCUI has been enabled. Since 5.0 Reference | |
warning | VC | esx.audit.dcui.host.reboot| The host is being rebooted through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
warning | VC | esx.audit.dcui.host.shutdown| The host is being shut down through the Direct Console User Interface (DCUI). Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
info | VC | esx.audit.dcui.hostagents.restart| The management agents on the host are being restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
error | VC | esx.audit.dcui.login.failed| Authentication of user {1} has failed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
info | VC | esx.audit.dcui.login.passwd.changed| Login password for user {1} has been changed. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
warning | VC | esx.audit.dcui.network.factoryrestore| The host has been restored to factory network settings. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
info | VC | esx.audit.dcui.network.restart| A management interface {1} has been restarted. Please consult ESXi Embedded and vCenter Server Setup Guide or follow the Ask VMware link for more information. Since 5.0 Reference | |
warning | ESXHost | esx.audit.esxcli.host.poweroff| The host is being powered off through esxcli. Reason for powering off: {1}. Please consult vSphere Documentation Center or follow the Ask VMware link for more information. Since 5.1 Reference | |
info | ESXHost | esx.audit.esxcli.host.restart| event.esx.audit.esxcli.host.restart.fullFormat Since 5.1 Reference | |
info | VC | esx.audit.esximage.hostacceptance.changed| Host acceptance level changed from {1} to {2} Since 5.0 Reference | |
warning | VC | esx.audit.esximage.install.novalidation| Attempting to install an image profile with validation disabled. This may result in an image with unsatisfied dependencies, file or package conflicts, and potential security violations. Since 5.0 Reference | |
warning | VC | esx.audit.esximage.install.securityalert| SECURITY ALERT: Installing image profile '{1}' with {2}. Since 5.0 Reference | |
info | VC | esx.audit.esximage.profile.install.successful| Successfully installed image profile '{1}'. Installed VIBs {2}, removed VIBs {3} Since 5.0 Reference | |
info | VC | esx.audit.esximage.profile.update.successful| Successfully updated host to image profile '{1}'. Installed VIBs {2}, removed VIBs {3} Since 5.0 Reference | |
info | VC | esx.audit.esximage.vib.install.successful| Successfully installed VIBs {1}, removed VIBs {2} Since 5.0 Reference | |
info | VC | esx.audit.esximage.vib.remove.successful| Successfully removed VIBs {1} Since 5.0 Reference | |
info | VC | esx.audit.host.boot| Host has booted. Since 5.0 Reference | |
warning | ESXHost | esx.audit.host.maxRegisteredVMsExceeded| The number of virtual machines registered on host {host.name} in cluster {computeResource.name} in {datacenter.name} exceeded limit: {current} registered, {limit} is the maximum supported. Since 5.1 Reference | |
info | VC | esx.audit.host.stop.reboot| Host is rebooting. Since 5.0 Reference | |
info | VC | esx.audit.host.stop.shutdown| Host is shutting down. Since 5.0 Reference | |
info | VC | esx.audit.lockdownmode.disabled| Administrator access to the host has been enabled. Since 5.0 Reference | |
info | VC | esx.audit.lockdownmode.enabled| Administrator access to the host has been disabled. Since 5.0 Reference | |
info | VC | esx.audit.maintenancemode.canceled| The host has canceled entering maintenance mode. Since 5.0 Reference | |
info | VC | esx.audit.maintenancemode.entered| The host has entered maintenance mode. Since 5.0 Reference | |
info | VC | esx.audit.maintenancemode.entering| The host has begun entering maintenance mode. Since 5.0 Reference | |
info | VC | esx.audit.maintenancemode.exited| The host has exited maintenance mode. Since 5.0 Reference | |
info | VC | esx.audit.net.firewall.config.changed| Firewall configuration has changed. Operation '{1}' for rule set {2} succeeded. Since 5.0 Reference | |
warning | VC | esx.audit.net.firewall.disabled| Firewall has been disabled. Since 5.0 Reference | |
info | VC | esx.audit.net.firewall.enabled| Firewall has been enabled for port {1}. Since 5.0 Reference | |
info | VC | esx.audit.net.firewall.port.hooked| Port {1} is now protected by Firewall. Since 5.0 Reference | |
warning | VC | esx.audit.net.firewall.port.removed| Port {1} is no longer protected with Firewall. Since 5.0 Reference | |
info | VC | esx.audit.net.lacp.disable| LACP for VDS {1} is disabled. Since 5.1 Reference | |
info | VC | esx.audit.net.lacp.enable| LACP for VDS {1} is enabled. Since 5.1 Reference | |
info | VC | esx.audit.net.lacp.uplink.connected| Lacp info: uplink {1} on VDS {2} got connected. Since 5.1 Reference | |
warning | ESXHostNetwork | esx.audit.net.vdl2.ip.change| VDL2 IP changed on vmknic {1}, port {2}, DVS {3}, VLAN {4}. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.audit.net.vdl2.mappingtable.full| Mapping table entries of VDL2 network {1} on DVS {2} exhausted. This network might suffer a low performance. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.audit.net.vdl2.route.change| VDL2 IP interface on vmknic: {1}, DVS: {2}, VLAN: {3} default route changed. Since 5.0 Reference | |
info | VC | esx.audit.shell.disabled| The ESX command line shell has been disabled. Since 5.0 Reference | |
info | VC | esx.audit.shell.enabled| The ESX command line shell has been enabled. Since 5.0 Reference | |
info | VC | esx.audit.ssh.disabled| SSH access has been disabled. Since 5.0 Reference | |
info | VC | esx.audit.ssh.enabled| SSH access has been enabled. Since 5.0 Reference | |
info | VC | esx.audit.usb.config.changed| USB configuration has changed on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 Reference | |
warning | VC | esx.audit.uw.secpolicy.alldomains.level.changed| The enforcement level for all security domains has been changed to {1}. The enforcement level must always be set to enforcing. Since 5.0 Reference | |
warning | VC | esx.audit.uw.secpolicy.domain.level.changed| The enforcement level for security domain {1} has been changed to {2}. The enforcement level must always be set to enforcing. Since 5.0 Reference | |
info | VC | esx.audit.vmfs.lvm.device.discovered| One or more LVM devices have been discovered on this host. Since 5.0 Reference | |
info | VC | esx.audit.vmfs.volume.mounted| File system {1} on volume {2} has been mounted in {3} mode on this host. Since 5.0 Reference | |
info | VC | esx.audit.vmfs.volume.umounted| The volume {1} has been safely un-mounted. The datastore is no longer accessible on this host. Since 5.0 Reference | |
info | VC | esx.audit.vsan.clustering.enabled| VSAN clustering and directory services have been enabled. Since 5.5 Reference | |
info | VC | esx.clear.coredump.configured| A vmkcore disk partition is available and/or a network coredump server has been configured. Host core dumps will be saved. Since 5.1 Reference | |
info | ESXHostNetwork | esx.clear.net.connectivity.restored| Network connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. Since 4.1 Reference | |
info | ESXHostNetwork | esx.clear.net.dvport.connectivity.restored| Network connectivity restored on DVPorts: {1}. Physical NIC {2} is up. Since 4.1 Reference | |
info | ESXHostNetwork | esx.clear.net.dvport.redundancy.restored| Uplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up. Since 4.1 Reference | |
info | VC | esx.clear.net.lacp.lag.transition.up| LACP info: LAG {1} on VDS {2} is up. Since 5.5 Reference | |
info | ESXHostNetwork | esx.clear.net.lacp.uplink.transition.up| Lacp info: uplink {1} on VDS {2} is moved into link aggregation group. Since 5.1 Reference | |
info | ESXHostNetwork | esx.clear.net.lacp.uplink.unblocked| Lacp error: uplink {1} on VDS {2} is unblocked. Since 5.1 Reference | |
info | ESXHostNetwork | esx.clear.net.redundancy.restored| Uplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. Since 4.1 Reference | |
info | ESXHostNetwork | esx.clear.net.vmnic.linkstate.up| Physical NIC {1} linkstate is up. Since 4.1 Reference | |
info | ESXHostStorage | esx.clear.scsi.device.io.latency.improved| Device {1} performance has improved. I/O latency reduced from {2} microseconds to {3} microseconds. Since 5.0 Reference | |
info | ESXHostStorage | esx.clear.scsi.device.state.on| Device {1}, has been turned on administratively. Since 5.0 Reference | |
info | ESXHostStorage | esx.clear.scsi.device.state.permanentloss.deviceonline| Device {1}, that was permanently inaccessible is now online. No data consistency guarantees. Since 5.0 Reference | |
info | ESXHostStorage | esx.clear.storage.apd.exit| Device or filesystem with identifier [{1}] has exited the All Paths Down state. Since 5.1 Reference | |
info | ESXHostStorage | esx.clear.storage.connectivity.restored| Connectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again. Since 4.1 Reference | |
info | ESXHostStorage | esx.clear.storage.redundancy.restored| Path redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again. Since 4.1 Reference | |
info | VC | esx.clear.vsan.clustering.enabled| VSAN clustering and directory services have now been enabled. Since 5.5 Reference | |
info | VC | esx.clear.vsan.network.available| event.esx.clear.vsan.network.available.fullFormat Since 5.5 Reference | |
info | VC | esx.clear.vsan.vmknic.ready| event.esx.clear.vsan.vmknic.ready.fullFormat Since 5.5 Reference | |
error | VC | esx.problem.3rdParty.error| A 3rd party component, {1}, running on ESXi has reported an error. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}. Since 5.0 Reference | |
info | VC | esx.problem.3rdParty.info| event.esx.problem.3rdParty.info.fullFormat Since 5.0 Reference | |
warning | VC | esx.problem.3rdParty.warning| A 3rd party component, {1}, running on ESXi has reported a warning related to a problem. Please follow the knowledge base link ({2}) to see the steps to remedy the problem as reported by {3}. The message reported is: {4}. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.memory.error.corrected| A corrected memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} Since 4.1 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.memory.error.fatal| A fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} Since 4.1 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.memory.error.recoverable| A recoverable memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} Since 4.1 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.pcie.error.corrected| A corrected PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. Since 4.1 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.pcie.error.fatal| Platform encountered a fatal PCIe error in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. Since 4.1 Reference | |
error | ESXHostHardware | esx.problem.apei.bert.pcie.error.recoverable| A recoverable PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. Since 4.1 Reference | |
warning | ESXHost | esx.problem.application.core.dumped| An application ({1}) running on ESXi host has crashed ({2} time(s) so far). A core file might have been created at {3}. Since 5.0 Reference | |
warning | ESXHost | esx.problem.coredump.unconfigured| No vmkcore disk partition is available and no network coredump server has been configured. Host core dumps cannot be saved. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.cpu.amd.mce.dram.disabled| DRAM ECC not enabled. Please enable it in BIOS. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.cpu.intel.ioapic.listing.error| Not all IO-APICs are listed in the DMAR. Not enabling interrupt remapping on this platform. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.cpu.mce.invalid| MCE monitoring will be disabled as an unsupported CPU was detected. Please consult the ESX HCL for information on supported hardware. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.cpu.smp.ht.invalid| Disabling HyperThreading due to invalid configuration: Number of threads: {1}, Number of PCPUs: {2}. Since 5.0 Reference | |
error | ESXHostHardware | esx.problem.cpu.smp.ht.numpcpus.max| Found {1} PCPUs, but only using {2} of them due to specified limit. Since 5.0 Reference | |
warning | ESXHostHardware | esx.problem.cpu.smp.ht.partner.missing| Disabling HyperThreading due to invalid configuration: HT partner {1} is missing from PCPU {2}. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.dhclient.lease.none| Unable to obtain a DHCP lease on interface {1}. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.dhclient.lease.offered.error| event.esx.problem.dhclient.lease.offered.error.fullFormat Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.dhclient.lease.persistent.none| No working DHCP leases in persistent database. Since 5.0 Reference | |
warning | VC | esx.problem.esximage.install.error| Could not install image profile: {1} Since 5.0 Reference | |
warning | VC | esx.problem.esximage.install.invalidhardware| Host doesn't meet image profile '{1}' hardware requirements: {2} Since 5.0 Reference | |
warning | VC | esx.problem.esximage.install.stage.error| Could not stage image profile '{1}': {2} Since 5.0 Reference | |
warning | ESXHostHardware | esx.problem.hardware.acpi.interrupt.routing.device.invalid| Skipping interrupt routing entry with bad device number: {1}. This is a BIOS bug. Since 5.0 Reference | |
warning | ESXHostHardware | esx.problem.hardware.acpi.interrupt.routing.pin.invalid| Skipping interrupt routing entry with bad device pin: {1}. This is a BIOS bug. Since 5.0 Reference | |
warning | ESXHostHardware | esx.problem.hardware.ioapic.missing| IOAPIC Num {1} is missing. Please check BIOS settings to enable this IOAPIC. Since 5.0 Reference | |
warning | ESXHost | esx.problem.host.coredump| An unread host kernel core dump has been found. Since 5.0 Reference | |
warning | ESXHost | esx.problem.hostd.core.dumped| {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.iorm.badversion| Host {1} cannot participate in Storage I/O Control (SIOC) on datastore {2} because the version number {3} of the SIOC agent on this host is incompatible with number {4} of its counterparts on other hosts connected to this datastore. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.iorm.nonviworkload| An external I/O activity is detected on datastore {1}, this is an unsupported configuration. Consult the Resource Management Guide or follow the Ask VMware link for more information. Since 4.1 Reference | |
error | Cluster | esx.problem.migrate.vmotion.default.heap.create.failed| Failed to create default migration heap. This might be the result of severe host memory pressure or virtual address space exhaustion. Migration might still be possible, but will be unreliable in cases of extreme host memory pressure. Since 5.0 Reference | |
warning | Cluster | esx.problem.migrate.vmotion.server.pending.cnx.listen.socket.shutdown| The ESXi host vMotion network server encountered an error while monitoring incoming network connections. Shutting down listener socket. vMotion might not be possible with this host until vMotion is manually re-enabled. Failure status: {1} Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.connectivity.lost| Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. Since 4.1 Reference | |
error | ESXHostNetwork | esx.problem.net.dvport.connectivity.lost| Lost network connectivity on DVPorts: {1}. Physical NIC {2} is down. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.dvport.redundancy.degraded| Uplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.dvport.redundancy.lost| Lost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down. Since 4.1 Reference | |
error | ESXHostNetwork | esx.problem.net.e1000.tso6.notsupported| Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.fence.port.badfenceid| VMkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: invalid fenceId. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.fence.resource.limited| Vmkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: maximum number of fence networks or ports have been reached. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.fence.switch.unavailable| Vmkernel failed to set fenceId {1} on distributed virtual port {2} on switch {3}. Reason: dvSwitch fence property is not set. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.firewall.config.failed| Firewall configuration operation '{1}' failed. The changes were not applied to rule set {2}. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.firewall.port.hookfailed| Adding port {1} to Firewall failed. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.gateway.set.failed| Cannot connect to the specified gateway {1}. Failed to set it. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.heap.belowthreshold| {1} heap free size dropped below {2} percent. Since 5.0 Reference | |
warning | VC | esx.problem.net.lacp.lag.transition.down| LACP warning: LAG {1} on VDS {2} is down. Since 5.5 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.peer.noresponse| Lacp error: No peer response on uplink {1} for VDS {2}. Since 5.1 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.policy.incompatible| Lacp error: Current teaming policy on VDS {1} is incompatible, supported is IP hash only. Since 5.1 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.policy.linkstatus| Lacp error: Current teaming policy on VDS {1} is incompatible, supported link failover detection is link status only. Since 5.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.lacp.uplink.blocked| Lacp warning: uplink {1} on VDS {2} is blocked. Since 5.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.lacp.uplink.disconnected| Lacp warning: uplink {1} on VDS {2} got disconnected. Since 5.1 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.uplink.fail.duplex| Lacp error: Duplex mode across all uplink ports must be full, VDS {1} uplink {2} has different mode. Since 5.1 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.uplink.fail.speed| Lacp error: Speed across all uplink ports must be same, VDS {1} uplink {2} has different speed. Since 5.1 Reference | |
error | ESXHostNetwork | esx.problem.net.lacp.uplink.inactive| Lacp error: All uplinks on VDS {1} must be active. Since 5.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.lacp.uplink.transition.down| Lacp warning: uplink {1} on VDS {2} is moved out of link aggregation group. Since 5.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.migrate.bindtovmk| The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.migrate.unsupported.latency| ESXi has detected {1}ms round-trip vMotion network latency between host {2} and {3}. High latency vMotion networks are supported only if both ESXi hosts have been configured for vMotion latency tolerance. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.portset.port.full| Portset {1} has reached the maximum number of ports ({2}). Cannot apply for any more free ports. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.portset.port.vlan.invalidid| {1} VLANID {2} is invalid. VLAN ID must be between 0 and 4095. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.proxyswitch.port.unavailable| Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.redundancy.degraded| Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.redundancy.lost| Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.uplink.mtu.failed| VMkernel failed to set the MTU value {1} on the uplink {2}. Since 4.1 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.instance.initialization.fail| VDL2 instance on DVS {1} initialization failed. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.instance.notexist| VDL2 overlay instance is not created on DVS {1} before initializing VDL2 port or VDL2 IP interface. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.mcastgroup.fail| VDL2 IP interface on vmknic: {1}, DVS: {2}, VLAN: {3} failed to join multicast group: {4}. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.network.initialization.fail| VDL2 network {1} on DVS {2} initialization failed. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.port.initialization.fail| VDL2 port {1} on VDL2 network {2}, DVS {3} initialization failed. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.vmknic.fail| VDL2 IP interface failed on vmknic {1}, port {2}, DVS {3}, VLAN {4}. Since 5.0 Reference | |
error | ESXHostNetwork | esx.problem.net.vdl2.vmknic.notexist| VDL2 IP interface does not exist on DVS {1}, VLAN {2}. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.vmknic.ip.duplicate| A duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.vmnic.linkstate.down| Physical NIC {1} linkstate is down. Since 4.1 Reference | |
warning | ESXHostNetwork | esx.problem.net.vmnic.linkstate.flapping| Taking down physical NIC {1} because the link is unstable. Since 5.0 Reference | |
warning | ESXHostNetwork | esx.problem.net.vmnic.watchdog.reset| Uplink {1} has recovered from a transient failure due to watchdog timeout Since 4.1 Reference | |
warning | ESXHost | esx.problem.ntpd.clock.correction.error| NTP daemon stopped. Time correction {1} > {2} seconds. Manually set the time and restart ntpd. Since 5.0 Reference | |
info | VC | esx.problem.pageretire.platform.retire.request| Memory page retirement requested by platform firmware. FRU ID: {1}. see System Hardware Log: {2} Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.host.exceeded| Number of host physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}). Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.kernel.exceeded| Number of kernel physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}). Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.userclient.exceeded| Number of physical memory pages belonging to (user) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}). Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.userprivate.exceeded| Number of private user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}). Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.usershared.exceeded| Number of shared user physical memory pages that have been selected for retirement ({1}) exceeds threshold ({2}). Since 5.0 Reference | |
warning | ESXHost | esx.problem.pageretire.selectedmpnthreshold.vmmclient.exceeded| Number of physical memory pages belonging to (vmm) memory client {1} that have been selected for retirement ({2}) exceeds threshold ({3}). Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.scsi.apd.event.descriptor.alloc.failed| No memory to allocate APD (All Paths Down) event subsystem. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.close.failed| Failed to close the device {1} properly, plugin {2}. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.detach.failed| Detach failed for device :{1}. Exceeded the number of devices that can be detached, please cleanup stale detach entries. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.filter.attach.failed| Failed to attach filters to device '%s' during registration. Plugin load failed or the filter rules are incorrect. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.bad.plugin.type| Bad plugin type for device {1}, plugin {2} Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.inquiry.failed| Failed to get standard inquiry for device {1} from Plugin {2}. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.invalid.disk.qfull.value| QFullSampleSize should be bigger than QFullThreshold. LUN queue depth throttling algorithm will not function as expected. Please set the QFullSampleSize and QFullThreshold disk configuration values in ESX correctly. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.latency.high| Device {1} performance has deteriorated. I/O latency increased from average value of {2} microseconds to {3} microseconds. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.qerr.change.config| QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The system is not configured to change the QErr setting of the device. The QErr value supported by the system is 0x{3}. Please check the SCSI ChangeQErrSetting configuration value for ESX. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.io.qerr.changed| QErr set to 0x{1} for device {2}. This may cause unexpected behavior. The device was originally configured to the supported QErr setting of 0x{3}, but this has been changed and could not be changed back. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.is.local.failed| Failed to verify if the device {1} from plugin {2} is a local - not shared - device Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.is.pseudo.failed| Failed to verify if the device {1} from plugin {2} is a pseudo device Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.is.ssd.failed| Failed to verify if the device {1} from plugin {2} is a Solid State Disk device Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.scsi.device.limitreached| The maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created. Since 4.1 Reference | |
info | VC | esx.problem.scsi.device.state.off| Device {1}, has been turned off administratively. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.state.permanentloss| Device {1} has been removed or is permanently inaccessible. Affected datastores (if any): {2}. Since 5.0 Reference | |
info | VC | esx.problem.scsi.device.state.permanentloss.noopens| Permanently inaccessible device {1} has no more opens. It is now safe to unmount datastores (if any) {2} and delete the device. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.state.permanentloss.pluggedback| Device {1} has been plugged back in after being marked permanently inaccessible. No data consistency guarantees. Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.scsi.device.state.permanentloss.withreservationheld| Device {1} has been removed or is permanently inaccessible, while holding a reservation. Affected datastores (if any): {2}. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.scsi.device.thinprov.atquota| Space utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}. Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.scsi.scsipath.limitreached| The maximum number of supported paths of {1} has been reached. Path {2} could not be added. Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.scsi.unsupported.plugin.type| Scsi Device Allocation not supported for plugin type {1} Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.storage.apd.start| Device or filesystem with identifier [{1}] has entered the All Paths Down state. Since 5.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.apd.timeout| Device or filesystem with identifier [{1}] has entered the All Paths Down Timeout state after being in the All Paths Down state for {2} seconds. I/Os will be fast failed. Since 5.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.connectivity.devicepor| Frequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2} Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.storage.connectivity.lost| Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.connectivity.pathpor| Frequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.connectivity.pathstatechanges| Frequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.iscsi.discovery.connect.error| iSCSI discovery to {1} on {2} failed. The iSCSI Initiator could not establish a network connection to the discovery address. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.storage.iscsi.discovery.login.error| iSCSI discovery to {1} on {2} failed. The Discovery target returned a login error of: {3}. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.storage.iscsi.target.connect.error| Login to iSCSI target {1} on {2} failed. The iSCSI initiator could not establish a network connection to the target. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.storage.iscsi.target.login.error| Login to iSCSI target {1} on {2} failed. Target returned login error of: {3}. Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.storage.iscsi.target.permanently.lost| The iSCSI target {2} was permanently removed from {1}. Since 5.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.redundancy.degraded| Path redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}. Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.storage.redundancy.lost| Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. Since 4.1 Reference | |
warning | ESXHost | esx.problem.syslog.config| System logging is not configured on host {host.name}. Please check Syslog options for the host under Configuration -> Software -> Advanced Settings in vSphere client. Since 5.0 Reference | |
warning | ESXHost | esx.problem.syslog.nonpersistent| System logs on host {host.name} are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition. Since 5.1 Reference | |
warning | ESXHostStorage | esx.problem.vfat.filesystem.full.other| The VFAT filesystem {1} (UUID {2}) is full. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.vfat.filesystem.full.scratch| The host scratch partition, which is the VFAT filesystem {1} (UUID {2}), is full. Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.visorfs.failure| An operation on the root filesystem has failed. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.visorfs.inodetable.full| The root filesystem file table is full. As a result, the file {1} could not be created by the application '{2}'. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.visorfs.ramdisk.full| The ramdisk '{1}' is full. As a result, the file {2} could not be written. Since 5.0 Reference | |
error | ESXHostStorage | esx.problem.visorfs.ramdisk.inodetable.full| The file table of the ramdisk '{1}' is full. As a result, the file {2} could not be created by the application '{3}'. Since 5.1 Reference | |
error | ESXHost | esx.problem.vm.kill.unexpected.fault.failure| The VM using the config file {1} could not fault in a guest physical page from the hypervisor level swap file at {2}. The VM is terminated as further progress is impossible. Since 5.1 Reference | |
error | ESXHost | esx.problem.vm.kill.unexpected.forcefulPageRetire| The VM using the config file {1} contains the host physical page {2} which was scheduled for immediate retirement. To avoid system instability the VM is forcefully powered off. Since 5.0 Reference | |
error | ESXHost | esx.problem.vm.kill.unexpected.noSwapResponse| The VM using the config file {1} did not respond to {2} swap actions in {3} seconds and is forcefully powered off to prevent system instability. Since 5.0 Reference | |
error | ESXHost | esx.problem.vm.kill.unexpected.vmtrack| The VM using the config file {1} is allocating too many pages while system is critically low in free memory. It is forcefully terminated to prevent system instability. Since 5.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.ats.support.lost| event.esx.problem.vmfs.ats.support.lost.fullFormat Since 5.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.error.volume.is.locked| Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover. Since 5.0 Reference | |
warning | ESXHostStorage | esx.problem.vmfs.extent.offline| An attached device {1} may be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible. Since 5.0 Reference | |
info | ESXHostStorage | esx.problem.vmfs.extent.online| Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available. Since 5.0 Reference | |
info | ESXHostStorage | esx.problem.vmfs.heartbeat.recovered| Successfully restored access to volume {1} ({2}) following connectivity issues. Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.vmfs.heartbeat.timedout| Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.heartbeat.unrecoverable| Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. Since 4.1 Reference | |
warning | ESXHostStorage | esx.problem.vmfs.journal.createfailed| No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support. Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.lock.corruptondisk| At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too. Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.nfs.mount.connect.failed| Failed to mount to the server {1} mount point {2}. {3} Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.nfs.mount.limit.exceeded| Failed to mount to the server {1} mount point {2}. {3} Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.nfs.server.disconnect| Lost connection to server {1} mount point {2} mounted as {3} ({4}). Since 4.1 Reference | |
info | ESXHostStorage | esx.problem.vmfs.nfs.server.restored| Restored connection to server {1} mount point {2} mounted as {3} ({4}). Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.resource.corruptondisk| At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too. Since 4.1 Reference | |
error | ESXHostStorage | esx.problem.vmfs.volume.locked| Volume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover. Since 4.1 Reference | |
error | ESXHost | esx.problem.vmsyslogd.remote.failure| The host "{1}" has become unreachable. Remote logging to this host has stopped. Since 5.0 Reference | |
error | ESXHost | esx.problem.vmsyslogd.storage.failure| Logging to storage has failed. Logs are no longer being stored locally on this host. Since 5.0 Reference | |
error | ESXHost | esx.problem.vmsyslogd.storage.logdir.invalid| The configured log directory {1} cannot be used. The default directory {2} will be used instead. Since 5.1 Reference | |
warning | ESXHost | esx.problem.vmsyslogd.unexpected| Log daemon has failed for an unexpected reason: {1} Since 5.0 Reference | |
warning | ESXHost | esx.problem.vpxa.core.dumped| {1} crashed ({2} time(s) so far) and a core file might have been created at {3}. This might have caused connections to the host to be dropped. Since 5.0 Reference | |
warning | VC | esx.problem.vsan.clustering.disabled| VSAN clustering and directory services have been disabled and are no longer available. Since 5.5 Reference | |
warning | ESXHostNetwork | esx.problem.vsan.net.not.ready| vmknic {1} that is currently configured to be used with VSAN doesn't have an IP address yet. There is no other active network configuration, so the VSAN node doesn't have network connectivity. Since 5.5 Reference | |
warning | ESXHostNetwork | esx.problem.vsan.net.redundancy.lost| VSAN network configuration doesn't have any redundancy. This might be a problem if further network configuration is removed. Since 5.5 Reference | |
warning | ESXHostNetwork | esx.problem.vsan.net.redundancy.reduced| VSAN network configuration redundancy has been reduced. This might be a problem if further network configuration is removed. Since 5.5 Reference | |
error | ESXHostNetwork | esx.problem.vsan.no.network.connectivity| VSAN doesn't have any network configuration. This can severely impact several objects in the VSAN datastore. Since 5.5 Reference | |
warning | VC | esx.problem.vsan.vmknic.not.ready| vmknic {1} that is currently configured to be used with VSAN doesn't have an IP address yet. However, there are other network configurations which are active. If those configurations are removed, that may cause problems. Since 5.5 Reference | |
info | VC | The host {host.name} is no longer in standby mode Since 2.5 Reference | |
info | VC | The host {host.name} is exiting standby mode Since 4.0 Reference | |
info | VC | Host {host.name} in {datacenter.name} has exited maintenance mode Since 2.0 Reference | |
error | ESXHost | The host {host.name} could not exit standby mode Since 4.0 Reference | |
info | VC | Sufficient resources are available to satisfy HA failover level in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | General event: {message} Since 2.0 Reference | |
error | ESXHost | Error detected on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
info | VC | Issue detected on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
warning | ESXHost | Issue detected on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
user | VC | User logged event: {message} Since 2.0 Reference | |
error | VirtualMachine | Error detected for {vm.name} on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
info | VC | Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
warning | VirtualMachine | Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
info | VC | The Distributed Virtual Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter or does not contain this host. Since 4.0 Reference | |
info | VC | A ghost proxy switch {switchUuid} on the host {host.name} was resolved. Since 4.0 Reference | |
info | VC | The message changed: {message} Since 2.0 Reference | |
info | VC | hbr.primary.AppQuiescedDeltaCompletedEvent| Application consistent delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred) Since 5.0 Reference | |
info | VC | hbr.primary.ConnectionRestoredToHbrServerEvent| Connection to replication server restored for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 Reference | |
warning | VC | hbr.primary.DeltaAbortedEvent| Delta aborted for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForDeltaAbort} Since 5.0 Reference | |
info | VC | hbr.primary.DeltaCompletedEvent| Delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). Since 5.0 Reference | |
info | VC | hbr.primary.DeltaStartedEvent| Delta started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 Reference | |
error | VC | hbr.primary.FailedToStartDeltaEvent| Failed to start delta for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault} Since 5.0 Reference | |
error | VC | hbr.primary.FailedToStartSyncEvent| Failed to start full sync for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.fault.ReplicationVmFault.ReasonForFault} Since 5.0 Reference | |
warning | VC | hbr.primary.FSQuiescedDeltaCompletedEvent| File system consistent delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred) Since 5.0 Reference | |
warning | VC | hbr.primary.InvalidDiskReplicationConfigurationEvent| Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}, disk {diskKey}: {reasonForFault.@enum.fault.ReplicationDiskConfigFault.ReasonForFault} Since 5.0 Reference | |
warning | VC | hbr.primary.InvalidVmReplicationConfigurationEvent| Replication configuration is invalid for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reasonForFault.@enum.fault.ReplicationVmConfigFault.ReasonForFault} Since 5.0 Reference | |
warning | VC | hbr.primary.NoConnectionToHbrServerEvent| No connection to replication server for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerConnection} Since 5.0 Reference | |
warning | VC | hbr.primary.NoProgressWithHbrServerEvent| Replication server error for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}: {reason.@enum.hbr.primary.ReasonForNoServerProgress} Since 5.0 Reference | |
warning | VC | hbr.primary.QuiesceNotSupported| Quiescing is not supported for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 Reference | |
info | VC | hbr.primary.SyncCompletedEvent| Full sync completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). Since 5.0 Reference | |
info | VC | hbr.primary.SyncStartedEvent| Full sync started by {userName} for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 5.0 Reference | |
warning | VC | hbr.primary.UnquiescedDeltaCompletedEvent| Delta completed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({bytes} bytes transferred). Since 5.0 Reference | |
info | VC | hbr.primary.VmReplicationConfigurationChangedEvent| Replication configuration changed for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in {datacenter.name} ({numDisks} disks, {rpo} minutes RPO, HBR Server is {hbrServerAddress}). Since 5.0 Reference | |
info | VC | {componentName} status changed from {oldStatus} to {newStatus} Since 4.0 Reference | |
info | VC | Added host {host.name} to datacenter {datacenter.name} Since 2.0 Reference | |
error | VC | Cannot add host {hostname} to datacenter {datacenter.name} Since 2.0 Reference | |
warning | VC | Administrator access to the host {host.name} is disabled Since 2.5 Reference | |
warning | VC | Administrator access to the host {host.name} has been restored Since 2.5 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: cannot configure management account Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: already managed by {serverName} Since 2.0 Reference | |
error | ESXHost | Cannot connect host {host.name} in {datacenter.name}: server agent is not responding Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: incorrect user name or password Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: incompatible version Since 2.0 Reference | |
error | ESXHost | Cannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service. Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: error connecting to host Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: network error Since 2.0 Reference | |
error | ESXHost | Cannot connect host {host.name} in {datacenter.name}: account has insufficient privileges Since 2.0 Reference | |
error | ESXHost | Cannot connect host {host.name} in {datacenter.name} Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: not enough CPU licenses Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: incorrect host name Since 2.0 Reference | |
error | ESXHost | Cannot connect {host.name} in {datacenter.name}: time-out waiting for host response Since 2.0 Reference | |
info | VC | Host {host.name} checked for compliance. Since 4.0 Reference | |
info | VC | Host {host.name} is in compliance with the attached profile Since 4.0 Reference | |
info | VC | Host configuration changes applied. Since 4.0 Reference | |
info | VC | Connected to {host.name} in {datacenter.name} Since 2.0 Reference | |
error | ESXHost | Host {host.name} in {datacenter.name} is not responding Since 2.0 Reference | |
info | VC | HA agent disabled on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | HA is being disabled on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} Since 2.0 Reference | |
info | VC | HA agent enabled on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
warning | Cluster | Enabling HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | Cluster | HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error {message}: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} Since 2.0 Reference | |
info | VC | HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} is configured correctly Since 2.0 Reference | |
warning | ESXHost | Disconnected from {host.name} in {datacenter.name}. Reason: {reason.@enum.HostDisconnectedEvent.ReasonCode} Since 2.0 Reference | |
info | VC | dvPort connected to host {host.name} in {datacenter.name} changed status Since 4.1 Reference | |
error | VC | Cannot restore some administrator permissions to the host {host.name} Since 2.5 Reference | |
error | ESXHostNetwork | Host {host.name} has the following extra networks not used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usage Since 4.0 Reference | |
error | ESXHostNetwork | Cannot complete command 'hostname -s' on host {host.name}, or the command returned an incorrect name format Since 2.5 Reference | |
info | VC | Host {host.name} is running in audit mode. The host configuration will not be persistent across reboots. Since 5.0 Reference | |
warning | ESXHost | Maximum ({capacity}) number of hosts allowed for this edition of vCenter Server has been reached Since 2.5 Reference | |
info | VC | The virtual machine inventory file on host {host.name} is damaged or unreadable. Since 4.0 Reference | |
info | VC | IP address of the host {host.name} changed from {oldIP} to {newIP} Since 2.5 Reference | |
warning | ESXHostNetwork | Configuration of host IP address is inconsistent on host {host.name}: address resolved to {ipAddress} and {ipAddress2} Since 2.5 Reference | |
warning | ESXHostNetwork | Cannot resolve IP address to short name on host {host.name} Since 2.5 Reference | |
warning | ESXHostNetwork | Host {host.name} could not reach isolation address: {isolationIp} Since 2.5 Reference | |
error | VC | A host license for {host.name} has expired Since 2.0 Reference | |
info | ESXHostNetwork | A host local port {hostLocalPort.portKey} is created on vSphere Distributed Switch {hostLocalPort.switchUuid} to recover from management network connectivity loss on virtual NIC device {hostLocalPort.vnic} on the host {host.name}. Since 5.1 Reference | |
error | ESXHostNetwork | Host {host.name} does not have the following networks used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usage Since 4.0 Reference | |
info | VC | Host monitoring state in {computeResource.name} in {datacenter.name} changed to {state} Since 4.0 Reference | |
error | ESXHostNetwork | Host {host.name} currently has no available networks for HA Communication. The following networks are currently used by HA: {ips} Since 4.0 Reference | |
error | ESXHostNetwork | Host {host.name} has no port groups enabled for HA communication. Since 4.0 Reference | |
warning | VC | Host {host.name} is not in compliance with the attached profile Since 4.0 Reference | |
warning | ESXHostNetwork | Host {host.name} currently has no management network redundancy Since 2.5 Reference | |
error | Cluster | Host {host.name} is not a cluster member in {datacenter.name} Since 2.5 Reference | |
error | VC | Insufficient capacity in host {computeResource.name} to satisfy resource configuration in {datacenter.name} Since 4.0 Reference | |
error | ESXHostNetwork | Primary agent {primaryAgent} was not specified as a short name to host {host.name} Since 2.5 Reference | |
info | VC | Profile is applied on the host {host.name} Since 4.0 Reference | |
error | VC | Cannot reconnect to {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Removed host {host.name} in {datacenter.name} Since 2.0 Reference | |
warning | ESXHostNetwork | Host names {shortName} and {shortName2} both resolved to the same IP address. Check the host network configuration and DNS entries Since 2.5 Reference | |
warning | ESXHostNetwork | Cannot resolve short name {shortName} to IP address on host {host.name} Since 2.5 Reference | |
info | VC | Shut down of {host.name} in {datacenter.name}: {reason} Since 2.0 Reference | |
info | VC | Configuration status on host {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} Since 4.0 Reference | |
error | VC | Cannot synchronize host {host.name}. {reason.msg} Since 4.0 Reference | |
error | ESXHost | Cannot install or upgrade vCenter agent service on {host.name} in {datacenter.name} Since 2.0 Reference | |
warning | VC | event.HostUserWorldSwapNotEnabledEvent.fullFormat Since 4.0 Reference | |
info | VC | Host {host.name} vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey} with port level configuration, which might be different from the dvPort group. Since 4.0 Reference | |
warning | ESXHostStorage | WWNs are changed for {host.name} Since 2.5 Reference | |
error | ESXHostStorage | The WWN ({wwn}) of {host.name} conflicts with the currently registered WWN Since 2.5 Reference | |
error | ESXHost | Host {host.name} did not provide the information needed to acquire the correct set of licenses Since 2.5 Reference | |
info | VC | {message} Since 2.0 Reference | |
warning | Cluster | Insufficient resources to satisfy HA failover level on cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | The license edition '{feature}' is invalid Since 2.5 Reference | |
warning | VC | Booting from iSCSI failed with an error. See the VMware Knowledge Base for information on configuring iBFT networking Since 4.1 Reference | |
error | VC | License {feature.featureName} has expired Since 2.0 Reference | |
error | VC | License inventory is not compliant. Licenses are overused Since 4.0 Reference | |
error | VC | Unable to acquire licenses due to a restriction in the option file on the license server. Since 2.5 Reference | |
info | VC | License server {licenseServer} is available Since 2.0 Reference | |
error | VC | License server {licenseServer} is unavailable Since 2.0 Reference | |
info | VC | Created local datastore {datastore.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | The Local Tech Support Mode for the host {host.name} has been enabled Since 4.1 Reference | |
warning | VC | Datastore {datastore} which is configured to back the locker does not exist Since 2.5 Reference | |
info | VC | Locker was reconfigured from {oldDatastore} to {newDatastore} datastore Since 2.5 Reference | |
error | Cluster | Unable to migrate {vm.name} from {host.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
error | Cluster | Unable to migrate {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
warning | Cluster | Migration of {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
error | Cluster | Cannot migrate {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
warning | Cluster | Migration of {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
warning | Cluster | Migration of {vm.name} from {host.name} in {datacenter.name}: {fault.msg} Since 2.0 Reference | |
info | ESXHostNetwork | The MTU configured in the vSphere Distributed Switch matches the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} Since 5.1 Reference | |
error | ESXHostNetwork | The MTU configured in the vSphere Distributed Switch does not match the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name} Since 5.1 Reference | |
info | VC | Created NAS datastore {datastore.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | ESXHostNetwork | Network configuration on the host {host.name} is rolled back as it disconnects the host from vCenter server. Since 5.1 Reference | |
error | VC | Cannot login user {userName}@{ipAddress}: no permission Since 2.0 Reference | |
info | VC | No datastores have been configured on the host {host.name} Since 2.5 Reference | |
error | VC | A required license {feature.featureName} is not reserved Since 2.0 Reference | |
info | VC | Unable to automatically migrate {vm.name} from {host.name} Since 2.0 Reference | |
info | VC | Non-VI workload detected on datastore {datastore.name} Since 4.1 Reference | |
info | VC | Not enough resources to failover {vm.name} in {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
warning | VC | The Distributed Virtual Switch configuration on some hosts differed from that of the vCenter Server. Since 4.0 Reference | |
info | VC | Permission created for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} Since 2.0 Reference | |
info | VC | Permission rule removed for {principal} on {entity.name} Since 2.0 Reference | |
info | VC | Permission changed for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} Since 2.0 Reference | |
info | VC | Profile {profile.name} attached. Since 4.0 Reference | |
info | VC | Profile {profile.name} was changed. Since 4.0 Reference | |
info | VC | Profile is created. Since 4.0 Reference | |
info | VC | Profile {profile.name} detached. Since 4.0 Reference | |
info | VC | This event records a Profile specific event. Since 4.0 Reference | |
info | VC | Profile {profile.name} reference host changed. Since 4.0 Reference | |
info | VC | Profile was removed. Since 4.0 Reference | |
info | ESXHostNetwork | The host {hostName} network connectivity was recovered on the management virtual NIC {vnic} by connecting to a new port {portKey} on the vSphere Distributed Switch {dvsUuid}. Since 5.1 Reference | |
info | VC | Remote Tech Support Mode (SSH) for the host {host.name} has been enabled Since 4.1 Reference | |
info | VC | Created resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Removed resource pool {resourcePool.name} on {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Moved resource pool {resourcePool.name} from {oldParent.name} to {newParent.name} on {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
verbose | VC | Updated configuration for {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | Resource usage exceeds configuration for resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | New role {role.name} created Since 2.0 Reference | |
info | VC | Role {role.name} removed Since 2.0 Reference | |
info | VC | Modified role {role.name} Since 2.0 Reference | |
info | ESXHostNetwork | The Network API {methodName} on this entity caused the host {hostName} to be disconnected from the vCenter Server. The configuration change was rolled back on the host. Since 5.1 Reference | |
info | VC | Task {scheduledTask.name} on {entity.name} in {datacenter.name} completed successfully Since 2.0 Reference | |
info | VC | Created task {scheduledTask.name} on {entity.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Task {scheduledTask.name} on {entity.name} in {datacenter.name} sent email to {to} Since 2.0 Reference | |
warning | VC | Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot send email to {to}: {reason.msg} Since 2.0 Reference | |
info | VC | This event records the completion of a scheduled task. The name of the task is indicated. Since 2.0 Reference | |
warning | VC | Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot be completed: {reason.msg} Since 2.0 Reference | |
info | VC | Reconfigured task {scheduledTask.name} on {entity.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Removed task {scheduledTask.name} on {entity.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Running task {scheduledTask.name} on {entity.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | A vCenter Server license has expired Since 2.0 Reference | |
info | VC | vCenter started Since 2.0 Reference | |
info | VC | A session for user '{terminatedUsername}' has stopped Since 2.0 Reference | |
info | VC | Task: {info.descriptionId} Since 2.0 Reference | |
info | VC | Task: {info.descriptionId} time-out Since 2.5 Reference | |
info | ESXHostNetwork | Teaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} matches the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus} Since 5.1 Reference | |
error | ESXHostNetwork | Teaming configuration in the vSphere Distributed Switch {dvs.name} on host {host.name} does not match the physical switch configuration in {datacenter.name}. Detail: {healthResult.summary.@enum.dvs.VmwareDistributedVirtualSwitch.TeamingMatchStatus} Since 5.1 Reference | |
info | VC | Upgrading template {legacyTemplate} Since 2.0 Reference | |
info | VC | Template {legacyTemplate} upgrade completed Since 2.0 Reference | |
info | VC | Cannot upgrade template {legacyTemplate} due to: {reason.msg} Since 2.0 Reference | |
warning | ESXHost | The operation performed on {host.name} in {datacenter.name} timed out Since 2.0 Reference | |
info | VC | There are {unlicensed} unlicensed virtual machines on host {host} - there are only {available} licenses available Since 2.5 Reference | |
info | VC | {unlicensed} unlicensed virtual machines found on host {host} Since 2.5 Reference | |
info | VC | The agent on host {host.name} is updated and will soon restart Since 2.5 Reference | |
info | VC | This event records that the agent has been patched and will be restarted. Since 2.0 Reference | |
error | ESXHostNetwork | Not all VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. Since 5.1 Reference | |
info | ESXHostNetwork | All VLAN MTU settings on the external physical switch allow the vSphere Distributed Switch maximum MTU size packets to pass on the uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. Since 5.1 Reference | |
info | ESXHostNetwork | The configured VLAN in the vSphere Distributed Switch was trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. Since 5.1 Reference | |
error | ESXHostNetwork | Not all the configured VLANs in the vSphere Distributed Switch were trunked by the physical switch connected to uplink port {healthResult.uplinkPortKey} in vSphere Distributed Switch {dvs.name} on host {host.name} in {datacenter.name}. Since 5.1 Reference | |
info | VC | User {userLogin} was added to group {group} Since 2.0 Reference | |
verbose | VC | User {userName}@{ipAddress} logged in Since 2.0 Reference | |
verbose | VC | User {userName} logged out Since 2.0 Reference | |
info | VC | Password was changed for account {userLogin} on host {host.name} Since 2.0 Reference | |
info | VC | User {userLogin} removed from group {group} Since 2.0 Reference | |
user | VC | {message} Since 2.0 Reference | |
info | VC | event.VcAgentUninstalledEvent.fullFormat Since 4.0 Reference | |
error | VC | Cannot uninstall vCenter agent from {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} Since 4.0 Reference | |
info | VC | vCenter agent has been upgraded on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | Cannot upgrade vCenter agent on {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} Since 2.0 Reference | |
warning | VC | vim.event.LicenseDowngradedEvent| License downgrade: {licenseKey} removes the following features: {lostFeatures} Since 4.1 Reference | |
info | VC | VIM account password was changed on host {host.name} Since 2.5 Reference | |
info | VC | Remote console to {vm.name} on {host.name} in {datacenter.name} has been opened Since 2.5 Reference | |
info | VC | A ticket for {vm.name} of type {ticketType} on {host.name} in {datacenter.name} has been acquired Since 4.1 Reference | |
info | VC | Invalid name for {vm.name} on {host.name} in {datacenter.name}. Renamed from {oldName} to {newName} Since 2.0 Reference | |
info | VC | Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} Since 2.0 Reference | |
info | VC | Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} Since 4.1 Reference | |
info | VC | Creating {vm.name} on host {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Deploying {vm.name} on host {host.name} in {datacenter.name} from template {srcTemplate.name} Since 2.0 Reference | |
info | VC | Migrating {vm.name} from {host.name} to {destHost.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Relocating {vm.name} from {host.name} to {destHost.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Relocating {vm.name} in {datacenter.name} from {host.name} to {destHost.name} Since 2.0 Reference | |
info | VC | Clone of {sourceVm.name} completed Since 2.0 Reference | |
error | VC | Cannot clone {vm.name}: {reason.msg} Since 2.0 Reference | |
info | VC | Configuration file for {vm.name} on {host.name} in {datacenter.name} cannot be found Since 2.0 Reference | |
info | VC | Virtual machine {vm.name} is connected Since 2.0 Reference | |
info | VC | Created virtual machine {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
warning | VirtualMachine | {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset due to a guest OS error Since 4.0 Reference | |
warning | VirtualMachine | {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset due to a guest OS error. Screenshot is saved at {screenshotFilePath}. Since 4.0 Reference | |
error | VirtualMachine | Cannot reset {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} due to a guest OS error Since 4.0 Reference | |
error | VirtualMachine | Unable to update HA agents given the state of {vm.name} Since 2.0 Reference | |
info | VC | HA agents have been updated with the current state of the virtual machine Since 2.0 Reference | |
error | VirtualMachine | Disconnecting all hosts as the date of virtual machine {vm.name} has been rolled back Since 2.0 Reference | |
info | VC | Template {srcTemplate.name} deployed on host {host.name} Since 2.0 Reference | |
error | VC | Cannot deploy template: {reason.msg} Since 2.0 Reference | |
info | VC | {vm.name} on host {host.name} in {datacenter.name} is disconnected Since 2.0 Reference | |
info | VC | Discovered {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | VirtualMachine | Cannot create virtual disk {disk} Since 2.0 Reference | |
info | VC | dvPort connected to VM {vm.name} on {host.name} in {datacenter.name} changed status Since 4.1 Reference | |
info | VC | Migrating {vm.name} off host {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | End a recording session on {vm.name} Since 4.0 Reference | |
info | VC | End a replay session on {vm.name} Since 4.0 Reference | |
info | VC | This is a catch-all event for various VM events (the type of event is listed in the event). See VMware documentation for the list of possible events. Since 2.0 Reference | |
error | VirtualMachine | Cannot migrate {vm.name} from {host.name} to {destHost.name} in {datacenter.name} Since 2.0 Reference | |
error | VirtualMachine | Cannot complete relayout {vm.name} on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | Cannot complete relayout for virtual machine {vm.name} which has disks on a VMFS2 volume. Since 2.0 Reference | |
error | VirtualMachine | vCenter cannot start the Secondary VM {vm.name}. Reason: {reason.@enum.VmFailedStartingSecondaryEvent.FailureReason} Since 4.0 Reference | |
error | VirtualMachine | Cannot power Off {vm.name} on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | Cannot power On {vm.name} on {host.name} in {datacenter.name}. {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | Cannot reboot the guest OS for {vm.name} on {host.name} in {datacenter.name}. {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | {vm.name} cannot shut down the guest OS on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | {vm.name} cannot standby the guest OS on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg} Since 2.0 Reference | |
error | VirtualMachine | vCenter cannot update the Secondary VM {vm.name} configuration Since 4.0 Reference | |
warning | VirtualMachine | Failover unsuccessful for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Fault Tolerance state on {vm.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState} Since 4.0 Reference | |
info | VC | Fault Tolerance protection has been turned off for {vm.name} Since 4.0 Reference | |
error | VirtualMachine | The Fault Tolerance VM ({vm.name}) has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason} Since 4.0 Reference | |
info | VC | Created VMFS datastore {datastore.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Expanded VMFS datastore {datastore.name} on {host.name} in {datacenter.name} Since 4.0 Reference | |
info | VC | Extended VMFS datastore {datastore.name} on {host.name} in {datacenter.name} Since 4.0 Reference | |
info | VC | Guest OS reboot for {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Guest OS shut down for {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Guest OS standby for {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | VM monitoring state in {computeResource.name} in {datacenter.name} changed to {state} Since 4.0 Reference | |
info | VC | Assign a new instance UUID ({instanceUuid}) to {vm.name} Since 4.0 Reference | |
info | VC | The instance UUID of {vm.name} has been changed from ({oldInstanceUuid}) to ({newInstanceUuid}) Since 4.0 Reference | |
error | VirtualMachine | The instance UUID ({instanceUuid}) of {vm.name} conflicts with the instance UUID assigned to {conflictedVm.name} Since 4.0 Reference | |
info | VC | New MAC address ({mac}) assigned to adapter {adapter} for {vm.name} Since 2.0 Reference | |
warning | VC | Changed MAC address from {oldMac} to {newMac} for adapter {adapter} for {vm.name} Since 2.0 Reference | |
error | VirtualMachine | The MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} Since 2.0 Reference | |
warning | VirtualMachine | Reached maximum Secondary VM (with FT turned On) restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 4.0 Reference | |
warning | VirtualMachine | Reached maximum VM restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. Since 4.0 Reference | |
error | VirtualMachine | Error message on {vm.name} on {host.name} in {datacenter.name}: {message} Since 4.0 Reference | |
info | VC | Message on {vm.name} on {host.name} in {datacenter.name}: {message} Since 2.0 Reference | |
warning | VirtualMachine | Warning message on {vm.name} on {host.name} in {datacenter.name}: {message} Since 4.0 Reference | |
info | VC | Migration of virtual machine {vm.name} from {sourceHost.name} to {host.name} completed Since 2.0 Reference | |
warning | VirtualMachine | No compatible host for the Secondary VM {vm.name} Since 4.0 Reference | |
warning | VirtualMachine | Not all networks for {vm.name} are accessible by {destHost.name} Since 2.0 Reference | |
warning | VirtualMachine | {vm.name} does not exist on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | A VMotion license for {host.name} has expired Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is powered off Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is powered on Since 2.0 Reference | |
info | VC | Virtual machine {vm.name} powered On with vNICs connected to dvPorts that have a port level configuration, which might be different from the dvPort group configuration. Since 4.0 Reference | |
info | VC | {vm.name} was powered Off on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name} Since 2.0 Reference | |
error | VirtualMachine | VM ({vm.name}) failed over to {host.name}. {reason.@enum.VirtualMachine.NeedSecondaryReason} Since 4.0 Reference | |
info | VC | Reconfigured {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Registered {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Relayout of {vm.name} on {host.name} in {datacenter.name} completed Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is in the correct format and relayout is not necessary Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} reloaded from new configuration {configPath} Since 4.1 Reference | |
error | VirtualMachine | {vm.name} on {host.name} could not be reloaded from {configPath} Since 4.1 Reference | |
info | VC | Completed the relocation of the virtual machine Since 2.0 Reference | |
error | VirtualMachine | Cannot relocate virtual machine '{vm.name}' in {datacenter.name} Since 2.0 Reference | |
info | VC | Remote console connected to {vm.name} on host {host.name} Since 4.0 Reference | |
info | VC | Remote console disconnected from {vm.name} on host {host.name} Since 4.0 Reference | |
info | VC | Removed {vm.name} on {host.name} from {datacenter.name} Since 2.0 Reference | |
warning | VC | Renamed {vm.name} from {oldName} to {newName} in {datacenter.name} Since 2.0 Reference | |
warning | VirtualMachine | Feature requirements of {vm.name} exceed capabilities of {host.name}'s current EVC mode. Since 5.1 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is reset Since 2.0 Reference | |
info | VC | Moved {vm.name} from resource pool {oldParent.name} to {newParent.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Changed resource allocation for {vm.name} Since 2.0 Reference | |
info | VC | Virtual machine {vm.name} was restarted on {host.name} since {sourceHost.name} failed Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is resumed Since 2.0 Reference | |
info | VC | A Secondary VM has been added for {vm.name} Since 4.0 Reference | |
error | VirtualMachine | vCenter disabled Fault Tolerance on VM '{vm.name}' because the Secondary VM could not be powered On. Since 4.0 Reference | |
info | VC | Disabled Secondary VM for {vm.name} Since 4.0 Reference | |
info | VC | Enabled Secondary VM for {vm.name} Since 4.0 Reference | |
info | VC | Started Secondary VM for {vm.name} Since 4.0 Reference | |
info | VC | {vm.name} was shut down on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation} Since 4.0 Reference | |
info | VC | {vm.name} on host {host.name} in {datacenter.name} is starting Since 2.0 Reference | |
info | VC | Starting Secondary VM for {vm.name} Since 4.0 Reference | |
info | VC | Start a recording session on {vm.name} Since 4.0 Reference | |
info | VC | Start a replay session on {vm.name} Since 4.0 Reference | |
error | VC | The static MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is stopping Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is suspended Since 2.0 Reference | |
info | VC | {vm.name} on {host.name} in {datacenter.name} is being suspended Since 2.0 Reference | |
error | VirtualMachine | Starting the Secondary VM {vm.name} timed out within {timeout} ms Since 4.0 Reference | |
warning | VirtualMachine | Unsupported guest OS {guestId} for {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
info | VC | Virtual hardware upgraded to version {version} Since 2.0 Reference | |
error | VirtualMachine | Cannot upgrade virtual hardware Since 2.0 Reference | |
info | VC | Upgrading virtual hardware on {vm.name} in {datacenter.name} to version {version} Since 2.0 Reference | |
info | VC | Assigned new BIOS UUID ({uuid}) to {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
warning | VC | Changed BIOS UUID from {oldUuid} to {newUuid} for {vm.name} on {host.name} in {datacenter.name} Since 2.0 Reference | |
error | VC | BIOS ID ({uuid}) of {vm.name} conflicts with that of {conflictedVm.name} Since 2.0 Reference | |
info | VC | The reservation violation on the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is cleared Since 5.5 Reference | |
info | VC | The reservation allocated to the virtual NIC network resource pool {vmVnicResourcePoolName} with key {vmVnicResourcePoolKey} on {dvs.name} is violated Since 5.5 Reference | |
info | VC | New WWNs assigned to {vm.name} Since 2.5 Reference | |
warning | VirtualMachine | WWNs are changed for {vm.name} Since 2.5 Reference | |
error | VirtualMachine | The WWN ({wwn}) of {vm.name} conflicts with the currently registered WWN Since 2.5 Reference | |
error | ESXHostNetwork | vprob.net.connectivity.lost| Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. Since 4.0 Reference | |
error | ESXHostNetwork | vprob.net.e1000.tso6.notsupported| Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. Since 4.0 Reference | |
warning | ESXHostNetwork | vprob.net.migrate.bindtovmk| The ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that VMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you don't want VMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. Since 4.0 Reference | |
error | ESXHostNetwork | vprob.net.proxyswitch.port.unavailable| Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. No more ports available on the host proxy switch. Since 4.0 Reference | |
warning | ESXHostNetwork | vprob.net.redundancy.degraded| Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. {3} uplinks still up. Affected portgroups:{4}. Since 4.0 Reference | |
warning | ESXHostNetwork | vprob.net.redundancy.lost| Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. Since 4.0 Reference | |
warning | VC | vprob.scsi.device.thinprov.atquota| Space utilization on thin-provisioned device {1} exceeded configured threshold. Since 4.1 Reference | |
error | ESXHostStorage | vprob.storage.connectivity.lost| Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. Since 4.0 Reference | |
warning | ESXHostStorage | vprob.storage.redundancy.degraded| Path redundancy to storage device {1} degraded. Path {2} is down. {3} remaining active paths. Affected datastores: {4}. Since 4.0 Reference | |
warning | ESXHostStorage | vprob.storage.redundancy.lost| Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.error.volume.is.locked| Volume on device {1} is locked, possibly because some remote host encountered an error during a volume operation and could not recover. Since 5.0 Reference | |
warning | ESXHostStorage | vprob.vmfs.extent.offline| An attached device {1} might be offline. The file system {2} is now in a degraded state. While the datastore is still available, parts of data that reside on the extent that went offline might be inaccessible. Since 5.0 Reference | |
info | ESXHostStorage | vprob.vmfs.extent.online| Device {1} backing file system {2} came online. This extent was previously offline. All resources on this device are now available. Since 5.0 Reference | |
info | ESXHostStorage | vprob.vmfs.heartbeat.recovered| Successfully restored access to volume {1} ({2}) following connectivity issues. Since 4.0 Reference | |
warning | ESXHostStorage | vprob.vmfs.heartbeat.timedout| Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.heartbeat.unrecoverable| Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. Since 4.0 Reference | |
warning | ESXHostStorage | vprob.vmfs.journal.createfailed| No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support. Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.lock.corruptondisk| At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume may be damaged too. Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.nfs.server.disconnect| Lost connection to server {1} mount point {2} mounted as {3} ({4}). Since 4.0 Reference | |
info | ESXHostStorage | vprob.vmfs.nfs.server.restored| Restored connection to server {1} mount point {2} mounted as {3} ({4}). Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.resource.corruptondisk| At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume may be damaged too. Since 4.0 Reference | |
error | ESXHostStorage | vprob.vmfs.volume.locked| Volume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover. Since 4.0 Reference | |
warning | VC | {message} Since 2.0 Reference |
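
The catalog IDs that prefix the ESX messages above (for example, esx.problem.storage.connectivity.lost or esx.problem.vmfs.heartbeat.timedout) are the same strings vCenter reports as an event's eventTypeId, so they can be used to query or filter the event stream programmatically. The following is a minimal sketch using the pyvmomi Python bindings; the host name, user name, and password are placeholders, and the two eventTypeId values are simply sample rows picked from the table above.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with real values for your vCenter.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        event_manager = si.content.eventManager
        # Filter on catalog IDs; ESX-originated events (esx.*, vprob.*) arrive as
        # EventEx/ExtendedEvent objects whose eventTypeId matches the catalog ID.
        filter_spec = vim.event.EventFilterSpec(
            eventTypeId=["esx.problem.storage.connectivity.lost",
                         "esx.problem.vmfs.heartbeat.timedout"])
        for event in event_manager.QueryEvents(filter_spec):
            print(event.createdTime, event.fullFormattedMessage)
    finally:
        Disconnect(si)

The same filter specification can also be passed to EventManager.CreateCollectorForEvents when a pageable event history collector is preferred over a one-shot query.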