DRBD is not configured on this host, exiting.
Starting the DRBD scan agent: [#!variable!program!#].
DRBD has been found to be configured on this host.
- Disk flushes: ....... [#!variable!new_scan_drbd_flush_disk!#]
- Meta-data flushes: .. [#!variable!new_scan_drbd_flush_md!#]
- Network Timeout: .... [#!variable!new_scan_drbd_timeout!# seconds]
- Current Resync Speed: [#!variable!say_scan_drbd_total_sync_speed!#]
Note: Disk and metadata flushes should be enabled _unless_ your nodes use RAID controllers with flash-backed write cache.
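For illustration, this is how flushes could be disabled in a resource's 'disk' section. This is a sketch only; the resource name 'srv01-test' is hypothetical, and these options should only be set to 'no' on hardware with flash-backed write cache:
========
resource srv01-test {
    disk {
        disk-flushes no;
        md-flushes   no;
    }
}
========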
The disk flush configuration has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#].
NOTE: Disk flushes should _only_ be disabled when a RAID controller with flash-backed write-caching is used!
The metadata flush configuration has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#].
NOTE: Metadata (MD) flushes should _only_ be disabled when a RAID controller with flash-backed write-caching is used!
The network timeout has changed from: [#!variable!old_value!# seconds] to: [#!variable!new_value!# seconds].
The current resync speed across all syncing resources has changed from: [#!variable!old_value!#/sec] to: [#!variable!new_value!#/sec].
The base configuration (as reported by 'drbdadm dump-xml') has changed. The change is:
========
#!variable!difference!#
========
The full new config is:
========
#!variable!new_config!#
========
A new DRBD resource has been found on this host.
- Resource Name: ...... [#!variable!resource_name!#]
- Resource State: ..... [#!variable!resource_state!#]
A resource was found with a resource UUID that isn't valid on this host. An attempt was made to find a valid database entry, but no candidate was found. Adding the resource to the database as if it were new, and generating a new resource UUID for the resource configuration file.
- Resource Name: ...... [#!variable!resource_name!#]
- Resource State: ..... [#!variable!resource_state!#]
The resource config: [#!variable!resource_name!#] has been deleted. The backing storage may or may not have been removed.
The resource: [#!variable!old_value!#] has been renamed to: [#!variable!new_value!#].
The resource: [#!variable!resource_name!#] state has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#].
The resource: [#!variable!resource_name!#] has returned.
The new config is:
========
#!variable!new_config!#
========
The resource: [#!variable!resource_name!#]'s XML configuration (as reported by 'drbdadm dump-xml') has changed. The change is:
========
#!variable!difference!#
========
The new config is:
========
#!variable!new_config!#
========
A new DRBD resource volume has been found on this host.
- On resource: ... [#!variable!resource_name!#]
- Volume Number: . [#!variable!volume_number!#]
- Device Path: ... [#!variable!device_path!#]
- Minor Number: .. [#!variable!minor_number!#]
- Volume Size: ... [#!variable!volume_size!#]
Note: The "minor number" translates to the base '/dev/drbdX' where 'X' is the minor number. The 'device_path' is a convenient symlink to the base 'drbdX' device.
Note: The volume size is always a bit less than the backing LVM logical volume size. Some space is used by the internal DRBD metadata. The size of the metadata is explained here: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-meta-data-size
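As a rough worked example of that formula (internal metadata, sizes in 512-byte sectors; verify against the guide linked above for your DRBD version):
========
metadata_sectors = ceil(data_sectors / 2^18) * 8 * peers + 72

For a 100 GiB backing volume replicating to one peer:
  data_sectors     = (100 * 1024^3) / 512        = 209,715,200
  bitmap_sectors   = (209,715,200 / 262,144) * 8 = 6,400
  metadata_sectors = 6,400 + 72                  = 6,472 (~3.2 MiB)
========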
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] has been deleted. The backing storage may or may not have been removed.
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] has returned.
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] device path has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] device minor number has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#]. This relates to the '/dev/drbdX' device path assignment used behind the device path symlink.
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] size has changed from: [#!variable!old_value!#] to: [#!variable!new_value!#].
A new peer connection has been found for the resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#]:
- Peer Name: ............... [#!variable!peer_name!#]
- Connection State: ........ [#!variable!connection_state!#]
- Local disk state: ........ [#!variable!local_disk_state!#]
- Peer disk state: ......... [#!variable!disk_state!#]
- Local Role: .............. [#!variable!local_role!#]
- Peer Role: ............... [#!variable!peer_role!#]
- Out of sync size: ........ [#!variable!out_of_sync_size!#]
- Current replication speed: [#!variable!replication_speed!#/sec]
- Estimated time to sync: .. [#!variable!estimated_time_to_sync!#]
- Peer's storage IP:Port: .. [#!variable!peer_ip_address!#:#!variable!peer_tcp_port!#]
- Replication Protocol: .... [#!variable!peer_protocol!#]
- Peer fencing policy: ..... [#!variable!peer_fencing!#]
Note: Node peers should always use protocol C with fencing set to 'resource-and-stonith'. DR host peers can use either protocol A or C, and fencing should always be set to 'dont-care'.
Protocol A is suitable for DR hosts with higher latency connections, but the DR host will be allowed to fall slightly behind the nodes. Protocol C ensures that the DR host is never behind, but could hurt storage performance.
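As an illustrative sketch of those settings in DRBD 9 syntax (the resource name 'srv01-test' is hypothetical, and exact option placement can vary by DRBD version):
========
resource srv01-test {
    net {
        protocol C;                    # synchronous; required between nodes
        fencing  resource-and-stonith; # required between nodes
    }
    # A connection to a DR host could instead use:
    #   protocol A;          # asynchronous; the DR host may fall behind
    #   fencing  dont-care;
}
========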
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] connection state to: [#!variable!peer_name!#] has changed from: [#!variable!old_connection_state!#] to: [#!variable!new_connection_state!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] local disk state relative to: [#!variable!peer_name!#] has changed from: [#!variable!old_local_disk_state!#] to: [#!variable!new_local_disk_state!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#] disk state has changed from: [#!variable!old_disk_state!#] to: [#!variable!new_disk_state!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] local role relative to: [#!variable!peer_name!#] has changed from: [#!variable!old_local_role!#] to: [#!variable!new_local_role!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#] role has changed from: [#!variable!old_role!#] to: [#!variable!new_role!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#]'s out-of-sync size has changed from: [#!variable!old_out_of_sync_size!#] to: [#!variable!new_out_of_sync_size!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#]'s replication speed has changed from: [#!variable!old_replication_speed!#/sec] to: [#!variable!new_replication_speed!#/sec].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#]'s time to resync has changed from: [#!variable!old_estimated_time_to_sync!#] to: [#!variable!new_estimated_time_to_sync!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] IP address/port used to replicate with the peer: [#!variable!peer_name!#] has changed from: [#!variable!old_ip_address!#:#!variable!old_tcp_port!#] to: [#!variable!new_ip_address!#:#!variable!new_tcp_port!#].
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] replication protocol used to sync with the peer: [#!variable!peer_name!#] has changed from: [#!variable!old_protocol!#] to: [#!variable!new_protocol!#].
Note: Protocol A is OK when replicating to a DR host. When used, it allows the DR host to fall behind the nodes, which helps avoid a performance hit when the network latency to the DR host is too high or the bandwidth too low. Between nodes, protocol C must always be used, which ensures synchronous replication.
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] fencing policy towards the peer: [#!variable!peer_name!#] has changed from: [#!variable!old_fencing!#] to: [#!variable!new_fencing!#].
Note: The fencing policy 'resource-and-stonith' must always be used between nodes. The fencing policy 'dont-care' must be used between nodes and DR hosts.
The resource: [#!variable!resource_name!#] volume: [#!variable!volume_number!#] peer: [#!variable!peer_name!#] has been deleted.
The DRBD resource was not found in the database, but appears to have been in the past. Re-adding it.
- Resource Name: ...... [#!variable!resource_name!#]
- Resource State: ..... [#!variable!resource_state!#]
The global common configuration file: [#!variable!file!#] needs to be updated. The difference is:
====
#!variable!diff!#
====
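For reference, a minimal sketch of what a global common file can contain (illustrative values only, not the content this message reports):
========
global {
    usage-count no;
}
common {
    options {
        auto-promote yes;
    }
}
========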
Enabled
Disabled
s
Up
Down
Down
The resource is stopped.
StandAlone
No network configuration available. The resource has not yet been connected, or has been administratively disconnected (using drbdadm disconnect), or has dropped its connection due to failed authentication or split brain.
Connecting
This node is waiting until the peer node becomes visible on the network.
Connected
A DRBD connection has been established; data mirroring is now active. This is the normal state.
Disconnected
This indicates that the connection is down.
Disconnecting
Temporary state during disconnection. The next state is StandAlone.
Unconnected
Temporary state, prior to a connection attempt. Possible next states: Connecting.
Timeout
Temporary state following a timeout in the communication with the peer. Next state: Unconnected.
BrokenPipe
Temporary state after the connection to the peer was lost. Next state: Unconnected.
NetworkFailure
Temporary state after the connection to the partner was lost. Next state: Unconnected.
ProtocolError
Temporary state after the connection to the partner was lost. Next state: Unconnected.
TearDown
Temporary state. The peer is closing the connection. Next state: Unconnected.
Off
The volume is not replicated over this connection, since the connection is not Connected.
Established
All writes to that volume are replicated online. This is the normal state.
StartingSyncS
Full synchronization, initiated by the administrator, is just starting. The next possible states are: SyncSource or PausedSyncS.
StartingSyncT
Full synchronization, initiated by the administrator, is just starting. Next state: WFSyncUUID.
WFBitMapS
Partial synchronization is just starting. Next possible states: SyncSource or PausedSyncS.
WFBitMapT
Partial synchronization is just starting. Next possible state: WFSyncUUID.
WFSyncUUID
Synchronization is about to begin. Next possible states: SyncTarget or PausedSyncT.
SyncSource
Synchronization is currently running, with the local node being the source of synchronization.
SyncTarget
Synchronization is currently running, with the local node being the target of synchronization.
PausedSyncS
The local node is the source of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.
PausedSyncT
The local node is the target of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.
VerifyS
On-line device verification is currently running, with the local node being the source of verification.
VerifyT
On-line device verification is currently running, with the local node being the target of verification.
Ahead
Data replication was suspended, since the link cannot cope with the load. This state is enabled by the 'on-congestion' configuration option (see Configuring congestion policies and suspended replication).
Behind
Data replication was suspended by the peer, since the link cannot cope with the load. This state is enabled by the 'on-congestion' configuration option on the peer node (see Configuring congestion policies and suspended replication).
Diskless
No local block device has been assigned to the DRBD driver. This may mean that the resource has never attached to its backing device, that it has been manually detached using drbdadm detach, or that it automatically detached after a lower-level I/O error.
Inconsistent
The data is inconsistent. This status occurs immediately upon creation of a new resource, on both nodes (before the initial full sync). Also, this status is found in one node (the synchronization target) during synchronization.
Outdated
Resource data is consistent, but outdated.
DUnknown
This state is used for the peer disk if no network connection is available.
Consistent
Consistent data of a node without connection. When the connection is established, it is decided whether the data is UpToDate or Outdated.
UpToDate
Consistent, up-to-date state of the data. This is the normal state.
Attaching
Transient state while reading metadata.
Detaching
Transient state while detaching and waiting for ongoing IOs to complete.
Failed
Transient state following an I/O failure report by the local block device. Next state: Diskless. Note: Despite the name, this is rarely an actual issue.
Negotiating
Transient state when an Attach is carried out on an already-Connected DRBD device.
Primary
The resource is currently in the primary role, and may be read from and written to. This role only occurs on one of the two nodes, unless dual-primary mode is enabled.
Secondary
The resource is currently in the secondary role. It normally receives updates from its peer (unless running in disconnected mode), but may neither be read from nor written to. This role may occur on one or both nodes.
Unknown
The resource's role is currently unknown. The local resource role never has this status. It is only displayed for the peer's resource role, and only in disconnected mode.