* Updated DRBD->delete_resource() to return success if asked to delete a non-existent resource (as can happen when partial anvil-delete-server runs are re-run). Sketched after this list.
* Reworked DRBD->get_next_resource() to pull the numbers already in use from the database, and to no longer do that increments-of-three nonsense. Avoidable complexity. Also added a call to Cluster->get_anvil_uuid() if the 'anvil_uuid' parameter wasn't passed. See the sketch below.
* Updated Database->get_host_from_uuid() and ->get_hosts() to now take an 'include_deleted' parameter and default to not returning deleted hosts. This fixed issues where anvil-{delete,provision}-server calls could assign jobs to now-deleted hosts with reused host names (usage sketched below).
* Updated anvil-delete-server to print log entries to STDOUT. Also updated it to not wait for the shutdown of a server in pacemaker to complete, and instead to destroy it after calling pacemaker's resource stop. It now also checks whether the server being deleted is already out of pacemaker and, if so, skips that step and directly tries to destroy the server, if it's running. The new flow is sketched below.
* Updated anvil-provision-server to force 'peer_mode' runs to pull their TCP port and DRBD minor numbers from the job. This fixes a bug where the same resource on two machines could use different TCP ports (sketched below).
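The delete_resource() change boils down to an early-success guard. A minimal sketch, assuming the DRBD config has already been parsed into the module's data hash; the hash path and parameter handling here are illustrative, not the module's actual internals:

    sub delete_resource
    {
        my ($anvil, $parameter) = @_;
        my $resource = defined $parameter->{resource} ? $parameter->{resource} : "";

        # Already gone? Then a partial 'anvil-delete-server' run probably
        # removed it before dying, so treat this delete as a success.
        if (not exists $anvil->data->{drbd}{config}{resource}{$resource})
        {
            return(0);
        }

        # ... normal teardown (drbdadm down, wipe, config removal) follows ...
    }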
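The get_next_resource() rework is simply "lowest free number wins". A sketch under the assumption that the minor numbers recorded in the database have already been collected into @used_minors (the actual query against the scan-drbd data is omitted):

    # Fall back to Cluster->get_anvil_uuid() when the caller didn't say
    # which Anvil! we're working with.
    my $anvil_uuid = $parameter->{anvil_uuid} ? $parameter->{anvil_uuid} : $anvil->Cluster->get_anvil_uuid();

    # Take the first minor number nothing else is using. No increments of
    # three, just the next free value.
    my %used       = map { $_ => 1 } @used_minors;
    my $next_minor = 0;
    $next_minor++ while exists $used{$next_minor};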
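The 'include_deleted' parameter is opt-in, so existing callers now see only live hosts. A hedged usage sketch (the full parameter lists of these methods may differ):

    # Default behaviour now skips deleted hosts, so a reused host name can
    # no longer resolve to a stale entry and have a job assigned to it.
    my $host_name = $anvil->Database->get_host_from_uuid({host_uuid => $host_uuid});

    # Callers that really do want deleted hosts must ask explicitly.
    my $any_name = $anvil->Database->get_host_from_uuid({
        host_uuid       => $host_uuid,
        include_deleted => 1,
    });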
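The reworked anvil-delete-server order of operations, roughly; 'pcs resource disable' stands in for the exact stop call the tool makes, and error handling is trimmed:

    if ($server_is_in_pacemaker)
    {
        # Ask pacemaker to stop the resource, but don't block waiting for
        # the guest's shutdown to complete.
        $anvil->System->call({shell_call => "pcs resource disable ".$server_name});
    }

    # Whether or not pacemaker knew about the server, force it off now if
    # it's still running, then carry on with the rest of the deletion.
    $anvil->System->call({shell_call => "virsh destroy ".$server_name});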
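Finally, the anvil-provision-server fix; in 'peer_mode', the TCP port and DRBD minor now come from the job data, so both nodes build the resource identically. The job-data keys and return style below are made up for the sketch:

    if ($peer_mode)
    {
        # The peer reuses the values the first node recorded in the job.
        # Calculating locally is what let the two nodes disagree on ports.
        $drbd_minor = $anvil->data->{job}{drbd_minor};
        $tcp_port   = $anvil->data->{job}{drbd_tcp_port};
    }
    else
    {
        # Only the first node asks for the next free values; they travel
        # to the peer inside the job data.
        ($drbd_minor, $tcp_port) = $anvil->DRBD->get_next_resource({anvil_uuid => $anvil_uuid});
    }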
Signed-off-by: Digimer <digimer@alteeve.ca>
This is the Anvil! in which we're looking for the next free resources. It's required, but generally it doesn't need to be specified as we can find it via C<< Cluster->get_anvil_uuid() >>.
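So, on a node that is a cluster member, these two calls should behave the same (return style illustrative):

    # Explicit; needed where the Anvil! can't be inferred from the cluster.
    my $explicit = $anvil->DRBD->get_next_resource({anvil_uuid => $anvil_uuid});

    # Implicit; the method calls Cluster->get_anvil_uuid() internally.
    my $implicit = $anvil->DRBD->get_next_resource();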
<keyname="error_0225">Unable to delete the server resource: [#!variable!server_name!#] as this node is not (yet) a full member of the cluster.</key>
<keyname="error_0225">Unable to delete the server resource: [#!variable!server_name!#] as this node is not (yet) a full member of the cluster.</key>
<keyname="error_0226">It looks like to removal of the server resource: [#!variable!server_name!#] failed. The return code should have been '0', but: [#!variable!return_code!#] was returned. The 'pcs' command output, if any, was: [#!variable!output!#].</key>
<keyname="error_0226">It looks like to removal of the server resource: [#!variable!server_name!#] failed. The return code should have been '0', but: [#!variable!return_code!#] was returned. The 'pcs' command output, if any, was: [#!variable!output!#].</key>
<keyname="error_0227">It looks like to removal of the server resource: [#!variable!server_name!#] failed. Unsafe to proceed with the removal of the server. Please check the logs for more information.</key>
<keyname="error_0227">It looks like to removal of the server resource: [#!variable!server_name!#] failed. Unsafe to proceed with the removal of the server. Please check the logs for more information.</key>
<keyname="error_0228">Unable to delete the resource: [#!variable!resource!#] because it wasn't found in DRBD's config.</key>
<keyname="error_0228">Unable to delete the resource: [#!variable!resource!#] because it wasn't found in DRBD's config. This can happen is a previous delete partially completed, in which case this is not a problem.</key>
<keyname="error_0229">One or more peers need us, and we're not allowed to wait. Deletion aborted.</key>
<keyname="error_0229">One or more peers need us, and we're not allowed to wait. Deletion aborted.</key>
<keyname="error_0230">The shell call: [#!variable!shell_call!#] was expected to return '0', but instead the return code: [#!variable!return_code!#] was received. The output, if any, was: [#!variable!output!#].</key>
<keyname="error_0230">The shell call: [#!variable!shell_call!#] was expected to return '0', but instead the return code: [#!variable!return_code!#] was received. The output, if any, was: [#!variable!output!#].</key>
<keyname="error_0231">This host is not an Anvil! node or DR host, unable to migrate servers.</key>
<keyname="error_0231">This host is not an Anvil! node or DR host, unable to migrate servers.</key>
@@ -319,6 +319,8 @@ Output (if any):
<keyname="error_0234">Unable to find the target host to migrate to the job UUID: [#!variable!job_uuid!#].</key>
<keyname="error_0234">Unable to find the target host to migrate to the job UUID: [#!variable!job_uuid!#].</key>
<keyname="error_0235">The migration target host: [#!variable!target_host_uuid!#] is either invalid, or doesn't match one of the nodes in this Anvil! system.</key>
<keyname="error_0235">The migration target host: [#!variable!target_host_uuid!#] is either invalid, or doesn't match one of the nodes in this Anvil! system.</key>
<keyname="error_0236">There appears to be no resource data in the database for the host: [#!variable!host_name!#]. Has ScanCore run and, specifically, has 'scan-hardware' run yet? Unable to provide available resources for this Anvil! system.</key>
<keyname="error_0236">There appears to be no resource data in the database for the host: [#!variable!host_name!#]. Has ScanCore run and, specifically, has 'scan-hardware' run yet? Unable to provide available resources for this Anvil! system.</key>
<keyname="error_0237">The resource name: [#!variable!resource_name!#] already exists, and 'force_unique' is set. This is likely a name conflict, returning '!!error!!'.</key>
<keyname="error_0238">This node is not yet fully in the cluster. Sleeping for a bit, then we'll exit. The job will try again shortly after.</key>
<!-- Files templates -->
<!-- NOTE: Translating these files requires an understanding of which lines are translatable -->
@@ -657,6 +659,9 @@ It should be provisioned in the next minute or two.</key>
<keyname="job_0218">Manually calling 'scan-drbd' to ensure that the new agent is recorded.</key>
<keyname="job_0218">Manually calling 'scan-drbd' to ensure that the new agent is recorded.</key>
<keyname="job_0219">The server name: [#!variable!server_name!#] is already used by another server.</key>
<keyname="job_0219">The server name: [#!variable!server_name!#] is already used by another server.</key>
<keyname="job_0220">Deleting the server's definition file: [#!variable!file!#]...</key>
<keyname="job_0220">Deleting the server's definition file: [#!variable!file!#]...</key>
<keyname="job_0221">The server: [#!variable!server_name!#] was not found in the cluster configuration. This can happen if a server was partially deleted and we're trying again.</key>
<keyname="job_0222">Preparing to delete the server: [#!variable!server_name!#].</key>
<keyname="job_0223">Using virsh to destroy (force off) the server: [#!variable!server_name!#], if it is still running.</key>