Alteeve
Anvil!
Striker
ScanCore
Alteeve's Niche! Inc., Toronto, Ontario, Canada]]>
Anvil!]]>
Node
DR Host
Unknown Type
Red Hat Enterprise Linux
CentOS Linux
CentOS Stream Linux
[ #!string!brand_0004!# ] - Critical level alert from #!variable!host_name!#
[ #!string!brand_0004!# ] - Warning level alert from #!variable!host_name!#
[ #!string!brand_0004!# ] - Notice level alert from #!variable!host_name!#
[ #!string!brand_0004!# ] - Informational level alert from #!variable!host_name!#
--
This alert email was sent from the machine:
- #!variable!host_name!#
It was generated by #!string!brand_0004!#, which is part of the #!string!brand_0002!# Intelligent Availability platform running on the host above.
This email was *not* sent by #!string!brand_0001!#. If you do not know why you are receiving this email, please speak to your system's administrator.
If you need any assistance, please feel free to contact #!string!brand_0001!# (https://alteeve.com) and we will do our best to assist.
There are not enough network interfaces on this machine. You have: [#!variable!interface_count!#] interface(s), and you need at least: [#!variable!required_interfaces_for_single!#] interfaces to connect to the requested networks (one for Back-Channel and one for each Internet-Facing network).
The local system UUID can't be read yet. This might be because the system is brand new and/or ScanCore hasn't run yet. Please try again in a minute.
None of the databases are accessible, unable to proceed. Please be sure that 'anvil-daemon' is enabled and running on the database machine(s).
The gateway address doesn't match any of your networks.
This program must run with 'root' level privileges.
No password was given, exiting.
The passwords don't match, exiting.
Failed to read the file: [#!variable!file!#]. It doesn't appear to exist.
Failed to add the target: [#!variable!target!#]:[#!variable!port!#]'s RSA fingerprint to: [#!variable!user!#]'s list of known hosts.
There was a problem adding the local machine to the: [#!data!path::configs::anvil.conf!#] file. Please see the log for details.
Something went wrong while trying to update the password. The return code was: [#!variable!return_code!#], but '0' was expected.
host name has to be set to a valid value.]]>
A user name must be set. This is usually 'admin'.
You must set a password. There are no complexity rules, but a long password is strongly recommended.
A DNS entry is bad. One or more IPv4 addresses can be specified, with a comma separating multiple IPs.
The IPv4 address assigned to: [#!variable!network!#] is invalid.
An interface to use in: [#!variable!network!# - Link #!variable!link!#] must be selected.
Network interfaces can only be selected once.
The gateway appears to have an invalid IPv4 address set.
The: [#!variable!field!#] field can't be empty.
The prefix needs to be set, and be between 1 and 5 alphanumeric characters long.
The: [#!variable!field!#] must be a positive integer.
There was a problem reading your session details. To be safe, you have been logged out. Please try logging back in.
It appears that your session has expired. To be safe, you have been logged out. Please try logging back in.
read_details: [#!variable!uuid!#] is not a valid UUID.]]>
read_details: [#!variable!uuid!#] was not found in the database.]]>
Login failed, please try again.
#!data!path::log::main!#] for details.]]>
#!variable!template!#] in the template file: [#!variable!file!#].]]>
#!variable!template!#] in the template file: [#!variable!file!#]. Details of the problem should be in: [#!data!path::log::main!#].]]>
The 'host-uuid': [#!variable!host_uuid!#] is not valid.
The '#!variable!switch!#' switch is missing and no pending job was found.
The job UUID was passed via '--job-uuid' but the passed in value: [#!variable!uuid!#] is not a valid UUID.
The job UUID passed via '--job-uuid': [#!variable!uuid!#] doesn't match a job in the database.
The update appears to have not completed successfully. The output was:
====
#!variable!output!#
====
parse_banged_string(), an infinite loop was detected while processing: [#!variable!message!#].]]>
The TCP port: [#!variable!port!#] is not valid.
').]]>
Logging out failed. The user's UUID wasn't passed and 'sys::users::user_uuid' wasn't set. Was the user already logged out?
Failed to install the Alteeve repo, unable to proceed.
No BCN interface found. Unable to configure the install target feature yet.
Failed to write or update the file: [#!variable!file!#]. Please see the system log for more information.
This is not a configured Striker dashboard, exiting.
[ Error ] - There was a problem downloading packages. The error was:
====
#!variable!error!#
====
This Striker system is not configured yet. This tool will not be available until it is.
Failed to start the Install Target feature. Got a non-zero return code when starting: [#!data!sys::daemon::dhcpd!#] (got: [#!variable!rc!#]).
Failed to stop the Install Target feature. Got a non-zero return code when stopping: [#!data!sys::daemon::dhcpd!#] (got: [#!variable!rc!#]).
A request to rename a file was made, but no file name was given.
A request to rename the file: [#!variable!file!#] was made, but the new name wasn't given. Was '--to X' given?
A request to rename the file: [#!variable!file!#] was made, but that file doesn't exist.
A request to delete a file was made, but no file name was given.
A request to delete the file: [#!variable!file!#] was received, but it is not under '/mnt/shared/'. This program can only work on or under that directory.
Failed!
A request to toggle the script flag was received, but no file name was given.
A request was made to rename the file: [#!variable!file!#] to: [#!variable!to!#], but a file or directory with that name already exists.
Failed to generate an RSA public key for the user: [#!variable!user!#]. The output, if any, is below:
====
#!variable!output!#
====
Failed to backup: [#!variable!file!#], skipping.
The file to be downloaded: [#!variable!file!#], already exists. Either remove it, or call again with '--overwrite'.
Something went wrong moving the downloaded file from the temporary location: [#!variable!source_file!#] to the output: [#!variable!target_file!#]. Useful errors may be above this message.
The download job with UUID: [#!variable!job_uuid!#] is not valid.
The download job with UUID: [#!variable!job_uuid!#] is already being handled by another process.
Something went wrong trying to download: [#!variable!packages!#]. The return code should have been '0', but: [#!variable!return_code!#] was received. Is a package missing upstream?
A request to activate the logical volume: [#!variable!path!#] was made, but that path doesn't exist or isn't a block device.
), unable to proceed.]]>
Something went wrong trying to write: [#!variable!file!#], unable to proceed.
Something went wrong trying to compile the C program: [#!variable!file!#], unable to proceed.
The job UUID was not passed via '--job-uuid' and no unclaimed job was found in the database.
The initialization target: [#!variable!target!#] is not accessible. Will keep trying...
There are no databases available. Will check periodically, waiting until one becomes available.
There was a problem adding our database to the target's anvil.conf file.
Unable to connect to the database, unable to read the details of the key to remove.
Did not find any offending keys on this host, exiting.
Job data not found for job_uuid: [#!variable!job_uuid!#].
No job UUID was passed.
The job_uuid: [#!variable!job_uuid!#] appears valid, but there was no job_data.
The state UUID: [#!variable!state_uuid!#] does not appear to be a valid UUID.
No (good) state UUIDs found, unable to run this job.
Unable to find a common network between the target and this machine. This shouldn't be possible, given we're able to talk to it. This is probably a program error.
The URL: [#!variable!url!#] is not supported. The URL must start with 'http://', 'https://' or 'ftp://'.
The requested URL: [#!variable!url!#] was not found on the remote server.
The requested URL: [#!variable!url!#] does not resolve to a known domain.
The requested URL: [#!variable!url!#] failed because the remote host refused the connection.
The requested URL: [#!variable!url!#] failed because there is no route to that host.
The requested URL: [#!variable!url!#] failed because the network is unreachable.
The requested URL: [#!variable!url!#] failed, access was forbidden (error 403).
The requested URL: [#!variable!url!#] failed, the file was not found on the source (error 404).
The requested URL: [#!variable!url!#] failed with HTTP error: [#!variable!error_code!#] (message: [#!variable!error_message!#]).
Aborting the download of: [#!variable!url!#] to: [#!variable!save_to!#]. The target file already exists and 'overwrite' was not set.
There was a problem downloading: [#!variable!url!#] to: [#!variable!file!#]. Aborting parsing of the OUI data.
The 'oui_mac_prefix': [#!variable!oui_mac_prefix!#] string doesn't appear to be a valid 6-byte hex string.
/' (subnet can be dotted-decimal or CIDR notation) or be 'bcn', 'sn', 'ifn' or a specific variant like 'bcn1', 'sn2', or 'ifn2'. Alternatively, do not use '--network X' at all and all networks this host is connected to will be scanned.]]>
Failed to create the archive directory: [#!variable!directory!#]. Skipping the archive process.
There was a problem writing out the records to file: [#!variable!file!#]. There may be more information in #!data!path::log::main!#. Skipping further attempts to archive: [#!variable!table!#].
Compression appears to have failed. The return code '0' was expected from the bzip2 call, but: [#!variable!return_code!#] was returned. The output, if any, was: [#!variable!output!#].
Compression appears to have failed. The output file: [#!variable!out_file!#] was not found.
Failed to check the existence and size of the file: [#!variable!file!#] on the target: [#!variable!target!#] as: [#!variable!remote_user!#]. The error (if any) was: [#!variable!error!#] and the output (if any) was: [#!variable!output!#].
The file: [#!variable!file!#] wasn't found.
The parameter get_company_from_oui->mac must be a valid MAC address or be in the format 'xx:xx:xx'. Received: [#!variable!mac!#].
The file: [#!variable!file!#] was not found.
find_matches() was given the hash key: [#!variable!key!#], but it does not reference a hash. Are any IPs associated with this target? The caller was: [#!variable!source!#:#!variable!line!#].]]>
Failed to reconnect after reconfiguring the network. Will reboot in hopes of coming up cleanly.
The 'recipient_level': [#!variable!recipient_level!#] is invalid. It should be '0', '1', '2', or '3'.
The 'notification_alert_level': [#!variable!notification_alert_level!#] is invalid. It should be '0', '1', '2', or '3'.
The 'notification_uuid': [#!variable!notification_uuid!#] was not found in the database.
[ Error ] - There was a problem parsing the unified metadata:
===========================================================
#!variable!xml_body!#
===========================================================
The error was:
===========================================================
#!variable!eval_error!#
===========================================================
]]]>
The unified metadata file: [#!data!path::data::fences_unified_metadata!#] was not found. There may have been a problem creating it.
This row's modified_date wasn't the first column returned in query: [#!variable!query!#]
This row's UUID column: [#!variable!uuid_column!#] wasn't the second column returned in query: [#!variable!query!#]
This is a CentOS machine, and tried to move the directory: [#!variable!source!#] to: [#!variable!target!#], but that rename failed.
The domain name: [#!variable!name!#] does not appear to be valid.
The IP address: [#!variable!ip!#] does not appear to be valid.
The IP given for the network: [#!variable!name!#] does not appear to be the network base IP. Did you mean: [#!variable!ip!#]?
The IP given for the network: [#!variable!network!#] with the subnet mask: [#!variable!subnet!#] does not appear to be a valid network range.
The gateway: [#!variable!gateway!#] does not appear to be in the network: [#!variable!network!#]/[#!variable!subnet!#].
An NTP entry is bad. One or more IPv4 addresses can be specified, with a comma separating multiple IPs.
The MTU needs to be a positive integer equal to or greater than '512' bytes.
The IP address: [#!variable!ip!#] does not appear to be within any of the configured networks.
The IPv4 address assigned to the IPMI interface on: [#!variable!network!#] is invalid.
The IP address: [#!variable!ip!#] does not appear to be in the network: [#!variable!network!#].
I was asked to delete an entry from: [#!variable!table!#] but neither the name nor the UUID was passed.
The host UUID: [#!variable!uuid!#] was set as the value for: [#!variable!column!#], but that host doesn't appear to exist.
Unable to connect to any database, unable to read the job details.
The answer: [#!variable!answer!#] is invalid. Please try again.
The host UUID: [#!variable!host_uuid!#] was not found. Has it already been purged?
Failed to remove the symlink: [#!variable!symlink!#]!
Failed to read or parse the CIB! Is pacemaker running?
Failed to start the daemon: [#!variable!daemon!#] on the local system, unable to boot the server.
Failed to start the daemon: [#!variable!daemon!#] on [#!variable!host!#], unable to boot the server.
System->test_ipmi() was called with an invalid 'lanplus' parameter. It must be 'yes', 'no', 'yes-no' or 'no-yes'. Received: [#!variable!lanplus!#].
All attempts to change the IPMI user: [#!variable!user_name!#] (number: [#!variable!user_number!#]) failed. The last try's output (if any) was: [#!variable!output!#] (return code: [#!variable!return_code!#]).
The system call: [#!variable!shell_call!#] failed. The output (if any) was: [#!variable!output!#] (return code: [#!variable!return_code!#]).
The DRBD global common config file: [#!data!path::configs::global-common.conf!#] doesn't exist, unable to update it.
Failed to parse the JSON string:
===========================================================
#!variable!json!#
===========================================================
The error was:
===========================================================
#!variable!error!#
===========================================================
There appears to be no mail server in the database with the UUID: [#!variable!uuid!#].
The alert level: [#!variable!alert_level!#] is invalid. Valid values are '1' / 'critical', '2' / 'warning', '3' / 'notice', and '4' / 'info'.
Failed to write the email alert file: [#!variable!file!#]! Unable to process the alert. Check the logs above for possible reasons for the error.
I was asked to change the preferred host node of the server: [#!variable!server!#] to: [#!variable!node!#], but that doesn't match the name of either node in the cluster. The node names are: [#!variable!node1!#] and [#!variable!node2!#].
Unable to boot the server: [#!variable!server!#] as the cluster isn't running or there was a problem parsing the cluster CIB.
Unable to boot the server: [#!variable!server!#] as this host is not a node.
Unable to boot the server: [#!variable!server!#] as this node is not (yet) a full member of the cluster.
Unable to set the preferred host of the server: [#!variable!server!#] to: [#!variable!node!#] as this node is not (yet) a full member of the cluster.
Unable to boot the server: [#!variable!server!#] as this server was not found in the cluster information base (CIB).
Unable to shut down the server: [#!variable!server!#] as this host is not a node.
Unable to shut down the server: [#!variable!server!#] as the cluster isn't running or there was a problem parsing the cluster CIB.
Unable to shut down the server: [#!variable!server!#] as this node is not (yet) a full member of the cluster.
Unable to shut down the server: [#!variable!server!#] as this server was not found in the cluster information base (CIB).
Unable to migrate the server: [#!variable!server!#] as this host is not a node.
Unable to migrate the server: [#!variable!server!#] as the cluster isn't running or there was a problem parsing the cluster CIB.
Unable to migrate the server: [#!variable!server!#] as this node is not (yet) a full member of the cluster.
Unable to migrate the server: [#!variable!server!#] as the peer node is not (yet) a full member of the cluster.
Unable to migrate the server: [#!variable!server!#] as this server was not found in the cluster information base (CIB).
Unable to read the stat information for the file: [#!variable!file_path!#], the file doesn't appear to exist.
The '#!variable!name!#': [#!variable!uuid!#] is not valid.
Unable to mark the server with UUID: [#!variable!uuid!#] as "deleted" because it doesn't appear to exist in the database in the first place.
The 'anvil_uuid': [#!variable!anvil_uuid!#] is invalid.
The MIB file: [#!variable!mib!#] doesn't exist or can't be read.
The date: [#!variable!date!#] is not in either the 'mm/dd/yy' or 'mm/dd/yyyy' formats. Can't convert to 'yyyy/mm/dd'.
The temperature: [#!variable!temperature!#] does not appear to be valid.
The resource: [#!variable!resource!#] in the config file: [#!variable!file!#] was found, but does not appear to be a valid UUID: [#!variable!uuid!#].
The resource: [#!variable!resource!#] in the config file: [#!variable!file!#] was found, and we were asked to replace the 'scan_drbd_resource_uuid', but the new UUID: [#!variable!uuid!#] is not a valid UUID.
The 'fence_ipmilan' command: [#!variable!command!#] does not appear to be valid.
The Anvil! UUID: [#!variable!anvil_uuid!#] doesn't appear to exist in the database.
Unable to move an uploaded file from the: [#!data!path::directories::shared::incoming!#] directory as a file name wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Unable to move the uploaded file: [#!variable!file!#], it doesn't appear to exist.
Unable to move the uploaded file: [#!variable!file!#] to: [#!variable!target_directory!#]. The cause of the failure should be in the logs.
Unable to pull a file because a file UUID wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Unable to pull a file as the file UUID: [#!variable!file_uuid!#] is either invalid or doesn't exist in the database.
Unable to pull the file: [#!variable!file!#], we're not an Anvil! member.
The downloaded file's md5sum: [#!variable!local_md5sum!#] doesn't match what is expected: [#!variable!file_md5sum!#]. The file has been removed. We'll wait for a minute and then exit, and the download will be attempted again.
Something went wrong and the file wasn't downloaded. More information should be in the logs. We'll wait for a minute and then exit, and the download will be attempted again.
Unable to purge a file because a file UUID wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Unable to purge a file as the file UUID: [#!variable!file_uuid!#] is either invalid or doesn't exist in the database.
Failed to delete: [#!variable!file_path!#]. The error returned was: [#!variable!error!#]. There may be more details in the logs.
Unable to rename a file because a file UUID wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Unable to rename a file as the file UUID: [#!variable!file_uuid!#] is either invalid or doesn't exist in the database.
Unable to rename the file: [#!variable!file_name!#] because the new file name wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Moving the file failed. The problem should be logged. We'll sleep for a minute and then exit. We'll try again after that.
Unable to check the file mode because a file UUID wasn't set (or failed to parse) from the 'job_data' in the job: [#!variable!job_uuid!#].
Unable to check the file mode because the file UUID: [#!variable!file_uuid!#] is either invalid or doesn't exist in the database.
Unable to find the new server name from the job UUID: [#!variable!job_uuid!#].
Unable to get the number of CPU cores for the new server: [#!variable!server_name!#] from the job UUID: [#!variable!job_uuid!#].
The new server: [#!variable!server_name!#] was asked to have: [#!variable!requested_cores!#] CPU cores, but only: [#!variable!available_cores!#] are available.
Unable to get the amount of RAM for the new server: [#!variable!server_name!#] from the job UUID: [#!variable!job_uuid!#].
The new server: [#!variable!server_name!#] was asked to have: [#!variable!requested_ram!#] RAM, but only: [#!variable!available_ram!#] is available.
Unable to get the storage group UUID for the new server: [#!variable!server_name!#] from the job UUID: [#!variable!job_uuid!#].
Unable to get the amount of storage to use for the new server: [#!variable!server_name!#] from the job UUID: [#!variable!job_uuid!#].
The new server: [#!variable!server_name!#] was asked to have: [#!variable!requested_size!#] disk space, but only: [#!variable!available_size!#] is available on the requested storage group: [#!variable!storage_group!#].
Unable to get the install ISO to use for the new server: [#!variable!server_name!#] from the job UUID: [#!variable!job_uuid!#].
The install disc ISO: [#!variable!install_iso!#] to be used for the new server: [#!variable!server_name!#] wasn't found on this system.
The driver disc ISO: [#!variable!install_iso!#] to be used for the new server: [#!variable!server_name!#] wasn't found on this system.
The new server's name: [#!variable!server_name!#] is already in use. Has this job already run?
The storage group UUID: [#!variable!storage_group_uuid!#] wasn't found in the database.
The new DRBD resource will need a "minor" number and a TCP port. One or both are not provided or are invalid.
Failed to create the logical volume: [#!variable!lv_path!#]. Without this, we can't create the replicated storage backing the server, aborting.
Command: ... [#!variable!lv_create!#]
Return Code: [#!variable!return_code!#]
Output (if any):
====
#!variable!output!#
====
Failed to write the DRBD resource file: [#!variable!drbd_res_file!#]. The cause of the failure should be in the logs.
Failed to load the DRBD resource file: [#!variable!drbd_res_file!#]. Tried dumping the new DRBD config and the new resource wasn't found.
It appears that creating the DRBD meta data on the new logical volume(s) failed. Expected the return code '0' but got: [#!variable!return_code!#]. The command returned: [#!variable!output!#].
It appears that the initial forced primary role to initialize the new DRBD resource failed. Expected the return code '0' but got: [#!variable!return_code!#]. The command returned: [#!variable!output!#].
The logical volume behind the resource: [#!variable!resource!#] existed, and after starting, the resource has the disk state 'diskless'. This is likely because the LV doesn't have DRBD meta-data. We can't (safely) create it. Please either remove the LV backing this resource or create the meta data manually.
Failed to set the resource: [#!variable!resource!#] disk state to 'UpToDate'. After the attempt, the disk state is: [#!variable!disk_state!#].
No operating system type was found for the server: [#!variable!server_name!#] in the job: [#!variable!job_uuid!#].
The call to create the server appears to have failed. The attempt to parse the server's definition failed. The command was run as a background process so exact error details are not available here. Please check the logs for more details. The call used to create the server was:
====
#!variable!shell_call!#
====
The call to create the new server appears to have failed. It hasn't shown up as running after 10 seconds. The status, if any, was last seen as: [#!variable!status!#].
Failed to add the server: [#!variable!server_name!#] because we failed to parse the CIB. Is the cluster running?
Failed to add the server: [#!variable!server_name!#] because we are not a full cluster member.
Failed to add the server: [#!variable!server_name!#] because it appears to already exist in the cluster.
Failed to add the server: [#!variable!server_name!#]. After the commands to add it ran, it was still not found in the cluster.
It looks like something went wrong while adding the server to the cluster. There should be more information in the logs.
It looks like something went wrong while removing the server from the cluster. There should be more information in the logs.
This host is not an Anvil! node or DR host, unable to delete servers.
Unable to connect to any databases, unable to continue.
Unable to find the server uuid to delete from the job UUID: [#!variable!job_uuid!#].
Unable to find a server name to match the server UUID: [#!variable!server_uuid!#].
This tool is only designed to migrate servers between nodes, and this is a DR host.
The cluster does not appear to be running, unable to delete a server at this time. We'll sleep for a bit and then exit, and then try again.
The server: [#!variable!server_name!#] appears to have failed to stop.
Unable to delete the server resource: [#!variable!server_name!#] as the cluster isn't running or there was a problem parsing the cluster CIB.
Unable to delete the server resource: [#!variable!server_name!#] as this node is not (yet) a full member of the cluster.
It looks like the removal of the server resource: [#!variable!server_name!#] failed. The return code should have been '0', but: [#!variable!return_code!#] was returned. The 'pcs' command output, if any, was: [#!variable!output!#].
It looks like the removal of the server resource: [#!variable!server_name!#] failed. It is unsafe to proceed with the removal of the server. Please check the logs for more information.
Unable to delete the resource: [#!variable!resource!#] because it wasn't found in DRBD's config. This can happen if a previous delete partially completed, in which case this is not a problem.
One or more peers need us, and we're not allowed to wait. Deletion aborted.
The shell call: [#!variable!shell_call!#] was expected to return '0', but instead the return code: [#!variable!return_code!#] was received. The output, if any, was: [#!variable!output!#].
This host is not an Anvil! node or DR host, unable to migrate servers.
Unable to find the server to migrate in the job UUID: [#!variable!job_uuid!#].
The cluster does not appear to be running, unable to migrate servers at this time. We'll sleep for a bit and then exit, and then try again.
Unable to find the target host to migrate to from the job UUID: [#!variable!job_uuid!#].
The migration target host: [#!variable!target_host_uuid!#] is either invalid, or doesn't match one of the nodes in this Anvil! system.
There appears to be no resource data in the database for the host: [#!variable!host_name!#]. Has ScanCore run and, specifically, has 'scan-hardware' run yet? Unable to provide available resources for this Anvil! system.
The resource name: [#!variable!resource_name!#] already exists, and 'force_unique' is set. This is likely a name conflict, returning '!!error!!'.
This node is not yet fully in the cluster. Sleeping for a bit, then we'll exit. The job will try again shortly after.
call() was called without a target being set. Other values passed in that may help locate the source of this call:
- remote_user: [#!variable!remote_user!#]
- port: ...... [#!variable!port!#]
- close: ..... [#!variable!close!#]
- secure: .... [#!variable!secure!#]
- shell_call: [#!variable!shell_call!#]
]]>
Usage: [#!variable!program!# --config /path/to/config].
The file: [#!variable!file!#] doesn't appear to be valid.
Failed to find a matching entry in the file: [#!variable!file!#]. Please make sure the MAC addresses in the config are accurate for these systems.
Missing variable: [#!variable!variable!#] from config file: [#!data!switches::config!#].
The length of the prefix: [#!variable!prefix!#] is: [#!variable!length!#]. The prefix must be no more than 5 characters long.
The DNS IP: [#!variable!ip!#] is invalid.
The gateway IP: [#!variable!ip!#] is invalid.
The variable: [#!variable!variable!#] is invalid: [#!variable!value!#].
Failed to add the UPS: [#!variable!ups_name!#] at: [#!variable!aups_ip_address!#] using the agent: [#!variable!ups_agent!#]!
Failed to add the fence device: [#!variable!fence_name!#] using the agent: [#!variable!fence_agent!#]!
This machine is an active cluster member, aborting job.
We were asked to call 'drbdadm' but it doesn't exist. Is DRBD installed?
The call to 'drbdadm dump-xml' returned the exit code: [#!variable!return_code!#].
[ Warning ] - Failed to parse the DRBD XML. The XML read was:
========
#!variable!xml!#
========
The error was:
========
#!variable!error!#
========
Failed to read the lvm.conf file. The reason why should be logged above.
Failed to write the lvm.conf file. The reason why should be logged above.
The attempt to start the cluster appears to have failed. The return code '0' was expected, but: [#!variable!return_code!#] was received. The output was:
====
#!variable!output!#
====
' or '--server-uuid .]]>
This host is not a node or DR host, unable to boot servers.
The definition file: [#!variable!definition_file!#] doesn't exist, unable to boot the server.
This host is not in an Anvil! system, aborting.
The definition file: [#!variable!definition_file!#] exists, but the server: [#!variable!server!#] does not appear to be in the cluster. Unable to boot it.
The server: [#!variable!server!#] status is: [#!variable!status!#]. We can only boot servers that are off, so it will not be booted.
' or '--server-uuid .]]>
This host is not a node or DR host, unable to shut down servers.
This feature isn't enabled on DR hosts yet.
The server: [#!variable!server!#] does not appear to be in the cluster. Unable to shut it down.
The server: [#!variable!server!#] failed to boot. The reason why should be in the logs.
The server: [#!variable!server!#] failed to shut down. The reason why should be in the logs.
The server UUID: [#!variable!server_uuid!#] is not valid.
' or '--server-uuid .]]>
This host is not a node, unable to migrate servers.
'.]]>
The target: [#!variable!target!#] appears to be invalid. The --target switch needs to be set to 'peer', 'local', '#!variable!local_name!#' or '#!variable!peer_name!#'.
The server: [#!variable!server!#] failed to migrate. The reason why should be in the logs.
The attempt to start the servers appears to have failed. The return code '0' was expected, but: [#!variable!return_code!#] was received. The output was:
====
#!variable!output!#
====
' or '--server-uuid .]]>
Could not find the server: [#!variable!server!#] on this Anvil! in the database.
This host is not a node, unable to rename the server from here.
'. The new name can not contain spaces.]]>
The server wasn't found in the cluster configuration... Did a previous attempt to rename fail? Aborting.
Failed to read the file: [#!variable!file!#] from the host: [#!variable!host!#].
Failed to rename the old LV: [#!variable!old_lv!#] to: [#!variable!new_lv!#] on the host: [#!variable!host_name!#]! Aborting.
Failed to delete the file: [#!variable!file!#]. The error, if any, was: [#!variable!error!#].
Failed to delete the file: [#!variable!file!#] on the host: [#!variable!target!#].
Failed to delete the file: [#!variable!file!#] on the host: [#!variable!target!#]. This might be a connection issue. The call's error was: [#!variable!error!#] output was: [#!variable!output!#].
Failed to write the file: [#!variable!file!#] on the host: [#!variable!target!#].
Failed to add the server: [#!variable!server_name!#] to the cluster. The return code from the pcs command was: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] already exists on this Anvil!. Please use a different new name.
' or '--host-uuid UUID'.]]>
The attempt to boot the machine failed! The output, if anything, was: [#!variable!output!#].
The attempt to check the power status of the machine failed. The output, if anything, was: [#!variable!output!#].
There is no IPMI information or fence options available to boot this machine, unable to proceed.
The host: [#!variable!host_name!#] is not in an Anvil!, unable to parse fence methods.
The Anvil!: [#!variable!anvil_name!#] does not have a recorded CIB in the database, unable to parse fence methods.
Either we failed to find a fence method, or all fence methods failed to boot this machine, unable to proceed.
check_stonith_config() only runs on nodes, and this host is a: [#!variable!host_type!#].]]>
This host is not in a cluster, or it's in the cluster but not ready yet. Either way, unable to check the config.
Failed to find the install manifest for the: [#!variable!anvil_name!#] Anvil! system. Unable to check or update the fence configuration.
Failed to parse the install manifest with UUID: [#!variable!manifest_uuid!#]. Unable to check or update the fence configuration.
The passed in Anvil! UUID: [#!variable!anvil_uuid!#] was not found in the database.
The passed in host UUID: [#!variable!host_uuid!#] was not found in the database.
Failed to parse the request body: [#!variable!request_body_string!#]. Reason: [#!variable!json_decode_error!#]
Unable to connect to the database, unable to manage a server at this time.
Unable to connect to the database, unable to provision a server at this time.
Failed to perform requested task(s) because the requester is not authenticated.
,manifest_uuid=,anvil_uuid='. Either the parse failed, or the data was somehow invalid.]]>
I tried to change the fencing preferred node to: [#!variable!prefer!#], but it doesn't appear to have worked. The preferred node is: [#!variable!current!#] ('--' means there is no preferred node)
I tried to remove the fence delay from the node: [#!variable!node!#], but it doesn't appear to have worked. The preferred node is: [#!variable!current!#] ('--' means there is no preferred node)
Failed to find the UUID column for the table: [#!variable!table!#].
The 'set_to' parameter: [#!variable!set_to!#] is invalid. It must be 'yes' or 'no'.
While opening VNC pipe, failed to get server VM information with server UUID [#!variable!server_uuid!#] and host UUID [#!variable!host_uuid!#].
While opening VNC pipe, failed to get server VM VNC information with server UUID [#!variable!server_uuid!#] and host UUID [#!variable!host_uuid!#].
While opening VNC pipe, failed to get websockify instance information with server UUID [#!variable!server_uuid!#] and host UUID [#!variable!host_uuid!#].
While opening VNC pipe, failed to get SSH tunnel instance information with server UUID [#!variable!server_uuid!#] and host UUID [#!variable!host_uuid!#].
While closing VNC pipe, failed to get VNC pipe information with server UUID [#!variable!server_uuid!#] and host UUID [#!variable!host_uuid!#].
The server UUID: [#!variable!server_uuid!#] is not valid or was not found in the database.
The Anvil! name: [#!variable!anvil_name!#] was not found in the database.
The Anvil! UUID: [#!variable!anvil_uuid!#] is not valid or was not found in the database.
There are no Anvil! systems yet in the database, nothing to do.
There are no servers yet in the database, nothing to do.
You need to specify the updated definition file with '--file /path/to/definition.xml'.
Your definition file: [#!variable!file!#] doesn't exist or couldn't be read.
The server name was not found in the new definition file.
The server UUID was not found (or is not valid) in the new definition file.
Failed to parse the XML in the new definition file. The error was:
====
#!variable!error!#
====
Y'.]]>
The server UUID: [#!variable!server_uuid!#] in the definition file wasn't found in the database, unable to update.
The new definition has changed the server's name from: [#!variable!current_name!#] to: [#!variable!new_name!#]. Changing the server's name must be done with the 'anvil-rename-server' tool.
[ Error ] - The IPMI BMC administrator (oem) user was not found. The output (if any) of the call: [#!variable!shell_call!#] was:
====
#!variable!output!#
====
Giving up.
This must be run on a node active in the cluster hosting the server being managed. Exiting.
This Anvil! does not seem to have a DR host. Exiting.
Failed to find an IP address we can use to access the DR host: [#!variable!host_name!#]. Has it been configured? Is it running? Exiting.
Failed to access the DR host: [#!variable!host_name!#] using the IP: [#!variable!ip_address!#]. Is it running? Exiting.
Failed to parse the CIB. Is this node in the cluster? Exiting.
We're not a full member of the cluster yet. Please try again once we're fully in. Exiting.
We can't set up a server to be protected unless both nodes are up, and the peer isn't at this time. Exiting.
We can't remove a server from DR unless both nodes are up, and the peer isn't at this time. Exiting.
'. Exiting.]]>
Failed to find the server: [#!variable!server!#] by name or UUID. Exiting.
The protocol: [#!variable!protocol!#] is invalid. Please use '--help' for more information.
The DR host: [#!variable!host_name!#] doesn't appear to be in the storage group: [#!variable!storage_group!#]. Unable to proceed.
We need: [#!variable!space_needed!# (#!variable!space_needed_bytes!# bytes)] from the storage group: [#!variable!storage_group!#], but only: [#!variable!space_on_dr!# (#!variable!space_on_dr_bytes!# bytes)] is available on DR. Unable to proceed.
[ Error ] - The check appears to have failed. Expected a return code of '0', but got: [#!variable!return_code!#]
The output, if any, was:
====
#!variable!output!#
====
- Restoring the old config now.
- The problematic new config has been saved as: [#!variable!file!#].
- The old config has been restored. Exiting.
- The logical volume: [#!variable!lv_path!#] creation failed. Unable to proceed.
Only the root user can load a database file and start the database.
[ Error ] - The 'pg_dump' call to backup the database failed. Expected a return code of '0', but got: [#!variable!return_code!#].
Full command called: [#!variable!shell_call!#]
The output, if any, was:
====
#!variable!output!#
====
Only the root user can backup a database.
[ Error ] - The 'dropdb' call to drop the database failed. Expected a return code of '0', but got: [#!variable!return_code!#].
Full command called: [#!variable!shell_call!#]
The output, if any, was:
====
#!variable!output!#
====
[ Error ] - The 'createdb' call to create the database failed. Expected a return code of '0', but got: [#!variable!return_code!#].
Full command called: [#!variable!shell_call!#]
The output, if any, was:
====
#!variable!output!#
====
Failed to load the database file: [#!variable!file!#]. Deleting it so it's not considered in the next load attempt.
Failed to read the kernel release on the host: [#!variable!target!#]. The return code was: [#!variable!return_code!#] (expected '0') and the release output, if any, was: [#!variable!output!#].
The program: [#!variable!program!#] is using: [#!variable!ram_used!#] (#!variable!ram_used_bytes!# Bytes). This is probably caused by a memory leak, so we will now exit so that systemctl can restart us. If this is happening repeatedly, please contact support.
This is not a Striker host.
There are no databases available, exiting.
Unable to find the Anvil! information for the Anvil! UUID: [#!variable!anvil_uuid!#].
Unable to find the DRBD config from either node in the Anvil! with the Anvil! UUID: [#!variable!anvil_uuid!#]. Has scan_drbd (as part of scancore) run on either node?
' to specify the alert level of the test message.]]>
There are two or more entries on the host: [#!variable!host!#] in the history table: [#!variable!table!#]! The duplicate modified_date and column UUID are: [#!variable!key!#] (time is UTC), and the query that exposed the duplicate was: [#!variable!query!#]. This is likely caused by two database writes where the 'modified_date' wasn't updated between writes.
[ Error ] - There was a problem purging records. The details of the problem should be in the logs.
The table: [#!variable!table!#] has an entry in the history schema that doesn't have a corresponding record in the public schema. This is likely a resync artifact of a deleted record. Purging the record: [#!variable!uuid_column!#:#!variable!column_uuid!#] from all databases.
[ Error ] - Failed to reconnect to the database, and now no connections remain.
' for a server that was not running.
The definition data passed in was:
====
#!variable!definition!#
====
]]>
[ Error ] - Failed to wipe and delete the logical volume: [#!variable!local_lv!#] that was volume number: [#!variable!volume!#] under the server: [#!variable!server!#].
There was a problem deleting: [#!variable!config_file!#]. The rest of the process completed successfully. Please manually remove this file if it still exists.
[ Error ] - Failed to connect the DRBD resource. Expected return code '0', but got: [#!variable!return_code!#]. The error output, if anything, was
====
#!variable!output!#
====
Cannot (dis)connect the server: [#!variable!server!#] as the resource config file: [#!variable!config_file!#] doesn't exist. Do you need to '--protect' it?
We're set to migrate servers (--stop-servers not used), but one or both nodes are not in the cluster, so migrations would fail. Aborting.
Long-throw requires a license, but the license file is not installed and '--license-file /path/to/drbd-proxy.license' was not passed.
The long-throw license file: [#!variable!file!#] was not found, so unable to install it.
There was a problem with the "Long-throw" license file. This will prevent Long-throw DR from working. Details of the error will be recorded in the log file.
[ Error ] - (At least) two interfaces have the same MAC address assigned to them. This should not happen, and would cause endless reboots. Unable to complete configuration, please re-map the network again and watch for duplicates. The duplicate MAC address is: [#!variable!mac_address!#] which is used by both: [#!variable!iface1!#] and: [#!variable!iface2!#].
...bz2' and the archives are synced between dashboards for safe keeping. Archive
# files are never removed automatically.
#
# To disable auto-archiving entirely, set 'trigger' to '0'.
#
# NOTE: If the archive directory doesn't exist, Anvil! will create it
# automatically the first time it is needed.
sys::database::archive::compress = 1
sys::database::archive::trigger = 50000
sys::database::archive::count = 25000
sys::database::archive::division = 30000
sys::database::archive::directory = /usr/local/anvil/archives/
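# Illustration (an assumption based on the description above, not a statement of exact behaviour):
# with 'trigger = 50000' and 'count = 25000', archiving begins once a history table passes 50,000
# records, and the oldest 25,000 records are archived per pass; 'division' controls how many
# records are written per archive segment.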
# This puts a limit on how many queries (writes, generally) to make in a single batch transaction. This is
# useful when doing very large transactions, like resync'ing a large table, by limiting how long a given
# transaction can take and how much memory is used.
sys::database::maximum_batch_size = 25000
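# Illustration: with 'maximum_batch_size = 25000', a resync that needs to write 100,000 records
# would be split into four batch transactions of 25,000 queries each, bounding how long any one
# transaction holds locks and how much memory it consumes.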
### Apache stuff
# By default, we try to determine the host type by checking which anvil RPM is installed. If, for some reason, you want
# to statically assign the host type, you can do so with this variable. Note that this sets the host type of
# this host only. You will need to set this appropriately on other hosts.
#
# Normally, you should not need to set this.
#sys::host_type = node
# This configuration file provides a way to override Anvil::Tools' built-in defaults.
# This controls the default language. The value is the ISO code of the language you want to use by
# default. Note that the logging language is set with 'defaults::log::language' below.
# NOTE: Be sure the language exists before changing it!
defaults::languages::output = en_CA
# This controls how many loops Anvil::Tools::Words is allowed to make while processing a string. This acts as a
# mechanism to exit infinite loops, and generally should not need to be changed.
defaults::limits::string_loops = 1000
### Logging options
# This controls whether all database transactions are recorded or not. Generally this should be left off
# unless you are debugging the program.
# WARNING: This ignores 'secure', and will always be logged. Be careful about exposing sensitive data!
sys::database::log_transactions = 0
# By default, if a configured database is not accessible, a log level 1 alert is registered. This can cause a
# lot of log traffic. If you want to silence these log alerts, you can set the value below to be higher than
# your current active log level (default is '1', so set to '2' or '3' to silence).
# NOTE: It's important to only use this temporarily.
sys::database::failed_connection_log_level = 1
# This controls what log facility to use by default.
# NOTE: This will always be 'authpriv' when a log entry is marked as secure.
defaults::log::facility = local0
# This controls what language logs are recorded in. Be sure that the language exists before changing it!
defaults::log::language = en_CA
# This controls the default log level. See 'perldoc Anvil::Tools::Logs' for details.
defaults::log::level = 1
# This controls whether sensitive log entries are logged or not. Generally, this should be left disabled!
defaults::log::secure = 0
# This sets the default log server to send the log entries to. Leave it blank in most cases.
#defaults::log::server =
# This sets the default log tag used when logging an entry. Most programs will likely override this.
defaults::log::tag = anvil
### Templates
# This sets the default template used when rendering HTML pages. It must be the same as the directory name
# under /var/www/html/skins/
defaults::template::html = alteeve
### Install Target options
# Note: Please see 'pxe.txt' for editable templates for 'dhcpd.conf', (tftpboot's BIOS) 'default' and the
# kickstart templates.
#
# This section allows for adapting certain installations of systems via the Install Target feature.
# Generally, these don't need to be edited.
#
# This controls the keyboard configuration. See:
# - https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/appendixes/Kickstart_Syntax_Reference/#sect-kickstart-commands-keyboard
#kickstart::keyboard = --vckeymap=us --xlayouts='us'
#
# This sets the default password of newly stage-1 built machines. Generally, this shouldn't be changed. It is
# recorded in plain text and it is used in the stage-2 configuration tools.
#kickstart::password = Initial1
#
# This is the system timezone to be set. Generally, it's recommended to leave the Anvil! machines set to UTC, but
# you might want to change this if you spend time working directly on the various Anvil! cluster machines.
#kickstart::timezone = Etc/GMT --isUtc
# If this is set to '1', the packages used to build machines via the Install Target feature will not
# auto-update.
install-manifest::refresh-packages = 1
# This controls how often the local RPM repository is checked for updates. The default is '86400' seconds
# (one day). If anything, you might want to increase this. Common values:
# 86400 = Once per day
# 604800 = Once per week
# 2419200 = Once per month (well, 4 weeks)
install-manifest::refresh-period = 86400
### This controls Striker-specific features
# This can be set as a comma-separated list of packages to be added to Striker's RPM repos. Note that this is
# only useful if you want to store EL8-specific packages needed outside the Anvil!.
striker::repo::extra-packages =
### System functions
# The machines used in the Anvil! are treated as appliances, and thus fully under our control. As such, much
# of the system is monitored, managed and auto-repaired. This can frustrate sysadmins, so an admin may
# use the 'system::*' options to retake control over some system behaviour.
# Setting this to '0' will disable auto-management of the firewall.
sys::manage::firewall = 1
### Server related options
# This is the "short list" of servers shown when provisioning a new server. To see the full list of options,
# run '/usr/bin/osinfo-query os' on any machine in the Anvil!.
#sys::servers::os_short_list = debian10,fedora32,freebsd12.1,gentoo,macosx10.7,msdos6.22,openbsd6.7,opensuse15.2,rhel5.11,rhel6.10,rhel7.9,rhel8.3,sles12sp5,solaris11,ubuntu20.04,win10,win2k16,win2k19
]]>
= 16MB per proxy connection to DRBD proxy for it to bring up a connection
memlimit #!variable!memlimit!#M;
}
]]>
Current Network Interfaces and States
MAC Address
Name
State
Speed
Change Order
Inbound Connections
Via network:
Peer Connections
Ping
Jobs
Target
User
File
On Host
Unconfigured Hosts
Accessible?
Configured Hosts
Type
At IP
Agent and Arguments
Delete
Fence Device
UPS
IP Address
Host Name
Brand
Install Manifest
Select Machine
#!string!brand_0006!# Password
Confirm Password
DNS Server(s)
NTP Server(s)
MTU Size (Bytes)
Network common
BCN link #!variable!number!#
SN link #!variable!number!#
IFN link #!variable!number!#
New Hostname
#!string!brand_0006!# Description
Scan Agent
Manage
File Name
File Type
File Size
md5sum
New File Name
New File Type
Purge File
#!string!brand_0006!# List
Synced
Remove
#!string!brand_0006!# Name
Active Interface
Bond Mode
Active Interface
Link State
Duplex
Link Drops
Table
public
history
Server
CPU
RAM
Disk
Size
Storage Group
Bridge
Model
Last Known IP
Description
RAM Used
RAM Free
Bridges
Storage Group
Used
Free
Anvil! Node Pair
Interface
Gateway (*DG)
Transmitted
Received
Configure Network
The network configuration will be updated based on the variables stored in the database. Reconnecting to the machine using the new IP address may be required.
Update Striker
This system is now scheduled to be updated.
Reboot Striker
This system will be rebooted momentarily. It will not respond until it has booted back up.
Poweroff Striker
This system will be powered off momentarily. It will not respond until it has turned back on.
Reboot...
Powering off...
Add a Striker Peer
The Striker peer will now be added to the local configuration.
Remove a Striker Peer
The Striker peer will now be removed from the local configuration.
Manage Install Target.
Enable or disable the 'Install Target' feature.
Update the 'Install Target' source files and refresh RPM repository.
Download a file
The referenced file will be downloaded by the target host.
Initialize a new #!string!brand_0002!# Node
Initialize a new DR Host
The target will be set up to talk to this and our peer dashboards. When initialization is complete, you will be able to map the target's network.
Connecting to the target: [#!variable!target!#]...
Connected!
Unable to connect to: [#!variable!target!#]. Will keep trying for another: [#!variable!timeout!#] seconds...
Unable to connect, the job has failed.
'Initialize host' job: [#!variable!job-uuid!#] picked up.
Adding repositories.
Added the repository for this dashboard.
Red Hat subscription information was provided, attempting to register now.
This machine is already registered with Red Hat. Skipping it.
Unable to reach the Red Hat subscription service. Is the Internet working on this host?
Please be patient, subscription can take a while to complete.
Success!
Failure! The return code: [#!variable!return_code!#] was received ('0' was expected). Possibly helpful information:
* Output: [#!variable!output!#]
* Error: [#!variable!error!#]
Adding the repo: [#!variable!repo!#]
Verifying that the needed repos are enabled now.
[ Warning ] - This system is not subscribed to the repo: [#!variable!repo!#]! Initialization will continue, but it might fail.
Updating the target's operating system prior to package install.
[ Note ] - This step can take a while to finish, and there will be no further output here until it completes.
Removing conflicting packages.
Will now install: [#!variable!package!#].
Verifying installation.
[ Failed ] - There may be more information in #!data!path::log::main!#.
Success!
Adding our database connection information to the target's anvil.conf file.
Finished! The target should be ready for initial configuration shortly. If it isn't, please check that the 'anvil-daemon' daemon is running.
Removing bad machine keys.
Removing existing entries for the target machine: [#!variable!target!#] from: [#!variable!file!#].
[ Error ] - The known hosts file: [#!variable!file!#] was not found. Skipping it.
Finished.
[ Error ] - There was a problem reading the known hosts file: [#!variable!file!#]. Skipping it.
Found an entry for the target at line: [#!variable!line!#], removing it.
[ Error ] - The line number: [#!variable!line!#] in: [#!variable!file!#] does not appear to be for the target: [#!variable!target!#]. Has the file already been updated? Skipping it.
Rewriting: [#!variable!file!#].
Manage Keys
The selected bad key(s) will be removed from the specified files.
The state UUID: [#!variable!state_uuid!#] is for the machine with the host UUID: [#!variable!host_uuid!#], which is not us. This is probably a program error, skipping this.
[ Error ] - There was a problem writing the file: [#!variable!file!#]. Details will be found in the logs.
Success! The file: [#!variable!file!#] has been updated.
Setting the host name to: [#!variable!host_name!#]...
[ Error ] - The host name: [#!variable!host_name!#] is invalid. Skipping host name setup.
[ Error ] - Something went wrong. The host name was set to: [#!variable!host_name!#], but the host name returned was: [#!variable!current_host_name!#].
OUI Database.
Refresh the 'OUI' database used to cross reference MAC addresses to the companies that own them.
Network Scan.
This job does a simple ping scan of the networks connected to this host. Any detected hosts have their MAC / IP addresses recorded. This is designed to help determine IP addresses assigned to servers hosted on the #!string!brand_0002!# system.
Adding the database connection information for the dashboard: [#!variable!host_name!#] to the target's anvil.conf file.
Unable to find a matching network, skipping this database.
Something went wrong adding this database. Please see: [#!data!path::log::main!#] for details.
The network configuration will be updated based on the variables stored in the database. When complete, the system will reboot.
Join this machine to an #!string!brand_0006!#.
This machine will join an #!string!brand_0006!# as a node or DR host. The role and #!string!brand_0006!# will be determined by the associated Install Manifest UUID.
'Join #!string!brand_0002!#' job: [#!variable!job-uuid!#] picked up.
This will become: [#!variable!machine!#] using data from the install manifest UUID: [#!variable!manifest_uuid!#].
[ Error ] - Failed to load and parse the install manifest. Details will be found in the logs. Exiting; this is a fatal error.
The host name is already: [#!variable!host_name!#], no change needed.
Updating the network configuration for: [#!variable!interface!#].
Disconnected from all database(s). Will reconnect after the network configuration changes have taken effect.
About to update the network, as necessary.
Checking: [#!variable!name!#].
No changes needed.
Backing up and writing out the new version of: [#!variable!file!#].
Reconnected to: [#!data!sys::database::connections!#] database(s).
The default 'virbr0' libvirtd bridge exists. Removing it.
Checking if the MTU needs to be updated on any interfaces.
The MTU on the interface: [#!variable!interface!#] is already: [#!variable!mtu!#] bytes, no update needed.
The MTU on the interface: [#!variable!interface!#] is currently: [#!variable!old_mtu!#] bytes, changing it to: [#!variable!mtu!#] bytes now.
Adding NTP (network time protocol) servers, if needed.
Adding the NTP server: [#!variable!server!#].
Restarting the daemon: [#!variable!daemon!#].
,manifest_uuid=,anvil_uuid='. Either the parse failed, or the data was somehow invalid.]]>
Updated the password for the: [#!variable!user!#] user.
Enabled and started the daemon: [#!variable!daemon!#].
Disabled and stopped the daemon: [#!variable!daemon!#].
This is a DR host, skipping pacemaker configuration.
Successfully authorized using 'pcsd' on both nodes.
No existing cluster found, will run initial setup.
The corosync.conf file does not exist locally, but it does exist on the peer. Copying the file to here.
Starting the cluster (on both nodes) now.
We're node 2, so we will wait until the peer starts the cluster.
Both nodes are up!
Still waiting. Node 1: [#!variable!node1_name!#] ready: [#!variable!node1_ready!#] (in_ccm/crmd/join: [#!variable!node1_in_ccm!#/#!variable!node1_crmd!#/#!variable!node1_join!#]), Node 2: [#!variable!node2_name!#] ready: [#!variable!node2_ready!#] (in_ccm/crmd/join: [#!variable!node2_in_ccm!#/#!variable!node2_crmd!#/#!variable!node2_join!#])
Cluster hasn't started, calling local start.
Corosync is not yet configured, waiting. It will be created when node 1 initializes the cluster.
Corosync is configured. Will wait for the cluster to start. If it hasn't started in two minutes, we'll try to join it.
We will now wait for the cluster to start.
The interface: [#!variable!interface!#] has a DNS entry: [#!variable!dns_line!#], but it is not the default gateway. Removing the line.
The interface: [#!variable!interface!#] has a GATEWAY entry: [#!variable!gateway_line!#], but it is not the default gateway. Removing the line.
Updating the '/etc/hosts' file.
Checking the SSH configuration.
Configuring the IPMI BMC. Please be patient, this could take a minute.
Checking the fence configuration for the node: [#!variable!node!#].
IPMI exists on this node, but it is not yet set up as a fence device, adding it.
The IPMI information in the existing fence configuration is different from the details stored in the database. Will reconfigure.
There is an IPMI fence device configured, but there is no host IPMI information in the database. Removing it.
Deleting the old fence device: [#!variable!device!#].
Creating the new fence device: [#!variable!device!#].
The fence device: [#!variable!device!#] information in the existing fence configuration is different from the details stored in the database. Will reconfigure.
The fence device: [#!variable!device!#] does not exist as a fence device, adding it.
Adding a fence delay agent to provide time for the IPMI BMC to boot before trying it again.
Configuring the cluster to loop fence attempts indefinitely.
Enabling fencing!
Checking to see if: [#!data!path::configs::global-common.conf!#] needs to be configured or updated.
Update completed successfully.
Update not required, nothing changed.
Completed joining the #!string!brand_0002!#.
No job was found that needs to be run.
Reconnecting will start a synchronization of the database. This step might take a while to complete, please be patient.
Sync Uploaded File
This moves an uploaded file from the: [#!data!path::directories::shared::incoming!#] directory to the: [#!data!path::directories::shared::files!#] directory, adds it to the Anvil! database, and pushes it out to other systems.
Successfully deleted the file: [#!variable!file_path!#].
No need to delete the file: [#!variable!file_path!#], it already doesn't exist.
Purge File.
This asks the host to delete the associated file from its system.
Rename File.
This asks all systems to rename the associated file.
No need to rename the file: [#!variable!file_path!#], it doesn't exist on this host.
About to rename the old file: [#!variable!old_file!#] to: [#!variable!new_file!#].
File renamed successfully.
Check File Mode.
This is used when a file type changes, setting the executable bits when the type is script, and removing the executable bits when set to another type.
The file: [#!variable!file_path!#]'s mode has been set to: [#!variable!new_mode!#].
No need to set the mode on the file: [#!variable!file_path!#], it doesn't exist here.
Provision a new server
This takes the information and creates a new server.
There are no known Anvil! systems at this time. Please set up an Anvil! and try again later.
Provision a new server menu:
Anvil! name: ... [#!variable!anvil_name!#]
* That was not a recognized Anvil! name. Please try again.
-=] Existing Anvil! systems [=-
Loading available resources for: [#!variable!anvil_name!#] (#!variable!anvil_uuid!#)
There is not enough RAM available on this Anvil! to provision new servers.
- Available RAM: [#!variable!available_ram!#]
Server name: ... [#!variable!server_name!#]
CPU Cores: ..... [#!variable!cpu_cores!#]
* Please enter a unique server name.
-=] Existing Servers on the Anvil! [#!variable!anvil_name!#] [=-
* Please enter a number between 1 and #!variable!max_cores!#.
-=] Available cores / threads: [#!variable!cores!# / #!variable!threads!#]
- Node #!variable!core!# CPU Model: [#!variable!model!#]
- DR Host CPU: .... [#!variable!model!#], [#!variable!cores!#c]/[#!variable!threads!#t]
RAM: ........... [#!variable!ram!#]
* Please enter a valid amount up to: [#!variable!ram_total!# / #!variable!ram_available!#].
-=] Available RAM: [#!variable!ram_available!#]
- Reserved by host: ... [#!variable!ram_reserved!#]
- Allocated to servers: [#!variable!ram_allocated!#]
- Node 1 RAM (total): . [#!variable!ram_node1!#]
- Node 2 RAM (total): . [#!variable!ram_node2!#]
- DR Host RAM (total): [#!variable!ram_available!#]
Available on Anvil!: [#!variable!vg_free!#], Total: [#!variable!vg_size!#]
Available on DR: ... [#!variable!dr_free!#], Total: [#!variable!dr_size!#]
Storage Group: . [#!variable!storage_group!#]
* Please enter a number beside the storage group you want to use.
-=] Storage groups
Storage Size: .. [#!variable!storage_size!#]
* Please enter a size up to: [#!variable!max_size!#].
-=] Storage group: [#!variable!storage_group!#], Available Space: [#!variable!available_size!#]
- Note: You can add additional drives later.
Install Media: . [#!variable!install_media!#]
* Please enter a number that corresponds to an install disc.
-=] Installation media
- 0) #!string!unit_0005!#
Driver Disc: ... [#!variable!driver_disc!#]
* Please enter a number that corresponds to a driver disc.
-=] Driver disc
Saving the job details to create this server. Please wait a few moments.
Job Data:
====
#!variable!job_data!#
====
The job to create the new server has been registered as job: [#!variable!job_uuid!#].
It should be provisioned in the next minute or two.
Sanity checks complete.
The new DRBD resource will use minor number: [#!variable!minor!#] and the base TCP port: [#!variable!port!#].
[ Warning ] - The logical volume: [#!variable!lv_path!#] to use for this server already exists. We will NOT initialize it! If the LV does not have DRBD metadata, the server install will fail. If the LV is a DRBD resource, and it is inconsistent or outdated, provisioning will stall until the peer comes online. If the install fails, please determine why (or remove the existing LV) and try again.
The peer job: [#!variable!job_uuid!#] has been created for the peer: [#!variable!peer_name!#] to create its side of the storage.
The new logical volume: [#!variable!lv_path!#] has been created. This will back the replicated storage used for the new server.
The DRBD resource: [#!variable!resource!#] configuration has been created and loaded.
The DRBD resource: [#!variable!resource!#] metadata has been created.
Bringing up the new resource.
Waiting for the disk state to be ready. The current volume: [#!variable!volume!#] disk state is: [#!variable!disk_state!#], waiting for it to become 'UpToDate', 'Consistent', 'Outdated' or 'Inconsistent'.
The LV(s) behind the resource: [#!variable!resource!#] already existed, and the DRBD resource is not in the disk state 'UpToDate'. As such, we'll keep waiting before provisioning the server.
The resource needs to be forced to UpToDate as it is brand new. Doing that now.
-=] OS Short List
* Please enter an OS key that is closest to your target OS. Run 'osinfo-query os' for a full list.
Optimize for: .. [#!variable!os!#]
Ready to provision the server! Please be patient, this could take a moment. The call to create the server will be:
====
#!variable!shell_call!#
====
Provision call made, waiting for the server to start...
Started! Verifying that it looks good and adding it to the database.
Done! The server should now be booting. Connect now and finish the OS install.
The resource: [#!variable!resource!#] is now up.
We're the peer for this new server, and so we're now done. The other node will complete the server's install momentarily.
As we're the peer, we're now going to wait for the new server definition to be added to the database, then write it out to disk.
The definition file: [#!variable!file!#] has been saved.
Preparing to add the server to the central cluster manager.
Deleting a server
This deletes a server from an Anvil! system.
Asking pacemaker to stop the server: [#!variable!server_name!#].
The server: [#!variable!server_name!#] is now stopped in pacemaker.
Registered a job with: [#!variable!host_name!#] to delete its records of this server.
Deleting the replicated storage resource behind this server.
Storage has been released. Checking that the server has been flagged as deleted in the database.
The server has been flagged as deleted now.
The server delete is complete on this host!
It looks like ScanCore has not yet run on one or both nodes in this Anvil! system. Missing resource data, so unable to proceed.
Manually calling 'scan-drbd' to ensure that the new agent is recorded.
The server name: [#!variable!server_name!#] is already used by another server.
Deleting the server's definition file: [#!variable!file!#]...
The server: [#!variable!server_name!#] was not found in the cluster configuration. This can happen if a server was partially deleted and we're trying again.
Preparing to delete the server: [#!variable!server_name!#].
Using virsh to destroy (force off) the server: [#!variable!server_name!#], if it is still running.
Enabled the HA repository for CentOS Stream.
Initialize Stage-1 installed systems into a full Anvil!.
This program automates turning a set of stage-1 (bare OS + anvil-repo) systems into a fully functioning Anvil! system.
We need to set up pairing with Striker: [#!variable!number!#]. We will wait for it to come up. Be sure that you've run 'striker-auto-initialize-all' on it.
Successfully connected to Striker: [#!variable!number!#] using the IP: [#!variable!ip!#]!
No connection to Striker: [#!variable!number!#] via the IP: [#!variable!ip!#].
Failed to connect to Striker: [#!variable!number!#] over any IPs. Sleeping a bit and then trying again.
Waiting now for the peer Striker: [#!variable!number!#] with host UUID: [#!variable!peer_host_uuid!#] to show up in our database.
The peer Striker: [#!variable!number!#] with host name: [#!variable!peer_host_name!#] has successfully peered with us!
The peer Striker: [#!variable!number!#] with host UUID: [#!variable!peer_host_uuid!#] has not yet started using our database. Waiting a bit before checking again...
Peering Striker dashboards
Striker peers now working with us!
Adding UPSes now.
Successfully added/updated the UPS: [#!variable!ups_name!#] at: [#!variable!ups_ip_address!#] using the agent: [#!variable!ups_agent!#]. Its UPS UUID is: [#!variable!ups_uuid!#].
Failed to assemble the Anvil!, aborting.
All UPSes added/updated.
Adding fence devices now.
Successfully added/updated the fence device: [#!variable!fence_name!#] using the agent: [#!variable!fence_agent!#]. Its fence UUID is: [#!variable!fence_uuid!#].
All fence devices added/updated.
Creating Install Manifest(s).
Created the manifest: [#!variable!manifest_name!#] with the UUID: [#!variable!manifest_uuid!#].
Install Manifest(s) created.
Initializing nodes and, if applicable, DR host(s).
The machine: [#!variable!machine!#] is already initialized and has the host UUID: [#!variable!host_uuid!#]. No need to initialize.
Trying to connect to: [#!variable!machine!#] using IP: [#!variable!ip!#] with the initial password.
Trying to connect to: [#!variable!machine!#] using IP: [#!variable!ip!#] with the desired password.
Connected! We will initialize using the IP: [#!variable!ip!#].
Failed to connect to: [#!variable!machine!#] using any IP address. We'll sleep and then try again shortly.
Created the job to initialize: [#!variable!host_name!#] via the IP address: [#!variable!ip!#] with job UUID: [#!variable!job_uuid!#].
All machines should now be initializing. Waiting now for all machines to register in the database.
The machine: [#!variable!machine!#] hasn't connected to the database yet.
One (or more) machines have not yet initialized. Waiting a few seconds, then checking again.
All machines have been initialized!
Ready to create jobs to assemble Anvil! systems.
Created (or updated) the Anvil! [#!variable!anvil_name!#] with the UUID: [#!variable!anvil_uuid!#].
Created the job for: [#!variable!machine_name!#] with host UUID: [#!variable!host_uuid!#] to the Anvil!: [#!variable!anvil_name!#] with the job UUID: [#!variable!job_uuid!#].
All machines have been asked to join their Anvil! system(s). We'll now wait for all jobs to complete.
The job UUID: [#!variable!job_uuid!#] is at: [#!variable!progress!#%].
Not all jobs are done yet, will check again in a bit.
All jobs are complete! Barring problems, the Anvil! system(s) should now be ready to use.
The peer Striker: [#!variable!number!#] with host name: [#!variable!peer_host_name!#] is already peered with us.
Configuring the network of all machines now.
Created a job for: [#!variable!host_name!#] to configure its network under job UUID: [#!variable!job_uuid!#].
All machines should be configuring their network now. Waiting for all to become accessible over BCN 1.
The machine: [#!variable!host_name!#] is not yet accessible at: [#!variable!ip_address!#].
One or more machines are not yet accessible on the first BCN. Will check again in a moment.
All machines are now available on the first BCN!
One of the Striker dashboards has not yet updated network information in the database. We need this to know which IP to tell the peer to use to connect to us. We'll wait a moment and check again.
The cluster still hasn't started. Calling startup again (will try once per minute).
Successfully added/confirmed the filter in lvm.conf.
Failed to add/confirm the filter in lvm.conf! This should be corrected later by 'scan-drbd', though.
The cluster isn't up. Provisioning the server will hold until it is. Will check every 10 seconds.
The cluster is up.
The cluster is not started yet, waiting. Will check again shortly.
The cluster is up, but waiting for this node to become ready. Will check again shortly.
The cluster is up and the node is ready.
The server: [#!variable!server!#] has booted!
Done!
Booting server(s)...
Shutting down server(s)...
The server: [#!variable!server!#] is already off, nothing to do.
The server: [#!variable!server!#] has shut down.
The server: [#!variable!server!#] has been asked to stop. You may need to verify that it is actually stopped (some OSes ignore power button events).
The server: [#!variable!server!#] has been asked to boot. It should come up soon.
The server: [#!variable!server!#] will now be booted...
The server: [#!variable!server!#] will now be asked to shut down. If the server doesn't stop, please log into it and make sure it reacted to the power button event. Shut it down manually, if needed.
Booting server(s)...
Source node: [#!variable!source!#], target node is: [#!variable!target!#].
The server: [#!variable!server!#] has been migrated to: [#!variable!target!#].
The server: [#!variable!server!#] will now be migrated to: [#!variable!target!#]. This could take some time! The amount of RAM allocated to the server, the speed of the back-channel network, and how busy the server is all contribute to migration time. Please be patient!
The server: [#!variable!server!#] has been asked to migrate. We are not waiting for it to complete.
The cluster is up and both nodes are ready.
The cluster is up, but one or both nodes are not yet ready. Will wait until both are up. Current states: [#!variable!local_name!#] is: [#!variable!local_ready!#], and [#!variable!peer_name!#] is: [#!variable!peer_ready!#].
The peer: [#!variable!host_name!#] can't be reached yet. Will wait for it to be available before proceeding with the rename.
The peer(s) of this server are accessible. Ready to proceed with the rename.
The server: [#!variable!server!#] status is: [#!variable!status!#]. Waiting for it to be off.
The server: [#!variable!server!#] is verified to be off everywhere.
The DRBD connection from: [#!variable!source_host!#] to: [#!variable!peer_host!#] for the resource/volume: [#!variable!resource!#/#!variable!volume!#] is: [#!variable!replication_state!#]. Will wait for the sync to finish before taking down the resource.
The DRBD resource behind the server is ready to be taken down.
Taking down the DRBD resource: [#!variable!resource!#] on the peer: [#!variable!peer!#] via the IP: [#!variable!ip!#].
The DRBD resource is down.
On the host: [#!variable!host_name!#], we'll now rename the LV: [#!variable!old_lv!#] to: [#!variable!new_lv!#].
The new LV: [#!variable!new_lv!#] now exists on the host: [#!variable!host_name!#].
Successfully wrote the file: [#!variable!file!#] on the host: [#!variable!host_name!#].
Successfully added the new server name: [#!variable!server_name!#] to the cluster!
Verifying that the server name: [#!variable!server_name!#] is not defined.
Verifying that the server name: [#!variable!server_name!#] is not defined on: [#!variable!host_name!#].
Renamed the server name to: [#!variable!server_name!#] in the database.
We are the SyncSource for the peer: [#!variable!peer_host!#] for the resource/volume: [#!variable!resource!#/#!variable!volume!#]. We have to wait for the peer to complete the sync or close its connection before we can proceed with the shutdown.
The cluster has stopped.
Stopping all DRBD resources.
The server: [#!variable!server!#] is migrating. Will check again shortly to see if it is done.
Asking the cluster to shut down the server: [#!variable!server!#] now.
The server: [#!variable!server!#] has not shut down yet. Asking 'virsh' to shut it down. If the cluster stop woke it up, this should trigger a shutdown. If not, manual shutdown will be required.
The server: [#!variable!server!#] will now be migrated to: [#!variable!node!#]. This could take some time, depending on the amount of RAM allocated to the server, the speed of the BCN and the activity on the server. Please be patient!
No servers are running on this node now.
Will now shut down any servers running on the cluster.
Will now migrate any servers running on the cluster.
Checking to see if we're "SyncSource" for any peer's replicated storage.
Withdrawing this node from the cluster now.
Waiting for the node to finish withdrawing from the cluster.
Shutdown complete, powering off now.
Done. This node is no longer in the cluster.
The machine: [#!variable!host_name!#] appears to have IPMI, trying to boot it using that...
The target machine is already on, nothing to do.
The target machine is confirmed off, will try to start now.
The target machine is now booting!
The machine: [#!variable!host_name!#] does not have a (known) IPMI BMC, but it is a member of the Anvil! [#!variable!anvil_name!#]. Searching for a fence method to boot it...
Power On Host
Power on the target host by executing a start script on a striker.
Power Off Host
Power off the target host by executing a stop script on the host itself.
Host Join Cluster
Make target host join its anvil cluster.
Host Leave Cluster
Make target host leave its anvil cluster.
Power On Server VM
Power on the target server VM by executing a start script on the first host within the cluster.
Power Off Server VM
Power off the target server VM by executing a stop script on the first host within the cluster.
Verifying that corosync is configured to use the SN1 as a fall-back communication channel.
Verifying (and waiting if needed) for the cluster to be up and both BCN1 and SN1 connections to be active.
The cluster is up.
Both the BCN1 and SN1 links are working between the nodes. Checking corosync now...
Synchronizing the new corosync config exited with return code: [#!variable!return_code!#] and output: [#!variable!output!#]
Loading the new corosync config exited with return code: [#!variable!return_code!#] and output: [#!variable!output!#]
Manage VNC Pipes
Perform VNC pipe operation [#!variable!operation!#] for server UUID [#!variable!server_uuid!#] from host UUID [#!variable!host_uuid!#].
Manage a server menu:
* Please enter the name of the server you want to manage
-=] Servers available to manage on the Anvil! [#!variable!anvil_name!#] [=-
-=] Managing the server: [#!variable!server_name!#] on the Anvil!: [#!variable!anvil_name!#]
Get Server VM Screenshot
Fetch a screenshot of the specified server VM and represent it as a Base64 string.
Running sanity checks.
Sanity checks complete!
Beginning to protect the server: [#!variable!server!#]!
Verified that there is enough space on DR to proceed.
* The connection protocol will be: ..... [#!variable!protocol!#]
* We will update the DRBD resource file: [#!variable!config_file!#]
The following LV(s) will be created:
- Resource: [#!variable!resource!#], Volume: [#!variable!volume!#]
- The LV: [#!variable!lv_path!#] with the size: [#!variable!lv_size!# (#!variable!lv_size_bytes!# Bytes)] will be created.
The resource file: [#!variable!file!#] doesn't need to be updated.
- Backed up old config as: [#!variable!backup_file!#]. Updating it now.
- Updated! Verifying...
- The new config looks good!
- Updating the peers now...
- Updating the resource file: [#!variable!file!#] on the host: [#!variable!host_name!#] via IP: [#!variable!ip_address!#].
- Creating logical volumes on DR, if needed. New LVs will have metadata created.
- Volume: [#!variable!volume!#], logical volume: [#!variable!lv_path!#].
- The logical volume: [#!variable!lv_path!#] already exists. Skipping it, and NOT creating DRBD metadata.
- Reloading the local DRBD resource config.
- Reloading the resource: [#!variable!server!#] on the host: [#!variable!host_name!#].
- Checking, and starting where needed, the: [#!variable!server!#] resource locally and on peers.
- Checking locally.
- Checking the host: [#!variable!host_name!#]
- Checking to see if the DR host has connected to this resource yet.
- Not up yet, will check again at: [#!variable!next_check!#].
- Up!
Done! The server: [#!variable!server!#] is now being protected on DR!
It will take time for it to initialize, please be patient.
- Running the scan agent 'scan-drbd' locally to record the newly used TCP ports.
- Running the scan agent 'scan-drbd' on: [#!variable!host_name!#] to record the newly used TCP ports.
The job has been recorded with the UUID: [#!variable!job_uuid!#], it will start in just a moment if anvil-daemon is running.
Manage DR tasks for a given server
This job can protect, remove (unprotect), connect, disconnect or update (connect, sync, disconnect) a given server.
Do you want to connect the DR host for the server: [#!variable!server!#]?
Note: Depending on the disk write load and storage network speed to the DR host,
this could cause reduced disk write performance.
About to connect the DR resource for the server: [#!variable!server!#].
Brought up the connection locally. Now checking that the resource is up on the nodes.
Making sure the resource is up on: [#!variable!host_name!#].
Waiting now for the resource to connect.
Done! The server: [#!variable!server!#] is now connected.
Do you want to disconnect the DR host for the server: [#!variable!server!#]?
Note: Once down, no further changes will be written to the DR host.
About to disconnect the DR resource for the server: [#!variable!server!#].
Done! The server: [#!variable!server!#] is now disconnected.
Do you want to update the DR host for the server: [#!variable!server!#]?
Note: This will connect the DR host until the disk(s) on DR are (all) UpToDate.
Depending on the disk write load and storage network speed to the DR host,
this could cause reduced disk write performance.
Still sync'ing from: [#!variable!sync_source!#] at a rate of: [#!variable!sync_speed!#/sec]. Estimated time remaining is: [#!variable!time_to_sync!#].
Sync'ed! Bringing the resource back down now.
Waiting for the connection to come up...
Manage Firewall
This will wait for the named server to appear, then update the firewall to ensure needed ports are open for access to the server's desktop.
Waiting until the server: [#!variable!server!#] appears.
[ Error ] - Timed out waiting for the server: [#!variable!server!#] to appear!
Waiting for the server: [#!variable!server!#] to appear. Will wait: [#!variable!time_left!#] more seconds.
Failed to access: [#!variable!host_name!#], will check again in: [#!variable!waiting!#] seconds.
There was a problem writing the new resource config file: [#!variable!file!#] on the host: [#!variable!host_name!#].
When checking, a difference was found:
====
#!variable!difference!#
====
The new version should have been:
====
#!variable!new_resource_config!#
====
The version read in (if anything) was:
====
#!variable!check_resource_config!#
====
Beginning to remove DR host protection from the server: [#!variable!server!#]!
Do you want to remove protection for the server: [#!variable!server!#]?
Note: This is a permanent action! If you protect this server again later, a full sync will be required.
The DRBD resource volume: [#!variable!volume!#] for the server: [#!variable!server!#] is backed by the logical volume: [#!variable!local_lv!#]. This volume exists, and will now be removed.
The DRBD resource volume: [#!variable!volume!#] for the server: [#!variable!server!#] is backed by the logical volume: [#!variable!local_lv!#]. This volume appears to already be removed.
The backing disk has been removed.
Generating and testing the new resource config.
Tests passed, copying new config to nodes now.
New replicated storage config copied to nodes.
Telling: [#!variable!host_name!#] to update its replicated storage config.
The old replicated storage config file: [#!variable!config_file!#] will now be removed locally.
Done! The server: [#!variable!server!#] is no longer being protected on DR!
The resource config file: [#!variable!config_file!#] doesn't exist locally, pulling a copy over from: [#!variable!source!#].
Re-parsing the replicated storage configuration.
The server: [#!variable!server!#] was found to be running outside the cluster. Asking it to shut down now.
The server: [#!variable!server!#] is still running two minutes after asking it to stop. It might have woken up on the first press and ignored the shutdown request (Hi Windows). Pressing the power button again.
Copying the Long-throw (drbd proxy) license file: [#!variable!file!#] into place.
Starting: [#!variable!program!#].
This is a "test" entry.
It is multiple lines with single quotes ['] and double-quotes (") and here are random brackets{!}.
It also has replacement variables: [#!variable!first!#] and [#!variable!second!#].
This is a test log entry that contains a secret [#!variable!passwaord!#]!
This is a test log entry at log level 2.
This is a test log entry at log level 3.
This is a test log entry at log level 4.
This is a test critical log entry.
This is a test error log entry.
This is a test alert log entry.
This is a test emergency log entry.
About to run the shell command: [#!variable!shell_call!#]
About to read the file: [#!variable!shell_call!#]
About to write the file: [#!variable!shell_call!#]
[ Error ] - There was a problem running the shell command: [#!variable!shell_call!#]. The error was: [#!variable!error!#].
[ Error ] - There was a problem reading the file: [#!variable!shell_call!#]. The error was: [#!variable!error!#].
[ Error ] - There was a problem writing the file: [#!variable!shell_call!#]. The error was: [#!variable!error!#].
Output: [#!variable!line!#].
About to open the directory: [#!variable!directory!#]
Variables:
read_file() was asked to read the file: [#!variable!file!#], but that file does not exist.]]>
read_file() was asked to read the file: [#!variable!file!#] which exists but can't be read.]]>
Reading: [#!variable!line!#].
get().]]>
get().]]>
Successfully read the words file: [#!variable!file!#].
find() failed to find: [#!variable!file!#].]]>
skin() was asked to set the skin: [#!variable!set!#], but the source directory: [#!variable!skin_directory!#] doesn't exist. Ignoring.]]>
search_directories() was passed the array: [#!variable!array!#], but it wasn't actually an array. Using @INC + path::directories::tools + \$ENV{'PATH'} for the list of directories to search instead.]]>
read()' called without a file name to read.]]>
read()' asked to read: [#!variable!file!#] which was not found.]]>
read()' asked to read: [#!variable!file!#] which was not readable by: [#!variable!user!#] (uid/euid: [#!variable!uid!#]).]]>
read_variable() was called but both the 'variable_name' and 'variable_uuid' parameters were not passed or both were empty.]]>
insert_or_update_variables() method was called but both the 'variable_name' and 'variable_uuid' parameters were not passed or both were empty.]]>
change_mode() was called with an invalid 'mode' parameter. It should have been three or four digits, or 'x+/-y' format, but: [#!variable!mode!#] was passed.]]>
The host: [#!variable!host!#] has released its database lock.
write_file() was asked to write the file: [#!variable!file!#] but it already exists and 'overwrite' was not set. Aborting.]]>
write_file() was asked to write the file: [#!variable!file!#] but it is not a full path. Aborting.]]>
string() was asked to process the string: [#!variable!string!#] which has insertion variables, but nothing was passed to the 'variables' parameter.]]>
call() was called but 'shell_call' was not passed or was empty.]]>
The host: [#!variable!host!#] has renewed its database lock.
The host: [#!variable!host!#] is requesting a database lock.
#!variable!method!#() was asked to copy: [#!variable!source_file!#] to: [#!variable!target_file!#], but the target already exists and 'overwrite' wasn't specified, skipping.]]>
level() was passed an invalid log level: [#!variable!set!#]. Only '0', '1', '2', '3' or '4' are valid.]]>
[ Error ] - There is a local database defined, but it does not appear to exist and we could not initialize the database server. Is 'postgresql-server' installed?
change_owner() was asked to change the ownership of: [#!variable!path!#] which doesn't exist.]]>
#!variable!method!#() was called but the source file: [#!variable!source_file!#] doesn't exist.]]>
connect()' method tried to connect to the same database twice: [#!variable!target!#].]]>
Connecting to Database with configuration ID: [#!variable!uuid!#]
- driver: . [#!variable!driver!#]
- host: ... [#!variable!host!#]
- port: ... [#!variable!port!#]
- name: ... [#!variable!name!#]
- user: ... [#!variable!user!#]
- password: [#!variable!password!#]
Initialized PostgreSQL.
Updated: [#!variable!file!#] to listen on all interfaces.
Updated: [#!variable!file!#] to require passwords for access.
call() was called but the port: [#!variable!port!#] is invalid. It must be a digit between '1' and '65535'.]]>
Started the PostgreSQL database server.
Database user: [#!variable!user!#] already exists with UUID: [#!variable!id!#].
users_home() was asked to find the home directory for the user: [#!variable!user!#], but was unable to do so.]]>
SSH session opened without a password to: [#!variable!target!#].
#!variable!name!#] with the UUID: [#!variable!uuid!#] did not respond to pings and 'database::#!variable!uuid!#::ping' is not set to '0' in '#!data!path::configs::anvil.conf!#', skipping it.]]>
[ Note ] - The database: [#!variable!name!#] on host: [#!variable!host!#] with UUID: [#!variable!uuid!#] is not available, skipping it.
The database connection error was:
----------
#!variable!dbi_error!#
----------
Is the database server running on: [#!variable!target!#] and does the target's firewall allow connections on TCP port: [#!variable!port!#]?
] in: [#!data!path::configs::anvil.conf!#].]]>
* If the user name is correct, please update:
database::#!variable!uuid!#::password =
]]>
The connection to the database: [#!variable!name!#] on host: [#!variable!host!#:#!variable!port!#] was refused. Is the database server running?
The connection to the database: [#!variable!name!#] on host: [#!variable!host!#:#!variable!port!#] failed because the name could not be translated to an IP address. Is this database server's host name in '/etc/hosts'?
Successfully Connected to the database: [#!variable!name!#] (id: [#!variable!uuid!#]) on host: [#!variable!host!#:#!variable!port!#].
query() was called without a database ID to query and 'sys::database::read_uuid' doesn't contain a database ID, either. Are any databases available? The query source was: [#!variable!source!#:#!variable!line!#] -> [#!variable!query!#].]]>
query() was asked to query the database with UUID: [#!variable!uuid!#] but there is no file handle open to the database. Was the connection lost?]]>
About to run: [#!variable!uuid!#]:[#!variable!query!#]
Log->secure' is not set.]]>
Log->secure' is not set.]]>
initialize() was called without a database ID to query and 'sys::database::read_uuid' doesn't contain a database ID, either. Are any databases available?]]>
initialize() was asked to query the database with UUID: [#!variable!uuid!#] but there is no file handle open to the database. Was the connection lost?]]>
initialize() was asked to initialize the database: [#!variable!server!#] (id: [#!variable!uuid!#]) but a core SQL file to load wasn't passed, and the 'database::#!variable!uuid!#::core_sql' variable isn't set. Unable to initialize without the core SQL file.]]>
initialize() was asked to initialize the database: [#!variable!server!#] (id: [#!variable!uuid!#]) but the core SQL file: [#!variable!sql_file!#] doesn't exist.]]>
initialize() was asked to initialize the database: [#!variable!server!#] (id: [#!variable!uuid!#]) but the core SQL file: [#!variable!sql_file!#] exists, but can't be read.]]>
The database: [#!variable!server!#] needs to be initialized using: [#!variable!sql_file!#].
About to record: [#!variable!uuid!#]:[#!variable!query!#]
query() was asked to query the database: [#!variable!server!#] but no query was given.]]>
write() was asked to write to the database: [#!variable!server!#] but no query was given.]]>
check_memory() was called without a program name to check.]]>
Testing access to the database: [#!variable!server!#] prior to query or write. Program will exit if it fails.
Access confirmed.
write() was asked to write to the database with UUID: [#!variable!uuid!#] but there is no file handle open to the database. Was the connection lost?]]>
Log->secure' is not set.]]>
Failed to connect to any database.
check_alert_sent() was called but the 'modified_date' parameter was not passed and/or 'sys::database::timestamp' is not set. Did the program fail to connect to any databases?]]>
[ Error ] - Failed to start the Postgres server. Please check the system logs for details.
The database user: [#!variable!user!#] was created with UUID: [#!variable!id!#].
[ Error ] - Failed to add the database user: [#!variable!user!#]! Unable to proceed.
[ Error ] - Failed to find any tables in: [#!variable!file!#]. Unable to check/load the agent's schema.
[ Warning ] - Failed to set an alert because this host is not yet in the database. This can happen if the alert was set before this host was added to the database.
* Details of the alert:
- Type: ......... [#!variable!type!#]
- Clear? ........ [#!variable!clear!#]
- Record Locator: [#!variable!record_locator!#]
- Timestamp: .... [#!variable!modified_date!#]
[ Warning ] - There is no #!string!brand_0002!# database user set for the local machine. Please check: [#!data!path::configs::anvil.conf!#]'s DB entry: [#!variable!uuid!#]. Using 'admin'.
Database user: [#!variable!user!#] password has been set/updated.
Failed to connect to: [#!variable!target!#:#!variable!port!#], sleeping for a second and then trying again.
I am not recording the alert with message_key: [#!variable!message_key!#] to the database because its log level is lower than that of any recipient.
The local machine's UUID was not read properly. It should be stored in: [#!data!sys::host_uuid!#] and contain hexadecimal characters in the format: '012345-6789-abcd-ef01-23456789abcd' and usually matches the output of 'dmidecode --string system-uuid'. If this file exists and if there is a string in the file, please verify that it is structured correctly.
The database with UUID: [#!variable!uuid!#] for: [#!variable!file!#] is behind.
#!string!brand_0002!# database: [#!variable!database!#] already exists.
The table: [#!variable!table!#] (and possibly others) in the database on: [#!variable!host!#] (UUID: [#!variable!uuid!#]) is behind by: [#!variable!seconds!#] seconds. A database resync will be requested.
[ Warning ] - Failed to delete the temporary postgres password.
insert_or_update_states() was called but the 'state_host_uuid' parameter was not passed or it is empty. Normally this is set to 'sys::data_uuid'.]]>
[ Error ] - Failed to create the #!string!brand_0002!# database: [#!variable!database!#]
#!string!brand_0002!# database: [#!variable!database!#] created.
[ Warning ] - Failed to reload the Postgres server. Please check the system logs for details. The updated configuration is probably not active yet.
Reloaded the PostgreSQL database server.
configure_pgsql() method was called but the parent program is not running with root privileges. Returning without doing anything.]]>
', but no program name was read in.]]>
#!variable!program!# has started.
human_readable_to_bytes()' was passed the byte size: [#!variable!size!#] in the string: [sign: #!variable!sign!#, size: #!variable!size!#, type: #!variable!type!#], which contains an illegal value. Sizes can only be integers or real numbers. Commas, if present, will be removed automatically.]]>
human_readable_to_bytes()' was passed the byte size: [#!variable!size!#] in the string: [sign: #!variable!sign!#, size: #!variable!size!#, type: #!variable!type!#], which appears to already be a byte size, but the size does not seem to be an integer. Byte sizes can only be signed integers. Commas, if present, will be removed automatically.]]>
human_readable_to_bytes()' method was called with the value: [#!variable!value!#] which we split into the size: [#!variable!size!#] and type: [#!variable!type!#]. The type appears to be invalid.]]>
round()' was passed the number: [#!variable!number!#] which contains an illegal value. Only digits and one decimal place are allowed.]]>
Current memory used by: [#!variable!program_name!#] is approximately: [#!variable!bytes!#] bytes (#!variable!hr_size!#).
The 'smaps' proc file for the process ID: [#!variable!pid!#] was not found. Did the program just close?
About to query: [#!variable!query!#]
Entering method: [#!variable!method!#]
Exiting method: [#!variable!method!#]
Firewalld was not running, re-enabling it. If you do not want this behaviour, please set 'sys::daemons::restart_firewalld = 0' in: [#!data!path::configs::anvil.conf!#].
Firewalld was not running, and 'sys::daemons::restart_firewalld = 0' is set. NOT starting it.
]]>
Entering function: [#!variable!function!#]
Connected to: [#!data!sys::database::connections!#] database(s).
Failed to read the system UUID. Received a non-UUID string: [#!variable!uuid!#]. Is the user: [#!variable!user!#] in the 'kmem' group?
The host UUID: [#!variable!uuid!#] does not appear to be a valid UUID. Please check the contents of: [#!data!path::data::host_uuid!#] or the output from: [dmidecode --string system-uuid]. Note that some mainboards will report their UUID as all-0. If this is the case, manually create the 'host.uuid' file with a UUID created by 'uuidgen'.
- #!variable!caller!# runtime was approximately: [#!variable!runtime!#] seconds.
[#!variable!variable_value!#]. See 'perldoc Anvil::Tools::#!variable!module!#' for valid options.]]>
Failed to find a local ID, no databases are stored on this machine.
PostgreSQL server is not installed, unable to proceed.
A job to configure the network was found, but it has already been picked up by: [#!variable!pid!#].
A job to configure the network was found, and it was picked up by: [#!variable!pid!#], but that process is not running and it appears to only be: [#!variable!percent!# %] complete. Taking the job.
The network: [#!variable!network!#] has something set for the IP [#!variable!ip!#], but it appears to be invalid. Ignoring this network.
The network: [#!variable!network!#] is not set to be configured. Skipping it.
backup() method was called with the source file: [#!variable!source_file!#], which does not appear to be a full path and file name (should start with '/').]]>
backup() method was called with the source file: [#!variable!source_file!#], which does not appear to exist.]]>
backup() method was called with the source file: [#!variable!source_file!#], which can not be read (please check permissions and SELinux).]]>
backup() method was called with the source file: [#!variable!source_file!#], which isn't actually a file.]]>
The file: [#!variable!source_file!#] has been backed up as: [#!variable!target_file!#].
Removing the old network configuration file: [#!variable!file!#] as part of the network reconfiguration.
write_file() was asked to write the file: [#!variable!file!#] but it appears to be missing the file name. Aborting.]]>
Ensuring we've recorded: [#!variable!target!#]'s RSA fingerprint for the user: [#!variable!user!#].
Adding the target: [#!variable!target!#]:[#!variable!port!#]'s RSA fingerprint to: [#!variable!user!#]'s list of known hosts.
read_file() was asked to read the remote file: [#!variable!file!#] but it is not a full path. Aborting.]]>
read_file() was asked to read the remote file: [#!variable!file!#] but it appears to be missing the file name. Aborting.]]>
read_file() tried to rsync the remote file: [#!variable!remote_file!#] to the local temporary file: [#!variable!local_file!#], but it did not arrive. There might be more information above.]]>
The file: [#!variable!file!#] does not exist.
read_config()' was called without a file name to read.]]>
backup() method was asked to backup the file: [#!variable!source_file!#] on: [#!variable!target!#], but it looks like there was a problem connecting to the target.]]>
About to run the shell command: [#!variable!shell_call!#] on: [#!variable!target!#] as: [#!variable!remote_user!#]
Failed to create the directory: [#!variable!directory!#] on: [#!variable!target!#] as: [#!variable!remote_user!#]. The error (if any) was: [#!variable!error!#] and the output (if any) was: [#!variable!output!#].
Failed to create the directory: [#!variable!directory!#]. The error (if any) was: [#!variable!error!#].
Failed to copy the file: [#!variable!source_file!#] to: [#!variable!target_file!#] on the target: [#!variable!target!#] as: [#!variable!remote_user!#]. The error (if any) was: [#!variable!error!#] and the output (if any) was: [#!variable!output!#].
#!variable!method!#() was asked to copy: [#!variable!source_file!#] to: [#!variable!target_file!#], but the target's parent directory doesn't exist and we were unable to create it.]]>
encrypt_password() tried to use the algorithm: [#!variable!algorithm!#], which is not recognized. Only 'sha256', 'sha384' and 'sha512' are currently supported. The desired algorithm can be set via 'sys::password::algorithm'.]]>
The IP hash key: [#!variable!ip_key!#] does not exist, skipping it.
No cookies were read, the user is not logged in.
The user's UUID: [#!variable!uuid!#] was read, but it didn't match any known users.
The user has been logged out.
The user hash in the user's cookie is valid.
The user hash in the user's cookie was valid yesterday, updating the stored hash and allowing the user to proceed.
The user hash in the user's cookie is invalid. It is probably expired.
The user: [#!variable!user!#] logged in successfully.
There was a failed login attempt from: [#!variable!user_agent!#], trying to log in as: [#!variable!user!#]. Login rejected.
]]>
]]>
Host UUID cache file: [#!data!path::data::host_uuid!#] doesn't exist and we're not running as root, so we can't read dmidecode. Unable to proceed.
Database archive check skipped, not running as root.
Database archiving is disabled, skipping archive checks.
Peer: [#!variable!peer!#], database: [#!variable!name!#], password: [#!variable!password!#], host UUID: [#!variable!uuid!#]
Connecting only to: [#!variable!db_uuid!#], skipping: [#!variable!uuid!#].
The connection to the database: [#!variable!server!#] has failed. Will attempt to reconnect.
Switching the default database handle to use the database: [#!variable!server!#] prior to reconnect attempt.
Switching the default read database to: [#!variable!server!#] prior to reconnect attempt.
Ready to try to reconnect to: [#!variable!server!#], but delaying for: [#!variable!delay!#] seconds to give the database a chance to come back online in case this is a transient issue.
The reboot flag was set. Rebooting NOW!
maintenance_mode() was passed an invalid 'set' value: [#!variable!set!#]. No action taken.]]>
The user: [#!variable!user!#] logged out successfully.
A system reboot has been requested via the Striker UI.
A system power-off has been requested via the Striker UI.
Unable to connect to any database. Will try to initialize the local system and then try again.
Failed to connect to any databases. Skipping the loop of the daemon.
Disconnected from all databases. Will reconnect when entering the main loop.
Starting the background process: [#!variable!call!#] now.
Background process: [#!variable!call!#] running with PID: [#!variable!pid!#].
parse_banged_string(), while processing: [#!variable!message!#], found that a variable name was missing.]]>
update_progress() called without 'job_uuid' being set, and 'jobs::job_uuid' was also not set. Unable to find the job to update.]]>
update_progress() called with the 'job_uuid': [#!variable!job_uuid!#], which was not found. Unable to find the job to update.]]>
update_progress() called with 'progress' set to an invalid value: [#!variable!progress!#]. This must be a whole number between '0' and '100' (fractions not allowed).]]>
find_matching_ip(), but it failed to resolve to an IP address.]]>
We've been asked to have the new peer add us. We will now wait for the peer to show up in the 'hosts' table and then request the job for it to add us.
The peer: [#!variable!peer_uuid!#] is not yet in 'hosts', continuing to wait.
The peer: [#!variable!peer_name!#] is now in 'hosts', proceeding.
Logging the user: [#!data!sys::users::user_name!#] out.
The #!variable!uuid_name!#: [#!variable!uuid!#] was passed in, but no record with that UUID was found in the database.
The variable with variable_uuid: [#!variable!variable_uuid!#], variable_source_table: [#!variable!variable_source_table!#] and variable_source_uuid: [#!variable!variable_source_uuid!#] was not found in the database, so unable to update.
The variable: [#!variable!name!#] was expected to be an array reference, but it wasn't. It contained (if anything): [#!variable!value!#].
The table: [#!variable!table!#] (and possibly others) in the database on: [#!variable!host!#] (UUID: [#!variable!uuid!#]) is missing: [#!variable!missing!#] row(s). A database resync will be requested.
insert_or_update_jobs() was called with 'update_progress_only' but without a 'job_uuid' being set.]]>
Writing: [#!variable!to_write!#] record(s) to resync the table: [#!variable!table!#] in database on: [#!variable!host_name!#].
The connection to the database on: [#!variable!host!#] isn't established, trying again...
The connection to the database on: [#!variable!host!#] has been successfully established.
The system has only been running for: [#!variable!uptime!#] seconds. To minimize the impact of a bug causing a rapid reboot cycle, the request to: [#!variable!task!#] will be paused until the system has been running for at least ten minutes. We will proceed in: [#!variable!difference!#] seconds (at #!variable!say_time!#).
power off
reboot
Delay complete, proceeding with the #!variable!task!# operation now.
Failed to read the file: [#!variable!file!#]. It might not exist, so we will try to write it now.
The body of the file: [#!variable!file!#] does not match the new body. The file will be updated.
The body of the file: [#!variable!file!#] does not match the new body. The file will be updated. The changes are:
==========
#!variable!diff!#
==========
The file: [#!variable!file!#] is already the same as the passed in body, so no update is needed.
The file: [#!variable!file!#] will now be updated.
There was a problem updating file: [#!variable!file!#], expected the write to return '0' but got: [#!variable!return!#]. Please check the logs for details.
Failed to backup the file: [#!variable!source!#] to: [#!variable!destination!#]. Details may be found in the logs above.
Refreshing the RPM repository has been disabled in [#!data!path::configs::anvil.conf!#] ('install-manifest::refresh-packages' is set). Not refreshing.
Skipping the RPM repository refresh. The next scheduled refresh will be done in: [#!variable!next_refresh!#] second(s). Override with '--force'.
RPM repository refresh required, [#!data!path::directories::packages!#] doesn't exist (likely this is the first run or the directory was deleted).
RPM repository refresh required, it has been more than: [#!variable!seconds!#] seconds since the last refresh (or no previous refresh was logged).
'Install Target' job: [#!variable!job-uuid!#] picked up.
'Install Target' job: [#!variable!job-uuid!#] aborted, system not yet configured.
Package list loaded.
It looks like a user tried to upload a file without actually doing so.
[ Error ] - Failed to delete the file: [#!variable!file!#].
[ Warning ] - None of the databases are accessible. ScanCore will try to connect once a minute until a database is accessible.
[ Cleared ] - We now have databases accessible, proceeding.
[ Warning ] - The local system is not yet configured. ScanCore will check once a minute and start running once configured.
[ Cleared ] - The local system is now configured, proceeding.
ScanCore is entering the main loop now.
----=] ScanCore loop finished after: [#!variable!runtime!#]. Sleeping for: [#!variable!run_interval!#] seconds. ]=--------------------------------------
The md5sum of: [#!variable!file!#] has changed since the daemon started.
* [#!variable!old_sum!#] -> [#!variable!new_sum!#]
Reading the scan agent: [#!variable!agent_name!#]'s words file: [#!variable!file!#].
Running the scan agent: [#!variable!agent_name!#] with a timeout of: [#!variable!timeout!#] seconds now...
The database user is not 'admin'. Changing table and function ownerships to: [#!variable!database_user!#].
[ Warning ] - The Storage->make_directory() method failed to create the directory: [#!variable!directory!#].
[ Note ] - Created the directory: [#!variable!directory!#].
[ Note ] - Downloaded: [#!variable!file!#] (#!variable!human_readable_size!# / #!variable!size_in_bytes!# bytes).
[ Warning ] - It appears that we failed to download and save: [#!variable!file!#].
[ Warning ] - It appears that we failed to download and save: [#!variable!file!#]. The output file has no size, and will be removed.
Starting download of file: [#!variable!file!#].
Finished Downloading: [#!variable!file!#].
- md5sum: ...... [#!variable!md5sum!#].
- Size: ........ [#!variable!size_human!# (#!variable!size_bytes!# bytes)].
- Took: ........ [#!variable!took!#] seconds.
- Download rate: [#!variable!rate!#]
#!variable!file!# was called, but no files were available for download in CGI. Was the variable name 'upload_file' used?
[ Error ] - Storage->scan_directory() was asked to scan: [#!variable!directory!#], but it doesn't exist or isn't actually a directory.
Now deleting the file: [#!variable!file!#].
Checking: [#!data!path::directories::shared::incoming!#] for new files.
About to calculate the md5sum for the file: [#!variable!file!#].
This file is large ([#!variable!size!#]); this might take a bit of time...
Failed to move the file: [#!variable!source_file!#] to: [#!variable!target_file!#] on the target: [#!variable!target!#] as: [#!variable!remote_user!#]. The error (if any) was: [#!variable!error!#] and the output (if any) was: [#!variable!output!#].
The file: [#!variable!file!#] has been added to the database (if needed) and moved to: [#!variable!target!#].
The file: [#!variable!file!#] should exist, but doesn't. We will try to find it now.
The user: [#!variable!user!#] doesn't appear to have an SSH key yet. Will create it now. This could take some time, depending on how long it takes to collect entropy. If this appears to not be responding, move the mouse or do other things to generate activity on the host.
The user: [#!variable!user!#]'s SSH key has now been generated. The output is below:
====
#!variable!output!#
====
The user: [#!variable!user!#] doesn't appear to have a base SSH directory. Will now create: [#!variable!directory!#].
The user: [#!variable!user!#]'s: [#!variable!file!#] file needs to be updated.
The fingerprint of: [#!variable!machine!#] has changed! Updating its entry in known hosts.
- From: [#!variable!old_key!#]
- To: . [#!variable!new_key!#]
Gathering data for: [#!variable!file!#]:
Found the missing file: [#!variable!file!#] on: [#!variable!host_name!# (#!variable!ip!#)]. Downloading it now...
Downloaded the file: [#!variable!file!#]. Generating md5sum from local copy now...
The md5sum of file: [#!variable!file!#] matches what we expected!
The md5sum of file: [#!variable!file!#] failed to match. Discarding the downloaded file.
Failed to download: [#!variable!file!#] from: [#!variable!host_name!# (#!variable!ip!#)]. Will look on other hosts (if any left).
The file: [#!variable!file!#] on: [#!variable!host_name!# (#!variable!ip!#)] doesn't match the file we're looking for.
- Wanted; size: [#!variable!say_file_size!# (#!variable!file_size!# bytes)]
- Found; size: [#!variable!say_remote_size!# (#!variable!remote_size!# bytes)]
We will keep looking.
Already searched: [#!variable!host_name!#] using another IP address, skipping this IP: [#!variable!ip!#].
Done.
[ Error ] - Failed to remove the file: [#!variable!file!#]! Please check the permissions or for SELinux denials.
The file: [#!variable!file!#] is marked as not synced to this Anvil!, removing it now.
[ Error ] - The URL: [#!variable!url!#] to download appears to be invalid.
[ Error ] - The requested URL: [#!variable!url!#] was not found on the remote server.
[ Error ] - The requested URL: [#!variable!url!#] does not resolve to a known domain.
[ Error ] - The requested URL: [#!variable!url!#] failed because the remote host refused the connection.
[ Error ] - The requested URL: [#!variable!url!#] failed because there is no route to that host.
[ Error ] - The requested URL: [#!variable!url!#] failed because the network is unreachable.
[ Error ] - The requested URL: [#!variable!url!#] failed for an unknown reason.
time() was passed the 'time' of: [#!variable!time!#] which does not appear to be a whole number.]]>
call() was passed the 'timeout' of: [#!variable!timeout!#] which does not appear to be a whole number.]]>
We have a connection open already to: [#!variable!connection!#], skipping connect stage.
The file: [#!variable!file!#] has been successfully downloaded.
ocf:alteeve:server invoked
We were asked to promote: [#!variable!server!#], which makes no sense and is not supported. Ignoring.
We were asked to demote: [#!variable!server!#], which makes no sense and is not supported. Ignoring.
We were asked to notify, but this is not a promotable (we're stateless) agent. Ignoring.
We were invoked with an unexpected (or no) command. Environment variables and arguments have been logged.
We've been asked to start the server: [#!variable!server!#].
It appears that the call to list the currently running servers returned a non-zero return code: [#!variable!return_code!#]. We will proceed as we may be able to fix this. The output, if any, was: [#!variable!output!#].
Sanity checks passed, ready to start: [#!variable!server!#].
The server: [#!variable!server!#] is already on this node in the state: [#!variable!state!#], aborting the start request.
All tests passed, yet the attempt to boot the server: [#!variable!server!#] exited with a non-zero return code: [#!variable!return_code!#]. The server is in an unknown state, so exiting with a fatal error. Human intervention is now required. The output, if any, was: [#!variable!output!#].
It appears that the call to boot the server: [#!variable!server!#] worked, but the call to list running servers exited with a non-zero return code: [#!variable!return_code!#]. The server is in an unknown state, so exiting with a fatal error. Human intervention is now required. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] has started successfully.
The server: [#!variable!server!#] should have been started, but its state is: [#!variable!state!#]. Human intervention is required!
The server: [#!variable!server!#] should have been started, but it wasn't found in the list of running servers.
The attempt to list the running servers returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] has been asked to shut down. If it is actually running, we will ask it to shut down now.
The server: [#!variable!server!#] is paused. Resuming it now so that it can react to the shutdown request.
The attempt to resume the server: [#!variable!server!#] returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
Pausing for a moment to give the server time to resume.
The server: [#!variable!server!#] is asleep. Waking it now so that it can react to the shutdown request.
The attempt to wake the server: [#!variable!server!#] returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
Pausing for half a minute to give the server time to wake up.
The server: [#!variable!server!#] is already shutting down. We'll monitor it until it actually shuts off.
The server: [#!variable!server!#] is already off.
The server: [#!variable!server!#] is hung. Its state is: [#!variable!state!#]. We will force it off now.
The attempt to force-off the server: [#!variable!server!#] returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] is not running on this machine.
The server: [#!variable!server!#] is running, but it is in an unexpected state: [#!variable!state!#]. Human intervention is required!
The server: [#!variable!server!#] was not listed on this node, so it is not running here.
Asking the server: [#!variable!server!#] to shut down now. Please be patient.
The attempt to shut down the server: [#!variable!server!#] returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] is no longer listed. It is now off.
The server: [#!variable!server!#] is not off yet, waiting a few seconds and then we'll check again.
The environment variable 'OCF_RESKEY_CRM_meta_timeout' was not set, so setting it to: [#!variable!timeout!#].
The 'virsh' call exited with the return code: [#!variable!return_code!#]. The 'libvirtd' service may have failed to start. We won't wait any longer.
The 'virsh' call exited with the return code: [#!variable!return_code!#]. The 'libvirtd' service might be starting, so we will check again shortly.
It would appear that libvirtd is not operating (or not operating correctly). Expected the return code '0' but got: [#!variable!return_code!#].
Output of: [#!variable!command!#] was;
==========
#!variable!output!#
==========
The server: [#!variable!server!#] is: [#!variable!state!#], which is OK.
The server: [#!variable!server!#] is: [#!variable!state!#].
The server: [#!variable!server!#] is in a bad state: [#!variable!state!#]!
The server: [#!variable!server!#] is in an unexpected state: [#!variable!state!#]!
The server: [#!variable!server!#] is not running on this node.
We're pushing the server: [#!variable!server!#] to: [#!variable!target!#].
It appears that the call to check if the server: [#!variable!server!#] is on this node returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The server: [#!variable!server!#] state is: [#!variable!state!#]. A server must be 'running' in order to migrate it.
The server: [#!variable!server!#] wasn't found on this machine.
Verifying that the server: [#!variable!server!#] was successfully migrated here.
While verifying that the server: [#!variable!server!#] migrated here, the attempt to list servers running here returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The migration of the server: [#!variable!server!#] to here was successful!
It looks like we were called to verify that the server: [#!variable!server!#] migrated here, but it isn't here yet. We'll proceed with an attempt to pull the server over.
We're pulling the server: [#!variable!server!#] from: [#!variable!target!#].
Temporarily enabling dual primary for the resource: [#!variable!resource!#] to the node: [#!variable!target_name!# (#!variable!target_node_id!#)].
The attempt to enable dual-primary for the resource: [#!variable!resource!#] to the node: [#!variable!target_name!# (#!variable!target_node_id!#)] returned a non-zero return code [#!variable!return_code!#]. The returned output (if any) was: [#!variable!output!#].
The migration of: [#!variable!server!#] to the node: [#!variable!target!#] will now begin.
The attempt to migrate the server: [#!variable!server!#] to the node: [#!variable!target!#] returned a non-zero return code [#!variable!return_code!#]. The returned output (if any) was: [#!variable!output!#].
The migration was successfully completed in: [#!variable!migration_time!#].
Re-disabling dual primary by restoring config file settings.
The attempt to reset DRBD to config file settings returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
Failure, exiting with '1'.
It appears that the call to list the running servers on the migration target: [#!variable!target!#] returned a non-zero return code: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
The migration of the server: [#!variable!server!#] to: [#!variable!target!#] was a success!
Success, exiting with '0'.
Running validation tests...
- Server definition was read.
- Server name is valid.
- Emulator is valid.
- Sufficient RAM is available.
- Network bridge(s) are available.
- Storage is valid and ready.
The bridge: [#!variable!bridge!#] is available for this server.
The server wants to connect to the bridge: [#!variable!bridge!#] which we do not have on this node.
The attempt to read the DRBD configuration returned a non-zero code: [#!variable!return_code!#]. The returned output (if any) was: [#!variable!drbd_body!#].
Recording the local connection details for the resource: [#!variable!resource!#] -> [#!variable!address!#:#!variable!port!#].
Recording the peer's connection details for the resource: [#!variable!resource!#] -> [#!variable!address!#:#!variable!port!#].
Checking that the DRBD device: [#!variable!device_path!#] is ready.
The server wants to use: [#!variable!drbd_device!#] as a hard drive, but we couldn't find the backing logical volume: [#!variable!lv_path!#] on this node.
The server wants to use: [#!variable!drbd_device!#] as a hard drive, but the backing logical volume: [#!variable!lv_path!#] is inactive and an attempt to activate it failed.
The server wants to use: [#!variable!drbd_device!#] as a hard drive, which is backed by the logical volume: [#!variable!lv_path!#]. It is ready to use.
The attempt to read the DRBD status returned a non-zero code: [#!variable!return_code!#]. The returned output (if any) was: [#!variable!status_json!#].
The DRBD resource for this server is not running yet.
Bringing up the resource: [#!variable!resource!#] for the server's: [#!variable!device_path!#] disk.
The attempt to start the DRBD resource: [#!variable!resource!#] returned a non-zero code: [#!variable!return_code!#]. The returned output (if any) was: [#!variable!output!#].
Pausing briefly to give the resources time to start.
The attempt to read the DRBD status after bringing up the resource(s) for this server returned a non-zero code: [#!variable!return_code!#]. The returned output (if any) was: [#!variable!status_json!#].
The attempt to read the DRBD status after bringing up the resource(s) appears to have failed.
The DRBD resource: [#!variable!resource!#] backing the device: [#!variable!device_path!#] was not seen in the 'drbdsetup' status data. Attempting to bring it up now.
Checking the DRBD status again.
The DRBD resource: [#!variable!resource!#] backing the device: [#!variable!device_path!#] was not able to start.
Checking that the peer's DRBD resources are Connected and UpToDate prior to migration.
The local replicated disk: [#!variable!device_path!#] is used by this server. Checking it out now.
The DRBD resource: [#!variable!resource!#] volume: [#!variable!volume!#] local disk state is: [#!variable!disk_state!#]. Unsafe to run the server unless the local disk state is UpToDate.
The DRBD resource: [#!variable!resource!#] volume: [#!variable!volume!#] local disk state is: [#!variable!disk_state!#], good.
Checking connection to: [#!variable!name!#].
The DRBD resource: [#!variable!resource!#] on the peer: [#!variable!name!#] is 'Primary'. Refusing to boot.
peer_short_name: [#!variable!peer_short_name!#], migration_target: [#!variable!migration_target!#].
Ignoring the connection to: [#!variable!peer_short_name!#], it isn't the migration target.
The DRBD resource: [#!variable!resource!#] on the peer: [#!variable!name!#] is not UpToDate (or SyncSource). Refusing to migrate.
Ignoring the local replicated disk: [#!variable!device_path!#], it is not used by this server.
Checking that the optical disc image: [#!variable!file!#] exists.
The server has the ISO: [#!variable!file!#] mounted in its optical drive, but that file doesn't exist on this system.
The server has the ISO: [#!variable!file!#] mounted in its optical drive, which we have, but we can't read it. Check permissions and for SELinux denials.
The server has the ISO: [#!variable!file!#] mounted in its optical drive, which we have.
The server wants to use the emulator: [#!variable!emulator!#] which doesn't exist on this node. Was this server migrated from a different generation #!string!brand_0002!# system? Please update '...' in the server's definition file: [#!variable!definition_file!#].
The server wants to use the emulator: [#!variable!emulator!#] which exists, but we can't run. Please check permissions and for SELinux denials.
The configured server name: [#!variable!server!#] does not match the name of the server in the definition file: [#!variable!name!#]!
The server: [#!variable!name!#] needs: [#!variable!ram!# (#!variable!ram_bytes!# bytes)] of RAM, but only: [#!variable!available_ram!# (#!variable!available_ram_bytes!# bytes)] are available!
The definition file: [#!variable!definition_file!#] for the server: [#!variable!server!#] does not exist here!
The definition file: [#!variable!definition_file!#] for the server: [#!variable!server!#] can not be read!
The server's disk: [#!variable!device_path!#] is part of the resource: [#!variable!resource!#] which was already started.
The server: [#!variable!server!#] no longer needs the DRBD resource: [#!variable!resource!#]. Taking it down on peer: [#!variable!peer!#] (via IP: #!variable!peer_ip!#) and then taking it down locally.
The server's disk: [#!variable!device_path!#] is part of the resource: [#!variable!resource!#] which was already taken down.
The DRBD resource: [#!variable!resource!#] local role is: [#!variable!role!#]. Promoting to primary now.
Failed to promote the DRBD resource: [#!variable!resource!#] to primary. Expected a zero return code but got: [#!variable!return_code!#]. The output, if any, is below:
====
#!variable!output!#
====
The server: [#!variable!server!#] is already on this node in the state: [#!variable!state!#], aborting the migration request.
The logical volume: [#!variable!lv_path!#] is inactive. Attempting to activate it now.
The DRBD device: [#!variable!drbd_device!#] wasn't found in any DRBD resources on this machine.
- Seeing if the server: [#!variable!server!#] is running already.
The server: [#!variable!server!#] is already running. Exiting successfully.
The server: [#!variable!server!#] is already running on: [#!variable!host!#]. This appears to be a DR host, which is outside pacemaker. Exiting with OCF_ERR_CONFIGURED (6) to prevent pacemaker from trying to start the server on the other node.
The server: [#!variable!server!#] is already running on: [#!variable!host!#]. This appears to be our peer. Exiting with OCF_ERR_INSTALLED (5) to tell pacemaker to try to start it on the other node.
The server: [#!variable!server!#] needs the DRBD resource: [#!variable!resource!#]. Bringing it up locally and on the peer: [#!variable!peer!#] (via IP: #!variable!peer_ip!#).
DRBD's 'auto-promote' is disabled. Promoting the resource: [#!variable!resource!#].
The server: [#!variable!server!#] is now running on the host: [#!variable!host!#].
The request to shut down the server: [#!variable!server!#] was given the wait period of: [#!variable!wait!#], which is not a valid number of seconds.
The server: [#!variable!server!#] is already off.
The server: [#!variable!server!#] will now be forced off!
The server: [#!variable!server!#] will now be gracefully shut down.
The server: [#!variable!server!#] is now off.
[ Warning ] - The server: [#!variable!server!#] is not yet off after: [#!variable!wait!#] seconds. Giving up waiting.
[ Error ] - The server: [#!variable!server!#] can't be migrated to: [#!variable!target!#] because the resource: [#!variable!resource!#] isn't connected. The current connection state is: [#!variable!connection_state!#].
[ Error ] - The server: [#!variable!server!#] can't be migrated to: [#!variable!target!#] because we can't reach it at all right now.
The migration of the server: [#!variable!server!#] over to: [#!variable!target!#] isn't needed, it's already running on the target. Exiting successfully.
All DRBD resources appear to be up, skipping individual DRBD resource startup.
archive_database() was not passed an array reference of tables to archive. Please pass an array reference using the 'tables' parameter.]]>
The 'smaps' proc file for the process ID: [#!variable!pid!#] was not found. Did the program just close?
- The DRBD resource: [#!variable!resource!#] is in the role: [#!variable!role!#] already, no need to bring it up.
Program: [#!variable!program!#] running as the real user: [#!variable!real_user!# (#!variable!real_uid!#)] and effective user: [#!variable!effective_user!# (#!variable!effective_uid!#)].
The setuid c-wrapper: [#!variable!wrapper!#] already exists, no need to create it.
The anvil version cache file: [#!variable!file!#] for: [#!variable!target!#] needs to be created/updated.
No databases available yet, continuing to wait.
The variable: [#!variable!name!#] is an array reference, but it doesn't have any entries in it.
The variable: [#!variable!name!#] was expected to be a positive integer, but: [#!variable!value!#] was received.
The domain: [#!variable!name!#] does not appear to be a valid domain name or an IPv4 address. Skipping it.
The bridge output wasn't in JSON format. Received: [#!variable!output!#].
[ Warning ] - Parsed the IP: [#!variable!ip!#] and MAC: [#!variable!mac!#], but something seems wrong. The section in question was:
====
#!variable!section!#
====
Found the network device: [#!variable!mac!#] (owned by #!variable!company!#) using the IP address: [#!variable!ip!#].
About to download: [#!variable!url!#] and save it to: [#!variable!file!#].
Ready to parse: [#!variable!file!#].
Parsed: [#!variable!records!#], adding/updating them to the database now.
Skipping the network scan. The next scheduled scan will be done in: [#!variable!next_scan!#]. Override with '--force'.
Checking to see if any data needs to be archived.
Skipping archiving, not a Striker dashboard.
Archiving: [#!variable!records!#] over: [#!variable!loops!#] segments from the table: [#!variable!table!#] from the database on: [#!variable!host!#]. This might take a bit, please be patient.
Writing: [#!variable!records!#] to the file: [#!variable!file!#].
The file to be compressed: [#!variable!file!#] has a current size of: [#!variable!size!#]. Please be patient, this can take a bit of time.
The compressed file: [#!variable!file!#] is now: [#!variable!size!#], a reduction of: [#!variable!difference!#]. The compression took: [#!variable!took!#].
Removing archived records.
Vacuuming the database to purge the removed records.
Skipping the table: [#!variable!table!#], it is excluded from archiving.
Queuing up to run: [#!variable!uuid!#]:[#!variable!query!#]
About to delete the network interface: [#!variable!interface!#]
About to take the network interface: [#!variable!interface!#] down
Requesting network manager reload config files.
About to bring up the network interface: [#!variable!interface!#]
About to rename the network interface: [#!variable!old_interface!#] to: [#!variable!new_interface!#]
Disconnected from all databases and closing all open SSH sessions. Will reconnect after the network configuration changes have taken effect.
Network reconfiguration is complete!
Skipping the OUI parse. The next scheduled parse will be done in: [#!variable!next_parse!#]. Override with '--force'.
The rpm: [#!variable!rpm_path!#] appears to be a problem, removing it.
The network mapping flag has aged out, clearing it.
The network mapping flag is set. If it isn't cleared by the user, it will expire in: [#!variable!timeout!#] second(s).
The unified fences metadata file: [#!data!path::data::fences_unified_metadata!#] doesn't exist yet. It will be created now.
The unified fences metadata file: [#!data!path::data::fences_unified_metadata!#] will be refreshed on user request (--refresh passed).
The unified fences metadata file: [#!data!path::data::fences_unified_metadata!#] is old and will now be refreshed.
This is a CentOS machine, moving the directory: [#!variable!source!#] to: [#!variable!target!#].
The database on: [#!variable!host!#] (UUID: [#!variable!uuid!#]) has been forced to resync via '--resync-db'.
It looks like you connected to the same database twice! The conflicting databases are:
- [#!variable!db1!#]
- [#!variable!db2!#].
The databases both report the same identifier (as reported by: #!variable!query!#).
If the targets are unique, did you copy the full database directory? A unique identifier is generated when 'initdb' is run, and exists on disk. Exiting.
The 'libvirtd' daemon isn't running. Will check for and remove virsh networks set to start on boot.
Removing the symlink: [#!variable!symlink!#].
Updating the cache state file.
[ Note ] - The host: [#!variable!host!#] entry in /etc/hosts has changed IP from: [#!variable!old_ip!#] to: [#!variable!new_ip!#].
Starting the daemon: [#!variable!daemon!#] locally.
Verifying that the daemon: [#!variable!daemon!#] has started.
Waiting for the daemon: [#!variable!daemon!#] to start...
The daemon: [#!variable!daemon!#] was already running locally, no need to start.
Starting the daemon: [#!variable!daemon!#] on: [#!variable!host!#].
Verifying that the daemon: [#!variable!daemon!#] has started on: [#!variable!host!#].
Waiting for the daemon: [#!variable!daemon!#] to start on: [#!variable!host!#]...
The daemon: [#!variable!daemon!#] was already running on: [#!variable!host!#], no need to start.
There are no servers running on either node, stopping daemons.
There are no servers running locally and the peer is not in the cluster, stopping daemons.
The daemon: [#!variable!daemon!#] is already stopped locally, nothing to do.
Stopping the daemon: [#!variable!daemon!#] locally.
The daemon: [#!variable!daemon!#] is already stopped on: [#!variable!host!#], nothing to do.
Stopping the daemon: [#!variable!daemon!#] on: [#!variable!host!#].
One or more servers are still running on the Anvil!, not stopping daemons.
About to remove the old host type file: [#!variable!file!#].
This machine is not in an #!string!brand_0002!#, not configuring IPMI.
This machine does not appear to have an IPMI BMC (no BMC reported by 'dmidecode'). Not attempting to configure IPMI.
This machine appears to have an IPMI BMC, but the LAN channel (used to configure the BMC's network) wasn't found. Channels 0 to 9 were checked.
Configuring the local IPMI is dependent on knowing what #!string!brand_0002!# this host is a member of. This involves looking for a 'job' for this host to be run by 'anvil-join-anvil' (used to determine the IPMI password to set and to know which machine we are in the #!string!brand_0002!#). No job was found, so unable to configure IPMI at this time.
The IPMI BMC is configured to be set to: [#!variable!ip_address!#], but this doesn't match any of the networks in the install manifest with the UUID: [#!variable!manifest_uuid!#].
The IPMI BMC was set to DHCP, changing to static.
The IPMI BMC currently has the IP address: [#!variable!old!#], changing it to: [#!variable!new!#].
The IPMI BMC currently has the subnet mask of: [#!variable!old!#], changing it to: [#!variable!new!#].
The IPMI BMC currently has the default gateway of: [#!variable!old!#], changing it to: [#!variable!new!#].
The IPMI BMC administrator (oem) user was not found. The output (if any) of the call: [#!variable!shell_call!#] was:
====
#!variable!output!#
====
This host's manufacturer is: [#!variable!manufacturer!#]. For the changes to take effect, the BMC will be reset now.
Successfully pinged: [#!variable!ip_address!#].
Timed out waiting to ping: [#!variable!ip_address!#]! Configuration will proceed, as we may simply be unable to ping our own BMC, but the fence test may fail.
The password for the IPMI BMC works, no need to update it.
The password for the IPMI BMC works, no need to update it. Note that we had to use another machine to confirm, it looks like we can't talk to our own BMC using the IP address.
The password for the IPMI BMC appears to have been successfully updated. Will test to confirm.
The password for the IPMI BMC appears to have been successfully updated, though we had to reduce it to 20 bytes in length. Will test to confirm.
The password for the IPMI BMC appears to have been successfully updated, though we had to reduce it to 16 bytes in length. Will test to confirm.
Waiting: [#!variable!reset_delay!#] seconds to give the BMC time to reset...
The file: [#!variable!file!#] needs to be updated. The difference is:
====
#!variable!diff!#
====
Appending the file: [#!variable!file!#] with the line: [#!variable!line!#].
Attempting to parse bridge information using standard output after failing to parse JSON status information.
The server: [#!variable!server!#] is indeed running. It will be shut down now.
Checking the status of the server: [#!variable!server!#].
The 'libvirtd' daemon is not running. It may be starting up, will wait: [#!variable!wait_time!#] seconds...
Found the server to be running using its PID. The state of the server can't be determined, however. Please start the 'libvirtd' daemon!
No PID for the server was found. It is not running on this host.
The server: [#!variable!server_name!#] is shutting down. Will wait for it to finish...
The server: [#!variable!server_name!#] is off.
The server: [#!variable!server_name!#] is running (state is: [#!variable!state!#]).
We've been asked to migrate the server: [#!variable!server!#] to: [#!variable!target_host!#].
Checking server state after: [#!variable!server!#] was migrated to this host.
Updating the postfix relay password file: [#!data!path::configs::postfix_relay_password!#].
Generating the binary hash of the postfix relay password file: [#!data!path::configs::postfix_relay_password!#].
This looks like the initial configuration of the postfix main configuration file. Injecting the relay password file settings now.
Injecting the configuration line: [#!variable!line!#].
Updating the configuration line from: [#!variable!old_line!#] to: [#!variable!new_line!#].
Starting and enabling the daemon: [#!variable!daemon!#].
Creating the Anvil! alert email spool directory: [#!data!path::directories::alert_emails!#].
Connected to the database named: [#!variable!name!#] as: [#!variable!user!#@#!variable!host!#:#!variable!port!#].
This IS the database that queries are read from.
This is NOT the database that queries are read from.
This host UUID is: [#!variable!uuid!#] and the database identifier is: [#!variable!identifier!#].
Writing out alert email to: [#!variable!file!#].
Sending email to: [#!variable!to!#].
I was asked to process alerts, but there are no configured email servers. No sense proceeding.
The table: [#!variable!table!#] already exists in the database on the host: [#!variable!host!#], no need to load the schema.
The table: [#!variable!table!#] does NOT exist in the database on the host: [#!variable!host!#]. Will load the schema file: [#!variable!file!#] now.
The passed in 'temperature_state' value: [#!variable!temperature_state!#] is invalid. The value must be 'ok', 'warning' or 'critical'.
The passed in 'temperature_is' value: [#!variable!temperature_is!#] is invalid. The value must be 'nominal', 'warning' or 'critical'.
The server: [#!variable!server!#] is already running, no need to boot it.
The server: [#!variable!server!#] is already running on the target node: [#!variable!requested_node!#], migration not needed.
Waiting for the server: [#!variable!server!#] to finish migrating to the node: [#!variable!requested_node!#]...
The migration of the server: [#!variable!server!#] to the node: [#!variable!requested_node!#] is complete!
Waiting for the server: [#!variable!server!#] to boot...
The server: [#!variable!server!#] has booted on: [#!variable!host_name!#]!
Waiting for the server: [#!variable!server!#] to shut down...
The server: [#!variable!server!#] is now off.
The server: [#!variable!server!#] (#!variable!server_uuid!#) has a definition change:
====
#!variable!difference!#
====
- Scan agent: [#!variable!agent_name!#] exited after: [#!variable!runtime!#] seconds with the return code: [#!variable!return_code!#].
I'm not on the same network as: [#!variable!host_name!#]. Unable to check the power state.
The host: [#!variable!host_name!#] appears to be off, but there's no IPMI information, so unable to check the power state or power on the machine.
The host: [#!variable!host_name!#] has no IPMI information. Wouldn't be able to boot it, even if it's off, so skipping it.
The host: [#!variable!host_name!#] will be checked to see if it needs to be booted or not.
The host: [#!variable!host_name!#] is up, no need to check if it needs booting.
The host: [#!variable!host_name!#] couldn't be reached directly, but IPMI reports that it is up. Could the IPMI BMC be hung or unplugged?
The host: [#!variable!host_name!#] is off. Will check now if it should be booted.
The host: [#!variable!host_name!#] has no stop reason, so we'll check to see if we should power it on, in case it lost power or overheated without warning.
The host: [#!variable!host_name!#] was stopped by the user, so we'll leave it off.
The host: [#!variable!host_name!#] was powered off because of power loss. Checking to see if it is now safe to restart it.
The host: [#!variable!host_name!#] was powered off because of thermal issues. Checking to see if it is now safe to restart it.
Unable to find an install manifest for the Anvil! [#!variable!anvil_name!#]. As such, unable to determine what UPSes power the machine: [#!variable!host_name!#]. Unable to determine if the power feeding this node is OK or not.
Unable to parse the install manifest uuid: [#!variable!manifest_uuid!#] for the Anvil! [#!variable!anvil_name!#]. As such, unable to determine what UPSes power the machine: [#!variable!host_name!#]. Unable to determine if the power feeding this node is OK or not.
The UPS referenced by the 'power_uuid': [#!variable!power_uuid!#] under the host: [#!variable!host_name!#] has no record of being on mains power, so we can't determine how long it's been on batteries. Setting the "shortest time on batteries" to zero seconds.
Marking the host as 'online' and clearing the host's stop reason.
There appears to be a problem translating the 'fence_ipmilan' into a workable 'ipmitool' command for the host: [#!variable!host_name!#]. Unable to check the thermal data of the host.
The host: [#!variable!host_name!#] was powered off because of power loss. Power is back and the UPSes are sufficiently charged. Booting it back up now.
The host: [#!variable!host_name!#] was powered off for thermal reasons. All available thermal sensors read as OK now. Booting it back up now.
The file: [#!variable!file_path!#] isn't on (or isn't the right size on) Striker: [#!variable!host_name!#]. Not using it to pull from.
The job: [#!variable!job_uuid!#] was assigned to our Anvil! and this is the primary node. Assigning the job to this machine.
I was about to start: [#!variable!command!#], but I last tried to run this: [#!variable!last_start!#] seconds ago. We'll wait at least a minute before we try to run it again.
The LV(s) behind the resource: [#!variable!resource!#] have had their DRBD metadata created successfully.
The LV(s) behind the resource: [#!variable!resource!#] have been forced to primary to initialize the resource.
Asked to validate that the server: [#!variable!server!#] is able to run.
We've been asked to stop the server: [#!variable!server!#].
The server: [#!variable!server_name!#] is already off.
The request to stop: [#!variable!server_name!#] has been sent. We'll now check periodically waiting for it to stop.
The server: [#!variable!server_name!#]'s current status is: [#!variable!status!#].
The server: [#!variable!server_name!#] is now off.
The server: [#!variable!server_name!#] has been removed from Pacemaker.
We're required by at least one peer, so we'll wait a bit and check to see if they still need us before we proceed with the deletion.
Deleting the file: [#!variable!file!#].
Wiping the metadata from the DRBD resource: [#!variable!resource!#].
Wiping any file system signatures and then deleting the logical volume: [#!variable!device_path!#].
The resource name: [#!variable!resource_name!#] was found, returning the first TCP port and minor number.
The job: [#!variable!command!#] with UUID: [#!variable!job_uuid!#] is a start-time job, not running it now.
The lvm.conf already has the filter: [#!variable!filter!#], will not change it.
Updated the lvm.conf file to add the filter: [#!variable!filter!#] to prevent LVM from seeing the DRBD devices as LVM devices.
The host: [#!variable!host_name!#] last updated the database: [#!variable!difference!#] seconds ago, skipping power checks.
The host: [#!variable!host_name!#] has no entries in the 'updated' table, so ScanCore has likely never run. Skipping this host for now.
This host is not a node, this program isn't designed to run here.
Enabled 'anvil-safe-start' locally on this node.
Enabled 'anvil-safe-start' on both nodes in this Anvil! system.
Disabled 'anvil-safe-start' locally on this node.
Disabled 'anvil-safe-start' on both nodes in this Anvil! system.
This node is not in an Anvil! yet, so there's no reason to run this program.
Successful access over the network: [#!variable!network!#] to the peer: [#!variable!peer!#] using the peer's IP: [#!variable!peer_ip!#].
Failed to access the peer: [#!variable!peer!#] over the network: [#!variable!network!#] via the peer's IP: [#!variable!peer_ip!#].
At least one network connection to the peer: [#!variable!peer!#] is still down. Waiting a bit and then will check again.
All connections to the peer: [#!variable!peer!#] are up!
The cluster does not appear to be running, starting it now.
The cluster isn't up yet, waiting a bit before checking again.
We're online as: [#!variable!node_name!#], but we're not quorate yet. Continuing to wait.
We're online as: [#!variable!node_name!#] and quorate!
We're not online yet. Waiting for 'in_ccm/crmd/join': [#!variable!in_ccm!#/#!variable!crmd!#/#!variable!join!#]. ('in_ccm' = consensus cluster member, communication layer. 'crmd' = cluster resource manager daemon is up, 'join' = allowed to host resources).
The file: [#!variable!file_name!#] is not recorded for the Anvil! [#!variable!anvil_name!#] yet. Registering it now as not sync'ed to this Anvil! system.
Asking 'anvil-boot-server' to boot the servers now.
We were asked to delete the file: [#!variable!file!#], but it doesn't exist, so nothing to do.
The file: [#!variable!file!#] has been successfully removed.
We were asked to delete the file: [#!variable!file!#] on the target: [#!variable!target!#], but it doesn't exist, so nothing to do.
Successfully deleted the file: [#!variable!file!#] on the target: [#!variable!target!#].
The host: [#!variable!host_name!#] has shut down for thermal reasons: [#!variable!count!#] times. To prevent a frequent boot / thermal excursion / shutdown loop, we will wait: [#!variable!wait_for!#] before marking its temperature as OK again.
This host has been running for: [#!variable!uptime!#]. The cluster will not be started (uptime must be less than 10 minutes for 'anvil-safe-start' to be called automatically).
- The Scan agent: [#!variable!agent_name!#] ran a bit long, exiting after: [#!variable!runtime!#] seconds with the return code: [#!variable!return_code!#].
Aging out one or more records that are more than: [#!variable!age!#] hours old from the table: [#!variable!table!#] on the database host: [#!variable!database!#].
Starting the process of aging out old data. This can take about a minute, please be patient.
Aging out old data completed after: [#!variable!runtime!#] seconds.
Updating the apache configuration file: [#!variable!file!#]. The changes are:
====
#!variable!difference!#
====
This system will reboot in: [#!variable!seconds!#] seconds...
The bond: [#!variable!bond!#] is completely down, trying to recover member interfaces.
The bond: [#!variable!bond!#] is up, but at least one interface is down. Will try to recover now.
The bond: [#!variable!bond!#]'s interface: [#!variable!interface!#] is not in this bond. Trying to bring it up now...
The bond: [#!variable!bond!#] will now be brought up (even if it already is up).
Network device names have changed, rebooting to ensure they take effect. The job will restart once the network comes back up.
The bridge: [#!variable!bridge!#] is down, trying to bring it up now.
Our peer is offline and we're already the preferred fence node. Nothing to do.
Our peer is offline and we're not the preferred fence node. Updating the fence config to prefer this node.
The server: [#!variable!server_name!#] is migrating. Skipping fence delay preference checks for now.
No servers are running on either node. Skipping fence delay preference checks for now.
We've got: [#!variable!local_server_count!#] servers, and the peer has: [#!variable!peer_server_count!#] servers. Skipping fence delay preference checks for now.
We're hosting servers, and our peer is not. Making the fence delay favour this node.
The Anvil! daemon is in startup mode, and the job: [#!variable!job_uuid!#], command: [#!variable!job_command!#] is not a startup job, ignoring it for now.
Our peer is online, no need to check server location constraints.
The server: [#!variable!server!#] has a location constraint that prefers our peer, but our peer is offline. Updating the location constraint to prefer this node.
Disabling dual primary for the resource: [#!variable!resource!#] to the node: [#!variable!target_name!# (#!variable!target_node_id!#)].
The corosync config file is being updated with these differences:
====
#!variable!difference!#
====
Synchronizing corosync config.
Reloading corosync config.
#!variable!program!# is disabled in anvil.conf, and '--force' was not used. Exiting.
[ Note ] - The network interface: [#!variable!name!#] with 'network_interface_uuid': [#!variable!uuid!#] is a duplicate, removing it from the database(s).
[ Note ] - Managing /etc/hosts has been disabled.
[ Note ] - The Anvil!: [#!variable!anvil_name!#]'s storage group: [#!variable!storage_group!#] didn't have an entry for the host: [#!variable!host_name!#]. The volume group: [#!variable!vg_internal_uuid!#] is a close fit and not in another storage group, so adding it to this storage group now.
[ Note ] - We're a Striker and we did not connect to a peer's database. Will check now if we can load a recent backup, then start postgres locally (with or without a load).
Evaluating the dump file: [#!variable!full_path!#].
The database host UUID: [#!variable!host_uuid!#] is not configured here, ignoring: [#!variable!full_path!#].
We created the database dump file: [#!variable!full_path!#], will compare its modified time to other dumps we may find.
The database was dumped to: [#!variable!file!#] in: [#!variable!took!#] second(s). The size of the dump file is: [#!variable!size!#] (#!variable!size_bytes!# bytes).
The database was loaded successfully from the file: [#!variable!file!#] in: [#!variable!took!#] second(s)!
No databases were available, so we will become primary after loading: [#!variable!file!#], which is: [#!variable!size!#] (#!variable!size_bytes!# bytes). Please be patient, this could take a moment.
The database was loaded. Clearing it and other DB dumps out now so that they don't get reloaded again in the future.
Sync'ed the file: [#!variable!file!#] to the peer Striker: [#!variable!host_name!#]. The sync took: [#!variable!took!#] seconds, and the file was: [#!variable!size!#] (#!variable!size_bytes!# bytes).
We're going to shut down our database. Creating a backup first.
Stopped the postgresql daemon as a peer is currently primary.
Our most recent database dump is newer than any from our peers. As such, we'll just start the database without a load.
Retrying to connect to the database.
The target can be reached on the dedicated migration network: [#!variable!target!#] via the IP address: [#!variable!ip!#], switching to use that for the RAM copy.
[ Note ] - The IP address: [#!variable!ip!#] with 'ip_address_uuid': [#!variable!uuid!#] is a duplicate, removing it from the database(s).
The database dump file: [#!variable!file!#] exists, skipping database setup.
query() was asked to query the database with UUID: [#!variable!old_uuid!#] but there is no file handle open to the database. Switched the read to: [#!variable!new_uuid!#].]]>
Opening the firewall zone: [#!variable!zone!#] to allow the service: [#!variable!service!#].
No password for the database on the host with UUID: [#!variable!uuid!#], skipping it.
The firewalld daemon isn't running, skipping firewall setup. Is 'sys::daemon::firewalld' set to '0' in anvil.conf?
The postgresql server is installed.
The host: [#!variable!host_name!#] was powered off for an unknown reason, and 'feature::scancore::disable::boot-unknown-stop' is set to: [#!data!feature::scancore::disable::boot-unknown-stop!#]. Will not boot this host.
The host: [#!variable!host_name!#] was powered off for an unknown reason, and 'feature::scancore::disable::boot-unknown-stop' is set to: [#!data!feature::scancore::disable::boot-unknown-stop!#]. If power and temperature look good, we'll boot it.
The host: [#!variable!host_name!#] has good power and temperature readings. Booting it back up now.
The resync has completed in: [#!variable!took!#] second(s).
'Log->secure' is not set. ]]>
[ Note ] - The DRBD kernel module failed to load. It is possible the kernel was updated. We will check to see if we can install a pre-built RPM, or if we need to build one ourselves.
Found an installable DRBD kernel module RPM that matches the current kernel. Installing it now.
[ Note ] - We need to build the DRBD kernel module. This can take a few minutes, please be patient! Use 'journalctl -f' to monitor the build process.
Successfully built and installed the new DRBD kernel module!
We were asked to resync the database, but this host is hosting: [#!variable!count!#] server(s). Resync is not allowed while servers are running, to reduce the risk of the kernel's out-of-memory handler killing a VM if the resync consumes too much RAM. You can see which servers are running with 'virsh list' and look for servers whose states are "running", "paused", "in shutdown" or "pmsuspended".
Testing that our short host name resolves to one of our IPs prior to starting the cluster.
Changing the ownership of: [#!variable!file!#] to be owned by: [#!variable!user!#:#!variable!user!#].
Enabling 'ping' for all users.
The network interface: [#!variable!nic!#] on the host: [#!variable!host!#] is recorded in the 'history.network_interfaces' table, but has no corresponding entry in the public table. Removing it.
[ Note ] - The network bridge: [#!variable!name!#] with 'bridge_uuid': [#!variable!uuid!#] is a duplicate, removing it from the database(s).
Skipping resync, not a Striker dashboard.
### REBOOT REQUESTED ### - [#!variable!reason!#]
Reboot flag set by command line switch to 'anvil-manage-power'.
Poweroff flag set by command line switch to 'anvil-manage-power'.
Kernel updated, reboot queued.
Requested to power-off as part of the anvil-safe-stop job.
The anvil-safe-stop job has completed and will now power off.
The anvil-configure-host tool is requesting a reboot.
The connection to: [#!variable!host!#] for the resource: [#!variable!resource!#] is in the connection state: [#!variable!connection_state!#]. Will try to connect to the peer and up the resource now.
About to request the start of the resource: [#!variable!resource!#] on: [#!variable!host!#].
The peer: [#!variable!peer!#] is defined in the resource: [#!variable!resource!#] but we don't connect to it, ignoring it.
All clients using our database are gone, ready to stop the postgresql daemon.
[ Note ] - Marking our database as active.
[ Note ] - The Striker database host: [#!variable!host!#] is inactive, skipping it.
[ Note ] - Deleting the contents of the hash: [#!variable!hash!#].
Running the scan agent: [#!variable!agent_name!#]...
I was asked to update the timestamp, but the returned timestamp matches the last one. Will loop until a new timestamp is returned.
The timestamp has been updated from: [#!variable!old_time!#] to: [#!variable!new_time!#].
read_state() was called but both the 'state_name' and 'state_uuid' parameters were not passed or both were empty.]]>
Forcing the daily resync and checking to clear records in the history schema that are no longer in the public schema.
Updating the OUI list will happen after the system has been up for at least an hour. You can force an update now by running 'striker-parse-oui --force' at the command line.
Updated: [#!data!path::configs::firewalld.conf!#] to disable 'AllowZoneDrifting'. See: https://firewalld.org/2020/01/allowzonedrifting
Created the firewall zone: [#!variable!zone!#].
Added the interface: [#!variable!interface!#] to the firewall zone: [#!variable!zone!#].
Opening the firewall service: [#!variable!service!#] for the zone: [#!variable!zone!#]!
Closing the firewall service: [#!variable!service!#] for the zone: [#!variable!zone!#]!
Opening the firewall port: [#!variable!port!#/#!variable!protocol!#] for the zone: [#!variable!zone!#]!
Opening the firewall port range: [#!variable!port!#/#!variable!protocol!#] for the zone: [#!variable!zone!#]!
Closing the firewall port: [#!variable!port!#/#!variable!protocol!#] for the zone: [#!variable!zone!#]!
Closing the firewall port range: [#!variable!port!#/#!variable!protocol!#] for the zone: [#!variable!zone!#]!
Changes were made to the firewall, reloading now.
This server will boot: [#!variable!delay!#] after the server: [#!variable!server!#]. Checking if it's time to boot it or not.
The server: [#!variable!boot_after_server!#] hasn't booted yet, holding off booting: [#!variable!this_server!#].
Evaluating the booting of the server: [#!variable!server!#].
The server: [#!variable!boot_after_server!#] has booted, but we need to wait: [#!variable!time_to_wait!#] seconds before we can start this server: [#!variable!this_server!#].
The server: [#!variable!server!#] is ready to boot.
The server: [#!variable!server!#] was found to be running already, but it wasn't marked as booted. Marking it as if it just booted to handle any dependent servers.
The server: [#!variable!server!#] is configured to stay off, ignoring it.
The file: [#!variable!file!#] needs to be added to the database, but since the last scan its size grew from: [#!variable!old_size_bytes!# (#!variable!old_size_hr!#)] to: [#!variable!new_size_bytes!# (#!variable!new_size_hr!#)]. A difference of: [#!variable!difference_bytes!# (#!variable!difference_hr!#)]. It might still be being uploaded, so we'll keep checking periodically until the size stops changing.
Found the missing file: [#!variable!file!#] in the directory: [#!variable!directory!#]. Updating the database now.
Deleting the hash key: [#!variable!hash_key!#].
[ Note ] - The server: [#!variable!server!#] is not yet off, but we've been told not to wait for it to stop.
The DRBD Proxy license file: [#!data!path::configs::drbd-proxy.license!#] doesn't exist.
The DRBD Proxy license file has expired.
None of the MAC addresses in the DRBD Proxy license file match any of the MAC addresses on this system.
The DRBD Proxy license file: [#!data!path::configs::drbd-proxy.license!#] is missing expected data or is malformed.
The host name: [#!variable!target!#] does not resolve to an IP address.
The connection to: [#!variable!connection!#] was refused. If you recently booted the target, the network might have started, but the ssh daemon might not be running yet.
There is no route to: [#!variable!target!#]. Is the machine (or the interface) up?
Timed out while waiting for a reply from: [#!variable!target!#]. Is the machine booting up? If so, please wait a minute or two and try again.
There was an unknown error while connecting as: [#!variable!user!#] to: [#!variable!remote_user!#@#!variable!target!#]. The error was: [#!variable!error!#]
We were unable to log in to: [#!variable!connection!#]. Please check that the password is correct or that passwordless SSH is configured properly.
An SSH session was successfully opened to: [#!variable!target!#].
The remote shell call: [#!variable!shell_call!#] to: [#!variable!connection!#] failed with the error: [#!variable!error!#].
The SSH session to: [#!variable!target!#] was successfully closed.
The SSH session to: [#!variable!target!#] was closed because 'no_cache' was set and there was an open SSH connection.
Wrote the system UUID to the file: [#!variable!file!#] to enable the web based tools to read this system's UUID.
Wrote the journald config file: [#!variable!file!#] to disable rate limiting to ensure high log levels are not lost.
Updated the journald config file: [#!variable!file!#] to enable persistent storage of logs to disk. Will restart the journald daemon now.
One or more files on disk have changed. Exiting to reload.
The reconfigure of the network has begun.
The host name: [#!variable!host_name!#] has been set.
Failed to set the host name: [#!variable!host_name!#]! The host name is currently [#!variable!bad_host_name!#]. This is probably a program error.
What would you like the new password to be?
Please enter the password again to confirm.
About to update the local passwords (shell users, database and web interface).
Proceed? [y/N]
Aborting.
Auto-approved by command line switch, proceeding.
Updating the Striker user: [#!variable!user!#] password...
Done.
Updating the database user: [#!variable!user!#] password...
Updating the local config file: [#!variable!file!#] database password...
Updating the shell user: [#!variable!user!#] password...
Finished!
NOTE: You must update the password of any other system using this host's
database manually!
Failed to write the new password to the temporary file: [#!variable!file!#]. Please check the logs for details.
Beginning configuration of local system.
Peer: [#!variable!peer!#], database: [#!variable!name!#], UUID: [#!variable!uuid!#]
Clearing update cache and checking for available updates.
#!data!sys::users::user_name!#]]]>
Downloading approximately: [#!variable!size!#] worth of updates.
ERROR: There was a problem with the OS update process. Please check the system logs for more details.
Downloading complete. Installation of updates now underway.
Updates finished. Verifying now.
System update complete! The kernel was updated, so a reboot is required.
System update complete! A reboot is not required.
This system has been placed into maintenance mode.
This system was already in maintenance mode, nothing changed.
This system has been removed from maintenance mode.
This system was not in maintenance mode, nothing changed.
Bad call. Usage:
Set maintenance mode: #!variable!program!# --set 1
Clear maintenance mode: #!variable!program!# --set 0
Report maintenance mode: #!variable!program!#
This system is in maintenance mode.
This system is NOT in maintenance mode.
This system has been set to need a reboot.
This system was already set to need a reboot, nothing changed.
This system has been set to no longer need a reboot.
This system was not set to need a reboot, nothing changed.
Bad call. Usage:
Set that a reboot is required: #!variable!program!# --reboot-needed 1
Clear the need for a reboot: #!variable!program!# --reboot-needed 0
Report if a reboot is needed: #!variable!program!#
Reboot the system: #!variable!program!# --reboot [-y]
Poweroff the system: #!variable!program!# --poweroff [-y]
The '-y' option prevents a confirmation prompt.
This system needs to be rebooted.
This system does NOT need to be rebooted.
Asked to only run once, so exiting now.
Previous run exited early. Restarting momentarily.
No updates were found or needed.
* Packages downloaded: [#!variable!downloaded!#], Installed or updated: [#!variable!installed!#], Verified: [#!variable!verified!#], Output lines: [#!variable!lines!#].
Are you sure you want to reboot this system? [y/N].
Are you sure you want to power off this system? [y/N].
Aborting.
Powering off the local system now.
Rebooting the local system now.
The #!string!brand_0006!# has restarted at: [#!variable!date_and_time!#] after powering back on.
You will now be logged out and this machine will now be rebooted so that the new configuration can take effect.
Starting the job to add or update an #!string!brand_0006!# database peer.
Starting the job to remove an #!string!brand_0006!# database peer.
Sanity checks passed.
Added the peer to the config file.
Old peer found and removed from the config file.
An existing peer was found; the needed update has been made.
Configuration changed, existing config backed up as: [#!variable!backup!#].
New config written to disk.
Reconnecting to the database(s) to ask the peer to add us. Will hold here until the peer is added to the 'hosts' table. Please be patient.
The peer: [#!variable!host!#] is now in the database. Proceeding.
The job for the peer to add us has been registered. It should add us as soon as it looks for new jobs (generally within a second or two).
NOTE: Please be patient!
The 'dnf' cache will be cleared to ensure the freshest RPMs are downloaded. This will cause a delay
before output starts to appear. Once started, each RPM will be reported after it is downloaded. Large
RPMs may cause the output to appear stalled. You can verify that the download is proceeding by using
'du -hs #!variable!directory!#' to verify the numbers are increasing.
Output: [#!variable!line!#].
Error: [#!variable!line!#].
#!string!brand_0002!# - Install Target Menu
Will boot the next device as configured in your BIOS in # second{,s}.
key to edit the boot parameters of the highlighted option.]]>
Editing of this option is disabled.
Install a Striker dashboard (#!data!host_os::os_name!# #!data!host_os::os_arch!#)
This install will choose the largest available fixed disk (spindle or platter), remove any data from it,
repartition it, and install. This is a fully automated process! Once selected, the only way to abort will be
a manual reboot of the system.
*** ALL EXISTING DATA ON SELECTED DRIVE WILL BE LOST! ***
*** THERE WILL BE NO FURTHER PROMPT! PROCEED CAREFULLY! ***
Install an #!string!brand_0002!# Node (#!data!host_os::os_name!# #!data!host_os::os_arch!#)
This install will choose the smallest available fixed rotating disk, if available. If none is found, the
smallest solid-state fixed disk will be chosen. All data will be removed, the disk repartitioned and a new OS
will be installed. This is a fully automated process! Once selected, the only way to abort will be a manual
reboot on the system.
*** ALL EXISTING DATA ON SELECTED DRIVE WILL BE LOST! ***
*** THERE WILL BE NO FURTHER PROMPT! PROCEED CAREFULLY! ***
Install an #!string!brand_0002!# Disaster Recovery Host (#!data!host_os::os_name!# #!data!host_os::os_arch!#)
This install will choose the smallest available fixed rotating disk, if available. If none is found, the
smallest solid-state fixed disk will be chosen. All data will be removed, the disk repartitioned and a new OS
will be installed. This is a fully automated process! Once selected, the only way to abort will be a manual
reboot on the system.
*** ALL EXISTING DATA ON SELECTED DRIVE WILL BE LOST! ***
*** THERE WILL BE NO FURTHER PROMPT! PROCEED CAREFULLY! ***
Boot into a rescue session
This will boot into a rescue shell. From there, you can access the bare hard drive on the machine to attempt
to diagnose and repair problems that might be preventing a system from booting.
No data on the target machine will be changed by this option.
Install standard #!data!host_os::os_name!# #!data!host_os::os_arch!#
This will start a standard install of #!data!host_os::os_name!#.
This option will not change anything on disk until and unless you choose to do so.
Boot from the next boot device
Restarting: [#!variable!daemon!#] after updating the file: [#!variable!file!#].
The file: [#!variable!file!#] did not need to be updated.
The file: [#!variable!file!#] was updated.
Enabling and starting: [#!variable!daemon!#]
The daemon: [#!variable!daemon!#] is already enabled, skipping.
Copying the syslinux files: [#!data!path::directories::syslinux!#/*] into the tftpboot directory: [#!data!path::directories::tftpboot!#].
The syslinux files from: [#!data!path::directories::syslinux!#] appear to already be in the tftpboot directory: [#!data!path::directories::tftpboot!#], skipping.
Checking that the "Install Target" function is configured and updated.
Finding install drive for a Striker dashboard.
Finding install drive for an #!string!brand_0006!# node.
Finding install drive for a DR (disaster recovery) host.
[ Error ] - Target type not specified. Be sure that '\$type' is set to
'striker', 'node' or 'dr' in the \%pre section of the kickstart
script.
{$path}{transport}."], of the size: [".$device->{$path}{size}." (".hr_size($device->{$path}{size}).")]]]>
{$path}{transport}."], of the size: [".$device->{$path}{size}." (".hr_size($device->{$path}{size}).")]]]>
{$use_drive}{size})."]]]>
{$use_drive}{size})."]]]>
{$use_drive}{size})."] (no platter drives found)]]>
Striker Dashboard
#!string!brand_0002!# Node
Disaster Recovery (DR) Host
Regenerating the source repository metadata.
[ Error ] - The comps.xml file: [#!variable!comps_xml!#] was not found. This provides package group information required for Install Target guests.
About to try to download approximately: [#!variable!packages!#] packages needed to:
- [#!variable!directory!#].
Successfully enabled the Install Target function.
Successfully disabled the Install Target function.
The 'Install Target' function is enabled.
The 'Install Target' function is disabled.
The 'Install Target' function has been disabled.
The attempt to disable the 'Install Target' function failed! Please check the logs for details.
The 'Install Target' function has been enabled.
The attempt to enable the 'Install Target' function failed! Please check the logs for details.
[ Error ] - The comps.xml file: [#!variable!comps_xml!#] was found, but something failed when we tried to copy it to: [#!variable!target_comps!#].
Updated repository data.
Back-Channel Network ##!variable!number!# - Used for all inter-machine communication in the #!string!brand_0006!#, as well as communication for foundation pack devices. Should be VLAN-isolated from the IFN and, thus, trusted.
Storage Network ##!variable!number!# - Used for DRBD communication between nodes and DR hosts. Should be VLAN-isolated from the IFN and, thus, trusted.
Internet-Facing Network ##!variable!number!# - Used for all client/user facing traffic. Likely connected to a semi-trusted network only.
Updating / configuring the firewall.
It appears that we need to accept the fingerprint. Will do so now and then try to connect again.
The zone: [#!variable!zone!#] file: [#!variable!file!#] needs to be updated.
The zone: [#!variable!zone!#] file: [#!variable!file!#] doesn't exist, it will now be created.
The interface: [#!variable!interface!#] will be added to the zone: [#!variable!zone!#].
Reloading the firewall...
Restarting the firewall...
Changing the default zone to: [#!variable!zone!#].
* Download progress: [#!variable!percentage!# %], Downloaded: [#!variable!downloaded!#], Current rate: [#!variable!current_rate!#], Average Rate: [#!variable!average_rate!#], Time Running: [#!variable!running_time!#], Estimated left: [#!variable!estimated_left!#].
The zone: [#!variable!zone!#]'s user-land file: [#!variable!file!#] exists. Skipping checking the configuration of this zone.
Red Hat user
Red Hat password
What kind of machine will this host be?
current IP address and password?]]>
The target's host key has changed. If the target has been rebuilt, or the target IP reused, the old key will need to be removed. If this is the case, remove line: [#!variable!line!#] from: [#!variable!file!#].
Set the new host name.
This is a RHEL host and has not yet been subscribed, but there is no internet access detected. OS Updates likely won't work, nor will subscribing the system. These tasks will be deferred until later in the setup process.
There is no internet access detected. OS Updates likely won't work and will be deferred until later in the setup process.
Local repository
Mail Server Configuration
When alert emails are sent, they are stored locally and then forwarded to a mail server. This is where you can configure the mail server that alerts are forwarded to for delivery to recipients.
Alert Recipient Configuration
When a system alert is recorded, any alert recipient interested in that alert will be notified by email. This is determined by the alert's level and the recipient's alert level interest. If the alert's level is equal to or higher than a recipient's chosen level, an email will be crafted for them, in their chosen language and units.
[ Error ] - The modules.yaml file: [#!variable!modules_yaml!#] was found, but something failed when we tried to copy it to: [#!variable!target_modules!#].
Updated module metadata.
Back-Channel Network
Storage Network
Internet-Facing Network
The network address is the lowest IP in the subnet range. It is not assigned to any host. For example, '10.255.0.0' for the mask '255.255.0.0', '192.168.1.0' for the mask '255.255.255.0', etc.
The subnet mask indicates the size of the network. The BCN and SN must be '255.255.0.0 (/16)'. Set the mask to match your IFN network(s).
If the network has a gateway (permanent or periodic), enter it here.
An isolated, VLAN'ed network used for all inter-machine communication in the #!string!brand_0006!#, as well as communication for foundation pack devices.
An isolated, VLAN'ed network used for storage replication traffic only.
Connecting to the main site intranet. This is the network (or networks) that guest virtual servers will use to connect to all devices outside the #!string!brand_0006!# system.
Please select the host you want to purge from the database:
#!variable!key!#) #!variable!host_name!# - #!variable!type!# - #!variable!host_uuid!#
Which machine do you want to purge from the database(s)?
Note: Be sure all databases are online! Otherwise, the purged records could return during the next resync!
Are you sure you want to purge: [#!variable!host_name!# (#!variable!host_uuid!#)]?
Confirmed by switch, proceeding with purge of: [#!variable!host_name!#].
Thank you, proceeding.
The host: [#!variable!host_name!#] has been purged.
##] anvil-daemon [###########################################################################################
# NOTE: The /etc/hosts file is managed by the Anvil! system. Manual additions will be retained, but #
# conflicts with hosts managed by the Anvil! system will be overwritten. Specifically, all hosts #
# related to Striker dashboards and, for hosts in an Anvil!, peer nodes and DR hosts will be set to #
# use the IPs recorded in the Anvil! database (which themselves are recorded by the anvil-daemon #
# running on each host). If / when an IP address changes, the host files on all associated hosts #
# should update within a minute. #
#############################################################################################################
Hosts added or updated by the #!string!brand_0002!# on: [#!variable!date!#]:
ScanCore has started.
The scan agent: [#!variable!agent_name!#] timed out! It was given: [#!variable!timeout!#] seconds to run, but it didn't return, so it was terminated.
The scan agent: [#!variable!agent_name!#] failed its schema load check! This is likely a problem with the SQL schema in the file: [#!variable!file!#]. Details are likely available in the: [#!data!path::log::main!#] log file.
The scan agent: [#!variable!agent_name!#] has now successfully loaded! Whatever issue existed with: [#!variable!file!#] has been resolved.
The SQL schema for the scan agent: [#!variable!agent_name!#] has been loaded into the database host: [#!variable!host_name!#].
This Striker is a RHEL host. As such, we'll need to download any updates to packages in the High Availability repositories from entitled nodes. Will search now for a node to use...
The node: [#!variable!node_name!#] is online, has internet access and is a RHEL machine. Will use it to download HA packages.
No RHEL-based nodes are available. Unable to check for updated packages under the High Availability entitlement.
Downloaded and copied HA packages that started with the letter: [#!variable!letter!#].
Finished downloading HA packages!
# The following line was added to track this resource UUID in the Anvil! database.
# Please do not edit or remove it.
# scan_drbd_resource_uuid = #!variable!uuid!#
Preparing to provision a new server.
Processing an uploaded file.
Moving the file: [#!variable!file!#] to: [#!data!path::directories::shared::files!#].
Calculating the md5sum. This can take a little while for large files, please be patient.
The md5sum is: [#!variable!md5sum!#]. Storing details in the database.
Copying the file over to: [#!variable!host!#]. Please be patient, this could take a bit for large files.
Registering the file to be downloaded to the Anvil!: [#!variable!anvil_name!#]. Anvil! members will sync this file shortly. Member machines that are not online will sync the file when they do return.
Upload is complete!
Processing the pull of a file from Striker.
We're a DR host and there are: [#!variable!strikers!#] dashboards, so we will wait to pull the file until after the nodes are done. We're currently waiting on; Node 1? [#!variable!node1_waiting!#], Node 2? [#!variable!node2_waiting!#]. We'll check again at: [#!variable!wait_until!#].
Beginning rsync from: [#!variable!source_file!#] to: [#!variable!target_directory!#], please be patient...
Download appears to be complete, calculating md5sum to verify, please be patient...
Success! The file has been successfully downloaded.
Processing a file purge.
Processing an uploaded file.
Processing a file mode check.
Proceed? [Y/n]
Preparing to provision a new server.
-=] Listing servers on the Anvil! [#!variable!anvil_name!#].
(No servers found).
Which server would you like to delete?
- Please enter the server name or the number beside the server that you wish to delete. Press 'ctrl + c' to cancel.
[ WARNING ] - This is an irreversible action!
Are you sure that you want to delete the server: [#!variable!server_name!#]? [Type 'Yes']
Searching to see if the server is running...
The server is running on the host: [#!variable!host_name!#], assigning the job to it.
The server is not running anywhere, assigning the job to this host.
The server is running here, assigning the job to this host.
Preparing to delete a server.
Preparing to migrate a server (or all servers).
- #!variable!server_name!# (Current state: [#!variable!server_state!#])
- * #!variable!server_name!# (Deleted, name can be reused)
We're Striker: [#!variable!striker!#], and we're now configured, so we're done. Striker 1 will finish configuration.
The node: [#!variable!host_name!#] is in an unknown state.
The node: [#!variable!host_name!#] is a full cluster member.
The node: [#!variable!host_name!#] is coming online; the cluster resource manager is running. (step 2/3)
The node: [#!variable!host_name!#] is coming online; the node is a consensus cluster member. (step 1/3)
The node: [#!variable!host_name!#] has booted, but it is not (yet) joining the cluster.
The 'anvil-safe-start' tool is enabled on both this node and on the peer.
The 'anvil-safe-start' tool is disabled on both this node and on the peer.
The 'anvil-safe-start' tool is enabled on this node and disabled on the peer.
The 'anvil-safe-start' tool is disabled on this node and enabled on the peer.
The 'anvil-safe-start' tool is disabled, exiting. Use '--force' to run anyway.
The 'anvil-safe-start' tool is disabled, but '--force' was used, so proceeding.
It appears that another instance of 'anvil-safe-start' is already running. Please wait for it to complete (or kill it manually if needed).
Preparing to rename a server.
Preparing to stop this node.
This records how long it took to migrate a given server. The average of the last five migrations is used to estimate how long future migrations will take.
One or more servers are migrating. While this is the case, ScanCore post-scan checks are not performed.
Preventative live migration has completed.
Preventative live migration has been disabled. We're healthier than our peer, but we will take no action.
' or '--host '.]]>
Are you sure that you want to completely purge: [#!variable!host_name!#] (UUID: [#!variable!host_uuid!#]) from the Anvil! database(s)?
Are you sure that you want to completely purge the Anvil!: [#!variable!anvil_name!#] (UUID: [#!variable!anvil_uuid!#]) along with the machines:
- Host name: [#!variable!host_name!#] (host UUID: [#!variable!host_uuid!#]):
Now purging: [#!variable!host_name!#] (host UUID: [#!variable!host_uuid!#]):
Purging the Anvil!: [#!variable!anvil_name!#] (UUID: [#!variable!anvil_uuid!#]):
'. Available servers on this Anvil! system;]]>
Created the journald directory: [#!variable!directory!#].
Checking that the daemon: [#!variable!daemon!#] is running.
The daemon: [#!variable!daemon!#] was not running, starting it now.
Preparing to manage a server.
Found the server: [#!variable!server_name!#] in the database, loading details now.
The fence delay to prefer the node: [#!variable!node!#] has been removed.
The fence delay now prefers the node: [#!variable!node!#].
This is the TCP port that the VNC server is listening on to provide graphical access to the associated server.
[ #!variable!number!# ]- #!variable!server_name!# - (Current state: [#!variable!server_state!#])
-=] Please select the Anvil! hosting the server you want to manage [=-
[ #!variable!number!# ]- #!variable!anvil_name!# - #!variable!anvil_description!#
Preparing to manage VNC pipes.
Finished [#!variable!operation!#] VNC pipe for server UUID [#!variable!server_uuid!#] from host UUID [#!variable!host_uuid!#].
Finished dropping VNC pipes table.
Finished managing VNC pipes; no operations happened because requirements not met.
Preparing to get server VM screenshot.
Finished getting server VM screenshot.
Failed to get server VM screenshot; got non-zero return code.
Finished attempting to get server VM screenshot; no operations happened because requirements not met.
Preparing to manage DR for a server.
UUID Column counts for: [history.#!variable!table!#]:
Counting entries for each unique: [#!variable!column!#] in the table [#!variable!table!#]. Please be patient.
Marking this host as configured.
This host is already marked as configured.
Marking this host as configured.
This host is already marked as configured.
This host is marked as unconfigured.
This host is marked as configured.
This database is marked as inactive.
This database is marked as active.
Marking this database as active.
This database is already marked as active.
Marking this database as inactive.
This database is already marked as inactive.
Available options;
--age-out-database
This purges older records to reduce the size of the database.
--check-configured
This checks to see if the host is marked as configured or not.
--check-database
This checks to see if the host's database is marked as active or not.
--database-active
This marks the host's database as active.
--database-inactive
This marks the host's database as inactive.
--mark-configured
This marks the host as being configured.
--mark-unconfigured
This marks the host as being unconfigured.
--resync-database
Force a resync of the databases.
I was asked to resync, but there is only one database available, so there is no sense in proceeding.
I was asked to resync. Calling the resync now.
Aging out data to thin down the database(s).
Prior to resync, we will check to see if any scan agent schemas need to be loaded.
#!variable!total_cores!#c (#!variable!sockets!#s)
#!variable!total_cores!#c (#!variable!sockets!#s, #!variable!cores!#c, #!variable!threads!#t), #!variable!model!#, #!variable!mode!#
#!variable!cores!#c (#!variable!threads!#t)
-=] Server Usage and Anvil! Node Resource Availability
This program is currently disabled, please see NOTE in the header for more information.
# NOTE: This was added by the Anvil!, as per firewalld's warning below.
# WARNING: AllowZoneDrifting is enabled. This is considered an insecure
# configuration option. It will be removed in a future release.
# Please consider disabling it now.
Migration Network
Saved the mail server information successfully!
The mail server: [#!variable!mail_server!#] has been deleted.
The alert recipient: [#!variable!recipient_email!#] has been deleted.
Saved the alert recipient information successfully!
The fence device: [#!variable!name!#] has been successfully saved!
The fence device: [#!variable!name!#] has been successfully deleted!
The UPS: [#!variable!name!#] has been successfully saved!
The UPS: [#!variable!name!#] has been successfully deleted!
The install manifest: [#!variable!name!#] has been successfully saved!
The install manifest: [#!variable!name!#] has been successfully deleted!
The install manifest job has been initiated! Target machines should start configuring momentarily!
The file has been scheduled to purge from all systems.
The file has been scheduled to be renamed on all systems.
The file type has been changed.
The Anvil!: [#!variable!anvil_name!#] members will now sync this file.
The Anvil!: [#!variable!anvil_name!#] members will now remove this file.
[ Error ] -
[ Warning ] -
[ Note ] -
Error
Warning
Note
Welcome! Let's set up your #!string!brand_0003!# dashboard...
We're going to ask you a few questions so that we can set things up for your environment. If you need help at any time, just click on the "[?]" icon in the top-right. Let's get started!
Organization name
This is the name of the company, organization or division that owns or maintains this #!string!brand_0006!#. This is a descriptive field and you can enter whatever makes most sense to you.
Prefix
This is a one to five character prefix used to identify this organization. It is used as the prefix for host names for dashboards, nodes and foundation pack equipment. You can use letters and numbers and set whatever makes sense to you.
Domain Name
This is the domain name you would like to use for this dashboard. This will also be used as the default domain used when creating new install manifests.
Sequence Number
If this is your first Striker, set this to '1'. If it is the second one, set '2'. If it is the third, '3' and so on.
Internet-Facing Network Count
NOTE: You must have a network interface for the back-channel network, plus one for each Internet-Facing network. If you have two interfaces for each network, we will set up bonds for redundancy automatically.]]>
Next
Step 1
IFN Count
Host name
This is the host name for this Striker dashboard. Generally it is a good idea to stick with the default.
Back-Channel Network link #!variable!number!#
This is where you configure the network to enable access to this Back-Channel Network.
Storage Network link #!variable!number!#
This is where you configure the network to enable access to this Storage Network.
Internet-Facing Network link #!variable!number!#
This is where you configure the network to enable access to this Internet-Facing Network.
IP Address
Subnet Mask
Gateway
DNS Server
Network Interface
Primary Interface
Backup Interface
Striker user name
This is the user name that you will log into Striker as and the name of the user that owns the database.
Striker password
NOTE: This password needs to be stored in plain text. Do not use a password you use elsewhere.]]>
Gateway
This is the network gateway used to access the outside world.
DNS
This is the domain name server(s) to use when resolving domain names. You can specify 2 or more, separated by commas.
Gateway Interface
This is the interface with the internet access. Usually this is "ifn_link1".
We're almost ready! Does this look right? If so, we'll setup this Striker dashboard.
What we are planning to do...
Apply New Configuration
Done!
The network will be reconfigured momentarily. You may need to reconnect using the new network address you chose.
Offline...
A job to reconfigure this Striker is underway. It is: [#!variable!percent!#%] done. It last updated its progress at: [#!variable!timestamp!#] (#!variable!seconds_ago!# seconds ago). Please try again shortly.
This indicates that this machine has been configured. After an initial install, this variable won't exist. If it is set to '0', it will trigger a reconfiguration of the local system.
Log in
User name
Password
Striker Configuration and Management
Reload
Configure Striker Peers
When you sync with a peer, this machine's data will be copied to and recorded on the peer's database. Data gathered by ScanCore will also be kept in sync on both dashboards, and any general purpose data collected by other dashboards while this one is offline will be copied back when this machine comes online. Should this machine ever be rebuilt, data recorded from before the rebuild will be automatically restored as well.
Update System
This will update this system using any available software repositories. You can also use this to create or load update packs to allow for the update of offline or air-gapped #!string!brand_0006!# systems.
Configure Striker
Update the network configuration for this Striker.
Welcome!
Create or manage #!string!brand_0006!# systems
Manage this Striker system and sync with others
Log out
Help and support
Use 'anvil-change-password' from the console to reset it.]]>
Access to this machine via: [#!variable!network!#].
Save
Delete
[db_user@]host[:pgsql_port][,ssh=ssh_port]
Add
Ping
Bi-directional
When checked, the #!string!brand_0006!# will ping the peer before trying to connect to the database. This speeds up skipping a database that is offline, but won't help if the database is behind a router. When unchecked, connections will be a touch faster when the database is available.
When checked, the peer will be configured to add the local database as a peer at the same time that we add it to this system.
Access
admin', and the default port is '5432'. If the peer uses these, then you only need to specify the IP address or host name of the peer. If the user name is not 'admin', then you need to use the format 'user@host'. If the TCP port is not '5432', then you need to use 'host:port'. If both user and port are different, use the format 'user@host:port'.]]>
22', you can append: ',ssh=X' where 'X' is the SSH TCP port.]]>
Please verify
Peer
Ping before connect
The test connection was successful. When saved, the resynchronization process might take a few minutes, and cause maintenance periods where some features are offline until complete.
Confirm
Would you like to reconfigure this machine? If you confirm, Striker will re-run the initial configuration. Connections to peers and database data will be retained.
Confirmed
This Striker has been marked as reconfigured. Reload to start the configuration process.
Would you like to update the operating system on this machine? This Striker will be placed into maintenance mode until the update completes.
When enabled on a Striker dashboard, the web interface will be disabled and ScanCore will not record to the local database. When enabled on a node, no servers will be allowed to run on it, and any already running on it will be migrated. When enabled on a DR host, that host will be disconnected from storage and no servers will be allowed to run on it. When disabled, all normal functions are available.
The system will be updated momentarily. This system will now be in maintenance mode until the update is complete.
This indicates whether this system needs to be rebooted or not.
This system is in maintenance mode and is not currently available.
Reboot This System
This option will restart the host operating system. This is not currently needed.
This machine needs to be rebooted. This option will restart the host operating system.
Power Off This System
This will power off the Striker machine and leave it off. To power it back on, you will need physical access or cycle the power of the PDU feeding this Striker.
Recent and Running Jobs
There are no jobs currently running or recently completed.
Back
Job
Reboot this system? If you proceed, you will be logged out and this system will be rebooted. Please be sure you have access in the rare chance that the system fails to boot back up.
Power off this system? If you proceed, you will be logged out and this system will be powered off. You will need physical access to the machine to turn it back on in most cases. A properly configured Striker dashboard will power on after a power cycle (via a PDU) or any machine with IPMI if you have access to a machine on the BCN.
The peer will be added to the local configuration shortly. Expect slight performance impacts if there is a lot of data to synchronize.
The peer will be added to the local configuration shortly, and we will be added to their configuration as well. Expect slight performance impacts if there is a lot of data to synchronize.
The peer will be removed from the local configuration shortly. Any existing data will remain but no further data will be shared.
#!variable!peer!#]? If so, no further data from this system will be written to the peer. Do note that any existing data will remain and will be reused if you add the peer back again.]]>
Indicates when the last time the host system's RPM repository was refreshed. If the last refresh failed, this will be incremented by one day before another attempt is made (regardless of 'install-manifest::refresh-period' setting).
Enable 'Install Target'
Disable 'Install Target'
'Install Target' Not Available]]>
The 'Install Target' feature is used to do base (stage 1) installs on new or rebuilt Striker dashboards, #!string!brand_0006!# nodes or Disaster Recovery hosts. Specifically, it allows machines to boot off their BCN network interface and install the base operating system.
The 'Install Target' disable job has been requested. It should be completed in a few moments. You may need to reload the next page in a minute to see that it has been disabled.
The 'Install Target' enabled job has been requested. It should be completed in a few moments. You may need to reload the next page in a minute to see that it has been enabled.
#!string!brand_0006!# Configuration and Management.
Create a new #!string!brand_0006!# system.
Any running jobs, or jobs that have ended recently, are displayed below.
Initialize an #!string!brand_0006!# node or disaster recovery target.
Initial host configuration.
Prepare a new machine for use as an #!string!brand_0006!# node or DR (disaster recovery) host. This process will set up the repository, install the appropriate anvil packages and link it to the #!string!brand_0006!# databases on the Strikers you choose.
#!string!brand_0006!# File Manager.
Saving File...
Prepare Node or DR Host
Please enter the IP address and root password of the target machine you want to configure.
'root' Password
Host to Initialize
Current host name
Host UUID
Initialize
The target will now be initialized. How long this takes will depend on how fast files can be downloaded and, when needed, how long it takes to register with Red Hat and add the needed repositories.
Configure the network on a node or DR host.
This option will allow old machine keys to be removed. This is not currently needed.
There are one or more broken keys, blocking access to target machines. If a target has been rebuilt, you can clear the old keys here.
Manage Changed Keys
There are no known bad keys at this time.
Add or remove Striker peers.
Peer dashboards are Striker machines whose databases this Striker will use to record data. If this machine ever needs to be replaced, or goes offline for a period of time, it will automatically pull the data back from any peers that it is missing.
Warning: If you haven't rebuilt the target, then the "broken key" could actually be a "man in the middle" attack. Verify that the target has changed for a known reason before proceeding!
If you are comfortable that the target has changed for a known reason, you can select the broken keys below to have them removed.
]]>
New host name
]]>
Indicates the last time the networks connected to this host were scanned. The scan is done to help find the IP addresses assigned to hosted servers and virtual machine equipment. The scan is a simple, sequential nmap ping scan in an attempt to be as non-invasive as possible. The frequency of these scans can be controlled by setting 'network-scan::scan-period' to a number of seconds (the current value is: [#!data!network-scan::scan-period!# seconds]).
Configure the network interfaces for this host.
IPs and host names are optional, and can be set when assembling this host into an #!string!brand_0006!# system later.]]>
If you would like to change the host name now, you can do so here. When adding this machine to an #!string!brand_0006!#, the host name will be set there as well, making this step optional.
This is the network gateway used to access the outside world. We'll match it to the appropriate network interface.
If left blank, the interface will be configured for DHCP.
Confirm network configuration.
If you confirm, the host will enter maintenance mode, reconfigure its network and reboot.
Network Plan
Network
Address
Bridged?
dhcp]]>
IP address for: [#!variable!say_network!#]
Subnet mask for: [#!variable!say_network!#]
The network interface that connects to the default gateway.
This is the primary network interface. All things being equal, this is the interface that network traffic will travel over.
This is the secondary network interface. Network traffic will switch over to this interface if there is a problem detected with the primary interface.
If set, a bridge will be created on this network, allowing hosted servers to use this network.
This is the host name for the target system.
The network will use DHCP to attempt to get an IP address.
The network will soon be reconfigured and then the target will reboot. In a couple of minutes, it should be ready.
Return
How many network connections will exist for each network type.
Email and alert configuration
Alert email server and recipient configuration.
Configure which server(s) can be used for forwarding email alerts to.
Configure who will receive email alerts.
Outgoing mail server
This is the host name or IP address of the server that email alerts are forwarded to.
Login Name
This is the user name used when authenticating with the outgoing mail server.
Login Password
This is the password for the user name used when authenticating with the outgoing mail server.
None
SSL/TLS
STARTTLS
Normal Password
Encrypted Password
Kerberos / GSSAPI
NTLM
TLS Certificate
OAuth2
Connection Security
Authentication method
host_or_ip[:port]
Indicates the last time the OUI file was parsed. This is done to translate MAC addresses (and the IPs associated with those MAC addresses) to the company that owns them.
Existing mail servers:
Clear the form
Are you sure that you want to delete:
Alert Recipient
Alert level
Language
Units
Recipient's Name
This is the name that will be displayed when sending an email to this user.
Recipient's Email
The email that alerts are sent to.
The language the user will receive alerts in.
The alert level used for new (and existing) #!string!brand_0006!# systems.
Existing alert recipients:
This puts the host into network mapping mode. In this mode, most functions are disabled and the link status of network interfaces is closely monitored.
Create or Run an Install Manifest
Create a new Install Manifest; the instructions used to assemble or repair a given #!string!brand_0006!# system.
Existing Manifests:
Run
Edit
Configure fence devices. These will be used when creating install manifests and are a critical safety mechanism that will protect your data when a node misbehaves.
Configure fence devices.
Fence devices are used to force a node that has entered an unknown state into a known state. Recovery after a node fault cannot proceed until this happens, so this step is critically important.
Note: Any IPMI (iRMC, iLO, DRAC, etc) fence config will be handled in the host's config. This section configures shared devices, like PDUs. The ports/outlets a given node will use will be set in the install manifest later.
How Many?
Configure fence devices:
List of fence agents installed on this system:
Configuring '#!data!cgi::fence_agent::value!#'
Configure device #!variable!number!#:
Options description (from the agent's metadata):
Note: Names and descriptions come from the fence agent itself. If you need more help, please run 'man #!variable!name!#' at the command line.
Required field
Device #!variable!number!#:
Please confirm the fence devices are configured the way you like.
Please confirm the fence device is configured the way you like.
This is the unique name (often the host name) of this specific fence device.
Existing fence devices:
Confirm deleting '#!variable!name!#'
Install Manifest; Step #!variable!number!#
The first step asks some simple questions to determine what kind of #!string!brand_0006!# this manifest will build.
#!string!brand_0006!# prefix:
#!string!brand_0006!# Sequence:
IFNs.]]>
Add UPSes.
UPS #!variable!number!#:
Please confirm the UPS is configured the way you like.
Please confirm the UPSes are configured the way you like.
This is the unique name (often the host name) of this specific UPS.
Existing UPSes:
These will be used when creating install manifests and are used to know when to shed load, when to fully shut down and when to restore services.
List of UPSes supported by ScanCore on this system:
UPS #!variable!number!#:
Configuring '#!data!cgi::ups_brand::value!#'
This is the IP address of the UPS. This must be reachable by nodes powered by this UPS.
The only time to change this is if a UPS has been replaced (using the same name/IP) by a UPS of a different brand.
Saving UPS data
This is the sequence number for this #!string!brand_0006!#. The first #!string!brand_0006!# will be '1', the second will be '2', etc. This is used to preset IP addresses, PDU outlet positions, etc.
IFN on an #!string!brand_0006!#. If you have separate networks and plan to restrict certain servers to certain networks, you can install extra network interfaces into the nodes (two per IFN). If this is your plan, set this value to the number of IFNs you plan to use.]]>
This is a one to five character prefix used to identify the department, organization, or company whose servers will run on this #!string!brand_0006!#. You can use letters and numbers and set whatever makes sense to you.
This is the domain name you would like to use for this #!string!brand_0006!#. This will be used in the next step when setting default hostnames for various devices.
The second step specifies the subnet that will be used for each network. Generally, you only want to change the IFN(s). The BCN and SN are always '/16' subnets and should only be changed if they conflict with an existing IFN.
Default
The third step is where it all comes together!
NTP]]>
NTP servers.]]>
MTU]]>
MTU, over 1500 bytes), you can specify the maximum size in bytes here. Be sure all equipment supports your chosen MTU size! When in doubt, leave this set to 1500.]]>
Node 1
Node 2
DR Host
IPMI IP
Note: The password to use for an #!string!brand_0006!# will be asked when the manifest is actually run. The password is not stored in the manifest.]]>
IPMI Details
Note: The IPMI information is set when a node is initialized if an IPMI BMC is found. Only the IP address is needed.]]>
Fence Port
This is the "port" (outlet, name or other ID) that the associated fence device uses to terminate the target node. This could be the outlet number on a PDU, VM name on a hypervisor host, etc.
Powered By UPS
If the machine is powered by a given UPS, click to check the corresponding box. This information will be used in power loss events to decide what machine should host servers, which should be powered off during load-shed conditions and when to gracefully power off entirely.
If your machine has an IPMI BMC, (iDRAC, iLO, iRMC, etc), then you can enter the IP to give it here. Further details will be collected when the manifest runs. Leave blank if the machine doesn't have IPMI.
#!variable!network!#].]]>
Notes
Run manifest: [#!variable!name!#]:
Set all passwords to...
NOT be changed, except to configure passwordless SSH to the peer node and/or DR host. As such, it is safe to run this manifest when adding a rebuilt node or adding a DR host to a live #!string!brand_0006!# system.]]>
Adding a disaster recovery (DR) host is optional. You can add one later if you don't have one now.
If there are no servers on either node (as in a new #!string!brand_0006!# build), the OSes will be updated. Otherwise, they won't be updated. If the kernel is updated, or the network reconfigured, the node will be rebooted.
Free-form description of this system.
This tracks the last time a given mail server was configured for use. It allows for a round-robin switching of mail servers when one mail server stops working and two or more mail servers have been configured.
No UPSes
This is a condition record, used by programs like scan agents to track how long a condition has existed for.
This indicates why a machine was powered off. It is used by ScanCore to decide if or when to power up the target host.
Storage group #!variable!number!#
Manage this file.
This will remove the file from all systems.
There are no #!string!brand_0006!# systems configured yet. Existing files will automatically sync to new clusters.
Cancel
Close
This controls if 'anvil-safe-start' is enabled on a node.
The virtio NAT bridge: [#!variable!bridge!#] exists. Removing it...
Manage existing Anvil! systems.
Control when the database is locked for use by any system except the lock holder.
This is the number of bytes received (rx) by a network interface since it was last started.
This is the number of bytes transmitted (tx) by a network interface since it was last started.
Stay Off
This is the command used to provision the referenced server.
This indicates if a Striker's DB is available to be used.
#!variable!number!#/sec
s
m
h
d
w
Seconds
Minutes
Hours
Days
Weeks
ms
milliseconds
B
KB
MB
GB
TB
PB
EB
ZB
YB
KiB
MiB
GiB
TiB
PiB
EiB
ZiB
YiB
B
Kilobytes
Megabytes
Gigabytes
Terabytes
Petabytes
Exabytes
Zettabytes
Yottabytes
Kibibytes
Mebibytes
Gibibytes
Tebibytes
Pebibytes
Exbibytes
Zebibytes
Yobibytes
bps
Kbps
Mbps
Gbps
Tbps
Pbps
Ebps
Zbps
Ybps
Bytes
Test
Test replace: [#!variable!test!#].
Test Out of order: [#!variable!second!#] replace: [#!variable!first!#].
#!FREE!#
This is a multi-line test string with various items to insert.
It also has some #!invalid!# replacement #!keys!# to test the escaping and restoring.
Here is the default output language: [#!data!defaults::language::output!#]
Here we will inject 't_0000': [#!string!t_0001!#]
Here we will inject 't_0002' with its embedded variables: [#!string!t_0002!#]
Here we will inject 't_0006', which injects 't_0001' which has a variable: [#!string!t_0006!#].
This string embeds 't_0001': [#!string!t_0001!#]
- Critical
- Warning
- Notice
- Info
- Critical Cleared!
- Warning Cleared!
- Notice Cleared!
- Info Cleared!
ISO (optical disc)
Script (program)
Other file type
Yes
No
None
Unknown
]]>
Balance Round-Robin
Active/Backup
Balanced Exclusive OR
Broadcast
Dynamic Link Aggregation (802.3ad)
Balanced Transmit Load balancing
Balanced Adaptive Load balancing
Up
Down
Full
Half
Always Use Primary
Select Better
On Failure
STP Disabled
STP Enabled in Kernel
STP Enabled in User land
Ignore
Critical
Warning
Notice
Info
Lit
Up
Down
Mbps
waiting for job output...
Volts
Watts
RPM
Celsius
Fahrenheit
%
Amps
Going Back
Link.]]>
Link.]]>
Link.]]>
Link.]]>
[ Warning ] - The IP address will change. You will need to reconnect after applying these changes.
[ Warning ] - The access information appears to not be valid.
[ Warning ] - Test access to the peer (using SSH) failed. There may be details in the log file.
[ Warning ] - Accessing the peer over SSH worked, but a test connection to the database failed.
[ Warning ] - There was a problem reading the peer's UUID. Read: [#!variable!uuid!#], which appears to be invalid.
[ Warning ] - An SSH connection was established to: [#!variable!target!#], but we failed to establish a channel. The last error was: [#!variable!error!#].
[ Warning ] - The job: [#!variable!command!#] was picked up by: [#!variable!pid!#], but that process is not running and it appears to only be: [#!variable!percent!# %] complete. Restarting the job.
[ Warning ] - Unable to find a local IP on the same subnet as the IP/host: [#!variable!host!#] given for the target. Bi-directional setup not currently possible.
[ Warning ] - The subtask request for manipulating the 'Install Target' feature is not valid. It should be 'enabled' or 'disabled'.
[ Warning ] - The IP address: [#!variable!ip_address!#] is not a valid IPv4 address.
[ Warning ] - The SSH port is not valid (usually it is 22, but it must be between 1 and 65535).
[ Warning ] - Failed to log into the host. Is the IP or root user's password right?
Click here to resolve.]]>
[ Warning ] - The host UUID: [#!variable!host_uuid!#] was not found in the #!data!path::json::all_status!# file on the local dashboard.
[ Warning ] - To configure a host as either an #!string!brand_0002!# node or a disaster recovery host, there must be at least 6 network interfaces. This machine only has: [#!variable!interface_count!#] interfaces.
[ Warning ] - No databases are available. Changes to the network interfaces will be cached.
[ Warning ] - The subnet mask is not valid.
[ Warning ] - The IP address was specified, but the subnet mask was not.
[ Warning ] - The passed in parameter '#!variable!parameter!#': [#!variable!ip_address!#] is not a valid IPv4 address.
[ Warning ] - The passed in parameter '#!variable!parameter!#': [#!variable!subnet_mask!#] is not a valid IPv4 subnet mask.
[ Warning ] - All three networks require the first network pair to be defined.
[ Warning ] - Only one network interface selected for a network pair.
[ Warning ] - The outgoing mail server does not appear to be a valid domain name or IP address.
[ Warning ] - The outgoing mail server port is not valid. Must be 'mail_server:x' where x is 1 ~ 65535.
[ Warning ] - There was a problem saving the mail server data. Please check the logs for more information.
[ Warning ] - The recipient's email address appears to not be valid.
[ Warning ] - There was a problem saving the alert recipient data. Please check the logs for more information.
[ Warning ] - Failed to read the fence agent: [#!variable!agent!#] metadata. Ignoring it.
[ Warning ] - While resync'ing the table: [#!variable!table!#] on: [#!variable!host_name!# (#!variable!host_uuid!#)], an entry was found in the public schema (#!variable!column!# = #!variable!uuid!#) but not in the history schema. This shouldn't happen, and is probably a bug. The query: [#!variable!query!#] is being dropped.
[ Warning ] - Database->insert_or_update_variables() was called with 'update_value_only' set, but 'variable_uuid' wasn't passed and couldn't be found given the 'variable_name'. Unable to update. The passed-in values are logged below this message.
[ Warning ] - No internet detected (couldn't ping: [#!variable!domain!#]). Skipping attempt to download RPMs.
[ Warning ] - The fence device: [#!variable!name!#] appears to have not been saved.
[ Warning ] - The fence device: [#!variable!name!#] with the UUID: [#!variable!uuid!#] has already been deleted.
[ Warning ] - The fence device with the UUID: [#!variable!uuid!#] was not found.
[ Warning ] - The fence device: [#!variable!name!#] with the UUID: [#!variable!uuid!#] was NOT deleted. The reason may be in the: [#!data!path::log::main!#] log file on this host.
[ Warning ] - The UPS with the UUID: [#!variable!uuid!#] was not found.
[ Warning ] - The UPS: [#!variable!name!#] with the UUID: [#!variable!uuid!#] has already been deleted.
[ Warning ] - The UPS: [#!variable!name!#] appears to have not been saved.
[ Warning ] - There's a problem with the form.
[ Warning ] - The UPS: [#!variable!name!#] with the UUID: [#!variable!uuid!#] was NOT deleted. The reason may be in the: [#!data!path::log::main!#] log file on this host.
[ Warning ] - There was a problem saving the install manifest. The reason may be in the: [#!data!path::log::main!#] log file on this host.
[ Warning ] - No record found for the table/columns: [#!variable!table!# -> #!variable!column!#] for the value: [#!variable!value!#].
[ Warning ] - The install manifest with the UUID: [#!variable!uuid!#] was not found.
[ Warning ] - The install manifest: [#!variable!name!#] with the UUID: [#!variable!uuid!#] has already been deleted.
[ Warning ] - The install manifest: [#!variable!name!#] with the UUID: [#!variable!uuid!#] was NOT deleted. The reason may be in the: [#!data!path::log::main!#] log file on this host.
[ Warning ] - The install manifest with the UUID: [#!variable!uuid!#] was not found.
[ Warning ] - The password to set for this #!string!brand_0006!# was not set.
[ Warning ] - The password verification was not set.
[ Warning ] - The passwords do not match.
[ Warning ] - The host: [#!variable!host!#] now belongs to the #!string!brand_0006!#, it can't be used here anymore.
[ Warning ] - The IP address: [#!variable!ip!#] is not valid. Ignoring associated hosts: [#!variable!hosts!#].
[ Warning ] - Failed to read the CIB. Is 'pcsd' running and is the cluster started?
[ Warning ] - Failed to parse the CIB. The CIB read was:
========
#!variable!cib!#
========
The error was:
========
#!variable!error!#
========
[ Warning ] - Node 1 and Node 2 are set to the same machine.
[ Warning ] - The DR Host is set to the same machine as Node 1.
[ Warning ] - The DR Host is set to the same machine as Node 2.
[ Warning ] - The 'libvirtd' daemon is not running. Checking to see if the server is running by looking for its PID (server state won't be available). Please start 'libvirtd'!
[ Warning ] - The server: [#!variable!server!#] is in a crashed state!
[ Warning ] - The server: [#!variable!server!#] was asked to be booted on: [#!variable!requested_node!#], but it is already running on: [#!variable!current_host!#].
[ Warning ] - The server: [#!variable!server!#] was asked to be shut down, but it's in an unexpected state: [#!variable!state!#] on the host: [#!variable!current_host!#]. Aborting.
[ Warning ] - The server: [#!variable!server!#] was asked to be migrated to: [#!variable!requested_node!#], but the server is off. Aborting.
[ Warning ] - Failed to read the 'crm_mon' output. Is the cluster started?
[ Warning ] - Failed to parse the XML output from 'crm_mon'. The XML read was:
========
#!variable!xml!#
========
The error was:
========
#!variable!error!#
========
[ Warning ] - The server: [#!variable!server!#] was asked to be migrated to: [#!variable!requested_node!#], but the server is shutting down. Aborting.
[ Warning ] - The server: [#!variable!server!#] was asked to be migrated to: [#!variable!requested_node!#], but the server is already in the middle of a migration. Aborting.
[ Warning ] - Failed to parse the XML:
========
#!variable!xml!#
========
The error was:
========
#!variable!error!#
========
[ Warning ] - Failed to find the server's UUID from the definition XML:
========
#!variable!xml!#
========
[ Warning ] - The server UUID read from the definition XML doesn't match the passed-in server UUID.
Passed in UUID: [#!variable!passed_uuid!#]
Read UUID: .... [#!variable!read_uuid!#]
========
#!variable!xml!#
========
[ Warning ] - Checking the mail queue appears to have failed. Output received was: [#!variable!output!#].
[ Warning ] - Unable to report the available resources for the Anvil! [#!variable!anvil_name!#] as it looks like ScanCore has not yet run. Please try again after starting the 'scancore' daemon on the nodes.
[ Warning ] - We were asked to create a new storage group called: [#!variable!name!#] but that name is already used by the group with UUID: [#!variable!uuid!#].
[ Warning ] - The file: [#!variable!file_path!#] was not found on any accessible Striker dashboard (or it isn't the same size as recorded in the database). Will sleep for a minute and exit, then we'll try again.
[ Warning ] - No databases are available. Some functions of this resource agent will not be available.
[ Warning ] - Our disk state for the peer: [#!variable!peer_name!#] on resource: [#!variable!resource!#], volume: [#!variable!volume!#] is: [#!variable!disk_state!#].
[ Warning ] - We were asked to insert or update a host with the name: [#!variable!host_name!#]. Another host: [#!variable!host_uuid!#] has the same name, which could be a failed node that is being replaced. We're going to set its 'host_key' to 'DELETED'. If this warning is logged only once, after a machine is replaced, it's safe to ignore. If this warning is logged repeatedly, then there are two active machines with the same host name, and that needs to be fixed.
[ Warning ] - It looks like the postfix daemon is not running. Enabling and starting it now.
[ Warning ] - Checking the mail queue after attempting to start postfix appears to have still failed. Output received was: [#!variable!output!#].
[ Warning ] - Not installing the Alteeve repo! The package: [#!variable!anvil_role_rpm!#] is already installed. This is OK, but be aware that updates from Alteeve will not be available. To change this, please install: [#!variable!alteeve_repo!#].
[ Warning ] - Failed to read the JSON formatted output of 'lsblk'. Expected the return code '0' but received: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
[ Warning ] - Failed to read the XML formatted output of 'lshw'. Expected the return code '0' but received: [#!variable!return_code!#]. The output, if any, was: [#!variable!output!#].
[ Warning ] - The temporary file: [#!variable!temp_file!#] vanished (or failed to be created) before it could be copied to: [#!variable!target!#].
[ Warning ] - This host is not in the cluster, and all UPSes are running on batteries, and have been for at least: [#!variable!time_on_batteries!#]. Shutting down to conserve power.
[ Warning ] - This host is not in the cluster, and the temperature is anomalous. Shutting down to limit thermal loading.
[ Warning ] - We are healthier than our peer: [#!variable!peer_name!#]! Scores (local/peer): [#!variable!local_health!# / #!variable!peer_health!#]. This has been the case for: [#!variable!age!# seconds]. After 120 seconds, preventative migration will be triggered.
[ Warning ] - Initiating preventative live migration, taking the servers from our peer: [#!variable!peer_name!#]! Scores (local/peer): [#!variable!local_health!# / #!variable!peer_health!#]. This has been so for over two minutes, so we will now perform a preventative migration of servers.
[ Warning ] - We're not a cluster member, but the server: [#!variable!server_name!#] is in the status: [#!variable!status!#]. ScanCore will take no action on this node.
[ Warning ] - We're alone in the cluster, and our temperature is now critical. Gracefully stopping servers and then shutting down.
[ Warning ] - We're alone in the cluster, we've been running on batteries for more than 2 minutes, and the strongest UPS shows less than ten minutes hold up time left. Gracefully stopping servers and then shutting down.
[ Warning ] - This host is not in the cluster, and all UPSes are running on batteries. The most recent UPS to lose power was roughly: [#!variable!time_on_batteries!#] seconds ago. After 120 seconds, this node will power down to conserve battery power.
[ Warning ] - This host is not in the cluster, and the temperature is anomalous. This has been the case for roughly: [#!variable!age!#] seconds. After 120 seconds, this node will shut down to reduce thermal loading.
[ Warning ] - Both nodes have been running on batteries for more than two minutes, and both show the strongest UPS as having less than 10 minutes runtime left. Full power loss is highly likely, and imminent. Gracefully shutting down servers and powering off.
[ Warning ] - Both nodes have been running on batteries for more than two minutes. To conserve battery power, load shedding will begin. A node will be selected for shutdown momentarily.
[ Warning ] - Both nodes are running on batteries, but this has been so for less than two minutes. Will take no action yet in the hopes that this is a transient issue.
[ Warning ] - Our peer node: [#!variable!host_name!#] has been running on batteries for more than two minutes. We've still got power, so we will pull the servers off of our peer and on to this machine.
[ Warning ] - Our peer node: [#!variable!host_name!#] is running on batteries, but it has been less than two minutes. Not doing anything, yet.
[ Warning ] - We're running on batteries, have been so for more than two minutes, and the strongest UPS has an estimated hold up time below ten minutes. Power loss is inevitable, so we will start a graceful shutdown now.
[ Warning ] - We're running on batteries, and have been for more than two minutes. We'll shut down to conserve battery power now.
[ Warning ] - We're running on batteries, but it's been less than two minutes. We'll wait to see if this is a transient event before taking any action.
[ Warning ] - Both nodes' temperatures have been anomalous for more than two minutes. We'll shut down to reduce the thermal loading of the room we're in.
[ Warning ] - Both nodes' temperatures are anomalous, and we've been critically anomalous for more than two minutes. Hardware shutdown is very likely, so we'll gracefully shut down now.
[ Warning ] - Both nodes' temperatures are anomalous, but this has been the case for less than two minutes. We'll wait to see if the temperatures clear before taking action.
[ Warning ] - Our peer node: [#!variable!host_name!#]'s temperature has been anomalous for more than two minutes. We're still thermally nominal, so we will pull the servers off of our peer and on to this machine.
[ Warning ] - Our peer node: [#!variable!host_name!#]'s temperature is anomalous, but it hasn't been so for two minutes yet. Not doing anything, yet.
[ Warning ] - Our temperature is anomalous, and has been so for more than two minutes. We'll shut down to reduce thermal loading in the room.
[ Warning ] - We are "SyncSource" for at least one resource, meaning that a peer is copying data from our storage in order to synchronize. As such, all shut down options are disabled until the sync ends or the peer goes offline.
[ Warning ] - Our temperature is critically anomalous, and has been so for more than two minutes. Hardware shutdown is highly likely, so we will gracefully shut down now.
[ Warning ] - We're doing a load shed to conserve UPS power, and we're SyncSource (meaning our data is more complete than our peer's data). We will stay up and pull the servers to us.
[ Warning ] - We're doing a load shed to reduce thermal loading, and we're SyncSource (meaning our data is more complete than our peer's data). We will stay up and pull the servers to us.
[ Warning ] - We're doing a load shed to conserve UPS power, and we have no servers running locally. We will shut down now.
[ Warning ] - We're doing a load shed to reduce thermal loading, and we have no servers running locally. We will shut down now.
[ Warning ] - We're doing a load shed to conserve UPS power, and the amount of RAM allocated to servers on our peer is less than the amount of RAM allocated to servers running locally. As such, we'll pull the peer's servers to here.
[ Warning ] - We're doing a load shed to reduce thermal loading, and the amount of RAM allocated to servers on our peer is less than the amount of RAM allocated to servers running locally. As such, we'll pull the peer's servers to here.
[ Warning ] - We're doing a load shed to conserve UPS power, and the estimated migration time to pull the servers to us from our peer is shorter than the reverse. As such, we'll pull the peer's servers to here.
[ Warning ] - We're doing a load shed to reduce thermal loading, and the estimated migration time to pull the servers to us from our peer is shorter than the reverse. As such, we'll pull the peer's servers to here.
[ Warning ] - We're doing a load shed to conserve UPS power, and by all measures, the time to migrate off either node is equal. We're node 1, so we will pull the servers to us now.
[ Warning ] - We're doing a load shed to reduce thermal loading, and by all measures, the time to migrate off either node is equal. We're node 1, so we will pull the servers to us now.
[ Warning ] - The core Anvil! configuration file: [#!variable!file!#] was missing! It's been recreated using default values. It is possible that the database connection information will need to be restored manually.
[ Warning ] - The 'admin' group was created as a system group with the group ID: [#!variable!gid!#].
[ Warning ] - The 'admin' user was created with the user ID: [#!variable!uid!#].
[ Warning ] - Timed out waiting for the database: [#!variable!uuid!#] to become available.
[ Warning ] - The Anvil! with the UUID: [#!variable!uuid!#] was not found. Exiting, will re-run the anvil-join-anvil job again in a few moments.
[ Warning ] - Asked to find or set the fence delay, but this is not a node.
[ Warning ] - Asked to find or set the fence delay, but node is not in a cluster.
[ Warning ] - Asked to find or set the fence delay, but node is not fully in the cluster yet.
[ Warning ] - Asked to check server location constraints, but this is not a node.
[ Warning ] - Asked to check server location constraints, but this node is not in a cluster.
[ Warning ] - Asked to check server location constraints, but this node is not fully in the cluster yet.
[ Warning ] - Failed to parse the fence agent: [#!variable!agent!#]'s XML metadata:
========
#!variable!metadata!#
========
The error was:
========
#!variable!error!#
========
[ Warning ] - The IPMI BMC administrator (oem) user was not found. The output (if any) of the call: [#!variable!shell_call!#] was:
====
#!variable!output!#
====
We will sleep a bit and try again.
[ Warning ] - The storage group: [#!variable!storage_group_name!#] had the host: [#!variable!host_name!#] as a member. This host is not a member (anymore?) of the Anvil!: [#!variable!anvil_name!#]. Removing it from the storage group now.
[ Warning ] - The PostgreSQL server is not installed yet. Sleeping for a bit, then will check again.
[ Warning ] - Failed to build or install the DRBD kernel module! It is very unlikely that this machine will be able to run any servers until this is fixed.
[ Warning ] - Table: [history.#!variable!table!#] not found.
[ Warning ] - Holding off starting the cluster. Tested access to ourselves, and it failed. Is '/etc/hosts' populated? Will try again in ten seconds.
[ Warning ] - The program: [#!variable!program!#] was not found to be running.
[ Warning ] - Failed to connect to the host: [#!variable!host!#]! Unable to up the resource, so the server may not start. If the peer can't be recovered, manually forcing the local resource(s) to UpToDate may be required.
[ Warning ] - Timed out waiting for the connections to the peers, and the local resource(s) are not in the 'UpToDate' state. Booting the server will likely fail.
[ Warning ] - Timed out waiting for the connections to the peers.
[ Warning ] - We're using: [#!variable!ram_used!#] (#!variable!ram_used_bytes!# Bytes), but the job: [#!variable!job_command!#] is running, which might be why the RAM use is high. NOT exiting while this program is running.
[ Warning ] - A no-longer active PID: [#!variable!pid!#] (used by: [#!variable!caller!#]) had marked the database: [#!variable!db!#] as "in_use", but the PID is gone now. Reaping the flag.
[ Warning ] - We waited for: [#!variable!wait_time!#] seconds for all users of the local database to exit. Giving up waiting and taking the database down now.
[ Warning ] - The command: [#!variable!command!#] is still using our database.
[ Warning ] - While evaluating database shutdown, the host UUID: [#!variable!host_uuid!#] was not yet found in the database on host: [#!variable!db_uuid!#]. DB shutdown will not happen until all hosts are in all DBs.
[ Warning ] - While preparing to record the state: [#!variable!state_info!#], the host UUID: [#!variable!host_uuid!#] was not yet found in the database on host: [#!variable!db_uuid!#]. NOT recording the state!
[ Warning ] - The daemon: [#!variable!daemon!#] was found running. It shouldn't be, and will now be stopped and disabled.
[ Warning ] - Failed to parse the firewall zone file: [#!variable!file!#]. The body of the file was:
========
#!variable!body!#
========
The error was:
========
#!variable!error!#
========
[ Warning ] - The interface: [#!variable!interface!#] is in a bond, but it is down. The system uptime is: [#!variable!uptime!#], so it might be a problem where the interface didn't start on boot as it should have. So we're going to bring the interface up.
[ Warning ] - The IPMI stonith resource: [#!variable!resource!#] is in the role: [#!variable!role!#] (should be 'Started'). Will check the IPMI config now.
テスト
テスト いれかえる: [#!variable!test!#]。
テスト、 整理: [#!variable!second!#]/[#!variable!first!#]。
#!FREE!#
これは、挿入するさまざまな項目を含む複数行のテスト文字列です。
#!無効!#な置換#!キー!#を使ってエスケープとリストアをテストすることもできます。
デフォルトの出力言語は次のとおりです:「#!data!defaults::language::output!#」
ここで、「t_0000」を挿入します:[#!string!t_0001!#]
ここでは、 「t_0002」に埋め込み変数を挿入します:「#!string!t_0002!#」
ここでは変数 「#!string!t_0006!#」を持つ 「t_0001」を注入する 「t_0006」を注入します。
この文字列には「t_0001」が埋め込まれています:「#!string!t_0001!#」
アルティーブ
Anvil!
ストライカ
スカンコア
Alteeve's Niche! Inc., トロント、オンタリオ、カナダ]]>