Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

wspmvdata

 

Moves data between source hosts, that is, Tivoli managed nodes with Software Distribution installed, and endpoints, and between one endpoint and multiple endpoints. The syntax is as follows:

wspmvdata -s origin_list -t destination_list [-P sp|tp: path] [-r spre|spost|tpre|tpost:script] [-S sw_package[:status]] [-c] [-D variable=value]... [-I] [-l mdist2_token=value...] [-R] [-B] [-F] file

wspmvdata -d destination_list [-P tp: path] [-r tpre|tpost:script] [-S sw_package[:status]] [-I] [-l mdist2_token=value...] [-R] [-B] file

wspmvdata -s origin_list -t destination_list [-P sp|tp: path] [-r spre|spost|tpre|tpost:script] [-S sw_package[:status]] [-c] [-D variable=value]... [-I] [-l mdist2_token=value...] [-R] [-B] [-F] [-G] file

wspmvdata -A

wspmvdata -p profile_manager

This command can be used for transferring medium-sized files from one server to another. Huge files are trickier, because by default the maximum size is limited by the free space on the /opt filesystem where the Tivoli endpoint is installed.

You can transfer one-to-one or one-to-many. The multiple-destination capability is very convenient.

The same is true about retrieving data: you can get data from multiple endpoints to update values on a source host, that is a Tivoli managed node with Software Distribution installed.

While not that interesting by itself, you can also delete data on multiple systems, and since the command allows scripts to be run, a delete can drive cleanup tasks as well.

It understands simple (shell-style or DOS-style) wildcards, so multiple source files can be targeted with one command.

The ability to run pre- and post-transfer tasks on both the origin and destination systems makes this command really powerful: a kind of distribution system in a box.

More advanced options are described below.

In the data moving architecture, data is moved between source hosts and endpoints and between one endpoint and multiple endpoints. A source host is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed. The source host corresponds to the origin system when send operations are performed, with the exception of send operations from one endpoint to multiple endpoints, where the origin system is an endpoint. During a retrieve operation, by contrast, the source host is the destination system.

All data moving operations use the same software package object, DataMovingRequests.1. This object contains certain standard information to be used by all data moving operations, including logging options. It is either created automatically at installation time or by using the mutually exclusive -A or -p profile_manager options. If neither of these operations is performed, the object is created automatically in the first profile manager that belongs to a region having SoftwarePackage as a managed resource when the first data moving operation is performed.

For more information about the DataMovingRequests.1 object and configuring the Data Moving service, see "Configuring the Data Moving Service" in the IBM Tivoli Configuration Manager: User's Guide for Software Distribution.

Options

-s origin_list
Identifies the system or systems where the data that is to be moved originates. When sending data, the origin system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed, or an endpoint. When retrieving data, the origin list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these.

Components of the list must be separated by commas but the list must not end with a comma. No spaces are allowed. Each endpoint name must be preceded by @. For example,

-s @test1,@test2,@pm1,file1

If one or more subscribers are not valid, the operation fails on those subscribers, and continues on the other subscribers. Information about the subscribers on which the operation did not complete is written to the DataMovingRequests.1.log file. To specify that a distribution must be stopped when invalid targets are encountered, use the continue_on_invalid_targets key in the wswdcfg command.
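The list format rules above (comma-separated, no spaces, no trailing comma, endpoints prefixed with @) can be checked before submitting the command. The following is a hypothetical helper sketch, not part of the product:

```shell
#!/bin/sh
# Sanity-check a wspmvdata subscriber list string (hypothetical helper).
check_list() {
    list=$1
    case "$list" in
        *' '*) echo "invalid: spaces are not allowed"; return 1 ;;
        *,)    echo "invalid: list must not end with a comma"; return 1 ;;
        *)     echo "ok" ;;
    esac
}
check_list '@test1, @test2' || true       # invalid: spaces are not allowed
check_list '@test1,@test2,@pm1,file1'     # ok
```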

-d
Specifies that the data movement is a delete operation. Do not specify an origin list if you use this argument.

Note:

When this option is specified, empty directories are removed, and a warning message is inserted in the log file.
-t destination_list
Identifies the system or systems to which data is to be transferred or where data is to be deleted. When retrieving data, the destination system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed. When sending or deleting data, the destination list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these.

Components of the list must be separated by commas but the list must not end with a comma. No spaces are allowed. Each endpoint name must be preceded by @. For example,

-t @ep1,@ep2,@pm1,file1

This argument is always required.

If one or more subscribers are not valid, the operation fails on those subscribers, and continues on the other subscribers. Information about the subscribers on which the operation did not complete is written to the DataMovingRequests.1.log file. To specify that a distribution must be stopped when invalid targets are encountered, use the continue_on_invalid_targets key in the wswdcfg command.

-P
Specifies the location of the file, as follows:
sp:origin_path
Specifies the location of the file on the origin system or systems.
tp:dest_path
Specifies the location on the destination system or systems, to which the file is to be copied.

Note:

If you are performing a retrieve operation, the destination directory on the system is not created with this command, so it must be created beforehand. During the retrieve operation the program creates a new sub-directory under the specified destination directory using the following naming convention:
endpointname_distributionID_timestamp.

See File Paths.

-r spre|spost|tpre|tpost:script
Specifies a script to run, before or after data movement on the origin or destination system, as follows:
spre:src_prescript
Specifies a script to run on the origin system of the data file, before the data is transmitted. When sending data, the origin system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed, or an endpoint, when data is sent from one endpoint to one or more endpoints. When retrieving data, the origin list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these. Where the -s option specifies a list of endpoints, the script runs on each endpoint.
spost:src_postscript
Specifies a script to run on the origin system of the data file, after the data is transmitted. When sending data, the origin system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed, or an endpoint, when data is sent from one endpoint to one or more endpoints. When retrieving data, the origin list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these. Where the -s option specifies a list of endpoints, the script runs on each endpoint.
tpre:targ_prescript
Specifies a script to run on the destination system, before the data is transmitted. When retrieving data, the destination system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed, which afterwards redirects the data to the destination systems. When sending or deleting data, the destination list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these. Where the -t option specifies a list of endpoints, the script runs on each endpoint.
tpost:targ_postscript
Specifies a script to run on the destination system, after the data is transmitted. When retrieving data, the destination system must be a source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed, which afterwards redirects the data to the destination systems. When sending or deleting data, the destination list can include multiple Tivoli endpoints, files that store a list of endpoints, profile managers, or a combination of these. Where the -t option specifies a list of endpoints, the script runs on each endpoint.
-S sw_package:state
Specifies a software dependency that must be met on the destination systems for the operation to proceed. Only one dependency can be defined with a single use of the -S argument. To define more dependencies, you must include the argument multiple times.

For each dependency, you specify a valid software package and one of the following states:

I
The package must be in the IC, ICU or I--D state.
R
The package must be in the RC or RCU state, or it must never have been installed.

The default state is I. If you do not include a state, the default is assumed.

The condition specified using this argument is mapped to the dependency attribute in the log file, as follows:

-S mypkg.1.0:I becomes $(installed_software) == "mypkg.1.0"

-S mypkg.1.0:R becomes $(installed_software) != "mypkg.1.0"

For more information about dependency checking, see Dependency.

There are two stages in the software dependency check. The first check is made against the Inventory before transmission. If this check returns a value of false, the transmission is ended. Otherwise, a check is made on each target system. If any of the targets fail to meet the requirement, the transmission is ended for all targets, unless the -I argument is specified in the command. In this case, the transmission is sent only to targets that pass the check.
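The mapping from a -S specification to the dependency expression recorded in the log file can be sketched as a small shell function (a hypothetical helper, not part of the product):

```shell
#!/bin/sh
# Map a -S sw_package[:state] specification to the logged dependency
# expression (hypothetical helper illustrating the documented mapping).
map_dependency() {
    spec=$1
    pkg=${spec%:*}                      # package name, e.g. mypkg.1.0
    state=${spec##*:}                   # trailing state letter, if any
    [ "$state" = "$spec" ] && state=I   # no state given: default is I
    if [ "$state" = "I" ]; then
        echo "\$(installed_software) == \"$pkg\""
    else
        echo "\$(installed_software) != \"$pkg\""
    fi
}
map_dependency mypkg.1.0:I   # $(installed_software) == "mypkg.1.0"
map_dependency mypkg.1.0:R   # $(installed_software) != "mypkg.1.0"
```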

-c
Specifies that codepage translation is required.
-D variable=value
Defines the value of a variable that is to be added or is to override existing variables. When specifying multiple variables, repeat the -D attribute before each variable=value. Note that these variables can be resolved only on the endpoint.
-I
Verifies whether the operation can be performed and proceeds with the operation only on target systems that meet the software dependency requirements that are specified by the -S argument. For example, if you are performing a delete operation and the software package has already been deleted on a target system, the operation does not proceed on that target system. If this argument is not included, the operation does not proceed on any destination unless all destinations meet the requirements.
-l mdist2_token=value
Specifies the distribution options, as follows:
label
Specifies a description string for the distribution. The default value is the string "filename (operation) " where filename indicates the file name, and operation is one of the following:
  • send
  • retrieve
  • delete

depending on the operation submitted.

priority
Specifies the priority level, which is the order in which distributions are handled by repeaters, either h (highest priority), m (medium priority), or l (low priority). The default value is m (medium priority). The priority level is the priority also used when logging information.
notify_interval
Specifies the notification interval key, which determines how often each repeater bundles the completed results and returns them to the application and distribution manager. This attribute is initially set using the wmdist -s command for each repeater and is expressed as a positive integer representing minutes. You can override the wmdist -s setting by specifying a different value here. The default value is 30 minutes.
send_timeout
Specifies the length of time a repeater will wait for a target system to receive a block of data. This timeout is used to detect network or endpoint failures. This attribute is initially set using the wmdist -s command for each repeater (the default value is 300 seconds) and is expressed as a positive integer representing seconds. You can override the wmdist -s setting by specifying a different value here.
execute_timeout
Specifies the length of time a repeater will wait for Software Distribution to return the result of a distribution after all the data has been sent. This timeout is used to detect network, endpoint, or script failures, such as a script running an infinite loop. This attribute is initially set using the wmdist -s command for each repeater (the default value is 600 seconds) and is expressed as a positive integer representing seconds. You can override the wmdist -s setting by specifying a different value here. When retrieve and send from endpoint to endpoint operations are performed, the software package is built on the endpoint. As this operation can require a longer amount of time than the default timeout value allows for, set a higher timeout value.
deadline
The date on which a distribution expires, that is, when it fails for unavailable target systems, in the format "mm/dd/yyyy hh:mm".
distribution_note
Specifies a message to be associated with a software package when it is distributed to mobile targets.

You can enter a text using the format:

distribution_note="message text"

You can specify a file using the format:

distribution_note=@filename
mandatory_date
Specifies a date, in the format "mm/dd/yyyy hh:mm", by which the distribution must be made to an endpoint or mobile target. Distributions to endpoints or mobile targets can be deferred up to this date. When the date is reached, the package is automatically installed on all endpoints or mobile targets that have not yet accepted it. Use this option to set the distribution as mandatory.
force_mandatory
The setting of this argument controls the way in which mandatory distributions on mobile targets are treated once the mandatory date is passed.

If you specify y (the default value), the mandatory distribution is automatically started as soon as the mobile user connects.

If you specify n, the mobile user has the choice of not starting the mandatory distribution. However, the user will not be able to perform any other operations until the mandatory distribution has been performed.

escalate_date_n
Specifies a date, in the format "mm/dd/yyyy hh:mm", on which a reminder message must be sent to mobile targets that have not yet performed the operation.

The n represents a number, 0 through 9, so that a sequence of up to ten dates can be specified. Each escalation date must have an associated message and there must be no gaps in the sequence.

escalate_msg_n
Specifies a message that must be sent to mobile targets that have not performed the operation by the associated escalation date.

The n represents a number, 0 through 9, so that a sequence of up to ten messages can be specified. Each message must have an associated date and there must be no gaps in the sequence.

You can enter a text using the format:

escalate_msg_n="message text"

You can specify a file using the format:

escalate_msg_n=@filename
enable_disconnected
Indicates whether disconnected operations are enabled. If you specify y, you have the option of downloading the software package to a depot and applying it later. If you specify n, you must apply the software package as soon as you download it.
hidden
Indicates whether the operation on mobile targets is to be hidden. Non-hidden operations on mobile targets can be deferred. Hidden operations cannot.

Valid values are y (hidden) and n (not hidden). If you set this argument to y, you must not set values for mandatory date, escalation dates or escalation messages.

roam_endpoints
Indicates whether the operation defined in the command supports roaming endpoints. Setting this argument to y indicates that the distribution is to be transferred to any gateway where the mobile endpoint connects. Setting it to n indicates that once the package is queued at a gateway it cannot be transferred to another. The default value is n.
wake_on_lan
Indicates whether the operation sends a wake-on-lan message to trigger rebooting of systems that are not available at distribution time. Valid values are y (yes) and n (no).
is_multicast
Enables data broadcasting to multiple repeaters. Multicast sends only one distribution from the source to a group of targets simultaneously. Use this option where there is limited network bandwidth. Valid values are t (true) and f (false). Setting this token to t enables the retry_unicast token.
retry_unicast
Retransmits the distribution independently to each endpoint that failed to receive the original multicast distribution. This option can be used only if the is_multicast option is set to t.
enable_notification
Specifies whether the user should be notified of a distribution starting on the user's machine. The notification dialog containing the message text is displayed only on Windows platform endpoints. To specify the message text, use the user_notification option. Valid values are y and n. The default value is n.
allow_defer
Specifies whether the user should be allowed to defer the distribution. A user can defer the software distribution and, at the end of the defer timeout period, subsequently reject it or defer it again. Valid values are y and n. The default value is y.
allow_reject
Specifies whether the user should be allowed to reject the distribution. Valid values are y and n. The default value is y.
default_action
Specifies the default action to be performed on the user's machine in case the user is not logged on to the machine, or is not physically present. Valid values are accept and reject. The default value is accept.
default_timeout
Specifies the interval of time the notification dialog is displayed. The default is 60 seconds. When the timeout period elapses, the default action is launched if the user is logged on. If the user is not logged on, the default action is launched immediately without a timeout period.
user_notification
Specifies the text to be sent with the distribution and displayed on the user's machine. The notification dialog containing the message text is displayed only on Windows platform endpoints. To enable this function, set enable_notification to y.

You can enter text using the following format:

user_notification="message text"

You can specify a file using the following format:

user_notification=@/test/download/filename
fail_unavail
Specifies whether the distribution fails on endpoints that cannot be reached for any reason. Supported values are true and false. The default value is false.
-p profile_manager
Specifies the profile manager in which the DataMovingRequests.1 software distribution object is to be created.

Note:

Before specifying a profile manager name, make sure that the profile manager belongs to a region having SoftwarePackage as managed resource.

DataMovingRequests.1 is the software distribution object used for all data moving operations. It includes general information for data moving operations, for example, the name and location of the log file.

-A
Triggers the creation of the DataMovingRequests.1 software distribution object in a profile manager that belongs to a region having SoftwarePackage as a managed resource.
-R
Specifies that the selected operation will be applied to all subdirectories in the source and target directories. This option is not supported with matching indicators.
-B
Use this option to specify that the whole data moving operation must be considered as failed if the post-script on the source host fails. This option is available only for send and retrieve operations.
-F
Use this option to specify that the send operation must proceed even when one or more files to be sent to endpoints are not present on the source host. This option applies to the case in which you are sending multiple files using the $(ep_label) variable to specify which files are to be sent. In this case, the $(ep_label) variable allows you to send files with similar names to different endpoints by substituting the label of the endpoint, for example 'data.endpoint_name.txt' where endpoint_name is the label of the endpoint. For more information, see Sending Multiple Files.
-G
Use this option to modify the behavior of the command when retrieving one file from the endpoints. If you specify this option, the file is saved on the destination system according to the following naming convention:
name_endpoint_timestamp_distribution_id.extension. If this option is not specified, the default behavior applies and the retrieved files are saved with their original names to a directory on the destination system named according to the following convention:
<endpoint>_<distribution_id>_<timestamp>. For more information, see File Paths.

Note:

This parameter can be used only when retrieving a single file.
file
Specifies the name of the file to be moved. The file name can be fully qualified or relative to the paths specified in the origin and destination lists. If the file name is fully qualified, the specified path defines the location of the file on the origin and destination systems. See File Paths. This option also allows you to send files with similar names to different endpoints by substituting the label of the endpoint, for example 'data.endpoint_name.txt' where endpoint_name is the label of the endpoint. For more information, see Sending Multiple Files.

Note:

Hard links and symbolic links are not supported. Hard links are turned into regular files and lose the link to the original file, while symbolic links are ignored.

File Paths

The CLI allows definition of any or all of the following: an origin path (-P sp:), a destination path (-P tp:), and a qualified file name.

The qualified file name is appended to the origin and destination paths to obtain the full path to the file on the origin and destination systems.

Note:

Due to the differences in default drive between the source host, where the default drive is the drive where Software Distribution is installed, and Windows 2000 endpoints where the default drive is defined in a system variable, it is advisable to include the drive in the definition of fully qualified paths.

The origin and destination path values are optional and the file name may be unqualified. The examples that follow show how the file location is resolved depending on the presence or absence of these values.

wspmvdata -s @lab15124 -P sp:/usr/sd/ -t @lab67135-w98,
@lab15180-2000 /source/data.txt

On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /source/data.txt

wspmvdata -s @lab15124 -P sp:/usr/sd/source -t @lab67135-w98,
@lab15180-2000 data.txt

On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /<default dir>/data.txt

In the last example, no destination path is specified and the file name is unqualified, so a default path is used for the destination location. The default path for Tivoli managed nodes is the current working directory of the SH process implementation ($DBDIR). The default path for endpoints is the <prod_dir> directory, which can be set in the swdis.ini file.

If the target directory path is not preceded by a slash, it is created in the default directory.

wspmvdata -s @lab15124 -P sp:/usr/sd/source -P tp:dest -t @lab67135-w98,
@lab15180-2000 data.txt 

On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /<default dir>/dest/data.txt

If the slash is inserted, the target directory dest is created at root level.

wspmvdata -s @lab15124 -P sp:/usr/sd/source -P tp:/dest -t @lab67135-w98,
@lab15180-2000 data.txt 

On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /dest/data.txt
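The resolution rule the examples above illustrate can be modeled roughly as follows. This is a hypothetical sketch of the documented behavior (it does not cover relative tp: paths, which land under the default directory):

```shell
#!/bin/sh
# Rough model of how the file argument combines with a -P sp:/tp: path
# (hypothetical helper, not part of the product).
resolve() {
    base=$1; name=$2
    if [ -n "$base" ]; then
        echo "${base%/}/${name#/}"      # file name appended to the given path
    elif [ "${name#/}" != "$name" ]; then
        echo "$name"                    # qualified name, no path: used as-is
    else
        echo "<default dir>/$name"      # unqualified, no path: default directory
    fi
}
resolve /usr/sd/ /source/data.txt   # /usr/sd/source/data.txt
resolve '' /source/data.txt         # /source/data.txt
resolve '' data.txt                 # <default dir>/data.txt
```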

When performing a retrieve operation, a new sub-directory is created under the specified destination directory using the following naming convention:
endpointname_distributionID_timestamp. A single directory is created on the source host for each endpoint, as described in the following example:
 

wspmvdata -t @lab21543mn -s @lab21459,@lab21635,@lab21857 
-P sp:/usr/sd/source -P tp:/usr/sd/target data.txt 

This ensures that each retrieved file is stored in a unique directory on the source host.

On the origin system, the file is: /usr/sd/source/data.txt.

On the destination system, the file for endpoint lab21459 is: /usr/sd/target/lab21459_1506362350.267_20050421112728/data.txt.

On the destination system, the file for endpoint lab21635 is: /usr/sd/target/lab21635_1384061647.853_20050421112752/data.txt.

On the destination system, the file for endpoint lab21857 is: /usr/sd/target/lab21857_1956072719.249_20050421112803/data.txt.

If you are retrieving only one file from each endpoint, you can choose to save the file on the destination system with the following naming convention: file_name_endpoint_name_timestamp_distribution_id.file_extension, as described in the following example:

wspmvdata -t @lab21543mn -s @lab21459,@lab21635,@lab21857 
-P sp:/usr/sd/source -P tp:/usr/sd/target -G data.txt 

On the origin system, the file is: /usr/sd/source/data.txt.

On the destination system, the file for endpoint lab21459 is: /usr/sd/target/data_lab21459_20050421110213_1506362350.267.txt.

On the destination system, the file for endpoint lab21635 is: /usr/sd/target/data_lab21635_20050421112413_1685244375.497.txt.

On the destination system, the file for endpoint lab21857 is: /usr/sd/target/data_lab21857_20050421111421_1375294728.468.txt.

To perform this operation, use the -G option.
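The two naming conventions can be assembled from their components as a quick sketch. The values are taken from the first retrieve example above; the helper itself is hypothetical, not part of the product:

```shell
#!/bin/sh
# Build the destination names used without and with -G when retrieving
# data.txt from endpoint lab21459 (values from the example above).
file=data.txt ep=lab21459 dist=1506362350.267 ts=20050421112728
name=${file%.*}; ext=${file##*.}
# default: original name inside a per-endpoint directory
echo "${ep}_${dist}_${ts}/${file}"
# with -G: endpoint, timestamp, and distribution ID folded into the name
echo "${name}_${ep}_${ts}_${dist}.${ext}"
```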

Using Wildcards and Matching Indicators

With the wspmvdata command, you can use the Software Distribution standard variables, or wildcards and matching indicators, to specify a file name, as described below.

The -R option is not supported with matching indicators. For more information about Software Distribution variables, see Using Variables.

On UNIX systems, enter wildcards and matching indicators between single quotation marks, or precede them with a backslash, as shown in the following examples:

wspmvdata -s @lab78040 -t @endpt1,@endpt2,@endpt3 '*.*'

 

wspmvdata -t @lab78040 -s @endpt1,@endpt2,@endpt3 sales.\$\(ep_label\).txt

For more information about using wildcards and matching indicators, refer to the IBM Tivoli Configuration Manager: User's Guide for Software Distribution.
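Both quoting styles produce the same literal argument; the local shell never sees a variable or glob to expand. A quick demonstration you can run on any UNIX shell:

```shell
#!/bin/sh
# Quoting keeps the matching indicator from being expanded by the local
# shell before wspmvdata sees it.
echo 'sales.$(ep_label).txt'      # single quotes: passed literally
echo sales.\$\(ep_label\).txt     # backslash escapes: same result
```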

Scripts for Pre- and Post-processing

Using the -r argument, you can specify scripts for pre- and post-processing on the origin and destination systems. For example, you can include .exe, .com, or .pl programs you have written to perform tasks before and after moving data.

Note:

If you specify a script created in a language that is not native to the operating system installed on the origin or destination system, you must specify the path to the application that runs the script on the origin or destination system, as described in the following example:

wspmvdata -s @lab133049-w2k -P sp:/wtd_tmp/source -P tp:/wtd_tmp/target 
-t @lab133148-w2003 -r spre:"c:/tools/applications/perl/perl.exe 
/wtd_tmp/target/script1.pl" 
-r tpost:/wtd_tmp/target/test.exe data.txt

In this case, the origin pre-script script1.pl is to be performed on a Windows origin system, therefore the path to the Perl executable must be specified with the -r spre option.

These scripts define pre- and post-processing tasks on the origin and destination systems between which you want to move data. Up to four scripts can be invoked.

The following list shows the sequence of scripts for send operations:

  1. Origin pre-processing script on the origin system.
  2. Destination pre-processing script on each endpoint.
  3. Destination post-processing script on each endpoint.
  4. Origin post-processing script on the origin system.

The following list shows the sequence of scripts for retrieve operations:

  1. Destination pre-processing script on each endpoint.
  2. Origin pre-processing script on the origin system.
  3. Destination post-processing script on each endpoint.
  4. Origin post-processing script on the origin system.

A delete operation does not have an origin. The destination pre- and post-processing scripts run on the endpoints.

You can also move data from one endpoint to multiple endpoints. The sequence in which the scripts run in a send operation from endpoint to endpoint is as follows:

  1. The pre-processing script runs on the origin system, which is the endpoint that was specified in the command.
  2. A post-processing script runs on the origin system, which is the endpoint that was specified in the command.
  3. A pre-processing script runs on each destination system. The destination systems for a send operation are endpoints.
  4. A post-processing script runs on each destination system. The destination systems for a send operation are endpoints.

In all pre- and post-processing scripts, there is a set of predefined parameters. The following list shows the parameters and the values assigned to each at run time.

Parameter 1 Operation Type
Send, Retrieve, Delete
Parameter 2 Location Type
EP_SCRIPT, SH_SCRIPT
Parameter 3 Timing Type
PRE_SCRIPT, POST_SCRIPT
Parameter 4 Data File
Fully qualified file name. When using this parameter in a post-processing script at destination during a recursive retrieve operation, the value assigned to the parameter is the destination directory only, not the file name, as multiple files are being retrieved.
Parameter 5 Endpoint Label
Unique endpoint identifier. This parameter is only available for the post-processing script on the source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed.
Parameter 6 Endpoint Result
Result of the operation on the endpoint. Possible results are 0 (success) and 1 (failure). This parameter is only available for the post-processing script on the source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed.
Note:

The Endpoint Result parameter allows you to condition the running of the post-processing script on the source host to the result of the operation on the endpoint, so that, for example, the script is not run if the operation on the endpoint has not been successful.

Note:

If you are writing a post-processing script for use on Windows platforms, you must include code to deal with any errors caused by the file being locked.

This situation can occur when an identical file, in name and content, already exists on the target system and is locked at the distribution time. In such a case, the data moving operation does not fail with "file locked", because it does not attempt to replace the file, since there are no changes.

As the operation has not failed, the post-processing script will run and must be able to deal with a locked file.
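A minimal skeleton of a source-host post-processing script that uses the predefined parameters, including conditioning on the Endpoint Result as described above, might look like this. The skeleton is hypothetical, not from the product documentation; parameters 5 and 6 are available only on the source host:

```shell
#!/bin/sh
# Hypothetical post-processing script skeleton: skips its work when the
# operation failed on the endpoint.
post_process() {
    CM_OPERATION=$1   # Send, Retrieve, or Delete
    LOCATION=$2       # EP_SCRIPT or SH_SCRIPT
    PRE_POST=$3       # PRE_SCRIPT or POST_SCRIPT
    DATA_FILE=$4      # fully qualified file name
    EP_LABEL=$5       # endpoint label (source host only)
    EP_RESULT=$6      # 0 = success, 1 = failure (source host only)
    if [ "$EP_RESULT" = "1" ]; then
        echo "skipping: operation failed on $EP_LABEL"
        return 0
    fi
    echo "$CM_OPERATION/$PRE_POST on $EP_LABEL: processing $DATA_FILE"
}
# wspmvdata passes the parameters in the order shown above:
post_process "$@"
```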

Example

The following command includes the script merge.sh as a post-processing script on the target system:

wspmvdata -t @lab15124 -s @lab67135-w98,@lab15180-2000
 -r tpost:/usr/sd/scripts/merge.sh /usr/sd/source/data.txt

The destination system for this command is a source host and the source list includes two endpoints. The purpose of the merge.sh script is to create a single file on the source host system by merging the files that have been retrieved from the endpoints. The merge.sh script is performed as a post-processing script on the source host after the files have been retrieved from the specified endpoints.

#!/bin/sh
#===================================================
CM_OPERATION=$1
LOCATION=$2
PRE_POST=$3
DATA_FILE=$4
echo "CM Operation: $CM_OPERATION" > /usr/sd/scripts/merge.out
echo "Location: $LOCATION" >> /usr/sd/scripts/merge.out
echo "Pre-post: $PRE_POST" >> /usr/sd/scripts/merge.out
echo "File Name: $DATA_FILE" >> /usr/sd/scripts/merge.out
echo "========================================" >> /usr/sd/scripts/merge.file
echo "=FILE merged: $DATA_FILE at: `date`=" >> /usr/sd/scripts/merge.file
echo "========================================" >> /usr/sd/scripts/merge.file
cat "$DATA_FILE" >> /usr/sd/scripts/merge.file
RC=$?   # capture the merge result before echo overwrites $?
echo "Error level is: $RC" >> /usr/sd/scripts/merge.out
exit $RC

When the merge.sh script runs, the fixed parameters are set as follows:

$1
Retrieve
$2
SH_SCRIPT
$3
POST_SCRIPT
$4
/usr/sd/source/<endpoint name>_<distributionID>_<timestamp>

Note:

A single directory is created on the source host for each endpoint. This ensures that each retrieved file is stored in a unique directory on the source host.

The script writes these values and any errors to an output file and appends the contents of the data file to the file /usr/sd/scripts/merge.file.

Sending Multiple Files

When you need to send several different files with similar names to different endpoints, and each endpoint must receive only a specific file, you can use the $(ep_label) variable in the source file name. The $(ep_label) variable is replaced by the label of the endpoint.

The $(ep_label) variable is resolved on each endpoint, so the file whose name contains that endpoint's label is installed on the corresponding endpoint.

When you perform a send operation using this variable, an internal software package is created in the product_dir on the source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed. This software package contains all the files to be sent to the endpoints and a condition for each file which specifies on which endpoint each file must be installed. The software package is then sent to the target endpoints where the variable is resolved and the files installed.
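The per-endpoint source files named after each label might be prepared like this before the send. This is a sketch: the endpoint labels, directories, and the commented-out send command are illustrative:

```shell
#!/bin/sh
# Sketch: create one source file per endpoint for a $(ep_label) send.
SRC=/tmp/source
mkdir -p "$SRC"
for ep in ep1 ep2 lab132782-ep; do
    # each file name embeds the endpoint label the variable resolves to
    echo "payload for $ep" > "$SRC/data.$ep.txt"
done
ls "$SRC"
# the files could then be sent with one command, e.g.:
#   wspmvdata -s @juno -t @ep1,@ep2,@lab132782-ep \
#       -P sp:$SRC -P tp:/tmp/target 'data.$(ep_label).txt'
```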

You can specify the maximum size for the software package by setting the dms_send_max_spb_size key with the wswdcfg command. The default value for this key is 10,000 kilobytes. You can set this value to any integer equal to or lower than two gigabytes, which is the maximum size for a software package. The value defined on the Tivoli server is applied to the entire region, irrespective of the values defined on the source hosts, if any. For more information on the wswdcfg command, see wswdcfg. Note that an amount of space at least equal to the value you specify must be available in the product_dir on the source host for the package to be created.

To calculate the precise value for the dms_send_max_spb_size key, you need to consider the total size of the files to be sent plus 2 kilobytes for each endpoint.
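Applying that rule, the minimum value can be estimated with simple shell arithmetic. The file sizes and endpoint count below are hypothetical, and the final wswdcfg step is only referenced in a comment:

```shell
#!/bin/sh
# Sketch: estimate a dms_send_max_spb_size value (in KB) as the total
# size of the files to be sent plus 2 KB per target endpoint.
total_kb=0
for size_kb in 4200 3800 5100; do       # per-endpoint file sizes, in KB
    total_kb=`expr $total_kb + $size_kb`
done
endpoints=3
needed_kb=`expr $total_kb + $endpoints \* 2`
echo "dms_send_max_spb_size >= $needed_kb KB"
# the key would then be set on the Tivoli server with the wswdcfg
# command (see wswdcfg for the exact syntax)
```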

If you are working with interconnected regions, when sending multiple files you must qualify any duplicated endpoint label with the pound sign (#) followed by the region name, both in the destination list and in the corresponding file names.

This behavior allows you to manage endpoints with duplicate labels within interconnected regions.

The following command sends file data.ep1#sales-region.txt to endpoint ep1#sales-region, file data.ep1#resources-region.txt to endpoint ep1#resources-region, and file data.lab132782-ep.txt to endpoint lab132782-ep, which is registered in the same region as the source host.

wspmvdata -s @yoursourcehost -t @ep1#sales-region, @ep1#resources-region,
@lab132782-ep -P sp:c:\source\ -P tp:c:\target data.$(ep_label).txt

When specifying the distribution list, the pound sign (#) and region name must be specified only when the endpoint label is duplicated between one or more endpoints. Note that files data.ep1#sales-region.txt, data.ep1#resources-region.txt, and data.lab132782-ep.txt must be present on the source host; otherwise the operation is not performed, because the -F option has not been specified.

To determine the region to which the specified endpoint belongs, type the following two commands on the Tivoli server:

eid=`wlookup -r Endpoint endpoint_name | awk -F'.' '{print $1}'`
ep_region=`wlsconn | grep $eid | awk '{print $2}'`

where:

endpoint_name
is the name of the endpoint whose region name you are trying to determine.

The results of the commands are returned to standard output. If the output is empty, the endpoint belongs to the region where the command was launched. If you have a large number of endpoints, you can insert these commands in a script file. On Windows systems, these commands must be run in a bash shell. For more information on the wlookup and wlsconn commands, refer to the Tivoli Management Framework: Reference Manual.
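Such a script file might wrap the two commands above in a loop over many endpoints. This is a sketch: the function name and the endpoint names are illustrative, and the command variables can be overridden for a dry run outside a Tivoli environment:

```shell
#!/bin/sh
# Sketch: resolve the region of each endpoint in a list by wrapping
# the two lookup commands shown above.
WLOOKUP=${WLOOKUP:-wlookup}
WLSCONN=${WLSCONN:-wlsconn}

region_of() {
    # $1 = endpoint name; prints its region, or "local" when the
    # lookup output is empty (the endpoint is in the local region)
    eid=`$WLOOKUP -r Endpoint "$1" 2>/dev/null | awk -F'.' '{print $1}'`
    [ -z "$eid" ] && { echo "local"; return; }
    region=`$WLSCONN 2>/dev/null | grep "$eid" | awk '{print $2}'`
    echo "${region:-local}"
}

for ep in pi003-ept pi006-ept; do
    echo "$ep: `region_of $ep`"
done
```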

In the DataMovingRequests.1.log file, the information concerning the distributions to interconnected regions is logged according to the following criteria:

Authorization

admin, senior, or super

Return Values

The wspmvdata command returns one of the following:
0
Indicates that wspmvdata started successfully.
-1
Indicates that wspmvdata failed due to an error.

Examples

  1. The following command sends file /data/file1 from the origin system sh1 to a list of destination systems. Code translation is required, and a post-transmission script, epprocess.sh, is to be executed on the destination systems.
    wspmvdata -s @sh1 -t @ep1,@ep2,@ep3 -r
    tpost:/scripts/epprocess.sh -c /data/file1 
  2. The following command runs a pre-processing script called export_file.sh on the pi003-ept and pi006-ept endpoints to extract data. The extracted data is saved in a file called trans, stored in the /sales directory on each endpoint. The trans files are then retrieved from the endpoints and stored on the source host system, in a sub-directory within the /data/sales destination directory:
    wspmvdata -s @pi003-ept,@pi006-ept -t @centoff -r tpost:/tmp/import_file.sh 
    -r spre:/tmp/export_file.sh -P sp:/sales -P tp:/data/sales trans
    Note:

    The destination directory (/data/sales) on the system is not created with this command, so it must be created beforehand.

    When the operation is completed the trans file is stored on the source host system under the following paths:
    data/sales/pi003-ept_14614660071043934511_20030130144803/trans
    data/sales/pi006-ept_14614660071043934511_20030130144803/trans

  3. The following command sends file file1 from the /data directory on the origin system sh1 to a list of destination systems. The file is to be transferred to a location on the destination systems that is represented by the variable $staging_dir.
    wspmvdata -s @sh1 -P sp:/data -t @ep1,@ep2,@ep3 -P tp:$staging_dir file1 
  4. The following command retrieves the file file1 from the directory represented by the variable $temp on each of the endpoints specified in the origin list and transfers the retrieved files to the destination system sh1, where each is saved with a unique name in the directory represented by the variable $temp. The command specifies a pre-transmission task to export the file on each origin system and a post-transmission task to import the retrieved files on the destination system.
    wspmvdata -s @ep1,@ep2,@ep3 -t @sh1 -r tpost:/scripts/import.sh 
    -r spre:/scripts/export.sh $temp/file1 
  5. The following command deletes the file called test stored in the /temp directory on the b1 and b2 systems if the software package SW_Package version 2 is in the removed state or has never been installed.
    wspmvdata -t @b1,@b2 -S @SW_Package^2:R -d /temp/test
  6. The following command sends file plist from the origin system centoff to a list of destination systems and runs a post-transfer task on the destination systems:
    wspmvdata -s @centoff -t @pi003-ept,@pi006-ept -r tpost:/tmp/importpl.sh 
    -P sp:/price -P tp:/data/sales plist
  7. The following command runs a pre-process task on the origin systems pi003-ept and pi006-ept, retrieves file trans from the origin systems, sends it to the destination system centoff, and runs a post-transfer task on the destination system:
    wspmvdata -s @pi003-ept,@pi006-ept -t @centoff -r tpost:/tmp/importtrans.sh
     -r spre:/tmp/exporttran.sh -P sp:/sales -P tp:/data/sales trans
  8. The following command selects the file in the source directory c:/tmp on the centoff system whose name has the prefix sales.data, the suffix transactions.txt, and the highest value in between (that is, the most recent date within the set), and moves it to the target systems b1, b2, and b3:
    wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp 
    -P tp:/tmp sales.data.$(MAX).transactions.txt

    To enter the same command on a UNIX system, enclose the string containing the variable in single quotation marks, or precede each special character with a backslash, as follows:

    wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp 
    -P tp:/tmp 'sales.data.$(MAX).transactions.txt'

    or

    wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp 
    -P tp:/tmp sales.data.\$\(MAX\).transactions.txt
  9. The following command retrieves all files in the directory c:/tmp on the endpoints b1, b2, and b3 whose names have the prefix sales.data and the suffix transactions, resolving the $(ep_label) variable to the actual name of each endpoint, and sends them to the destination system centoff:
    wspmvdata -s @b1,@b2,@b3 -t @centoff -P sp:c:/tmp 
    -P tp:/tmp sales.data.$(ep_label).transactions
  10. The following command sends files located in the directory c:\temp on the source host juno to each endpoint, based on the endpoint name. Each endpoint receives only the file containing its label as part of the file name. If any of the files are not present on the source host, the -F option causes the operation to be performed on the remaining endpoints:
    wspmvdata -s  @juno -t @ep1,@ep2 -P sp:c:\temp -P tp:d:\temp -F 
    price.$(ep_label).txt



Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

Last modified: March 12, 2019