Moves data between source hosts, that is, Tivoli managed nodes with Software Distribution installed, and endpoints, and between one endpoint and multiple endpoints. The syntax is as follows:
wspmvdata -s origin_list -t destination_list [-P sp|tp:path] [-r spre|spost|tpre|tpost:script] [-S sw_package[:status]] [-c] [-D variable=value]... [-I] [-l mdist2_token=value...] [-R] [-B] [-F] file
wspmvdata -s origin_list -t destination_list [-P sp|tp:path] [-r spre|spost|tpre|tpost:script] [-S sw_package[:status]] [-c] [-D variable=value]... [-I] [-l mdist2_token=value...] [-R] [-B] [-F] [-G] file
wspmvdata -d destination_list [-P tp:path] [-r tpre|tpost:script] [-S sw_package[:status]] [-I] [-l mdist2_token=value...] [-R] [-B] file
wspmvdata -A
wspmvdata -p profile_manager
This command is adequate for transferring medium-sized files from one server to another. Huge files are trickier: by default, the maximum size is limited by the free space on the /opt filesystem where the Tivoli endpoint is installed.
You can transfer one-to-one or one-to-many. The multiple-destination capability is very convenient.
The same is true of retrieving data: you can get data from multiple endpoints to update values on a source host, that is, a Tivoli managed node with Software Distribution installed.
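For example, a minimal sketch of a one-to-many send followed by a many-to-one retrieve (host, endpoint, and file names here are hypothetical):
wspmvdata -s @srchost -t @ep1,@ep2,@ep3 -P sp:/data -P tp:/data report.txt
wspmvdata -t @srchost -s @ep1,@ep2,@ep3 -P sp:/data -P tp:/incoming report.txt
In the first command the file is pushed from the source host to three endpoints; in the second, the same options swap roles and the file is pulled from the three endpoints back to the source host.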
Less interesting, but you can also delete data on multiple systems, and the command allows scripts to be run as part of the operation.
It understands simple shell-style (or DOS-style) wildcards, so multiple source files can be targeted with one command.
The ability to run pre- and post-transfer tasks on both the origin and destination systems makes this command really powerful: a kind of distribution system in a box.
More advanced options include:
Data moving operations are logged in a file named DataMovingRequests.1.log. By default, this file is written to the directory defined using the working_dir key of the wswdcfg command.
If you enable the split_dm_log option, the data moving log is split into a separate file for each data moving operation. The resulting files are named according to the following standard: DataMovingRequest.DIST_ID.log
where
DIST_ID is the last portion of the MDist 2 distribution ID.
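For example, a sketch of enabling the split log. This assumes wswdcfg accepts key=value settings, as the descriptions above suggest; check the wswdcfg reference for the exact invocation and any required options:
wswdcfg split_dm_log=y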
In the data moving architecture, data is moved between source hosts and endpoints and between one endpoint and multiple endpoints. A source host is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed. The source host corresponds to the origin system when send operations are performed, with the exception of send operations from one endpoint to multiple endpoints, where the origin system is an endpoint. During a retrieve operation, by contrast, the source host is the destination system.
All data moving operations use the same software package object, DataMovingRequests.1. This object contains certain standard information to be used by all data moving operations, including logging options. It is created either automatically at installation time or by using the mutually exclusive -A or -p profile_manager options. If neither of these operations is performed, the object is created automatically in the first profile manager that belongs to a region having SoftwarePackage as a managed resource when the first data moving operation is performed.
For more information about the DataMovingRequests.1 object and configuring the Data Moving service, see "Configuring the Data Moving Service" in the IBM Tivoli Configuration Manager: User's Guide for Software Distribution.
Components of the list must be separated by commas but the list must not end with a comma. No spaces are allowed. Each endpoint name must be preceded by @. For example,
-s @test1,@test2,@pm1,file1
If one or more subscribers are not valid, the operation fails on those subscribers, and continues on the other subscribers. Information about the subscribers on which the operation did not complete is written to the DataMovingRequests.1.log file. To specify that a distribution must be stopped when invalid targets are encountered, use the continue_on_invalid_targets key in the wswdcfg command.
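For example, a sketch of stopping distributions when invalid targets are encountered, assuming a y/n value where n disables the continue-on-invalid-targets behavior; verify the value syntax in the wswdcfg reference:
wswdcfg continue_on_invalid_targets=n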
Note:
When this option is specified, empty directories are removed, and a warning message is inserted in the log file.
Components of the list must be separated by commas but the list must not end with a comma. No spaces are allowed. Each endpoint name must be preceded by @. For example,
-t @ep1,@ep2,@pm1,file1
This argument is always required.
If one or more subscribers are not valid, the operation fails on those subscribers, and continues on the other subscribers. Information about the subscribers on which the operation did not complete is written to the DataMovingRequests.1.log file. To specify that a distribution must be stopped when invalid targets are encountered, use the continue_on_invalid_targets key in the wswdcfg command.
Note:
If you are performing a retrieve operation, the destination directory on the system is not created with this command, so it must be created beforehand. During the retrieve operation the program creates a new sub-directory under the specified destination directory using the naming convention endpointname_distributionID_timestamp. See File Paths.
For each dependency, you specify a valid software package and one of the following states:
The default state is I; if you do not include a state, it is assumed.
The condition specified using this argument is mapped to the dependency attribute in the log file, as follows:
-S mypkg.1.0:I becomes $(installed_software) == "mypkg.1.0"
-S mypkg.1.0:R becomes $(installed_software) != "mypkg.1.0"
For more information about dependency checking, see Dependency.
There are two stages in the software dependency check. The first check is made against the Inventory before transmission. If this check returns a value of false, the transmission is ended. Otherwise, a check is made on each target system. If any of the targets fail to meet the requirement, the transmission is ended for all targets, unless the -I argument is specified in the command; in this case, the transmission is sent only to the targets that pass the check.
depending on the operation submitted.
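For example, a sketch of a send that requires package mypkg.1.0 to be installed on the targets, with -I specified so that the transfer proceeds only to the endpoints that pass the check (host, endpoint, and package names here are hypothetical):
wspmvdata -s @srchost -t @ep1,@ep2,@ep3 -S mypkg.1.0:I -I -P sp:/data -P tp:/data update.dat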
You can enter text using the format:
distribution_note="message text"
You can specify a file using the format:
distribution_note=@filename
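For example, a sketch of attaching a distribution note, assuming distribution_note is passed as an mdist2 token with the -l option from the synopsis (host and file names are hypothetical):
wspmvdata -s @srchost -t @ep1,@ep2 -l distribution_note="Nightly price update" -P sp:/data -P tp:/data prices.txt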
If you specify y (the default value), the mandatory distribution is automatically started as soon as the mobile user connects.
If you specify n, the mobile user has the choice of not starting the mandatory distribution. However, the user will not be able to perform any other operations until the mandatory distribution has been performed.
The n represents a number, 0 through 9, so that a sequence of up to ten messages can be specified. Each escalation date must have an associated message and there must be no gaps in the sequence.
The n represents a number, 0 through 9, so that a sequence of up to ten messages can be specified. Each message must have an associated date and there must be no gaps in the sequence.
You can enter text using the format:
escalate_msg_n="message text"
You can specify a file using the format:
escalate_msg_n=@filename
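For example, a sketch pairing the first escalation message with an escalation date. The escalate_date_n token name and the date format shown here are assumptions based on the pattern described above; check the User's Guide for the exact forms:
wspmvdata -s @srchost -t @ep1 -l escalate_date_0="2005-04-30 12:00" -l escalate_msg_0="Please connect to receive the update" payroll.dat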
Valid values are y (hidden) and n (not hidden). If you set this argument to y, you must not set values for mandatory date, escalation dates or escalation messages.
You can enter text using the following format:
user_notification="message text"
You can specify a file using the following format:
user_notification=@/test/download/filename
Note:
Before specifying a profile manager name, make sure that the profile manager belongs to a region having SoftwarePackage as a managed resource.
DataMovingRequests.1 is the software distribution object used for all data moving operations. It includes general information for data moving operations, for example, the name and location of the log file.
Note:
This parameter can be used only when retrieving a single file.
Note:
Hard links and symbolic links are not supported. Hard links are converted to regular files and lose the link to the original file, while symbolic links are ignored.
The CLI allows definition of any or all of the following:
The qualified file name is appended to the origin and destination paths to obtain the full path to the file on the origin and destination systems.
Note:
Due to the differences in default drive between the source host, where the default drive is the drive where Software Distribution is installed, and Windows 2000 endpoints, where the default drive is defined in a system variable, it is advisable to include the drive in the definition of fully qualified paths.
The origin and destination path values are optional and the file name may be unqualified. The examples that follow show how the file location is resolved depending on the presence or absence of these values.
wspmvdata -s @lab15124 -P sp:/usr/sd/ -t @lab67135-w98,@lab15180-2000 /source/data.txt
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /source/data.txt
wspmvdata -s @lab15124 -P sp:/usr/sd/source -t @lab67135-w98,@lab15180-2000 data.txt
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /<default dir>/data.txt
In the last example, no destination path is specified and the file name is unqualified, so a default path is used for the destination location. The default path for Tivoli managed nodes is the current working directory of the SH process implementation ($DBDIR). The default path for endpoints is the <prod_dir> directory, which can be set in the swdis.ini file.
If the target directory is not preceded by a slash, it is created in the default directory.
wspmvdata -s @lab15124 -P sp:/usr/sd/source -P tp:dest -t @lab67135-w98,@lab15180-2000 data.txt
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /<default dir>/dest/data.txt
If the slash is inserted, the target directory dest is created at root level.
wspmvdata -s @lab15124 -P sp:/usr/sd/source -P tp:/dest -t @lab67135-w98,@lab15180-2000 data.txt
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file is: /dest/data.txt
When performing a retrieve operation, a new sub-directory is created under the specified destination directory using the following naming convention: endpointname_distributionID_timestamp. A single directory is created on the source host for each endpoint, as described in the following example:
wspmvdata -t @lab21543mn -s @lab21459,@lab21635,@lab21857 -P sp:/usr/sd/source -P tp:/usr/sd/target data.txt
This ensures that each retrieved file is stored in a unique directory on the source host.
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file for endpoint lab21459 is: /usr/sd/target/lab21459_1506362350.267_20050421112728/data.txt.
On the destination system, the file for endpoint lab21635 is: /usr/sd/target/lab21635_1384061647.853_20050421112752/data.txt.
On the destination system, the file for endpoint lab21857 is: /usr/sd/target/lab21857_1956072719.249_20050421112803/data.txt.
If you are retrieving only one file from each endpoint, you can choose to save the file on the destination system with the following naming convention: file_name_endpoint_name_timestamp_distribution_id.file_extension, as described in the following example:
wspmvdata -t @lab21543mn -s @lab21459,@lab21635,@lab21857 -P sp:/usr/sd/source -P tp:/usr/sd/target -G data.txt
On the origin system, the file is: /usr/sd/source/data.txt.
On the destination system, the file for endpoint lab21459 is: /usr/sd/target/data_lab21459_20050421110213_1506362350.267.txt.
On the destination system, the file for endpoint lab21635 is: /usr/sd/target/data_lab21635_20050421112413_1685244375.497.txt.
On the destination system, the file for endpoint lab21857 is: /usr/sd/target/data_lab21857_20050421111421_1375294728.468.txt.
To perform this operation, use the -G option.
With the wspmvdata command, you can use the Software Distribution standard variables, or wild cards and matching indicators to specify a file name, as described below:
The -R option is not supported with matching indicators. For more information about Software Distribution variables, see Using Variables.
On UNIX systems, enter wild cards and matching indicators between single quotation marks, or precede them with a backslash, as described in the following examples:
wspmvdata -s @lab78040 -t @endpt1,@endpt2,@endpt3 '*.*'
wspmvdata -t @lab78040 -s @endpt1,@endpt2,@endpt3 sales.\$\(ep_label\).txt
For more information about using the wild card and the matching indicators, refer to the IBM Tivoli Configuration Manager: User's Guide for Software Distribution.
Using the -r argument, you can specify scripts for pre- and post-processing on the origin and destination systems. For example, you can include .exe, .com, or .pl programs you have written to perform tasks before and after moving data.
Note:
If you specify a script created in a language that is not native to the operating system installed on the origin or destination system, you must specify the path to the application that runs the script on the origin or destination system, as described in the following example:
wspmvdata -s @lab133049-w2k -P sp:/wtd_tmp/source -P tp:/wtd_tmp/target -t @lab133148-w2003 -r spre:"c:/tools/applications/perl/perl.exe /wtd_tmp/target/script1.pl" -r tpost:/wtd_tmp/target/test.exe data.txt
In this case, the origin pre-script script1.pl is to be run on a Windows origin system; therefore, the path to the perl executable must be specified with the -r spre option.
These scripts define pre- and post-processing tasks on the origin and destination systems between which you want to move data. Up to four scripts can be invoked.
The following list shows the sequence of scripts for send operations:
The following list shows the sequence of scripts for retrieve operations:
A delete operation does not have an origin. The destination pre- and post-processing scripts run on the endpoints.
You can also move data from one endpoint to multiple endpoints. The sequence in which the scripts run in a send operation from endpoint to endpoint is as follows:
In all pre- and post-processing scripts, there is a set of predefined parameters. The following list shows the parameters and the values assigned to each at run time.
The Endpoint Result parameter allows you to condition the running of the post-processing script on the source host to the result of the operation on the endpoint, so that, for example, the script is not run if the operation on the endpoint has not been successful.
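For example, a minimal sketch of a source-host post-processing script conditioned on the endpoint result. Treating $5 as the Endpoint Result parameter, and 0 as success, are assumptions made for illustration only; the actual parameter list is in the User's Guide:
#!/bin/sh
# $5 is assumed to carry the Endpoint Result; 0 is assumed to mean success.
EP_RESULT=$5
if [ "$EP_RESULT" != "0" ]; then
    # The operation failed on the endpoint: skip the post-processing work.
    exit 0
fi
# ... normal post-processing continues here ...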
Note:
If you are writing a post-processing script for use on Windows platforms, you must include code to deal with any errors caused by the file being locked.
This situation can occur when an identical file, in name and content, already exists on the target system and is locked at the distribution time. In such a case, the data moving operation does not fail with "file locked", because it does not attempt to replace the file, since there are no changes.
As the operation has not failed, the post-processing script will run and must be able to deal with a locked file.
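For example, a minimal sketch of retry logic for a locked file in a post-processing script. The paths and the retry policy are illustrative assumptions, and a Windows endpoint would use the equivalent batch constructs:
#!/bin/sh
SRC=/wtd_tmp/target/data.txt
DEST=/wtd_tmp/processed/data.txt
tries=0
while [ $tries -lt 5 ]; do
    # cp fails while the file is locked by another process; retry a few times.
    if cp "$SRC" "$DEST" 2>/dev/null; then
        exit 0
    fi
    tries=`expr $tries + 1`
    sleep 10
done
echo "File still locked after $tries attempts" >&2
exit 1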
The following command includes the script merge.sh as a post-processing script on the target system:
wspmvdata -t @lab15124 -s @lab67135-w98,@lab15180-2000 -r tpost:/usr/sd/scripts/merge.sh /usr/sd/source/data.txt
The destination system for this command is a source host and the source list includes two endpoints. The purpose of the merge.sh script is to create a single file on the source host system by merging the files that have been retrieved from the endpoints. The merge.sh script is performed as a post-processing script on the source host after the files have been retrieved from the specified endpoints.
#!/bin/sh
#===================================================
# Fixed parameters passed to the script by the Data Moving service
CM_OPERATION=$1
LOCATION=$2
PRE_POST=$3
DATA_FILE=$4
# Record the parameter values in an output file
echo "CM Operation: $CM_OPERATION" > /usr/sd/scripts/merge.out
echo "Location: $LOCATION" >> /usr/sd/scripts/merge.out
echo "Pre-post: $PRE_POST" >> /usr/sd/scripts/merge.out
echo "File Name: $DATA_FILE" >> /usr/sd/scripts/merge.out
# Append a header and the contents of the retrieved data file to the merged file
echo "========================================" >> /usr/sd/scripts/merge.file
echo "=FILE merged: $DATA_FILE at: `date`=" >> /usr/sd/scripts/merge.file
echo "========================================" >> /usr/sd/scripts/merge.file
cat "$DATA_FILE" >> /usr/sd/scripts/merge.file
# Preserve the exit status of the cat command
rc=$?
echo "Error level is: $rc" >> /usr/sd/scripts/merge.out
exit $rc
When the merge.sh script runs, the fixed parameters are set as follows:
Note:
A single directory is created on the source host for each endpoint. This ensures that each retrieved file is stored in a unique directory on the source host.
The script writes these values and any errors to an output file and appends the contents of the data file to the file /usr/sd/scripts/merge.file.
When you need to send several different files with similar names to different endpoints, and each endpoint must receive only a specific file, you can use the $(ep_label) variable in the source file name. The $(ep_label) variable is replaced by the label of each endpoint.
The $(ep_label) variable is then resolved on each endpoint and the file named with the endpoint label is installed on the corresponding endpoint.
When you perform a send operation using this variable, an internal software package is created in the product_dir on the source host, that is a Tivoli managed node, functioning as a gateway or a repeater, where Software Distribution is installed. This software package contains all the files to be sent to the endpoints and a condition for each file which specifies on which endpoint each file must be installed. The software package is then sent to the target endpoints where the variable is resolved and the files installed.
You can specify the maximum size for the software package by setting the dms_send_max_spb_size key with the wswdcfg command. The default value for this key is 10,000 kilobytes. You can set this value to any integer equal to or lower than two gigabytes, which is the maximum size for a software package. The value defined on the Tivoli server is applied to the entire region, irrespective of the values defined on the source hosts, if any. For more information on the wswdcfg command, see wswdcfg. Note that an amount of space at least equal to the value you specify must be available in the product_dir on the source host for the package to be created.
To calculate the precise value for the dms_send_max_spb_size key, you need to consider the total size of the files to be sent plus 2 kilobytes for each endpoint.
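For example, with hypothetical numbers: sending files totalling 4,000 kilobytes to 100 endpoints requires 4,000 + (100 x 2) = 4,200 kilobytes, so the key would be set to at least that value (again assuming wswdcfg accepts key=value settings; check the wswdcfg reference for the exact invocation):
wswdcfg dms_send_max_spb_size=4200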
If you are working with interconnected regions, you must perform the following operations when sending multiple files:
This behavior allows you to manage endpoints with duplicate labels within interconnected regions.
The following command sends file data.ep1#sales-region.txt to endpoint ep1#sales-region, file data.ep1#resources-region.txt to endpoint ep1#resources-region, and file data.lab132782-ep.txt to endpoint lab132782-ep, which is registered to the same region where the source host is located.
wspmvdata -s @yoursourcehost -t @ep1#sales-region,@ep1#resources-region,@lab132782-ep -P sp:c:\source\ -P tp:c:\target data.$(ep_label).txt
When specifying the distribution list, the pound (#) sign and region name must be specified only when the endpoint label is duplicated between two or more endpoints. Note that the files data.ep1#sales-region.txt, data.ep1#resources-region.txt, and data.lab132782-ep.txt must be present on the source host; otherwise the operation is not performed, because the -F option has not been specified.
To determine the region to which the specified endpoint belongs, type the following two commands on the Tivoli server:
eid=`wlookup -r Endpoint endpoint_name | awk -F'.' '{print $1}'`
ep_region=`wlsconn | grep $eid | awk '{print $2}'`
where endpoint_name is the label of the endpoint whose region you want to determine.
The results of the commands are returned to standard output. If the output returned is empty, the endpoint belongs to the region where the command was launched. If you have a large number of endpoints, you can insert this command in a script file. On Windows systems, these commands must be run in a bash shell. For more information on the wlookup and wlsconn commands, refer to Tivoli Management Framework: Reference Manual.
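For example, a minimal sketch of such a script, which resolves the region for every endpoint listed in a file, one label per line. The input file endpoints.txt is a hypothetical name; the wlookup and wlsconn calls are exactly those shown above:
#!/bin/sh
while read ep_name; do
    eid=`wlookup -r Endpoint $ep_name | awk -F'.' '{print $1}'`
    ep_region=`wlsconn | grep $eid | awk '{print $2}'`
    if [ -z "$ep_region" ]; then
        # Empty output means the endpoint belongs to the local region.
        echo "$ep_name: local region"
    else
        echo "$ep_name: $ep_region"
    fi
done < endpoints.txt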
In the DataMovingRequests.1.log file, the information concerning the distributions to interconnected regions is logged according to the following criteria:
wspmvdata -s @sh1 -t @ep1,@ep2,@ep3 -r tpost:/scripts/epprocess.sh -c /data/file1
wspmvdata -s @pi003-ept,@pi006-ept -t @centoff -r tpost:/tmp/import_file.sh -r spre:/tmp/export_file.sh -P sp:/sales -P tp:/data/sales trans
Note:
The destination directory (/data/sales) on the system is not created with this command, so it must be created beforehand.
When the operation is completed, the trans file is stored on the source host system under the following paths:
/data/sales/pi003-ept_14614660071043934511_20030130144803/trans
/data/sales/pi006-ept_14614660071043934511_20030130144803/trans
wspmvdata -s @sh1 -P sp:/data -t @ep1,@ep2,@ep3 -P tp:$staging_dir file1
wspmvdata -s @ep1,@ep2,@ep3 -t @sh1 -r tpost:/scripts/import.sh -r spre:/scripts/export.sh $temp/file1
wspmvdata -t @b1,@b2 -S @SW_Package^2:R -d /temp/test
wspmvdata -s @centoff -t @pi003-ept,@pi006-ept -r tpost:/tmp/importpl.sh -P sp:/price -P tp:/data/sales plist
wspmvdata -s @pi003-ept,@pi006-ept -t @centoff -r tpost:/tmp/importtrans.sh -r spre:/tmp/exporttran.sh -P sp:/sales -P tp:/data/sales trans
wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp -P tp:/tmp sales.data.$(MAX).transactions.txt
To enter the same command on a UNIX system, enter the string with the wild card between single quotation marks or precede it with a backslash, as follows:
wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp -P tp:/tmp 'sales.data.$(MAX).transactions.txt'
or
wspmvdata -s @centoff -t @b1,@b2,@b3 -P sp:c:/tmp -P tp:/tmp sales.data.\$\(MAX\).transactions.txt
wspmvdata -s @b1,@b2,@b3 -t @centoff -P sp:c:/tmp -P tp:/tmp sales.data.$(ep_label).transactions
wspmvdata -s @juno -t @ep1,@ep2 -P sp:c:\temp -P tp:d:\temp -F price.$(ep_label).txt