Creating External Snapshots for JFS (not JFS2)


AIX 5.3 supports filesystem snapshots. The problem is that they are extremely badly documented: it took me hours just to understand the simplest things about this feature. The typical junk that passes for IBM documentation here is such an incomprehensible mess that it probably ensures the feature is never used; you simply cannot do worse. In this sense Solaris documentation is a masterpiece ;-). AIX 5.3 also supports splitting mirrored copies:

In JFS (and JFS only; not JFS2), in addition to the ability to create snapshots, the chfs command provides a freeze function which writes all dirty blocks to disk and makes the filesystem read-only for a specified number of seconds. If you are using FlashCopy to back up mounted AIX filesystems, please see the restrictions listed in the following URL: http://www-1.ibm.com/support/docview.wss?

One way to cut downtime is:

  1. Shut down the particular application (start of downtime).

  2. Freeze the filesystem for 1 second to write all dirty blocks to the disk:
      chfs -a freeze=1 /filesystem/mount/point
  3. Create a snapshot:
      chfs -a splitcopy=/snapshot_mount_point /filesystem/mount/point
  4. Restart the application (end of downtime).
  5. Mount the snapshot using a new mount point.

  6. Back up the snapshot from the new mount point.

  7. Delete the snapshot with rmfs.

That helps to cut downtime for critical applications; a consolidated sketch of the sequence is shown below.
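
A minimal sketch of the whole sequence, assuming a mirrored JFS filesystem mounted at /filesystem/mount/point and an application managed by the SRC (the subsystem name myapp and the tape device /dev/rmt0 are placeholders):

stopsrc -s myapp                                                   # stop the application (downtime starts); myapp is a placeholder
chfs -a freeze=1 /filesystem/mount/point                           # flush dirty blocks, briefly freeze the filesystem
chfs -a splitcopy=/snapshot_mount_point /filesystem/mount/point    # split off a copy; chfs mounts it read-only
startsrc -s myapp                                                  # restart the application (downtime ends)
tar -cvf /dev/rmt0 /snapshot_mount_point                           # back up from the read-only copy
umount /snapshot_mount_point
rmfs /snapshot_mount_point                                         # remove the copy so it can be reintegrated as a mirror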

This "external snapshot" feature is controlled via chfs Command. To create a snapshot type the following:

chfs -a splitcopy=/snapshot_mount_point /filesystem/mount/point

At this point, a read-only copy of the file system is available at /snapshot_mount_point. Any changes made to the original file system after the copy is split off are not reflected in the backup copy.

You can control which mirrored copy is used as the backup by using the copy attribute. The second mirrored copy is the default if a copy is not specified by the user. For example:

 chfs -a splitcopy=/snapshot_mount_point -a copy=1 /filesystem/mount/point

To reintegrate the JFS split image as a mirrored copy at the /backup_point mount point, use the following commands:

umount /backup_point 

rmfs /backup_point

The rmfs command removes the file system copy from its split-off state and allows it to be reintegrated as a mirrored copy.

chfs Parameters Reference

-a freeze={ timeout | 0 | off}
Specifies that the file system must be frozen or thawed, depending on the value of timeout.

The act of freezing a file system produces a nearly consistent on-disk image of the file system, and writes all dirty file system metadata and user data to the disk. In its frozen state, the file system is read-only, and anything that attempts to modify the file system or its contents must wait for the freeze to end. The value of timeout must be either 0, off, or a positive number. If a positive number is specified, the file system is frozen for a maximum of timeout seconds. If timeout is 0 or off, the file system will be thawed, and modifications can proceed.

Note:

Freezing base file systems (/, /usr, /var, /tmp) can result in unexpected behavior.
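
For example, to freeze a filesystem for at most 60 seconds while a storage-level copy is taken, and then thaw it explicitly (the mount point /data is illustrative):

chfs -a freeze=60 /data      # flush dirty data and block new writes for up to 60 seconds
# ... take the FlashCopy or split the mirror here ...
chfs -a freeze=off /data     # thaw explicitly instead of waiting for the timeout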

Splitting mirrored copies is available via the chfs utility option -a:

-a splitcopy=NewMountPointName
 
Splits off a mirrored copy of the file system and mounts it read-only at the new mount point. This provides a copy of the file system with consistent JFS meta-data that can be used for backup purposes. User data integrity is not guaranteed, so it is recommended that file system activity be minimal while this action is taking place. Only one copy may be designated as an online split mirror copy.

For example, to split off a copy of a mirrored file system and mount it read-only for use as an online backup, enter:

chfs -a splitcopy=/backup -a copy=2 /testfs


Old News ;-)

[Apr 16, 2009] Freezing filesystems and containers [LWN.net]

By Jake Edge
June 25, 2008

Freezing seems to be on the minds of some kernel hackers these days, whether it is the northern summer or southern winter that is causing it is unclear. Two recent patches posted to linux-kernel look at freezing, suspending essentially, two different pieces of the kernel: filesystems and containers. For containers, it is a step along the path to being able to migrate running processes elsewhere, whereas for filesystems it will allow backup systems to snapshot a consistent filesystem state. Other than conceptually, the patches have little to do with each other, but each is fairly small and self-contained so a combined look seemed in order.

Takashi Sato proposes taking an XFS-specific feature and moving it into the filesystem code. The patch would provide an ioctl() for suspending write access to a filesystem, freezing, along with a thawing option to resume writes. For backups that snapshot the state of a filesystem or otherwise operate directly on the block device, this can ensure that the filesystem is in a consistent state.

Essentially the patch just exports the freeze_bdev() kernel function in a user accessible way. freeze_bdev() locks a file system into a consistent state by flushing the superblock and syncing the device. The patch also adds tracking of the frozen state to the struct block_device state field. In its simplest form, freezing or thawing a filesystem would be done as follows:

    ioctl(fd, FIFREEZE, 0);

    ioctl(fd, FITHAW, 0);
Where fd is a file descriptor of the mount point and the argument is ignored.
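
These ioctls were later merged into the mainline kernel, and the fsfreeze(8) utility from util-linux is a thin wrapper around them. A minimal sketch, assuming a filesystem mounted at /mnt/data (an illustrative path):

    fsfreeze --freeze /mnt/data      # issues FIFREEZE; new modifications block
    # ... snapshot the underlying block device here ...
    fsfreeze --unfreeze /mnt/data    # issues FITHAW; writes resume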

In another part of the patchset, Sato adds a timeout value as the argument to the ioctl(). For XFS compatibility (though, courtesy of a patch by David Chinner, the XFS-specific ioctl() is removed), a value of 1 for the pointer argument means that the timeout is not set. A value of 0 for the argument also means there is no timeout, but any other value is treated as a pointer to a timeout value in seconds. It would seem that removing the XFS-specific ioctl() would break any applications that currently use it anyway, so keeping the compatibility of the argument value 1 is somewhat dubious.

If the timeout occurs, the filesystem will be automatically thawed. This is to protect against some kind of problem with the backup system. Another ioctl() flag, FIFREEZE_RESET_TIMEOUT, has been added so that an application can periodically reset its timeout while it is working. If it deadlocks, or otherwise fails to reset the timeout, the filesystem will be thawed. Another FIFREEZE_RESET_TIMEOUT after that occurs will return EINVAL so that the application can recognize that it has happened.

Moving on to containers, Matt Helsley posted a patch which reuses the software suspend (swsusp) infrastructure to implement freezing of all the processes in a control group (i.e. cgroup). This could be used now to checkpoint and restart tasks, but eventually could be used to migrate tasks elsewhere entirely for load balancing or other reasons. Helsley's patch set is a forward port of work originally done by Cedric Le Goater.

The first step is to make the freeze option, in the form of the TIF_FREEZE flag, available to all architectures. Once that is done, moving two functions, refrigerator() and freeze_task(), from the power management subsystem to the new kernel/freezer.c file makes freezing tasks available even to architectures that don't support power management.

As is usual for cgroups, controlling the freezing and thawing is done through the cgroup filesystem. Adding the freezer option when mounting will allow access to each container's freezer.state file. This can be read to get the current freezer state or written to change it as follows:

    # cat /containers/0/freezer.state
    RUNNING
    # echo FROZEN > /containers/0/freezer.state
    # cat /containers/0/freezer.state
    FROZEN
It should be noted that it is possible for tasks in a cgroup to be busy doing something that will not allow them to be frozen. In that case, the state would be FREEZING. Freezing can then be retried by writing FROZEN again, or canceled by writing RUNNING. Moving the offending tasks out of the cgroup will also allow the cgroup to be frozen. If the state does reach FROZEN, the cgroup can be thawed by writing RUNNING.
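
A sketch of the full sequence described above, from mounting the freezer-enabled hierarchy to freezing one cgroup (the /containers path follows the example; note that the freezer controller as eventually merged writes THAWED rather than RUNNING to thaw):

    mount -t cgroup -o freezer freezer /containers   # expose per-cgroup freezer.state files
    mkdir /containers/0                              # create a cgroup
    echo $PID > /containers/0/tasks                  # $PID: pid of a task to manage (placeholder)
    echo FROZEN > /containers/0/freezer.state        # freeze all tasks in the group
    cat /containers/0/freezer.state                  # FREEZING until complete, then FROZEN
    echo RUNNING > /containers/0/freezer.state       # thaw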

In order for swsusp and cgroups to share the refrigerator() it is necessary to ensure that frozen cgroups do not get thawed when swsusp is waking up the system after a suspend. The last patch in the set ensures that thaw_tasks() checks for a frozen cgroup before thawing, skipping over any that it finds.

There has not been much in the way of discussion about the patches on linux-kernel, but an ACK from Pavel Machek would seem to be a good sign. Some comments by Paul Menage, who developed cgroups, also indicate interest in seeing this feature merged.

IBM Redbooks IBM AIX Version 6.1 Differences Guide

2.2 JFS2 internal snapshot

With AIX 5L V5.2, the JFS2 snapshot was introduced. Snapshots had to be created in separate logical volumes. AIX V6.1 adds the ability to create snapshots within the source file system.

Therefore, starting with AIX V6.1, there are two types of snapshots: external snapshots, which reside in a separate logical volume, and internal snapshots, which reside within the source file system.

Table 2-1 provides an overview of the differences between the two types of snapshots.

Table 2-1  Comparison of external and internal snapshots

Category              External snapshot           Internal snapshot
Location              Separate logical volume     Within the same logical volume
Access                Must be mounted separately  /fsmountpoint/.snapshot/snapshotname
Maximum generations   15                          64
AIX compatibility     >= AIX 5L V5.2              >= AIX V6.1

Both the internal and the external snapshots keep track of the changes to the snapped file system by saving the modified or deleted file blocks. Snapshots provide point-in-time (PIT) images of the source file system. Often, snapshots are used to be able to create a consistent PIT backup while the workload on the snapped file system continues.

The internal snapshot introduces the following enhancements:

2.2.1 Managing internal snapshots

A JFS2 file system must be created with the new -a isnapshot=yes option. Internal snapshots require the use of extended attributes v2, so the crfs command automatically creates a v2 file system.

Existing file systems created without the isnapshot option cannot be used for internal snapshots. They have to be recreated, or external snapshots have to be used.
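
For example, to create a snapshot-capable JFS2 file system (the volume group, size, and mount point below are illustrative):

# crfs -v jfs2 -g rootvg -a size=1G -a isnapshot=yes -m /aix61diff -A yes
# mount /aix61diff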

There are no new commands introduced with internal snapshots. Use the snapshot, rollback, and backsnap commands to perform operations. Use the new -n snapshotname option to specify internal snapshots. There are corresponding SMIT and Web-based System Manager panels available.

To create an internal snapshot:

# snapshot -o snapfrom=/aix61diff -n snap01
Snapshot for file system /aix61diff created on snap01

To list all snapshots for a file system:

# snapshot -q /aix61diff
Snapshots for /aix61diff
Current  Name         Time
*        snap01       Tue Sep 25 11:17:51 CDT 2007

To list the structure on the file system:

# ls -l /aix61diff/.snapshot/snap01
total 227328
-rw-r--r--    1 root     system     10485760 Sep 25 11:33 file1
-rw-r--r--    1 scott    staff       1048576 Sep 25 11:33 file2
-rw-r--r--    1 jenny    staff     104857600 Sep 25 11:33 file3
drwxr-xr-x    2 root     system          256 Sep 24 17:57 lost+found

The previous output shows:

Note: The .snapshot directory in the root path of every snapped file system is not visible to the ls and find commands. If the .snapshot directory is explicitly specified as an argument, they are able to display its contents.

To delete an internal snapshot:

# snapshot -d -n snap01 /aix61diff

2.2.3 Considerations

The following applies to internal snapshots:

  • Once a file system has been enabled to use internal snapshots, this cannot be undone.
  • If the fsck command has to modify the file system, any internal snapshots for the file system will be deleted by fsck.
  • Snapped file systems cannot be shrunk.
  • The defragfs command cannot be run on a file system with internal snapshots.
  • Existing snapshot Web-based System Manager and SMIT panels are updated to support internal snapshots.

    The following items apply to both internal and external snapshots:

  • Understanding and exploiting snapshot technology for data protection, Part 1: Snapshot technology overview

    Split mirror

    Split mirror creates a physical clone of the storage entity, such as the file system, volume, or LUN for which the snapshot is being created, onto another entity of the same kind and the exact same size. The entire contents of the original volume are copied onto a separate volume. Clone copies are highly available, since they are exact duplicates of the original volume residing on separate storage. However, due to the data copy, such snapshots cannot be created instantaneously. Alternatively, a clone can also be made available instantaneously by "splitting" a pre-existing mirror of the volume into two, with the side effect that the original volume has one fewer synchronized mirror. This snapshot method requires as much storage space as the original data for each snapshot, and it has the performance overhead of writing synchronously to the mirror copy.

    EMC Symmetrix and AIX Logical Volume Manager support split mirror. Additionally, any RAID system supporting multiple mirrors can be used to create a clone by splitting a mirror.

    Copy-on-write with background copy (IBM FlashCopy)

    Some vendors offer an implementation where a full copy of the snapshot data is created using copy-on-write and a background process that copies data from original location to snapshot storage space. This approach combines the benefits of copy-on-write and split mirror methods as done by IBM FlashCopy and EMC TimeFinder/Clone. It uses copy-on-write to create an instant snapshot and then optionally starts a background copy process to perform block-level copy of the data from the original volume (source volume) to the snapshot storage (target volume) in order to create an additional mirror of the original volume.

    When a FlashCopy operation is initiated, a FlashCopy relationship is created between the source volume and target volume. This type of snapshot is called a COPY type of FlashCopy operation.

    IBM incremental FlashCopy

    Incremental FlashCopy tracks changes made to the source and target volumes once the FlashCopy relationship is established. This allows a LUN or volume to be refreshed to the source's or target's point-in-time content using only the changed data. The refresh can occur in either direction, and it offers improved flexibility and faster FlashCopy completion times.

    This incremental FlashCopy option can be used to efficiently create frequent and faster backups and restores without the penalty of having to copy the entire content of the volume.

    developerWorks IBM System Storage forum: Flashcopy on FAStT700 with AIX 5.2 (P Series Server)

    Flashcopy on FAStT700 with AIX 5.2 (P Series Server)
    Posted: Nov 23, 2006 04:57:45 AM

    I am hoping somebody with Flashcopy experience on AIX may be able to help. We have had a few issues with the scripts we have been running to import Flashcopies of our databases to our P series host nightly to dump to tape; mostly centering around repository sizing which I have pretty much resolved.

    I am struggling with a couple of legacy issues though. First of all we need to add some more disk (LUNS) from the array, and I am concerned about conflicts with pvids on the scripted Flashcopies.

    Currently our import/export runs something like this:

    Import:

    - Recreate Flashcopy with SMcli (FAStT700)

    Export:
    - umount filesystems (AIX)

    Problem is, as we completely remove the FC PVs from AIX each time, they only keep the same pvids each time for scripting purposes because we never add any other volumes to the system.

    Somebody on another forum suggested that we should only remove the FCs from AIX using rmdev -l hdisk (no -d flag) in order that they remain defined in the ODM. That theory seems to work; but unfortunately, when re-importing the volumes, as soon as you run 'chdev -l hdisk -a pv=clear' I cannot get the 'defined' volumes to switch back to 'available', and subsequently re-running hot_add (cfgmgr) just brings the FCs online with new pvids, leaving the original ones as 'defined'.

    I have tried throwing a 'chdev -l hdisk -a pv=yes' into the equation and even a 'mkdev -l hdisk' into the mix in various orders, but nothing seems to work. Has anybody got any ideas how I can effectively 'hardcode' the FC pvids in the ODM and reuse the same ones each time?

    The other issues we have experienced are failures where either 'chdev -l hdisk -a pv=clear' fails, or the imported volumes have corrupt jfs superblocks when mounted on the host, requiring an fsck or re-running the snapshot.

    We shut down the database prior to recreating the FCs to quiesce I/O; but I was wondering if additionally we should be (a) running a sync call prior to recreating the FCs to flush the disk buffers and (b) running a full fsck prior to mounting the volumes as a matter of course.

    Does anybody have any ideas/experience here ?

    Sorry for the long post - but I'm an HP guy historically and I'm just getting to grips with this Flashcopy stuff; and unfortunately IBM won't ratify our 'home grown' scripts under our support agreement.

    I will not duplicate this posting at this time but if anybody thinks it would be better in the AIX forum please let me know.

    fosteria

    Posts: 4
    Registered: Nov 23, 2006 04:12:26 AM
    Re: Flashcopy on FAStT700 with AIX 5.2 (P Series Server)
    Posted: Nov 23, 2006 05:53:00 AM in response to: fosteria's post
    Reply
    Further to this; I just found that our recent 'failed to clear pvid' error was actually due to this error when creating the FC:

    20:45:02 SMcli_north_recreate.sh :Created Logical drive for Flash copy 4-1
    An internal error, error code 12, has occurred. This is possibly due to
    initialization problems when loading the necessary internal files. Please check
    your installation of SMclient. If problems continue to persist, re-install SMclient.

    SMcli failed.

    Anybody got any ideas? THX.

    jvk

    Posts: 23
    Registered: Oct 24, 2006 04:15:23 AM
    Re: Flashcopy on FAStT700 with AIX 5.2 (P Series Server)
    Posted: Dec 03, 2006 11:54:19 AM in response to: fosteria's post
    Reply
    From my notes, here are some general hints for a good FlashCopy:

    1) Just before the FlashCopy, the application should be frozen or stopped.
    The next step is to unmount the FlashCopy source file systems to ensure that all data is flushed to the disks.
    If you don't unmount, then you are required to run the AIX "freeze" and "thaw" operations. See "chfs -a freeze" and "chfs -a freeze=off" in the man pages.

    2) After "freeze", run: sync; sync; sync; sleep 5

    3) If your FlashCopy source logical volumes are spread over multiple hdisks, you are required to use a disk-level "consistency group" on your disk subsystem to ensure all I/Os are stopped at the same time.
    Without a disk-level "consistency group" there is a possibility that the target volumes may not be mountable. The failure to mount is not an error in this case. It results from an inconsistency between the file system data and the file system log that can be introduced during the FlashCopy process.

    4) I would suggest using a separate jfs/jfs2 log for every FS. Otherwise it must be ensured that all FSs that use the same log are stopped or frozen at the same time.

    5) On the target node (where the FlashCopy disks are imported) run
    - "recreatevg -f" if you flashed just half of the LVM mirrored hdisks
    - "fsck -y" on all file systems before mounting

    For your SMcli problems I have no hints as I don't know SMcli.

    Next, I don't understand your PVIDs problem... I see nothing wrong if the disks are removed with rmdev -dl during "Export". These are only FlashCopy target LUNs, so they will get all the LVM and data structures _exactly_the_same_ (including the PVID) as the FlashCopy source LUNs. If you want to import the target LUNs on some other AIX node, then the "chdev -l hdisk -a pv=clear" and the normal recreatevg are not necessary. If you import the target LUNs on the same system where the source LUNs are, then the "chdev -l hdisk -a pv=clear" and recreatevg are required.
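
    Putting these hints together, a minimal sketch of the freeze-and-import cycle; the file system path, volume group name, and hdisk numbers are illustrative, and the recreatevg options should be checked against your level of AIX:

        chfs -a freeze=60 /db/data                         # on the source: flush dirty data and block writes
        sync; sync; sync; sleep 5
        # ... recreate the FlashCopy target with SMcli here ...
        chfs -a freeze=off /db/data                        # thaw the source

        cfgmgr                                             # on the importing side: discover the target hdisks
        recreatevg -y fcvg -Y fc -L /fc hdisk17 hdisk18    # rebuild the VG with renamed LVs and mount points
        fsck -y /fc/db/data                                # fsck before mounting
        mount /fc/db/data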

    fosteria

    Posts: 4
    Registered: Nov 23, 2006 04:12:26 AM
    Re: Flashcopy on FAStT700 with AIX 5.2 (P Series Server)
    Posted: Dec 08, 2006 05:21:11 AM in response to: jvk's post
    Reply
    First of all - thanks for your help - and especially all the tips with ensuring the integrity of the Flashcopy.

    On the pvid issue (and bear with me here), the problem is that we do import to the same system each night and use recreatevg with the pvids hardcoded in the import script (currently hdisk17 - hdisk22 for instance). Even though we export them using rmdev -dl they always get the same hdisk ids when we run hot_add (cfgmgr) each night to bring them back in.

    Problem is, if I introduce a new device whilst the FCs are offline, that new device is going to get the hdisk17 pvid in the ODM when I run cfgmgr; so subsequently when I next run my FC import I'm screwed because the script is trying to use the same pvid. My Flashcopy volumes would be configured in as hdisks 18-23 this time, so the script would have to be changed.

    What I was trying to do was effectively leave the FC volumes permanently defined as hdisk17 - hdisk22, so when I configure a new volume it would be assigned hdisk23 for instance.

    It is configuring this functionality that is causing the issue.

    Recommended Links

    Help - chfs Command


