Backup to the cloud

From the historical standpoint, "pure cloud" providers represent a return to the mainframe era on a new technological level. Older folks remember quite well how much IBM was hated in the 60s and 70s, and how much energy and money people spent trying to free themselves from this particular "central provider of services" ("glass server room") model.

In case of outages you are at the mercy of your cloud service provider. That creates an uncertainty that might cost you a substantial amount of money. You also gain an additional point of failure: the pipe from your cloud provider to the corporate users at headquarters and elsewhere. Cloud availability and security failures are much more visible than when similar issues strike the corporate data center, so the PR losses are higher.

Backup in the cloud controversies

The costs of cloud servers are high. Using small one-socket "disposable" servers with automatic provisioning and remote control tools (DRAC, iLO), you can achieve the same result in remote datacenters at a fraction of the cost. The idea of the disposable server (a small 1U server now costs $1000 or less) gives a second life to the idea of the "autonomous datacenter" (at one point promoted by IBM, but soon completely forgotten, as modern IBM is a fashion-driven company ;-) and fits the hybrid cloud model, in which some services, such as email, are highly centralized, while others, such as file services, are not.
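As a rough illustration of what "remote control" of such a disposable box looks like in practice, here is a minimal sketch in Python, assuming a reachable BMC (the DRAC/iLO side) and the stock ipmitool CLI; the host name and credentials are hypothetical placeholders:

    # Minimal sketch: out-of-band power control of a remote "disposable" server
    # through its BMC (DRAC/iLO) using the standard ipmitool CLI (IPMI over LAN).
    # The BMC address, user and password below are hypothetical placeholders.
    import subprocess

    BMC_HOST = "bmc-remote-dc-01.example.com"
    BMC_USER = "admin"
    BMC_PASS = "secret"

    def ipmi(*args):
        """Run an ipmitool subcommand against the remote BMC and return its output."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS] + list(args)
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
        # ipmi("chassis", "power", "cycle")         # uncomment to power-cycle the box

With a handful of such scripts plus PXE/kickstart-style provisioning, a small remote server can be rebuilt from scratch without anyone on site.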

The problem with "pure cloud" is that bandwidth costs money. That's why the idea of "backup to the cloud" (which in reality simply means backup over the WAN) is so questionable. Unless you can run it at night and keep it from spilling into working hours, you compete for bandwidth with other people and applications. It can still be used for private backups, if you are willing to say goodbye to your privacy.

But in an enterprise environment, if such a backup spills over into the morning hours, you can make life really miserable for your staff, because they depend on other applications which are also "in the cloud". This is the classic Catch-22 of such a backup strategy. It is safer to do backups "on site", and if regulations require off-site storage, it is probably cheaper to buy a private optical link (see, for example, AT&T Optical Private Link Service) to a suitable nearby building. By the way, a large attached storage box from, say, Dell costs around $40K for 280TB, $80K for 540TB and so on, while doubling the bandwidth of your WAN connection can run you a million dollars a year.
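For a rough sense of the cost gap, here is a minimal sketch using only the ballpark figures quoted above (the article's round numbers, not vendor quotes):

    # Rough cost-per-TB comparison using the ballpark figures quoted in the text.
    storage_boxes = {"280TB box": (40_000, 280), "540TB box": (80_000, 540)}
    for name, (price_usd, capacity_tb) in storage_boxes.items():
        print(f"{name}: ~${price_usd / capacity_tb:,.0f} per TB, one-time cost")

    wan_upgrade_per_year = 1_000_000   # doubling WAN bandwidth, per the text above
    print(f"WAN upgrade: ~${wan_upgrade_per_year:,} per year, recurring")
    # => roughly $143-$148 per TB one-time, versus a $1M/year recurring line item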

For that amount of money you can move such a unit to a remote site each week (after the full backup is done) using a mid-size SUV (the cost of the SUV is included ;-). For a fancier setup you can borrow ideas from container-based datacenters and use two cars instead of one (one at the remote site and the other at the main datacenter) for the weekly full backup (see Modular data center - Wikipedia). Differential backups are usually not that large and can be handled over the wire. If not, then USPS or FedEx is your friend. You will still have money left for a couple of lavish parties in comparison with your "in the cloud" costs.
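As a sanity check on the "SUV rotation" idea, here is a minimal sketch of the effective throughput of physically moving a full 280TB box once a week (assuming the box is full and one trip per week):

    # Effective "sneakernet" throughput of shipping a full 280TB box once a week.
    capacity_bytes = 280e12                  # the 280TB Dell box from the text above
    week_seconds = 7 * 24 * 3600
    throughput_mb_per_s = capacity_bytes / week_seconds / 1e6
    print(f"Effective sustained throughput: ~{throughput_mb_per_s:.0f} MB/s")   # ~463 MB/s

That is comparable to a well-tuned local 10 Gbit link and far above what a shared WAN pipe delivers during working hours.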

Yes, there are some crazy people who attempt WAN transfers of, say, 50-100TB of data, thinking that this is a new way to do backups or to sync two remote datacenters. They typically pay an arm and a leg for some overhyped and overpriced software from companies like IBM that uses UDP to speed up transfers over lines with high latency. But huge site-to-site transfers remain a challenging task even with the best UDP-based transfer software, no matter what presentation IBM and other vendors give to your top brass (which usually does not understand WAN networking, or networking at all; a deficiency on which IBM has relied for a long, long time ;-).

If you can shift the transfer to the night and not let it overflow into day hours, you are fine; but if you overflow into the morning working hours, you disrupt the work of many people.

Still, there is no question that the multicast feature of such software is a real breakthrough, and if you need to push the same file (for example, a large video file) to multiple sites for all employees to watch, it is a really great way to accomplish the task.

But you can't change the basic limitations of the WAN. Yes, some gains can be achieved by using UDP instead of TCP/IP, better compression, and an rsync-style protocol that avoids moving identical blocks of data. But all those methods are equally applicable to local data transfers. And on the WAN you are typically facing links with an effective throughput of 30-50 MB/s (less than 10% of a 10 Gbit link even in the best case; available bandwidth typically doubles at night, so you get your 10%, but not more). Now please calculate how much time it will take to transfer 50-100TB of data over such a link (see the sketch below). On a local 10 Gbit link you can probably get 500 MB/s, so the difference in speed is about ten times, give or take.
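Here is that arithmetic as a minimal sketch, using the rough link speeds from the paragraph above:

    # How long does it take to push 50-100 TB over links of various effective speeds?
    data_sizes_tb = (50, 100)
    links_mb_per_s = {"WAN, busy hours": 30, "WAN, at night": 50, "local 10 Gbit LAN": 500}

    for tb in data_sizes_tb:
        for name, speed in links_mb_per_s.items():
            days = tb * 1e12 / (speed * 1e6) / 86400
            print(f"{tb} TB over {name} (~{speed} MB/s): ~{days:.1f} days")
    # 100 TB at 30 MB/s is roughly 38 days of continuous transfer;
    # even at 500 MB/s it is still about 2.3 days.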

The reversal of the decentralization trend will in turn be reversed

The cloud is essentially a reversal of the 30-year-old trend toward decentralization, which drove the rapid adoption of the PC and elevated the laptop to the status of essential corporate tool -- the desire to send a particular "central service provider" to hell. But laptops also need centralized services, and that created a drive toward centralization, which started with webmail providers such as Hotmail (later bought by Microsoft) and remote disk space providers (which let you access your files from multiple PCs, or from a PC and a tablet, etc.). With the rising power of smartphones there is a further drive to provide access to data from multiple points. But corporate users are in no way ready to abandon their laptops for dumb terminals of a new generation, such as Chromebooks.

Chromebooks have failed miserably to penetrate the corporate environment. That failure raises a legitimate question about user resistance to the "pure cloud" model. In addition, security considerations, widespread NSA interception of traffic, and the access of multiple agencies to major cloud providers' webmail have become hotly debated issues.

Only pretty reckless IT management would now argue that using Google mail for corporate mail is the way to go, no matter what the cost savings. Even universities, for which it is a real cost-saving measure, have started to shun Gmail.

The return, on a new level, to the centralization of services that is at the heart of the "cloud model" solves some old problems inherent in the decentralized environment, but it also brings back the old problems connected with centralization, which were hotly discussed in the era of mainframes, first of all the centralization of failures. Proponents often exaggerate the positive features and underestimate the possible problems and losses. The vision of an IT future based on remote, centralized and outsourced datacenters that provide services via the "cloud" over high-speed fiber links is utopian. It is like the neoliberal dream of "free markets", which never existed and never will.

Fiber optic lines have made "in the cloud" computing more acceptable (including, for some brave companies, transatlantic traffic) and have made some business models that were impossible in the past quite possible (Netflix). But that does not mean that this technology is a "universal bottle opener". Bottlenecks remain. Replacing the LAN with the WAN has its limits. According to Wikipedia, the "Fallacies of Distributed Computing" are a set of common but flawed assumptions made by programmers when developing distributed applications.



NEWS CONTENTS

Old News ;-)

[Nov 13, 2018] GridFTP: User's Guide

Notable quotes:
"... file:///path/to/my/file ..."
"... gsiftp://hostname/path/to/remote/file ..."
"... third party transfer ..."
toolkit.globus.org

Table of Contents

1. Introduction
2. Usage scenarios
2.1. Basic procedure for using GridFTP (globus-url-copy)
2.2. Accessing data in...
3. Command line tools
4. Graphical user interfaces
4.1. Globus GridFTP GUI
4.2. UberFTP
5. Security Considerations
5.1. Two ways to configure your server
5.2. New authentication options
5.3. Firewall requirements
6. Troubleshooting
6.1. Establish control channel connection
6.2. Try running globus-url-copy
6.3. If your server starts...
7. Usage statistics collection by the Globus Alliance
1. Introduction

The GridFTP User's Guide provides general end user-oriented information.

2. Usage scenarios

2.1. Basic procedure for using GridFTP (globus-url-copy)

If you just want the "rules of thumb" on getting started (without all the details), the following options using globus-url-copy will normally give acceptable performance:
globus-url-copy -vb -tcp-bs 2097152 -p 4 source_url destination_url
The source/destination URLs will normally be one of the following: file:///path/to/my/file for a file on your local file system, or gsiftp://hostname/path/to/remote/file for a file on a GridFTP server.

2.1.1. Putting files

One of the most basic tasks in GridFTP is to "put" files, i.e., moving a file from your file system to the server. So for example, if you want to move the file /tmp/foo from a file system accessible to the host on which you are running your client to a file named /tmp/bar on a host named remote.machine.my.edu running a GridFTP server, you would use this command:
globus-url-copy -vb -tcp-bs 2097152 -p 4 file:///tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
[Note] Note
In theory, remote.machine.my.edu could be the same host as the one on which you are running your client, but that is normally only done in testing situations.
2.1.2. Getting files

A get, i.e., moving a file from a server to your file system, would just reverse the source and destination URLs:
[Tip] Tip
Remember file: always refers to your file system.
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://remote.machine.my.edu/tmp/bar file:///tmp/foo
2.1.3. Third party transfers

Finally, if you want to move a file between two GridFTP servers (a third party transfer), both URLs would use gsiftp: as the protocol:
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://other.machine.my.edu/tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
2.1.4. For more information

If you want more information and details on URLs and the command line options, the Key Concepts Guide gives basic definitions and an overview of the GridFTP protocol as well as our implementation of it.

2.2. Accessing data in...

2.2.1. Accessing data in a non-POSIX file data source that has a POSIX interface

If you want to access data in a non-POSIX file data source that has a POSIX interface, the standard server will do just fine. Just make sure it is really POSIX-like (out of order writes, contiguous byte writes, etc).

2.2.2. Accessing data in HPSS

The following information is helpful if you want to use GridFTP to access data in HPSS. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.2.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.2.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.2.3. Data Storage Interface (DSI) / Data Transform module
The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
2.2.2.4. HPSS info
Last Update: August 2005. Working with Los Alamos National Laboratory and the High Performance Storage System (HPSS) collaboration ( http://www.hpss-collaboration.org ), we have written a Data Storage Interface (DSI) for read/write access to HPSS. This DSI would allow an existing application that uses a GridFTP compliant client to utilize HPSS data resources. This DSI is currently in testing. Due to changes in the HPSS security mechanisms, it requires HPSS 6.2 or later, which is due to be released in Q4 2005. Distribution for the DSI has not been worked out yet, but it will *probably* be available from both Globus and the HPSS collaboration. While this code will be open source, it requires underlying HPSS libraries which are NOT open source (proprietary).
[Note] Note
This is a purely server side change, the client does not know what DSI is running, so only a site that is already running HPSS and wants to allow GridFTP access needs to worry about access to these proprietary libraries.
2.2.3. Accessing data in SRB

The following information is helpful if you want to use GridFTP to access data in SRB. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.3.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.3.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.3.3. Data Storage Interface (DSI) / Data Transform module
The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
2.2.3.4. SRB info
Last Update: August 2005. Working with the SRB team at the San Diego Supercomputing Center, we have written a Data Storage Interface (DSI) for read/write access to data in the Storage Resource Broker (SRB) (http://www.npaci.edu/DICE/SRB). This DSI will enable GridFTP compliant clients to read and write data to an SRB server, similar in functionality to the sput/sget commands. This DSI is currently in testing and is not yet publicly available, but will be available from both the SRB web site (here) and the Globus web site (here). It will also be included in the next stable release of the toolkit. We are working on performance tests, but early results indicate that for wide area network (WAN) transfers, the performance is comparable. When might you want to use this functionality:

2.2.4. Accessing data in some other non-POSIX data source

The following information is helpful if you want to use GridFTP to access data in a non-POSIX data source. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.4.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.4.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.4.3. Data Storage Interface (DSI) / Data Transform module
Nov 13, 2018 | toolkit.globus.org

The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN).

The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc..

Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
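To make the shape of that contract a bit more concrete, here is a purely hypothetical sketch of a DSI-style storage plugin, written in Python for brevity. This is illustrative only: the real DSI is a C interface inside the Globus GridFTP server, and all names below are invented for the example:

    # Hypothetical illustration of a DSI-style storage plugin contract (NOT the
    # actual Globus C API): the protocol engine calls send/recv/command and does
    # not care what storage system sits behind them.
    from abc import ABC, abstractmethod
    import os

    class StorageInterface(ABC):
        """Implement these three operations and the protocol engine stays unchanged."""

        @abstractmethod
        def send(self, path: str, offset: int, length: int) -> bytes:
            """Serve a 'get': read bytes from the backing store."""

        @abstractmethod
        def recv(self, path: str, offset: int, data: bytes) -> None:
            """Serve a 'put': write bytes to the backing store."""

        @abstractmethod
        def command(self, name: str, *args: str) -> bool:
            """Simple succeed-or-fail commands such as mkdir."""

    class PosixStorage(StorageInterface):
        """Trivial backing store: a plain POSIX file system."""

        def send(self, path, offset, length):
            with open(path, "rb") as f:
                f.seek(offset)
                return f.read(length)

        def recv(self, path, offset, data):
            # Naive: assumes the file already exists when writing at a non-zero offset.
            with open(path, "r+b" if offset else "wb") as f:
                f.seek(offset)
                f.write(data)

        def command(self, name, *args):
            if name == "mkdir":
                os.makedirs(args[0], exist_ok=True)
                return True
            return False

Swapping PosixStorage for an HPSS- or SRB-backed implementation is what the real DSI mechanism makes possible without touching the protocol module.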

3. Command line tools

Please see the GridFTP Command Reference.

[Nov 09, 2018] Troubleshoot WAN Performance Issues (SD-WAN-Experts) by Steve Garson

Feb 08, 2013 | www.sd-wan-experts.com

Troubleshooting MPLS Networks

How should you troubleshoot WAN performance issues? Your clients in field offices are complaining about slow performance over your MPLS or VPLS network. Your network should be performing better and you can't figure out what the problem is. You can contact SD-WAN-Experts to have their engineers solve your problem, but you want to try to solve the problems yourself.

  1. The first thing to check, seems trivial, but you need to confirm that the ports on your router and switch ports are configured for the same speed and duplex. Log into your switches and check the logs for mismatches of speed or duplex. Auto-negotiation sometimes does not work properly, so a 10M port connected to a 100M port is mismatched. Or you might have a half-duplex port connected to a full-duplex port. Don't assume that a 10/100/1000 port is auto-negotiating correctly!
  2. Is your WAN performance problem consistent? Does it occur at roughly the same time of day? Or is it completely random? If you don't have the monitoring tools to measure this, you are at a big disadvantage in resolving the issues on your own.
  3. Do you have Class of Service configured on your WAN? Do you have DSCP configured on your LAN? What is the mapping of your DSCP values to CoS?
  4. What kind of applications are traversing your WAN? Are there specific apps that work better than others?
  5. Have your reviewed bandwidth utilization on your carrier's web portal to determine if you are saturating the MPLS port of any locations? Even brief peaks will be enough to generate complaints. Large files, such as CAD drawings, can completely saturate a WAN link.
  6. Are you backing up or synchronizing data over the WAN? Have you confirmed 100% that this work is completed before the work day begins?
  7. Might your routing be taking multiple paths and not the most direct path? Look at your routing tables.
  8. Next, you want to see long term trend statistics. This means monitoring the SNMP streams from all your routers, using tools such as MRTG, NTOP or Cacti. A two week sampling should provide a very good picture of what is happening on your network to help troubleshoot your WAN.

NTOP allows you to see network utilization broken down by host and by protocol, much as the Unix top command does for processes.

MRTG (Multi-Router Traffic Grapher) provides easy to understand graphs of your network bandwidth utilization.
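For illustration, a minimal MRTG target definition might look like the sketch below; the router address, SNMP community string and interface index are hypothetical placeholders, and MaxBytes should match your actual link speed:

    # Minimal MRTG sketch for graphing one WAN interface (hypothetical values).
    WorkDir: /var/www/mrtg
    # Target syntax: <ifIndex>:<SNMP community>@<router address>
    Target[wan-link]: 2:public@192.0.2.1
    MaxBytes[wan-link]: 1250000            # a 10 Mbit/s link expressed in bytes/second
    Title[wan-link]: WAN link utilization
    PageTop[wan-link]: <h1>WAN link utilization</h1>

Run mrtg against this file from cron and it produces the familiar daily, weekly and monthly utilization graphs.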


Cacti requires a MySQL database. It is a complete network graphing solution designed to harness the power of RRDTool 's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.

Both NTOP and MRTG are freeware applications that help troubleshoot your WAN and run on free versions of Linux. As a result, they can be installed on almost any desktop computer that has outlived its value as a Windows desktop machine. If you are skilled with Linux and networking, and you have the time, you can install this monitoring system on your own. You will need to get your carrier to provide read-only access to your routers' SNMP traffic.

But you might find it more cost effective to have the engineers at SD-WAN-Experts do the work for you. All you need to do is provide an available machine with a Linux install (Ubuntu, CentOS, RedHat, etc) with remote access via a VPN. Our engineers will then download all the software remotely, install and configure the machine. When we are done with the monitoring, beside understanding how to solve your problem (and solving it!) you will have your own network monitoring system installed for your use on a daily basis. We'll teach you how to use it, which is quite simple using the web based tools, so you can view it from any machine on your network.

If you need assistance in troubleshooting your wide area network, contact SD-WAN-Experts today !

You might also find these troubleshooting tips of interest;

Troubleshooting MPLS Network Performance Issues

Packet Loss and How It Affects Performance

Troubleshooting VPLS and Ethernet Tunnels over MPLS

[Nov 09, 2018] Cloud-hosted data must be accessed by users over the existing WAN, which creates performance issues due to bandwidth and latency constraints

Notable quotes:
"... Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps ..."
Nov 09, 2018 | www.eiseverywhere.com

However, cloud-hosted information assets must still be accessed by users over existing WAN infrastructures, where there are performance issues due to bandwidth and latency constraints.

THE EXTREMELY UNFUNNY PART - UP TO 20x SLOWER

Public/Private Cloud: thousands of companies, millions of users, varied bandwidth

♦ Per-unit provisioning costs does not decrease much with size after, say, 100 units.

> Cloud data centers are potentially "far away"

♦ Cloud infrastructure supports many enterprises

♦ Large scale drives lower per-unit cost for data center
services

> All employees will be "remote" from their data

♦ Even single-location companies will be remote from their data

♦ HQ employees previously local to servers, but not with Cloud model

> Lots of data needs to be sent over limited WAN bandwidth

Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps

> Disk-based deduplication technology

♦ Identify redundant data at the byte level, not application (e.g., file) level

♦ Use disks to store vast dictionaries of byte sequences for long periods of time

♦ Use symbols to transfer repetitive sequences of byte-level raw data

♦ Only deduplicated data stored on disk
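To make the deduplication idea concrete, here is a toy sketch in Python using fixed-size blocks and an in-memory dictionary; real WAN optimizers segment at the byte level and keep disk-backed dictionaries, so this only illustrates the principle:

    # Toy block-level deduplication: store each unique block once and replace
    # repeats with a short reference (its hash). Real products segment at the
    # byte level and keep the dictionary on disk; this is only an illustration.
    import hashlib

    BLOCK = 4096
    dictionary = {}                      # hash -> raw block (the "stored" data)

    def dedup(data: bytes):
        """Split data into blocks and return a list of block hashes ("symbols")."""
        refs = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            h = hashlib.sha256(block).hexdigest()
            dictionary.setdefault(h, block)   # store the block only the first time
            refs.append(h)
        return refs

    def rehydrate(refs):
        """Reassemble the original data from the stored blocks."""
        return b"".join(dictionary[h] for h in refs)

    if __name__ == "__main__":
        payload = b"A" * 10_000 + b"B" * 10_000 + b"A" * 10_000   # repetitive data
        refs = dedup(payload)
        stored = sum(len(b) for b in dictionary.values())
        print(f"{len(payload)} bytes reduced to {stored} unique block bytes")
        assert rehydrate(refs) == payload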

[Nov 08, 2018] GT 6.0 GridFTP

Notable quotes:
"... GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks ..."
Nov 08, 2018 | toolkit.globus.org

The open source Globus® Toolkit is a fundamental enabling technology for the "Grid," letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy. The toolkit includes software services and libraries for resource monitoring, discovery, and management, plus security and file management. In addition to being a central part of science and engineering projects that total nearly a half-billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies are building significant commercial Grid products.

The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications. Every organization has unique modes of operation, and collaboration between multiple organizations is hindered by incompatibility of resources such as data archives, computers, and networks. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room while simultaneously preserving local control over who can use resources and when.

The Globus Toolkit has grown through an open-source strategy similar to the Linux operating system's, and distinct from proprietary attempts at resource-sharing software. This encourages broader, more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product.

Essential background is contained in the papers " Anatomy of the Grid " by Foster, Kesselman and Tuecke and " Physiology of the Grid " by Foster, Kesselman, Nick and Tuecke.

Acclaim for the Globus Toolkit

From version 1.0 in 1998 to the 2.0 release in 2002 and now the latest 4.0 version based on new open-standard Grid services, the Globus Toolkit has evolved rapidly into what The New York Times called "the de facto standard" for Grid computing. In 2002 the project earned a prestigious R&D 100 award, given by R&D Magazine in a ceremony where the Globus Toolkit was named "Most Promising New Technology" among the year's top 100 innovations. Other honors include project leaders Ian Foster of Argonne National Laboratory and the University of Chicago, Carl Kesselman of the University of Southern California's Information Sciences Institute (ISI), and Steve Tuecke of Argonne being named among 2003's top ten innovators by InfoWorld magazine, and a similar honor from MIT Technology Review, which named Globus Toolkit-based Grid computing one of "Ten Technologies That Will Change the World."

GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP, the highly-popular Internet file transfer protocol. We have selected a set of protocol features and extensions defined already in IETF RFCs and added a few additional features to meet requirements from current data grid projects.

The following guides are available for this component:

Data Management Key Concepts For important general concepts [ pdf ].
Admin Guide For system administrators and those installing, building and deploying GT. You should already have read the Installation Guide and Quickstart [ pdf ]
User's Guide Describes how end-users typically interact with this component. [ pdf ].
Developer's Guide Reference and usage scenarios for developers. [ pdf ].
Other information available for this component are:
Release Notes What's new with the 6.0 release for this component. [ pdf ]
Public Interface Guide Information for all public interfaces (including APIs, commands, etc). Please note this is a subset of information in the Developer's Guide [ pdf ].
Quality Profile Information about test coverage reports, etc. [ pdf ].
Migrating Guide Information for migrating to this version if you were using a previous version of GT. [ pdf ]
All GridFTP Guides (PDF only) Includes all GridFTP guides except Public Interfaces (which is a subset of the Developer's Guide)

[Nov 08, 2018] globus-gridftp-server-control-6.2-1.el7.x86_64.rpm

Nov 08, 2018 | centos.pkgs.org
Requires
Name Value
/sbin/ldconfig -
globus-xio-gsi-driver(x86-64) >= 2
globus-xio-pipe-driver(x86-64) >= 2
libc.so.6(GLIBC_2.14)(64bit) -
libglobus_common.so.0()(64bit) -
libglobus_common.so.0(GLOBUS_COMMON_14)(64bit) -
libglobus_gss_assist.so.3()(64bit) -
libglobus_gssapi_error.so.2()(64bit) -
libglobus_gssapi_gsi.so.4()(64bit) -
libglobus_gssapi_gsi.so.4(globus_gssapi_gsi)(64bit) -
libglobus_openssl_error.so.0()(64bit) -
libglobus_xio.so.0()(64bit) -
rtld(GNU_HASH) -
See Also
Package Description
globus-gridftp-server-control-devel-6.1-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Library Development Files
globus-gridftp-server-devel-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Development Files
globus-gridftp-server-progs-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Programs
globus-gridmap-callout-error-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors
globus-gridmap-callout-error-devel-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors Development Files
globus-gridmap-callout-error-doc-2.5-1.el7.noarch.rpm Globus Toolkit - Globus Gridmap Callout Errors Documentation Files
globus-gridmap-eppn-callout-1.13-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap ePPN callout
globus-gridmap-verify-myproxy-callout-2.9-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap myproxy callout
globus-gsi-callback-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library
globus-gsi-callback-devel-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library Development Files
globus-gsi-callback-doc-5.13-1.el7.noarch.rpm Globus Toolkit - Globus GSI Callback Library Documentation Files
globus-gsi-cert-utils-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library
globus-gsi-cert-utils-devel-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library Development Files
globus-gsi-cert-utils-doc-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Documentation Files
globus-gsi-cert-utils-progs-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Programs
Provides
Name Value
globus-gridftp-server-control = 6.1-1.el7
globus-gridftp-server-control(x86-64) = 6.1-1.el7
libglobus_gridftp_server_control.so.0()(64bit) -
Required By Download
Type URL
Binary Package globus-gridftp-server-control-6.1-1.el7.x86_64.rpm
Source Package globus-gridftp-server-control-6.1-1.el7.src.rpm
Install Howto
  1. Download the latest epel-release rpm from
    http://dl.fedoraproject.org/pub/epel/7/x86_64/
    
  2. Install epel-release rpm:
    # rpm -Uvh epel-release*rpm
    
  3. Install globus-gridftp-server-control rpm package:
    # yum install globus-gridftp-server-control
    
Files
Path
/usr/lib64/libglobus_gridftp_server_control.so.0
/usr/lib64/libglobus_gridftp_server_control.so.0.6.1
/usr/share/doc/globus-gridftp-server-control-6.1/README
/usr/share/licenses/globus-gridftp-server-control-6.1/GLOBUS_LICENSE
Changelog
2018-04-07 - Mattias Ellert <[email protected]> - 6.1-1
- GT6 update: Don't error if acquire_cred fails when vhost env is set

[Nov 08, 2018] 9 Aspera Sync Alternatives Top Best Alternatives

Nov 08, 2018 | www.topbestalternatives.com

Aspera Sync is a high-performance, scalable, multi-directional asynchronous file replication and synchronization tool. It is purpose-built by Aspera to overcome the performance and scalability shortcomings of conventional synchronization tools like rsync. Built on the FASP transport, Aspera Sync can scale up and out for maximum-speed replication and synchronization over WANs, for today's largest big-data file stores -- from very large numbers of individual files to the largest file sizes. Notable capabilities include the FASP advantage, a high-performance, intelligent replacement for rsync, support for complex synchronization topologies, advanced file handling, and so on. Robust backup and recovery policies protect business-critical data and systems, so enterprises can quickly recover critical files, directories or an entire site in the event of a disaster. But these policies can be undermined by slow transfer speeds between primary and backup sites, resulting in incomplete backups and extended recovery times. With FASP-powered transfers, replication fits within the small operational window, so you can meet your recovery point objective (RPO) and recovery time objective (RTO).

1. Syncthing -- Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone, and you get to choose where it is stored, whether it is shared with some third party, and how it is transmitted over the Internet. Syncthing is a file sharing application that allows you to share documents between various devices in a convenient way. Its web-based Graphical User Interface (GUI) makes it possible ... Website: Syncthing Alternatives
