Inspired by "The seven networking truth by R. Callon, April 1, 1996
Adapted for HPC clusters by Nikolai Bezroukov on Feb 25, 2019
Status of this Memo

This memo provides information for the HPC community. This memo does not specify a standard of any kind, except in the sense that all standards must implicitly follow the fundamental truths. Distribution of this memo is unlimited.

Abstract

This memo documents seven fundamental truths about computational clusters.

Acknowledgements

The truths described in this memo result from extensive study over an extended period of time by many people, some of whom did not intend to contribute to this work. The editor would like to thank the HPC community for helping to refine these truths.

1. Introduction

These truths apply to HPC clusters, and are not limited to TCP/IP, GPFS, the scheduler, or any other particular component of an HPC cluster.

2. The Fundamental Truths

(1) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Most problems in a large computational cluster with more than 640 nodes can never be fully understood by someone who has never run a cluster with more than 64 nodes.

(2) Every problem or upgrade on a large cluster always takes at least twice as long to solve as it seems like it should.
(3) One size never fits all, but complexity increases non-linearly with the size of the cluster. In some areas (storage, networking) the severity of any single problem grows exponentially with the size of the cluster.

(3a) A supercluster is an attempt to solve multiple separate problems via a single complex solution. But its size creates another set of problems, which might outweigh the set of problems it was intended to solve.

(3b) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea.

(3c) Large, Fast, Cheap: you can't have all three.

(4) On a large cluster, issues are more interconnected with each other, and a typical failure affects a large number of nodes or components and takes more effort to resolve.

(4a) Superclusters prove that it is always possible to add another level of complexity to each cluster layer, especially the networking layer, until only applications that use a single node run well.

(4b) On a supercluster it is easier to move a networking problem around than it is to solve it.

(4c) You never understand how bad and buggy your favorite scheduler is until you deploy it on a supercluster.

(4d) If the solution that was put in place for a particular cluster does not work, it will always be proposed later for a new cluster under a different name...

(5) The functioning of a large computational cluster is indistinguishable from magic.

(5a) The user superstition that "the more cores, the better" is incurable, but the user desire to run their mostly useless models on as many cores as possible can and should be resisted.

(5b) If you do not know what to do with a problem on the supercluster, you can always "wave a dead chicken," i.e. perform a ritual operation on crashed software or hardware that most probably will be futile, but is nevertheless useful to convince "important others" and frustrated users that an appropriate degree of effort has been expended.

(5c) Downtime of a large computational cluster has a mysterious religious ritual quality to it: in modest doses it increases the respect of the users toward the HPC support team. But only up to a certain limit.
(6) "The more cores the better" is a religious truth similar to the belief in Flat Earth during Middle Ages and any attempt to challenge it might lead to burning of the heretic at the stake.
(6a) The number of cores in the cluster has a religious quality and in the eyes of users and management has power almost equal to Divine Spirit. In the stage of acquisition of the hardware it outweighs all other considerations, driving towards the cluster with maximum possible number of cores within the allocated budget Attempt to resist buying for computational nodes faster CPUs with less cores are futile.
(6b) The best way to change your preferred hardware supplier is buy a large computational cluster.
(6c) Users will always routinely abuse the facility by specifying more cores than they actually need for their runs
(7) For all resources, whatever the is the size of your cluster, you always need more.
(7a) Overhead increases exponentially with the size of the cluster until all resources of the support team are consumed by the maintaining the cluster and none can be spend for helping the users.
(7b) Users will always try to run more applications and use more languages that the cluster team can meaningfully support.(7c) The most pressure on the support team is exerted by the users with less useful for the company and/or most questionable from the scientific standpoint applications.
Security Considerations
This memo raises no security issues. However, security protocols used in the HPC cluster are subject to these truths.

References
The references have been deleted in order to protect the guilty and avoid enriching the lawyers.