The Biba model addresses the issue of integrity, i.e. whether information can become corrupted. A new label is used to gauge integrity. If a high-security object comes into contact with low-level information, or is handled by a low-level program, its integrity level can be downgraded. For instance, if one used an insecure program to view a secure document, the program might covertly copy it to another part of the system.
Integrity is usually characterized by the following three goals: preventing data modification by unauthorized subjects; preventing unauthorized modification of data by authorized subjects; and maintaining internal and external consistency of the data.

To meet these goals the model specifies three integrity axioms: the Simple Integrity Axiom (a subject may not read an object of lower integrity, "no read down"), the * (star) Integrity Axiom (a subject may not write to an object of higher integrity, "no write up"), and the Invocation Axiom (a subject may not invoke a subject of higher integrity). The main focus of the Biba model is preserving integrity, governed primarily by the first two axioms.
Fred Cohen & Associates
The Biba integrity model [Biba77] was published at Mitre one year after the B-L model. Biba noticed that the B-L policy did not provide protection against a user at level X writing information at level Y when X was a lower security level than Y. Thus a low-security user could overwrite highly classified documents unless some sort of integrity policy were in place.
Biba chose the mathematical dual of the B-L policy wherein there are a set of integrity levels, a relation between them, and two rules which, if properly implemented, have been mathematically proven to prevent information at any given integrity level from flowing to a higher integrity level. Typical integrity levels are "untrusted", "slightly trusted", "trusted", "very trusted", "so trusted that we don't need a higher level of trust", etc.
The first rule is that a subject at a given integrity level "X" cannot write information to another integrity level "Y" if X is lower integrity than Y. This rule assures that low integrity subjects cannot corrupt high integrity subjects (called "no write up"). The second rule is that a subject at a given integrity level "Y" cannot read information from another integrity level "X" if X is lower integrity than Y. This rule assures that high integrity subjects cannot become corrupt by reading low integrity information (called "no read down").
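The two rules can be sketched in a few lines of Python (a hypothetical illustration, assuming integrity levels are encoded as integers where a larger number means higher integrity):

```python
def can_write(subject_level: int, object_level: int) -> bool:
    """'No write up': a subject may only write at its own level or below."""
    return subject_level >= object_level

def can_read(subject_level: int, object_level: int) -> bool:
    """'No read down': a subject may only read at its own level or above."""
    return object_level >= subject_level

# An "untrusted" (0) subject cannot corrupt "very trusted" (3) data:
assert not can_write(0, 3)
# A "very trusted" (3) subject cannot become corrupt by reading "untrusted" data:
assert not can_read(3, 0)
```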
Figure 3.4 shows a pictorial of the Biba integrity policy.
              Integrity Model
             --------------
    high    |\\\\\\\\\\\\|
     ...    |\\no write\\|
    n+1     |\\\\\\\\\\\\|
     n      |            |
    n-1     |////////////|
     ...    |//no read///|
    low     |////////////|
             --------------

    \\\ = no write
    /// = no read

         Figure 3.4 - The Biba Integrity Policy
Because the integrity policy is the dual of the security policy, any restrictions that apply to one apply to the other. Thus the precision problem remains, and information tends to concentrate on the lowest possible integrity level.
If we desire both secrecy and integrity, we will have a system where information tends towards the highest security level and the lowest integrity level, or in other words, we will have a great deal of highly classified, low integrity information (depending on your viewpoint, this may or may not be supported by historical data). Another side effect of combining secrecy with integrity is that the system tends to drift towards a set of isolated subsystems. The simplest example of combining the B-L and Biba policies is shown in figure 3.5.
           -----     -----     -----
    n+1   |///|     |\\\|     |XXX|
     n    |   |  +  |   |  =  |   |
    n-1   |\\\|     |///|     |XXX|
           -----     -----     -----

    \\\ = no write
    /// = no read
    XXX = no access

       Figure 3.5 - Combined Security and Integrity
As you can see, the result is that no communication is possible between users at different levels. Such a system is quite safe in that it isolates users from each other, but doesn't afford any sharing whatsoever. One problem with eliminating sharing is that it defeats the purpose of having users together on the same system. Another, more important, problem is that it means a tremendous duplication of effort. For example, a user in one area could not use an editor written by a user in another area. Each area would have to separately embark on research and development projects ignorant of the results of colleagues.
Another policy comes directly from the military requirement that information be accessible only on a need-to-know basis. "Compartments" are designed to allow access only for those with a need to know information as determined administratively. The compartment policy can be defined mathematically as a set of compartments "C", with a subset "As" of C associated with each subject "s", and a subset "Bo" of C associated with each object "o". Any access request to object o by subject s is denied unless for all c in Bo, c is in As.
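The compartment check is exactly set inclusion, as the following hypothetical sketch shows (the compartment names are invented for illustration):

```python
def compartment_access(As: set, Bo: set) -> bool:
    """Access is granted only if every compartment required by the object (Bo)
    is held by the subject (As), i.e. Bo is a subset of As."""
    return Bo.issubset(As)

# A subject cleared for {"crypto", "nuclear"} may access an object requiring
# {"crypto"}, but not one that also requires "satellite":
assert compartment_access({"crypto", "nuclear"}, {"crypto"})
assert not compartment_access({"crypto", "nuclear"}, {"crypto", "satellite"})
```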
The lattice policy was developed as a result of the fact that the set of security levels under B-L, integrity levels under Biba, and compartments under need-to-know could be generalized to a single less restricted mathematical structure without sacrificing any of their desirable properties [Denning75]. A lattice is a structure with an infimum "INF", a supremum "SUP", a set of other places, and a less-than relation "<". The INF can directly or indirectly send information to any other place in the structure. The SUP can directly or indirectly observe information in any other place in the structure. Other places can observe anything that is directly or indirectly less than them and send information to any place that is directly or indirectly more than them. Figure 3.6 shows an example of a security lattice and its corresponding subject/object matrix, wherein information flows from the bottom towards the top.
     A Security Lattice          Corresponding S/O Matrix

                                              Objects
          SUP [a]                    a  b  c  d  e  f  g  h
             /  \               S a  rw r  r  r  r  r  r  r
           [b]  [c]             u b  w  rw -  r  -  -  r  r
            |   /  \            b c  w  -  rw -  r  r  r  r
           [d] [e] [f]          j d  w  w  -  rw -  -  r  r
             \  /   /           e e  w  -  w  -  rw -  r  r
             [g]   /            c f  w  -  w  -  -  rw -  r
               \  /             t g  w  w  w  w  w  -  rw r
            INF [h]             s h  w  w  w  w  w  w  w  rw

   Figure 3.6 - A Security Lattice and its Subject/Object Matrix
Since any policy that could be formed by a combination of the former three policies could be formed with a lattice, the generalization held for a long time. The lattice policy was further generalized to a POset policy [Cohen86] which is just like a lattice without requiring a SUP or INF. This generalization is applicable to systems where no subject need be able to read or write all information. This is particularly useful in discussing information networks where there is no single authority. Hierarchical policies where local policies are enforced at different levels of control have also been explored [Cohen87-2] . An example POset is shown in figure 3.7, with information flowing from left to right.
  a--c-----h--k      m
   \      /   \     / \
  b--d--f--i--l--n     p--q
   \    /  \     /
  e--g      j-----o      r--t--v--x--y
                          \ /        \
                       s--u--w-----z

            Figure 3.7 - An Example POset
The Biba integrity model is the mathematical dual of the Bell-LaPadula model for sensitivity.
Every subject (process) and object (e.g. a file) of the system is assigned an integrity label which consists of a hierarchical component and a non-hierarchical, set-based component.
In Trusted IRIX/B the hierarchical component is referred to as the grade, and the non-hierarchical, set based component as the division set.
One integrity label dominates another if its grade is not greater than the other's and its division set is a subset of the other's. Two integrity labels are equal if they have the same grade and division sets with exactly the same members.
Another way to state the equality relationship is that the two labels dominate each other. A subject can read an object if and only if the integrity label associated with the subject is dominated by the integrity label associated with the object. A subject can write an object if and only if the integrity label associated with the object is dominated by the integrity label associated with the subject. A subject can read and write an object if and only if the integrity label associated with the object equals the integrity label associated with the subject.
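Taken literally, the dominance and access rules above can be sketched as follows (a hypothetical illustration, not Trusted IRIX/B code; `IntegrityLabel`, `dominates`, `can_read` and `can_write` are names invented here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntegrityLabel:
    grade: int
    divisions: frozenset = frozenset()

def dominates(a: IntegrityLabel, b: IntegrityLabel) -> bool:
    # literal reading: "its grade is not greater and its division set
    # is a subset of the other's"
    return a.grade <= b.grade and a.divisions <= b.divisions

def can_read(subject: IntegrityLabel, obj: IntegrityLabel) -> bool:
    # read iff the subject's label is dominated by the object's label
    return dominates(obj, subject)

def can_write(subject: IntegrityLabel, obj: IntegrityLabel) -> bool:
    # write iff the object's label is dominated by the subject's label
    return dominates(subject, obj)

# Equal labels dominate each other, so both read and write are allowed:
lbl = IntegrityLabel(3, frozenset({"a"}))
assert can_read(lbl, lbl) and can_write(lbl, lbl)
```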
New objects are created with the same integrity label as the process which created them. Thus, a process can never give data a higher integrity than it had before.
A process can downgrade the integrity of data by reading data with a higher integrity than the process and writing it to a new file, which gets the integrity of the process.
In the analysis phase of every secure operating system project there is a point where the trusted computing base (TCB) must be defined. While those fortunate few who are doing ground-up development may breeze through this particular stage, it is notoriously difficult for those plucky engineers who decide to retrofit an existing system. Often, the administrative procedures make use of a large number of programs one would not normally associate with the security of a system. Let us consider an example. Most versions of the UNIX operating system invoke the generic command interpreter (for example /bin/sh ) to process a set of scripts which in turn invoke programs to provide a set of services. One of these services clears the system's public disk space directory /tmp. This service is accomplished using the /bin/rm command, which is the same program used when any user wishes to remove files. Thus, both of these programs, and all programs similarly invoked, must be included in the TCB. In theory, this should not be a problem. In practice, the number of programs used by the TCB can be in the hundreds.
Given a system such as the one just described, how can an analyst claim to have identified the TCB when the general command interpreter is included? Clearly some mechanism in addition to emphatic assertion must be employed to identify that which is in the TCB and to prevent the general command interpreter from being a genuine security hole.
Under the Biba integrity model a subject can execute a program or read a data file if the integrity of the object is higher than or equal to that of the subject. A subject is not permitted to read a data or program file which has a lower integrity. A high integrity process thus exists in an isolated environment in which everything visible has high integrity. This is exactly the environment desired for processes which are part of the TCB. The set of TCB programs can therefore be defined to be that set of program files whose integrity is greater than or equal to the lowest integrity used by any TCB subject. Similarly, the set of TCB data can be defined to be that set of data files whose integrity dominates the lowest integrity used by any TCB subject.
Let us examine some implications here. A privileged process running with the highest possible integrity will be able to read data which also has the highest possible integrity, but not data with any lower integrity. No matter what a user with a lower integrity puts on the system, even if it's an executable trojan horse in the privileged process's normal execution path, the privileged process cannot be affected by the attack. Furthermore, the attacker would not be able to put the evil file into a directory which the privileged process could read, as the lower integrity process would not be able to modify the directory to do so. Processes with low integrity will be able to look at, but not touch, system data. Where other secure systems count on discretionary permissions alone to protect system data that the unprivileged user would want to see, such as the userid to user name mappings, the system with integrity can simply give these files the highest possible integrity and not worry as much about traditional permissions.
The single largest drawback of using an integrity policy to isolate the TCB is how well it actually works. The proper procedure is to run all privileged processes with the highest possible integrity, thereby protecting them from any attacks. Unfortunately, mundane activities such as system backups introduce a bit of a problem, that being how to put information the process isn't allowed to see on a backup tape. Similarly, system log files may be inaccessible to the lower integrity processes which should be updating them. In a true mandatory integrity isolated TCB, setuid programs raise another interesting problem. If the program file is given the highest possible integrity, any process at any integrity would be able to invoke it, thereby setting a portion of the TCB running at a potentially lower integrity, violating the strict isolation.
The Trusted IRIX/B system uses Biba integrity to enforce TCB isolation. The implementation is not as strict as it could be, as a few shortcuts have been taken to address compatibility issues which just wouldn't go away. The exceptions introduced are carefully controlled so as to present minimal risk, but in some ways the exceptions prove to be the most interesting aspects of the Trusted IRIX/B system.
Mandatory integrity labels are separated into four distinct types. The equal type equals any other integrity label. The high type dominates any other integrity label. The low type is dominated by any other integrity label. The biba type allows 256 hierarchical grades and up to 250 of 65,536 non-hierarchical divisions. Notice that low is dominated by the biba label with the lowest possible grade and no divisions and that the high label dominates the biba label with the highest possible grade and all possible divisions.
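The dominance relation among the four label types might be sketched as follows (a hypothetical illustration; it reads the biba comparison in the conventional way, with a higher grade and a superset of divisions dominating):

```python
EQUAL, HIGH, LOW, BIBA = "equal", "high", "low", "biba"

def dominates(a, b):
    """a and b are (type, grade, divisions) triples; divisions are frozensets."""
    ta, tb = a[0], b[0]
    if ta == EQUAL or tb == EQUAL:
        return True                 # the equal type equals any other label
    if ta == HIGH or tb == LOW:
        return True                 # high dominates all; low is dominated by all
    if ta == LOW or tb == HIGH:
        return False
    # both biba: a higher grade and a superset of divisions dominates
    return a[1] >= b[1] and a[2] >= b[2]

# low is dominated by the biba label with the lowest grade and no divisions:
assert dominates((BIBA, 0, frozenset()), (LOW, 0, frozenset()))
# high dominates the biba label with the highest grade and all divisions:
assert dominates((HIGH, 0, frozenset()), (BIBA, 255, frozenset(range(65536))))
```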
The introduction of types is an artifact of the size of data required to represent a large division set. It was much simpler to have a dedicated type than to have a clever encoding of a large number of disjoint divisions, hence the high type. The low type was added to provide symmetry with the high type, although it turns out to be of limited value. The equal type was added to address some of the compatibility issues described later.
All processes which perform system functions with privilege are started with high integrity on the Trusted IRIX/B system. They can thus read directories and files which also have high integrity, but no others. For these processes there is really no such thing as a user, as the data and activities associated with them are invisible to the system process.
Privileged processes are allowed to change their integrity labels. This is necessary to allow establishment of user sessions on terminals by login or through the window system by xdm. The print spooler lp also must change integrity to store a user's print data in the high integrity spool area. Programs which change integrity provide the greatest point of vulnerability in Trusted IRIX/B, and are the most carefully scrutinized of the TCB applications.
The UNIX setuid mechanism provides a particular challenge to this policy. The setuid program inherits the integrity label of the process which invoked it. If a non-high integrity process becomes a privileged process through this mechanism the program must be held responsible for the enforcement of system policies. One such program is the print spooler lp, which maintains label and ownership information about print jobs which it puts in the spool area. Generally, setuid programs are discouraged under Trusted IRIX/B as a result of the increased confidence required in their behavior. There are about thirty such programs in Trusted IRIX/B.
Program files shipped with Trusted IRIX/B are labeled either high or with the biba label with the lowest grade and no divisions. System processes are able to read or execute only those in the former group. The latter set are inaccessible to the TCB and are not considered interesting from a security standpoint as they cannot obtain privilege except through the setuid mechanism, and the programs allowed to use this mechanism to obtain privilege are tightly controlled. The lack of divisions in the non-TCB integrity label is intentional. A process with divisions in its integrity label will be unable to read any of the programs with this label. The notion is that these programs should have divisions added by the system administrator as each is assessed to perform to the administrator's satisfaction for users restricted to that division.
All directories, except users' home directories and the moldy directories described later, are labeled high. They cannot therefore be corrupted by user processes, although they can be read if permitted by the sensitivity and DAC policies. The administrator cannot be tricked into moving information into a user's directory simply because the administrator will not be able to access the lower integrity directory. Directories are treated the same as files with regard to read and write accesses. In order to write a directory, the integrity labels of the process and directory must match.
Directory write operations include removing or creating a file, but not opening an existing file with write access.
6.5 Moldy Directories
Multilevel (moldy) directories are like regular directories except that access is segregated based on the sensitivity of the process attempting to access them.
Under Trusted IRIX/B the segregation includes the integrity. Two processes with identical sensitivities but different integrities are treated the same way as they would be if the sensitivities were different.
Network connections offer their own unique set of opportunities and problems for an integrity policy.
There is no standard for transmission of integrity information analogous to the RIPSO (RFC 1038) sensitivity option of the Internet Protocol. Also, it could well be argued that putting data on a wire lowers its integrity, and that a network policy should enforce the degradation.
The Trusted IRIX/B system supports a variety of Internet Protocol security options. The RIPSO (RFC 1038, the Revised Internet Protocol Security Option) and CIPSO (Commercial Internet Protocol Security Option) mechanisms provide transmission of sensitivity information, but not integrity. CIPSO does, however, provide for vendor extensions. Trusted IRIX/B takes advantage of this provision to include an integrity component in the IP header. Unfortunately, the limited size of the IP option space (forty bytes) results in situations in which an integrity label does not fit.
6.7 Users

Each user of the Trusted IRIX/B system is allowed a set of integrity labels which may be specified at login time. Normal users of a Trusted IRIX/B system are given an integrity range within the biba type integrity labels. Users are cleared to run with the high or low labels in the case where the user is an administrator. The intended usage is for these persons to use that integrity value to perform administrative duties, but not otherwise.
While it is fully expected that at some time someone will take full advantage of the biba label type, it is anticipated that most sites will find the sensitivity policy more than sufficiently confining. Adding divisions to a user's integrity label is sufficiently limiting to discourage doing so.
From the viewpoint of the user of TrustedIRIX/B, running with mandatory integrity is indistinguishable from running a system with the file permissions set in a uniformly strict manner.
The other users with whom contact is appropriate (on the Silicon Graphics campus this is everyone) will have the same integrity, so the checks are rarely encountered.
The biggest impact is the frequency with which the administrator must be invoked.
The administrator has a bit more change to deal with than the normal user. Not all programs are given high integrity, thus such staples as vi and csh are not available for the administrator's use. System backups present a special challenge, since the high integrity administrator cannot read lower integrity user files.
tripwire is a program that calculates a checksum for every file, directory, and symbolic link in (some subset of) a filesystem. Usually the checksums are cryptographic hashes. All of the checksums are stored in the Tripwire database, with the idea being that you run it once, then run it again later to determine what has changed since the last time. How does this involve integrity? Well, it certainly tells you when it may have been violated, by telling you what files have changed. An integrity violation has probably occurred if the system administrator runs tripwire and notices that /bin/login has been changed. But tripwire doesn't give you any help in learning how integrity has been violated; it can't necessarily tell you when, how, or by whom a file was changed. It can't tell you if the functionality or "expressivity" of an executable or data file has changed - another mechanism is needed to do that. tripwire basically can only warn, but in the case of viruses, having a warning of possible infection is exactly where detection and cleansing must begin.
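The basic idea can be sketched in a few lines (a hypothetical miniature, not the real Tripwire): hash every file under a root directory, store the baseline, and diff a later scan against it.

```python
import hashlib
import os

def scan(root):
    """Build a {path: sha256-hex} database for every file under root."""
    db = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                db[path] = hashlib.sha256(f.read()).hexdigest()
    return db

def compare(baseline, current):
    """Report which files changed, disappeared, or appeared since baseline."""
    changed = {p for p in baseline if p in current and baseline[p] != current[p]}
    removed = set(baseline) - set(current)
    added = set(current) - set(baseline)
    return changed, removed, added
```

As the text notes, such a tool only warns that something changed; it cannot say when, how, or by whom.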
A discussion of the Biba model can be found in the Amoroso text, the "runner-up" text for this course. The Amoroso text also happens to have an excellent bibliography, including "The 25 Greatest Works in Computer Security", which would be a great place to dig for a term paper topic.
The Biba model deals with subjects and objects, much like the Bell-LaPadula (BLP) model. Subjects and objects have integrity ratings, analogous to the classifications from the BLP model. The Biba model compares the integrity ratings associated with files, processes, and people.
What should integrity ratings be? How should they be assigned? Can there be a universal collection of such ratings? There is the problem of deciding how to assign integrity ratings to things like your local version of Emacs, executables that you create yourself, a program you receive over email from a friend, or one you grab from a Usenet newsgroup. This is a level of trust problem, and is somewhat related to the sentiment of making sure that "you always pack your own parachute".
Other questions arise: how should ratings evolve? Does a file's need for high integrity increase with use? There is also the question of judging a file's need for integrity at all; for example, a file may need high integrity but not high secrecy, or some other combination, such as higher secrecy but lower integrity.
The actual description of the Biba model can be found in the readings handout for this lecture. It looks like the Bell-LaPadula model upside-down: in the BLP model, reading an object at a lower secrecy level is allowed, but reading an object at a higher secrecy level is disallowed, while in the Biba model, reading an object at a lower integrity level is disallowed, but reading an object at a higher integrity level is allowed. Similarly, in the BLP model, writes to a higher or equal secrecy level are the only writes allowed, while the Biba model allows writes only to the same or lower integrity level. Summarized, the BLP model disallows "read-ups" and "write-downs", while the Biba model disallows "read-downs" and "write-ups".
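The duality can be made concrete in a few lines (a hypothetical sketch, with levels as integers where larger means more secret for BLP and more trusted for Biba):

```python
def blp_read(subj, obj):   return subj >= obj   # BLP: no "read up"
def blp_write(subj, obj):  return obj >= subj   # BLP: no "write down"
def biba_read(subj, obj):  return obj >= subj   # Biba: no "read down"
def biba_write(subj, obj): return subj >= obj   # Biba: no "write up"

# The rules are exact mirror images: BLP's read rule is Biba's write rule,
# and BLP's write rule is Biba's read rule, at every pair of levels.
levels = range(3)
assert all(blp_read(s, o) == biba_write(s, o) for s in levels for o in levels)
assert all(blp_write(s, o) == biba_read(s, o) for s in levels for o in levels)
```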
Because the allowed operations in the two models are opposite, there is a real problem in creating a system that implements both models simultaneously; as discussed earlier, such a combination tends to degenerate into isolated subsystems.
Does being at a high BLP secrecy level imply a high integrity level anyway? When something is at a high BLP level, that usually means a smaller number of people can modify it - does this mean that it will have a higher integrity, by virtue of the fact that it probably won't be changing much? On second thought, it probably doesn't, since there may be tools that can (accidentally) be used in ways that do not preserve integrity in the eyes of most.
When the BLP and Biba models are used in any combination, there is the real problem of making sense and maintaining consistency of the two different sets of ideals.
The Clark-Wilson model

The Clark-Wilson (CW) model is another model dealing with the issue of integrity. A detailed description of it can be found in the readings for this lecture.
The model tries to ensure the integrity of data objects, called "Constrained Data Items" (CDIs).
The model also speaks of "Integrity Verification Procedures" (IVPs) and "Transformation Procedures" (TPs). IVPs are meant to verify that CDIs are in a valid state, that their integrity has not been violated. TPs are the only procedures that are allowed to modify CDIs, or to take arbitrary user input and create new CDIs. IVPs and TPs are specifically certified to verify and transform, respectively, as the creators of the system see as necessary.
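The division of labor between TPs and IVPs can be sketched as follows (a hypothetical toy example; the ledger, `post_entry` and `ivp` are invented for illustration):

```python
class Ledger:                    # a CDI: a toy double-entry ledger
    def __init__(self):
        self.debits = 0
        self.credits = 0

def post_entry(ledger, amount):  # a certified TP: always keeps the books balanced
    ledger.debits += amount
    ledger.credits += amount

def ivp(ledger):                 # the IVP: verifies the integrity condition
    return ledger.debits == ledger.credits

book = Ledger()
post_entry(book, 100)
assert ivp(book)                 # the TP preserved the valid state
book.credits += 5                # an uncertified modification, bypassing the TP...
assert not ivp(book)             # ...is caught by the IVP
```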
In the CW model, the users of the system are specifically constrained, by a form of access list, to modifying a stated set of CDIs, using a certain set of TPs. The model also calls for authentication of users, and for modifications to be logged, for the purposes of creating an audit trail, and so that changes might be undone.

8 Handouts
The handouts for this lecture included the midterm exam, and a packet containing two chapters from the E. Amoroso book Fundamentals of Computer Security Technology, from Prentice Hall: Chapter 9, about the Bell-LaPadula model, and Chapter 12, about the Biba integrity model. The packet also contained the paper about the Clark-Wilson model.
1 Department of Computer Science, Graduate School of Information Science and Engineering, Tokyo Institute of Technology
2 Tokyo Research Laboratory, IBM Japan
Runtime call stack inspection is a major security check mechanism in Java. This mechanism checks the call stack at runtime and applies access control to resources, as determined by the principals of the frames on the call stack. But in the case of complicated programs there are security holes that this check cannot easily detect by inspecting the call stack alone. We propose a new security check system for avoiding such security holes, in which the holes are detected by tracing the dependency of variables.
This system is based on Biba's integrity model. Integrity denotes a security level. If the value of a variable is updated by an untrusted program, the integrity of the variable decreases and the value is regarded as untrusted. For example, the integrity of each formal parameter of file-reading procedures is checked; if it is lower than some fixed threshold, a security exception is raised. We call this security check system the Java Integrity Model and implement a translator which embeds code for integrity checking into Java source code.
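The variable-tracing idea can be sketched as follows (a hypothetical illustration in Python rather than Java, with invented names; the real system embeds such checks into Java source): each value carries an integrity level, combining values takes the minimum, and a sensitive operation raises when its argument's integrity falls below a threshold.

```python
TRUSTED, UNTRUSTED = 2, 0
THRESHOLD = 1

class Tainted:
    """A value paired with an integrity level."""
    def __init__(self, value, integrity):
        self.value = value
        self.integrity = integrity
    def __add__(self, other):
        # the result is only as trustworthy as the least trusted input
        return Tainted(self.value + other.value,
                       min(self.integrity, other.integrity))

def open_file(name: Tainted):
    """A sensitive operation: refuses file names of low integrity."""
    if name.integrity < THRESHOLD:
        raise PermissionError("integrity of file name below threshold")
    return "opened " + name.value

# An untrusted component lowers the integrity of the whole path:
path = Tainted("/etc/", TRUSTED) + Tainted("passwd", UNTRUSTED)
try:
    open_file(path)
except PermissionError:
    pass  # the check caught the low-integrity argument
```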
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: October 20, 2018