Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Solaris Hardening

News : Editorial : Recommended books : Recommended Links : Minimization : Tutorials & FAQs : Best Solaris Hardening Papers : Tips : Linux vs Solaris Security whitepaper

ACLs : RBAC : Privilege Sets : Solaris 10 Zones : Firewall : Log centralization and audit : PAM : SSH : Access control

Titan (no longer maintained) : JASS/SST (architecturally weak, but resulting configurations are supported by Sun) : Reg Quinton Solaris Hardening Kit : (Slightly Skeptical) Solaris Security Toolkit (JASS) Notes : Application Server hardening : Slightly Skeptical Notes on Titan : Web server hardening : Humor : Etc

Due to the size of the page, the introductory note was converted to an Editorial on a separate page. Please read it, as it might help you to avoid typical hardening mistakes. It is not very current: it needs to be updated for Solaris 10, as zones and privilege sets changed the Solaris security landscape significantly. Also, the internal firewall, which has now become an integral component of any more or less secure server, changes hardening priorities quite significantly. Still, it might be valuable to a beginner, as beginners usually have excessive zeal for hardening and can screw up a server (or a dozen ;-) almost in no time, due to insufficient testing of the package on test servers, insufficient understanding of the limitations of the hardening packages, or both. Although such an experiment represents a tremendous learning opportunity, it is better to avoid it. Somebody said "Experience keeps a very expensive school, but fools can learn in no other," and George Bernard Shaw aptly added, "There are two tragedies in life. One is to lose your heart's desire. The other is to gain it." Both quotes are fully applicable to hardening. Naive belief in mass-press recommendations (as well as security snake oil salesmen) and attempts to implement them on production servers have probably caused ten times more damage than all hacker attacks together.

It is very important to understand that many more servers have been hosed by hardening mistakes than by hacker attacks. That does not mean that hardening is unimportant or is better ignored. What it means is that you should not overdo it: excessive zeal really hurts here. Each change that supposedly increases the level of protection should be weighed against the convenience of working with the server and against the server's importance. The level of hardening should correspond to the general level of security in the company. If holes are everywhere and nobody is paying attention to problems such as role-based access control and authentication, hardening does not increase the general level of security: security is only as good as the weakest link.

The situation with hardening tools on Solaris looks like a one-man game: JASS is still maintained, but Titan (although I like Titan's simple approach to writing hardening modules better) is not. Unless you want to improve Titan yourself (and it is more suitable for adaptation than JASS), it does not make much sense to use it (it never made sense to use Titan blindly, anyway). Google lists Comparison of Solaris Hardening Scripts among the top findings on the "Solaris hardening" topic. Ignore it; this is a very old paper that has outlived its usefulness. Another entry, YASSP, has been dead for so long that I start to be wary about the Google algorithm that puts it in the top findings (YASSP: Hardening Script for Solaris - stable beta).

Also, the availability of the internal Solaris firewall creates a new situation, making obsolete the old à la Joseph Stalin recipe of hardening: "if you have a network service, you have a problem; if you do not have a network service, you have no problems" ;-). Now you can skip removing vulnerable services if they are important and raise productivity, and instead just limit the IP ranges of the servers that can connect to them. Excessive zeal in the elimination of /etc/inetd.conf entries is now much less justified.
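For example, a sketch of Solaris IP Filter rules implementing this approach (the interface name and management subnet are hypothetical; adapt to your own network):

```
# /etc/ipf/ipf.conf sketch: instead of removing a legacy service,
# restrict it to a trusted management subnet (addresses hypothetical)
block in on bge0 all
pass in quick on bge0 proto tcp from 10.10.0.0/24 to any port = 22 flags S keep state
pass in quick on bge0 proto tcp from 10.10.0.0/24 to any port = 23 flags S keep state
```

The `quick` keyword makes the pass rules match first, so everything not explicitly allowed falls through to the initial block rule.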

Dr. Nikolai Bezroukov



NEWS CONTENTS

Old News ;-)

2009 2008 2007 2006 2005 2004 2003_and_earlier

[Feb 14, 2005] Peeking Under Mount Points

This Tech Tip explains how to use NFS to inspect the underlying directory structure if the reported disk usage seems inconsistent.

[Jan 17, 2005] [PDF] Trend: Life expectancy increasing for unpatched or vulnerable Linux deployments.

Assessment Date: 17 December, 2004. What junk under the disguise of a security paper! Those honeypot heroes put 24 unpatched boxes on the Internet and watched what would happen. The boxes had default (but different for each box) installs (to make life more interesting, they were using a dozen different versions of Linux on 19 boxes ;-), plus, for a change, a couple of Solaris 8 and a couple of Solaris 9 boxes. After waiting until some of them were "owned," the honeypot heroes wrote a paper. The question arises: do any of the honeypot heroes have any idea about statistics? Probably not. As for Solaris, it is actually unclear even what release of each version of Solaris they were using. This is a nice illustration of the level of deterioration of the honeypot project. I really like this idea of using unpatched Internet-connected boxes; to make it really enticing, next time they should also leave the root account without a password and see what happens :-) What junk !!!

Increasing life expectancy

The past 12-24 months have seen a significant downward shift in successful random attacks against Linux-based systems. Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised. Meanwhile, the time to live for unpatched Win32 systems appears to continue to decrease. Such observations have been reported by various organizations, including Symantec [1], Internet Storm Center [2] and even USAToday [3]. The few Win32 honeypots we have deployed support this. However, Win32 compromises appear to be based primarily on worm activity.

THE DATA

Background: Our data is based on 12 honeynets deployed in eight different countries (US, India, UK, Pakistan, Greece, Portugal, Brazil and Germany). Data was collected during the calendar year of 2004, with most of the data collected in the past six months. Each honeynet deployed a variety of different Linux systems accessible from anywhere on the Internet. In addition, several Win32 based honeypots were deployed, but these were limited in number and could not be used to identify widespread trends.

A total of 24 unpatched Unix honeypots were deployed, of which 19 were Linux, primarily Red Hat. These unpatched honeypots were primarily default server installations with additional services enabled (such as SSH, HTTPS, FTP, SMB, etc.). In addition, on several systems insecure or easily guessed passwords were used. In most cases, host based firewalls had to be modified to allow inbound connections to these services. These systems were targets of little perceived value, often on small home or business networks. They were not registered in DNS or any search engines, so the systems were found primarily by random or automated means.

Most were default Red Hat installations. Specifically, one was RH 7.2, five RH 7.3, one RH 8.0, eight RH 9.0, and two Fedora Core 1 deployments. In addition, there were one Suse 7.2 and one Suse 6.3 Linux distribution, two Solaris Sparc 8, two Solaris Sparc 9, and one FreeBSD 4.4 system. Of these, only four Linux honeypots (three RH 7.3 and one RH 9.0) and three Solaris honeypots were compromised. Two of the Linux systems were compromised by brute password guessing and not a specific vulnerability. Keep in mind, our data sets are not based on targets of high value, or targets that are well known. Linux systems that are of high value (such as company webservers, CVS repositories or research networks) potentially have a shorter life expectancy.

[Jan 10, 2005] Basic Steps in Forensic Analysis of Unix Systems

The science is the methodical, premeditated action of gathering and analyzing evidence. The technology, in the case of computers, consists of programs that suit particular roles in the gathering and analysis of evidence. The crime scene is the computer and the network (and other network devices) to which it is connected.

Your job, as a forensic investigator, is to do your best to comb through the sources of evidence -- disk drives, log files, boxes of removable media, whatever -- and do two things: preserve as much of this data as possible in its original form, and try to reconstruct the events that occurred during a criminal act, producing a meaningful starting point for police and prosecutors to do their jobs.

Every incident will be different. In one case, you may simply assist in the seizure of a computer system, which is analyzed by law enforcement agencies. In another case, you may need to collect logs, file systems, and first hand reports of observed activity from dozens of systems in your organization, wade through all of this mountain of data, and reconstruct a timeline of events that yields a picture of a very large incident.

In addition, when you begin an incident investigation, you have no idea what you will find, or where. You may at first see nothing (especially if a "rootkit" is in place.) You may find a process running with open network sockets that doesn't show up on a similar system. You may find a partition showing 100% utilization, but adding things up with du only comes to 50%. You may find network saturation, originating from a single host (by way of tracing its ethernet address or packet counts on its switch port), a program eating up 100% of the CPU, but nothing in the file system with that name.

The steps taken in each of these instances may be entirely different, and a competent investigator will use experience and hunches about what to look for, and how, in order to get to the bottom of what is going on. They may not necessarily be followed 1, 2, 3. They may be way more than is necessary. They may just be the beginning of a detailed analysis that involves decompilation of recovered programs and correlation of packet dumps from multiple networks.

Instead of being a "cookbook" that you follow, consider this a collection of techniques that a chef uses to construct a fabulous and unique gourmet meal. Once learned, you'll discover there are plenty more steps than just those listed here.

It's also important to remember that the steps in preserving and collecting evidence should be done slowly, carefully, methodically, and deliberately. The various pieces of data -- the evidence -- on the system are what will tell the story of what occurred. The first person to respond has the responsibility of ensuring that as little of this evidence as possible is damaged, since damaged evidence is useless for a meaningful reconstruction of what occurred.

One thing is common to every investigation, and it cannot be stressed enough. Keep a regular old notebook handy and take careful notes of what you do during your investigation. These may be necessary to refresh your memory months later, to tell the same long story to a new law enforcement agent who takes over the case, or to refresh your own memory when/if it comes time to testify in court. It will also help you accurately calculate the cost of responding to the incident, avoiding the potentially exaggerated estimates that have been seen in some recent computer crime cases. Crimes deserve justice, but justice should be fair and reasonable.

As for the technology aspect, the description of basic forensic analysis steps provided here assumes Red Hat Linux on i386 (any Intel compatible motherboard) hardware. The steps are basically the same with other versions of Unix, but certain things specific to i386 systems (e.g., use of IDE controllers, limitations of the PC BIOS, etc.) will vary from other Unix workstations. Consult system administration or security manuals specific to your version of Unix.

It is helpful to set up a dedicated analysis system on which to do your analysis. An example analysis system in a forensic lab might be set up as follows:

(It can be argued that no services should be running, not even SSH, on your analysis systems. You can use netcat to pipe data into the system, encrypting it with DES or Blowfish stream cyphers for security. This is fine, provided you do not need remote access to the system.)

Another handy analysis system is a new laptop. An excellent way of taking the lab to the victim: a fast laptop with a 10/100 combo Ethernet card, an 18+GB hard drive, and a backpack with a padded case allows you to easily carry everything you need to obtain file system images (later written to tape for long-term storage), analyze them, display the results, crack any intruder crypt() passwords you encounter, etc.

A cross-over 10Base-T cable allows you to get by without a hub or switch, and to still use the network to communicate with the victim system on an isolated mini-network of two systems. (You will need to set up static route table entries in order for this to work.)

A Linux analysis system will work for analyzing file systems from several different operating systems whose file system types are supported under Linux, e.g., Sun UFS. You simply need to mount the file system with the proper type and options, e.g. (Sun UFS):


# mount -r -t ufs -o ufstype=sun /dev/hdd2 /mnt

Another benefit of Linux is its "loopback" devices, which allow you to mount a file containing an image copy (obtained with dd) into the analysis system's file system. See Appendices A and B.
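As a sketch of that workflow (the image names and mount point here are hypothetical, and a real image would come from dd'ing the suspect drive), one can verify with checksums that a working copy of a dd image is bit-for-bit identical to the original before loop-mounting it:

```shell
# Create a stand-in "evidence" image; in real work this would come from
# something like: dd if=/dev/hdd2 of=evidence.img
dd if=/dev/zero of=evidence.img bs=1024 count=64 2>/dev/null
# Work only on a copy; the original is preserved unmodified.
cp evidence.img work.img
# Verify the copy is bit-for-bit identical before any analysis touches it.
# (sha256sum is GNU coreutils; on Solaris, digest -a sha256 is the equivalent.)
orig=$(sha256sum evidence.img | awk '{print $1}')
copy=$(sha256sum work.img | awk '{print $1}')
[ "$orig" = "$copy" ] && echo "checksums match"
# With root privileges, the copy could then be mounted read-only through a
# loopback device, so even the copy stays unwritable during analysis:
#   mount -o ro,loop,noexec,nodev work.img /mnt/analysis
```

Recording the original checksum in your notebook also gives you something to point to later if the integrity of the evidence is questioned.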

Debian Hardened Aims For Security

Itch scratching, and audit (Score:3, Interesting)
by RedPhoenix (124662) on Tuesday September 14, @09:15PM (#10251879) At the risk of the post sounding like a discussion at a head-lice convention, everyone has their own personal itch to scratch.

Several posts thus far have questioned the viability of establishing yet another secure-Debian project, similar to other existing projects, and have indicated that there would be better use of available resources if everyone would just get along and work together (or at least form under a single project). Fair enough.

However, there are a whole range of reasons why diversity and natural selection w.r.t many competing projects can provide benefits over and above a single large project - organizational inertia, effective and efficient communication, and development priority differences, for example.

'Organizational inertia' in particular, whereby the larger an organization/project gets, the slower it can react to changing requirements, is a good reason why this effort-amalgamation can potentially be a bad thing.

Each of these projects probably has a slightly different 'itch' to 'scratch'. There's no reason why, later on down the track, the best elements of each of these projects cannot be merged into something cohesive.

A good example is the current situation in Linux auditing (as in C2/CAPP style auditing and event logging, not code verification) and host-based audit-related intrusion detection. Over time, we've had Snare (http://www.intersectalliance.com), SLES (http://www.suse.com), and Rik's Audit Daemon (http://www.redhat.com). Each project had a slightly different focus, and each development team has come up with some great solutions to the problems of auditing/event logging.

The developers of each of these projects are now communicating and collaborating, with a view to bringing an effective audit subsystem to Linux that incorporates the best ideas from each approach.

BTW: How about auditing in this project? Here's a starting point:
http://www.gweep.net/~malk/snare_debian.shtml

Red. (Snare Developer)

Freshmeat/pam_passwdqc 0.7.5 by Solar Designer - Sunday, November 2nd 2003 16:07 PDT

About: pam_passwdqc is a simple password strength checking module for PAM-aware password changing programs, such as passwd(1). In addition to checking regular passwords, it offers support for passphrases and can provide randomly generated passwords. All features are optional and can be (re-)configured without rebuilding.

Changes: The module will now assume invocation by root only if both the UID is 0 and the PAM service name is "passwd". This should fix changing expired passwords on Solaris and HP-UX and make "enforce=users" safe. The proper English explanations of requirements for strong passwords will now be generated for a wider variety of possible settings.
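A typical stack entry for the module might look like this (shown in Linux /etc/pam.d style; the option values shown are the module's documented defaults plus the enforce=users setting mentioned in the changelog, so treat this as a sketch rather than a recommendation):

```
# /etc/pam.d/passwd fragment: pam_passwdqc checks strength before
# the password is committed; enforce=users exempts root
password requisite pam_passwdqc.so min=disabled,24,11,8,7 max=40 passphrase=3 enforce=users
```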

Slashdot Host Integrity Monitoring Using Osiris and Samhain

On a FreeBSD system, you can set the "immutable flag" on a file. Given a high enough system securelevel, that file will be completely resistant to change (including unsetting the flag itself). This is extremely handy for locking down file signature databases, kernel files, and other likely targets for stealth modification. So long as that portion of the kernel remains intact, the system can never be completely clandestinely owned.

Very interesting. This FAQ [osxfaq.com] suggests that OS X retains BSD's immutable flag. In theory, the only way to change this flag in OS X is to reboot in single-user mode. I wonder if a rootkit could force a reboot into single-user mode, change these flags, and reboot back to remotely own an OS X machine? I would assume that unless the rootkit can insert something into the single-user mode start-up sequence, the system immutable flag should be fairly safe. The big downside would be that Software Update would cease to work (and probably create a corrupt partial update) if the wrong file were locked in this way (security vs. ease-of-use again!).

Recommended Links

New

Rated:

Linux sources that might be useful (some Linux HOW-TO are not bad and are largely applicable to other Unix environments):

Etc


Good Generic Unix Security Documents and FAQs

FAQs and RFCs


Deception Toolkit (DTK) and HoneyNet project

A word of thanks is due to Dr. Cohen for making this valuable tool freely available. Check it out!

HoneyNet project:

Etc


Information about the Stanford network

Information for Users

Security Information for Administrators

Papers, FAQ's, and Other Documents

Security Home Pages at Universities

Etc


Proxy Server

Insecure Web Proxy Servers

Some insecurely configured Web proxy servers can be exploited by a remote attacker to make arbitrary connections to unauthorized hosts. Two common abuses of a misconfigured proxy server are to use it to bypass firewall restrictions and to send spam email. A server is used to bypass a firewall by connecting to the proxy from outside the firewall and then opening a connection to a host inside the firewall. A server is used to send spam by connecting to the proxy and then having it connect to an SMTP server. It has been reported that many Web proxy servers are distributed with insecure default configurations.

Users should carefully configure Web proxy servers to prevent unauthorized connections. It has been reported that http://www.monkeys.com/security/proxies/ contains secure configuration guidelines for many Web proxy servers. We cannot verify the accuracy of this information; if there are any questions, users should contact their vendors.
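As one concrete illustration using Squid (the original discusses proxies generically; Squid and the address range here are just a hypothetical example), the usual countermeasures are to restrict who may use the proxy and where CONNECT tunnels may go:

```
# squid.conf sketch: no open relay to arbitrary hosts or ports
acl localnet src 192.168.0.0/16        # hypothetical internal range
acl SSL_ports port 443
http_access deny CONNECT !SSL_ports    # blocks tunneling to e.g. port 25 (spam relaying)
http_access allow localnet
http_access deny all
```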


Application Server Hardening


E-mail archives

Sun Managers Mailing List Archive

Yassp Development Mailing List Archive

See also


Random Findings

Stack overflow problems

A stack smashing attack is most typical for C programs. Many C programs have buffer overflow vulnerabilities, both because the C language lacks array bounds checking, and because the culture of C programmers encourages a performance-oriented style that avoids error checking... Several papers contain "cookbook-style" descriptions of stack smashing exploitation. If the attacker has access to a non-privileged account, then, unless the server has hardware or software protection, the only remaining work for a wannabe attacker is to find a suitable unpatched utility and download or write an exploit. Hundreds of such exploits have been reported in recent years.

*** SecurityPortal.com Securing your File System in Linux. Average discussion.

Best practices in Linux file system security dictate a philosophy of configuring file system access rights in the most restrictive way possible that still allows legitimate users and processes to function properly. However, even with the most careful planning and restrictive settings, successful file system attacks and corruption can occur. To have the most comprehensive plan for Linux file system security, a system administrator needs to modify a default installation's settings, proactively monitor and audit file system changes and have multiple methods to recover from a file system attack.

In configuring file system security, the key areas to be concerned about are: access rights granted to legitimate users to create/modify files and execute programs, access to the file system granted to remote machines, and any area of the file system designated as world-writable.

To quickly review Linux permissions for files and directories, there are three basic types: read (numerically represented as 4), write (2) and execute (1). The values are summed to determine the permissions for the file or directory - a value of 4 meaning read-only, a value of 7 meaning read, write and execute are allowed. A file or directory is assigned three standard sets of permissions: access allowed to the owner, the associated group, and everyone.

umask. A common occurrence over time on Linux systems is that, as files get created or modified, permissions become significantly more liberal than originally intended. When new files are created by users, administrators or processes, the default permissions granted are determined by the umask. By using umask to set a restrictive default, newly created files and directories retain restrictive permissions unless they are manually changed with chmod. Umask defaults for all users are set in /etc/profile. Default permissions are determined by masking the umask bits off a base of 777 for directories and 666 for files (the execute bit is never set by default on files). Files created by a user with a umask of 037 would therefore have permissions of 640, which means the owner can read/write the file, the group can read it, and everyone else has no access. Setting the umask to 077 means no one else has any access to files created by the owner.
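A quick sketch of this arithmetic in action (using GNU stat, so a Linux box; the file names are throwaway):

```shell
# Show how umask shapes the mode of newly created files and directories.
# Files start from a base of 666 (execute is never set by default),
# directories from 777; the umask bits are simply masked off.
d=$(mktemp -d)
cd "$d"
umask 037
touch demo_file
mkdir demo_dir
file_mode=$(stat -c %a demo_file)   # 666 masked by 037 -> 640
dir_mode=$(stat -c %a demo_dir)     # 777 masked by 037 -> 740
echo "file=$file_mode dir=$dir_mode"
```

On Solaris, where stat -c is unavailable, ls -l shows the same result symbolically (rw-r----- and rwxr-----).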

... ... ...

NFS, Samba - The "Not For Security" file system should be avoided where possible on Linux boxes directly connected to the Internet. NFS requires a high degree of trust in the peer machine that will be mounting your partitions. You must be very careful about providing anything beyond read access to the hosts in /etc/exports. Samba, while not using a peer trust system, can nonetheless make it complex to maintain user rights. They are both network file services, and the only way to be sure that your file system is not at risk is to run them in a completely trusted environment.

Auditing your file system regularly is a must. You should look for files with the permissions anomalies described above. You should also be looking for changes in standard files and packages. At a minimum, you can use the find command to search for questionable file permissions:

Suid & sgid: find / \( -perm -2000 -o -perm -4000 \) -ls (You can add -o -perm -1000 to catch sticky bit files and directories)

World-writable files: find / -perm -2 ! -type l -ls

Files with no owner: find / \( -nouser -o -nogroup \) -print (thanks to Michael Wood for correcting this)

You can create a cron job for a simple script that directs this output to a file, compares it with the file created by the previous day's search, and mails the difference to you.
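That daily comparison can be sketched roughly as follows (the audit directory, scan root, and mail recipient are all hypothetical; a real job would scan / and run from root's crontab):

```shell
#!/bin/sh
# Hypothetical daily audit job: record suid/sgid files, diff against
# yesterday's list, and mail any changes to the administrator.
AUDIT_DIR=${AUDIT_DIR:-/tmp/perm-audit}   # hypothetical location
SCAN_ROOT=${SCAN_ROOT:-/usr/bin}          # narrow default for illustration; use / in production
mkdir -p "$AUDIT_DIR"
TODAY=$AUDIT_DIR/perms.today
YESTERDAY=$AUDIT_DIR/perms.yesterday
# Record today's suid/sgid files, sorted so diff output is stable.
find "$SCAN_ROOT" \( -perm -2000 -o -perm -4000 \) -type f 2>/dev/null | sort > "$TODAY"
if [ -f "$YESTERDAY" ] && ! diff "$YESTERDAY" "$TODAY" > "$AUDIT_DIR/perms.diff"; then
    # Only mail when something actually changed since yesterday.
    mail -s "suid/sgid changes on $(hostname)" root < "$AUDIT_DIR/perms.diff"
fi
mv "$TODAY" "$YESTERDAY"
```

The same pattern extends naturally to the world-writable and no-owner searches above: one list file per search, one diff per day.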

As you might guess, several people have written simple to complex tools that check for files with questionable permissions, checksum binaries to detect tampering and a host of other functions. Here are a few:

Remote audit services


Etc

The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: October 05, 2020