Oracle Performance Tuning


Oracle is a complex enough software system that it should be approached using the classic medical principle "first, do no harm". That applies above all to misguided attempts at Oracle optimization. It is foolish to read some page on the Web and then change the settings of your Oracle installation without deep thought, understanding of the consequences, and testing on a non-production system. This is true for the information on this page too. You have been warned...

Remember:

You should be very careful with Oracle optimization, and the main criterion is "do no harm" rather than "achieve spectacular improvements"

There is no free lunch: the more optimized a system is, the more specialized for a particular application it becomes; as a result, any change in the application can disproportionately affect performance.

There are five major areas of optimization:

  1. Hardware optimization.
  2. Operating system level optimization. There are some standard settings that Oracle recommends, and they should be implemented (see Oracle kernel parameters tuning on Linux). Generally this area requires careful measurement, for example using DTrace on Solaris or Oracle Linux. Indexes should generally be kept on different storage than the database. Here again SSD or tmpfs should be considered if you are serious about better performance and have the resources to reorganize your system, both operating-system-wise and hardware-wise. Removing redundant packages and subsystems might help slightly too; at least it never harms. Killing higher management's idea to move the database onto a VM (especially onto VMware) or onto blades (where you pay an arm and a leg for SAN cards and SAN storage and receive mediocre performance as a bonus) can also be considered an optimization :-).
  3. Database engine level optimization (Oracle is highly tunable and has multiple performance measurement tools). At some point you might also be better off using an in-memory SQL database instead of Oracle, because many solutions employed in Oracle assume hard drives as the database medium and are suboptimal for SSD and memory-based storage.
  4. Database schema and SQL-level optimization. Here much depends on the application (see below).
  5. Application level optimization. Here the most dramatic improvements are possible; that's why open source applications can generally beat closed-source applications. But understanding the application at the level that makes tuning possible is often limited by the brainpower available; this is especially true for complex, off-the-shelf applications. If your application is open source (which is rare for key commercial applications) you can look at the internals and often quickly see ways to improve performance, for example by selecting different compile options or optimizing the location of key data. Using the Intel compiler on Linux can provide higher application speed than the GNU compiler, but that depends on the application.

As we go down the list we can generally get higher and higher returns on the effort, but the risks also generally increase. Also, not all applications are open source, so application level optimization is often limited by the constraints of the particular implementation.

The levels of optimization available to an organization usually depend on the qualification of the staff. The higher the qualification of the staff, the more levels of optimization are available and the greater the potential return.

Much of Oracle performance is hardware dependent: usage of SSD drives, usage of tmpfs for indexes, doubling of RAM, and having a dozen 15K RPM internal drives (you need approximately eight drives to make an internal SCSI or SAS controller work in a regime close to optimal).

For databases smaller than 128GB it is now possible to have the database completely stored in RAM (you need a good UPS and a backup generator to make this safe ;-). If money is not an issue this is the way to go. Even midrange servers like the HP DL580 now allow 32 or 40 cores with 256GB of RAM, and that's for less than $50K per server.

Those measures usually provide a significant performance boost. Short of better hardware, the key area to pay attention to is indexes.

Do not believe in SAN hype.

One important but often overlooked advantage of using Intel/AMD hardware and Linux or Solaris is the ability to cut SAN expenses. While SANs have legitimate uses in large enterprise environments for large databases, and are often indispensable for clusters and for multiple virtual machines running on the same box, they are a very expensive solution and one should not try to shoot birds with a cannon. Small, especially catalog-type databases (mostly read operations) can perform very well with local storage, especially solid state drives and PCIe cards like the Fusion-io ioDrive. SAS RAID controllers can scale up to 6Gbit/sec per link and can have 1GB of internal cache with battery backup. You just need, say, eight 15K RPM disks per controller, and I/O speed will be quite good and competitive with a fiber-connected SAN that costs almost five times as much as internal disks. Solid state drives and, especially, the 160GB ioDrive simply smoke SAN and NAS in read operations: such a drive has almost zero read latency, with sustained sequential read and write speeds of up to 770 MB/sec (ioDrive).

At the same time, when dealing with extremely large databases, accessing data is not the only issue that should be addressed. Other operations, such as importing/exporting data, backups, and restores, are equally important. A SAN with dedicated ports for data and backups can separate those operations. A good SAN card, when properly configured, can get 3.2 GB/sec from a single port, which means 6.4 GB/sec from two channels. That's important for backups (see Joe Chang, Recommended Server Systems, 2008 Q3 - Dunnington six-core).

One needs to understand that transmitting data via fiber or copper at speeds typical for a local disk controller (6 Gbit per sec) is achievable with a SAN, but at a much higher cost. That money can probably be spent elsewhere with a much more significant effect on the speed of the database (for example, on doubling RAM). Network latency and, especially, the possibility of oversubscription of the available network channels are issues that should not be overlooked. A local bus is local, but a SAN is typically shared between multiple servers. Additional complexity is another factor: you add more expensive components such as SAN cards (each approximately $2K, or the cost of eight 15K 300GB drives ;-) and switches, each of which can fail. You can duplicate them at additional cost, but sometimes a simple question arises: "Can local storage be adequate for the task at hand?"

And the answer in many cases is a definite yes, as few databases exceed the four to eight terabyte range that is now easily achievable with local storage (and even with solid state drives). Also, the cost of a SAN connection is substantial, and for a fair comparison the money spent on SAN cards, switches and the SAN storage box should be factored in. Spent instead on improving I/O (by increasing the number of controllers or switching entirely to solid state drives), maxing out memory, and using the latest Intel CPUs, that money can often provide a much more significant performance boost.

The rule "use SAN only as heavy artillery" is often violated.  Flexibility achievable with SANs is often overrated and using drive images and appropriate software can provide 80% of flexibility with lesser costs.  In many cases it is used with RISC-based servers just because of fashion considerations, aggressive marketing or due to inertia linked to long gone limitations on the size of individual hard drives (which  is  the thing of the past at least since 2005). 

In any case, Intel/AMD servers have no such limitations and can use dozens of large drives (for SAS that means 600GB per 15K drive and, amazingly, for SATA up to 3TB per drive) without problems. That means that factors other than scalability should be considered before making a decision.

Using such trivial improvements as a high-end I/O controller, 15K RPM drives, bigger RAM and a larger number of physical drives (one pair per critical filesystem), one can save money that otherwise lands in EMC's coffers ;-). It is not uncommon for a large enterprise to pay EMC almost the same sum annually as it pays two major Unix vendors for server maintenance. This amount can (and probably should) be cut by using other vendors, but the same question, "Can local storage be adequate for the task at hand?", should still be asked before making the final decision.

Losses due to overinvestment in SANs can be substantial. Benchmarks of equal-cost solutions are a must (usually you can increase the number of cores and/or the amount of memory on the server in a solution that uses local storage instead of a SAN: just two SAN cards cost over $3K, which for a $30K server is 10% of the total cost).

Indexes

Accidental removal of indexes, and indexes being disabled without the DBA noticing, are a common problem. For example, an index can become unusable when SQL*Loader is used to direct-path load data and the re-index fails. So, check the indexes for the following:

If an index is missing, there is nothing within the database to check on it. To detect a missing index, there must be a definitive list of required indexes, and the columns that should be in the index.

If there is an index problem, then you better fix it before tuning anything else.

If indexes are missing, then use the Oracle parallel index creation feature to build the index in the shortest possible time. This works very well on SMP machines, but you may want to restart the database with a larger than normal SORT_AREA_SIZE to allow fast sorting of the resulting index.
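As a quick illustration, a sketch along these lines can be used to spot unusable indexes and to recreate one in parallel; the USER_INDEXES view and the PARALLEL/NOLOGGING clauses are standard, while the table and index names are made-up examples:

-- Spot indexes left in an unusable state (for example after a direct-path load).
SELECT index_name, status
FROM   user_indexes
WHERE  status NOT IN ('VALID', 'N/A');

-- Recreate a missing index using parallel index creation (names are examples only).
CREATE INDEX orders_release_date_idx ON orders (release_date)
  PARALLEL 4 NOLOGGING;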

Basic Oracle parameters

There are a small number of Oracle parameters that have a large impact on performance. Without these being set correctly, Oracle cannot operate properly or give good performance. They need to be checked before any further fine tuning. See Oracle kernel parameters tuning on Linux for an introduction; fine tuning here should be done using measurements, for example with DTrace.

Analyze database tables and indexes

Oracle has warned all customers that rule based optimization will be dropped in future releases. As the cost based optimizer is now likely to give the best performance in most cases, this should be used. The cost based optimizer needs data to decide the access plan, and this data is generated by the analyze command or the newer dbms_stats package. For Oracle 9i, always use dbms_stats.

Oracle depends on data about the tables and indexes. Without this data, the optimizer has to guess. It is worth checking that the optimizer has the following information:

Most parallel SQL statements, SQL hints, and many of the newer performance features of Oracle, such as hash joins, star joins, and partitions, are only available when using the cost based optimizer. If dbms_stats.gather_table_stats, analyze, or dbms_utility.analyze_schema is not run, and the SQL does not use hints, the optimizer has to fall back on rule based optimization and will not make these new performance features available.

The optimization mode is set in the Oracle parameters using the init.ora file with the OPTIMIZER_MODE variable. While the parameter is usually set to CHOOSE or RULE, the possible values are:

  CHOOSE: This means that Oracle must choose if it will use the cost based optimizer (CBO) or the rule based optimizer (RBO). Oracle makes this choice based on the availability of statistical information for at least one of the tables in the query.
  RULE: Use the RBO.
  ALL_ROWS: This means the optimizer always uses the CBO, even if no statistics are available, and tries to finish the query as soon as possible, maximizing throughput (good for large batch queries).
  FIRST_ROWS: The same as ALL_ROWS, but the optimizer tries to supply the first rows of the results as early as possible (good for small indexed queries). This mode may lead to performance degradation and is now kept only for backward compatibility.
  FIRST_ROWS_N: The same as FIRST_ROWS, but the optimizer tries to optimize the response time for the first N rows, where N may be 1, 10, 100, or 1000.

It can also be set at the session level by using the OPTIMIZER_GOAL or OPTIMIZER_MODE options of the alter session command. We do not generally recommend setting this at the session level, but the syntax is:

ALTER SESSION SET OPTIMIZER_GOAL = RULE

or for Oracle 8.1.5 onwards:

ALTER SESSION SET OPTIMIZER_MODE = RULE

Setting the optimizer mode to CHOOSE in the init.ora file, so that the cost based optimizer is used, is highly recommended in most cases. The main exception is when an application has been manually tuned by developers for the rule based optimizer. Even then it should be tested against the latest release of Oracle to check whether the cost based optimizer can now improve on the older rule based query plans and performance.

To determine if a table is analyzed, check the AVG_ROW_LEN column of the USER_TABLES table. If it is non-zero, the analyze or dbms_stats command has been run at least once on this table.

To determine if an index is analyzed, check the COLUMN_LENGTH column of the USER_IND_COLUMNS table. If it is non-zero, the analyze or dbms_stats command has been run at least once on this index.
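A sketch of an equivalent check using the LAST_ANALYZED columns of USER_TABLES and USER_INDEXES (standard data dictionary columns), which record when statistics were last gathered:

-- Tables and indexes that have never been analyzed (no optimizer statistics).
SELECT 'TABLE' object_type, table_name AS name, last_analyzed
FROM   user_tables
WHERE  last_analyzed IS NULL
UNION ALL
SELECT 'INDEX', index_name, last_analyzed
FROM   user_indexes
WHERE  last_analyzed IS NULL;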

If you analyze a table, its current indexes are automatically analyzed too. If you analyze an index, the table is not analyzed.

For tables and indexes where the data is highly skewed, it is worth creating histogram statistics. The database usually assumes that there is an even spread of values between the highest and lowest value found in a column. If this is not true, the data is skewed. For example, in England there are many people with the surnames Smith and Jones and hardly any starting with the letter Z, so a surname column has skewed values. Another example might be a column containing the year of each sales order: if a ten-year-old company is growing rapidly, it might find that the last year accounts for 60% of sales orders, which is highly skewed. The command will be similar to:

ANALYZE TABLE orders COMPUTE STATISTICS FOR COLUMNS release_date

or for Oracle 9i:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS('scott', 'orders', METHOD_OPT=>'FOR COLUMNS SIZE 10 release_date')

The default number of histogram buckets is 75. For table columns with large ranges and large numbers of clustered values, a higher number of buckets can be useful.

To investigate the histograms on the system, use the USER_HISTOGRAMS table. For example:

SELECT * FROM USER_HISTOGRAMS;

Collecting the full statistics using the analyze table <name> compute statistics command on large tables takes a lot of time and space in the system (roughly the cost of a full table scan and sort operation). For tables with an even distribution of data in the columns, this will yield little extra value over an estimated analysis using a sample of the rows. This is performed with the analyze table <name> estimate statistics command. We recommend an estimate of 5% of the rows as a minimum for large tables and their indexes. For example:

ANALYZE TABLE orders ESTIMATE STATISTICS SAMPLE 5 PERCENT

or for Oracle 9i:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS('scott', 'orders', ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE)

With the cost based optimizer, there is a further choice to make about the way the database is requested to provide the results. If your application can make use of the first few rows returned by a SQL statement, for example by displaying them on the user's screen, then an OPTIMIZER_GOAL of FIRST_ROWS_N can make the application look faster to the user. This is set at the session level with:

ALTER SESSION SET OPTIMIZER_GOAL = FIRST_ROWS_100

In general, it is hard to code an application this way, so it is not common. Otherwise, the OPTIMIZER_GOAL of ALL_ROWS will finish the query in the shortest possible time. This changes the access method the optimizer will choose, as some techniques provide the first rows earlier than others. The default is to maximize throughput of the system. We recommend using the default unless early screen updates are coded into the application.
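If changing the session-wide optimizer goal is too coarse, the same effect can be limited to a single statement with a hint. A minimal sketch, assuming Oracle 9i or later where the FIRST_ROWS(n) hint is available; the table and columns are hypothetical:

SELECT /*+ FIRST_ROWS(100) */ order_id, release_date
FROM   orders
ORDER BY release_date;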

Evaluate the top Oracle parameters

The non-default Oracle parameter values are worth investigating. Of course, many of them have to be non-default for the system to run at all. For Oracle 8i, see the UTLBSTAT/UTLESTAT monitoring tool. For Oracle 9i, see the Statspack package. Table 15-1 shows a summary of these top parameters.

Section, parameter(s) Benefit Risk
15.3.1, db_block_size Medium None
15.3.2, db_block_buffers or db_cache_size Very high None
15.3.3, disk_asynch_io Very high None
15.3.4, db_writer_processes and dbwr_io_slaves Medium None
15.3.5, shared_pool_size and sga_max_size Medium None
15.3.6, sort_area_size Medium Low
15.3.7, sql_trace Medium None
15.3.8, timed_statistics Medium None
15.3.9, optimizer_mode High Low
15.3.10, log_buffer Medium None
15.3.11, rollback_segments or undo_management Medium None
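To see which of these (and other) parameters are set to non-default values on a given instance, a query like the following against the standard V$PARAMETER view is a reasonable starting point:

-- List parameters that have been changed from Oracle's defaults.
SELECT name, value
FROM   v$parameter
WHERE  isdefault = 'FALSE'
ORDER BY name;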

Here is a relevant quote from Oracle database Performance Tuning FAQ - Oracle FAQ

Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.

Database sizes


What tools/utilities does Oracle provide to assist with performance tuning?

Oracle provides the following tools/utilities to assist with performance monitoring and tuning:


When is cost based optimization triggered?

It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.

Generally, the CBO can change the execution plan when you:


How can one optimize %XYZ% queries?

It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.

If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
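A minimal sketch of the idea, assuming a table emp with an index emp_ename_idx on ename (both names hypothetical); the INDEX_FFS hint asks the optimizer for a fast full scan of the index, which works here because the query needs only the indexed column:

SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
FROM   emp e
WHERE  ename LIKE '%SON%';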


Where can one find I/O statistics per table?

The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.

The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.

For more details, look at the header comments in the catio.sql script.
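On Oracle 9i and later, a rough alternative to the catio.sql approach is the V$SEGMENT_STATISTICS view, which exposes per-segment I/O counters directly; a sketch:

-- Segments ranked by physical reads (segment statistics require STATISTICS_LEVEL of TYPICAL or ALL).
SELECT owner, object_name, value AS physical_reads
FROM   v$segment_statistics
WHERE  statistic_name = 'physical reads'
ORDER BY value DESC;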


My query was fine last week and now it is slow. Why?

The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one taken when the query was performing well. Usually the previous plan is not available.

Some factors that can cause a plan to change are:

What do you think the plan should be? Run the query with hints to see if this produces the required performance.
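To capture the current plan for comparison, a sketch using the standard EXPLAIN PLAN statement and the DBMS_XPLAN package (available from Oracle 9i Release 2 onwards); the query itself is just a placeholder:

EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE release_date > SYSDATE - 7;

-- Display the plan just generated from the plan table.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);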


Does Oracle use my index or not?

One can use the index monitoring feature to check whether indexes are used by an application or not. When the MONITORING USAGE property is set for an index, one can query the v$object_usage view to see if the index is being used. Here is an example:

SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.

SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.

SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.

SQL>
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES NO

SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES YES

To reset the values in the v$object_usage view, disable index monitoring and re-enable it:

ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING   USAGE;

Why is Oracle not using the damn index?

This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things that can be checked are:

Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).

There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
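A minimal SQL*Plus sketch of that workflow (table, index, and column names are hypothetical): run the query with AUTOTRACE enabled, once as written and once with an explicit index hint, and compare the plans and statistics:

SET AUTOTRACE ON EXPLAIN STATISTICS
SELECT order_id FROM orders WHERE customer_id = 42;

SELECT /*+ INDEX(o orders_cust_idx) */ order_id
FROM   orders o
WHERE  customer_id = 42;
SET AUTOTRACE OFF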


When should one rebuild an index?

You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
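A sketch of that procedure; INDEX_STATS is a standard single-row view populated by the most recent ANALYZE ... VALIDATE STRUCTURE, and the index name is an example:

ANALYZE INDEX orders_release_date_idx VALIDATE STRUCTURE;

-- Ratio of deleted leaf rows to leaf rows; a high percentage suggests a rebuild.
SELECT name,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) AS pct_deleted
FROM   index_stats;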


How does one tune Oracle Wait event XYZ?

Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:


What is the difference between DBFile Sequential and Scattered Reads?

Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in 100's of a second for Oracle 8i releases and below, and 1000's of a second for Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.

db file sequential read:

A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read:

Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different, discontiguous buffers in the SGA. This statistic normally indicates disk contention on full table scans. Rarely, data from a full table scan can fit into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.

The following query shows average wait time for sequential versus scattered reads:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from   sys.v_$system_event a, sys.v_$system_event b
where  a.event = 'db file sequential read'
and    b.event = 'db file scattered read';

Database engines requirements for key subsystems of modern servers

The database server's primary function is to store, search, retrieve, and update data from disk. Examples of database engines include IBM DB2, Microsoft SQL Server, and Oracle. Due to the high number of random I/O requests that database servers are required to service and the computation-intensive activities that occur, the potential areas that have the most impact on performance are:

A balanced system is especially important: for example, if adding CPUs, consider upgrading the other subsystems as well, such as increasing memory and ensuring that disk resources are adequate. The key subsystems that influence database performance in these servers are:

There are also several subsystems that don't affect server performance; one is the video subsystem, which in a server is relatively insignificant.

Top Oracle parameters

Having said that, Oracle is strongly pushing the one-stop-shop solution of Enterprise Linux plus Oracle technology. Given that most of Oracle's application stack is certified for use on Red Hat Enterprise Linux, it would make sense for anyone currently deploying Oracle applications and considering a move to Linux to try Oracle's offering.

However, Red Hat have been in the Linux business a long time and are home to some top Linux bods, making their offering rock solid. Also, as Red Hat quite rightly point out there will always be a slight delay for important security fixes to trickle down to Oracle Enterprise Linux as they work to strip out any trademarked items and rebuild each patch.

More importantly though, Oracle are intent on applying bugfixes of their own to Enterprise Linux, something that invalidates any Red Hat support you may have and something that you seriously need to consider.

Unusually, Oracle are claiming that they will support your operating system indefinitely as part of the Premier Support package which works out at $1199 and $1999. This includes bugfixes for that version as well as support requests and access to software. Not even Red Hat provides this length of support, officially only supporting RHEL for a maximum of seven years. Our advice? Consider a switch to Oracle Enterprise Linux.



Old News ;-)

[Mar 19, 2012] Linux Free Memory: Is It Free Or Reclaimable

November 17, 2009

I have certain performance test harnesses that expect to have a lot of free memory. The test harnesses produce a lot of output that facilitate my performance analysis efforts. The output of these harnesses is mostly in the form of text files and pipe-delimited files (to be used by SQL*Loader) and CSV files which I upload to Excel.

Sometimes I execute these performance harnesses for long periods of time in which case I can generate a significant amount of file system content. The problem is that I wind up with little or no free memory.

I Want Free Memory, Not Reclaimable Memory
On Linux (as is the case with every Unix derivative) user memory allocations (e.g., stack/heap growth) can be satisfied by the OS through page reclaims. Page reclaims are simply the reuse of memory pages that have clean content, such as pages of text files read from the file system. A reclaimable page can be matched to a process request for memory extremely fast, I'll agree. However, there is still a difference between free memory and reclaimable memory. Whether that difference is no more significant than the output of such tools as free(1) is not the point of this blog entry. In fact, if you want to see the "buffer adjusted" free memory on Linux you can use the -o argument to the free(1) command. The problem for me can sometimes be that I don't want to munge through scripts and programs that expect to start with a lot of free memory and change them to weed out the reclaimable effect.

If you want to effectively clean out physical memory of all the cached file system dead wood, and you have proper permissions, you can write values into /proc/sys/vm/drop_caches and get the results you are looking for.

In the following example I have a 72GB physical memory system that is totally idle. There is a single instance of Oracle Database 11g Release 2 booted with an SGA of roughly 22GB. There are no significantly large processes running either. I'm basically sitting on about 50 GB of cached file system "stuff" that I don't want cached. As the example shows I've echoed the numeral 3 into drop_caches and the subsequent execution of free(1) shows the 50 GB of cached file system junk is now wiped out and shown under the "free" column.

# free
             total       used       free     shared    buffers     cached
Mem:      74027752   73688248     339504          0     528164   62777200
-/+ buffers/cache:   10382884   63644868
Swap:     16779884     954164   15825720
# echo 3 >/proc/sys/vm/drop_caches
#
# free
             total       used       free     shared    buffers     cached
Mem:      74027752   22956444   51071308          0       2144   13320800
-/+ buffers/cache:    9633500   64394252
Swap:     16779884     954164   15825720
#

Of course I understand that this is just a shuffle in memory accounting by the kernel, but for certain scripting purposes this might be helpful for someone searching the web someday.

[PDF] A Quantitative Comparison between Raw Devices and File Systems for implementing Oracle Databases

HP-UX 11i v3 is the first version to support async I/O on file systems.

The debate on whether to use a file system or raw devices for implementing databases still rages among Oracle database administrators. Most database administrators are familiar with the UNIX file system implementation, and prefer the ease of management of a file system as compared to a raw implementation. In addition, all UNIX commands and tools for file manipulation can still be used with a file system implementation. This is not the case for a raw database. Therefore, DBAs frequently choose file systems for implementing databases.

However, there is considerable performance overhead when using a file system to implement a database. The file locking inherent to UNIX file systems (files are exclusively accessed during write operations) and double buffering (once in kernel space, once in user space) introduce significant performance overhead. This paper attempts to quantify these overheads.

Our results indicate that database implementations using raw devices have the best performance, with 92% transactional throughput improvement over file-system based implementations (log option). We believe that administrators will progressively migrate to raw databases as database management tools, such as Oracle Enterprise Manager (OEM), continue to improve.

We used an OLTP workload for our evaluation. We report performance measurements of the Oracle database using raw logical volumes and using file system and explore the effect of using asynchronous and synchronous I/O operations for the raw device implementation. We also investigate various file system optimizations to determine their effect on database performance. We also vary the number of Oracle database writer processes and Oracle I/O slaves processes to get the best performance for a given configuration. The experiments tend to cover all the different combinations possible when implementing an Oracle database. The Oracle Stripe And Mirror Everything (SAME) methodology was used for the database layout for all the tests.

... ... ...

On HP-UX, asynchronous IO is only supported with a raw device (raw disk partition or raw logical volume), although this will change with HP-UX 11i v3 (internally known as 11.31). With asynchronous IO configured, Oracle can submit multiple IO requests in parallel. In some cases it may be necessary to use multiple database writer processes (DBWR) to stress the backend, but we were able to achieve this with only 2 DBWR processes. In our case, we did not see any improvement beyond 2 database writer processes. The right number could be determined through the Oracle IO statistics but is beyond the scope of this paper. The usage of asynchronous IO will require HP-UX kernel changes and modification of the Oracle initialization parameters file, init.ora. For more information, consult the Oracle Installation and Administrators Reference Guide documentation on HP.
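For reference, a rough sketch of the Oracle side of that configuration; disk_asynch_io and db_writer_processes are the documented initialization parameters, the values are purely illustrative, and on 9i+ with an spfile they can also be set via ALTER SYSTEM:

-- init.ora fragment (illustrative values, not a recommendation):
--   disk_asynch_io      = true
--   db_writer_processes = 2
-- Equivalent spfile changes (take effect after an instance restart):
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET db_writer_processes = 2 SCOPE = SPFILE;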

Pro Oracle Database 10g RAC on Linux: Installation ...

Optimizing Oracle 10g on Linux Non-RAC ASM vs. LVM

[Dec 12, 2007] Two blogs on Oracle optimization that are worth a look:

An Oracle take on Windows vs. Linux

www.oracle.com/technology/tech/dotnet/col/europe2006/ukoug_2006-best_practices_rac_on_64-bit_windows.ppt

"Load Test Windows 2003 v Red Hat 4.0 (64 bit)"
...here is my summary from their slides:

I didn't know Oracle was really getting into deciding which OS to prefer.

Frequent Linux crashes drove customer to Solaris

Sun-PostgreSQL win shows benefit of Solaris over Linux. Pretty interesting trends happening in open-source databases.

You can read more at: http://www.computerworld.com.au/index.php/id;413111662

This also shows why you can't just look at over-optimized benchmarks like TPC-C; one needs to look at real factors in the datacenter. For TPC-C issues see previous posts:

http://blogs.sun.com/bmseer/entry/judging_by_the_wrong_things

...and also some questionable(?) changes over time:
http://blogs.sun.com/bmseer/entry/ibm_tpc_c_more_hints

Recommended Links

Recommended Papers

Performance & Availability


