How do you enable SQL trace for a session?
ALTER SESSION SET SQL_TRACE = TRUE;
SELECT COUNT(*)
FROM dual;
ALTER SESSION SET SQL_TRACE = FALSE;
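If you also want wait and bind information in the trace, the extended SQL trace event can be used instead of SQL_TRACE. A minimal sketch, assuming you have the ALTER SESSION privilege; the TRACEFILE_IDENTIFIER value here is just an arbitrary tag to make the trace file easier to find:

-- Optional: tag the trace file name so it is easy to locate.
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'my_trace';

-- Level 12 = waits (level 8) plus bind values (level 4).
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dual;

-- Switch extended tracing off again.
ALTER SESSION SET EVENTS '10046 trace name context off';

Either way, the session writes a trace file that TKPROF can process as described below.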
The resulting trace file will be located in the USER_DUMP_DEST directory. This can then be interpreted using TKPROF at the command prompt as follows.
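If you are not sure where the trace file ended up, you can ask the database. A hedged sketch; the USER_DUMP_DEST parameter applies to older releases, while on 11g and later the location is normally reported by V$DIAG_INFO:

-- Older releases: the trace directory is the user_dump_dest parameter.
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';

-- 11g and later: the diag trace directory and the current session's trace file.
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';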
TKPROF <trace-file> <output-file> explain=user/password@service table=sys.plan_table
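As a concrete illustration (the file names, credentials and service name below are placeholders, not values from this example), a fuller invocation might also suppress recursive SYS statements and sort the report so the most expensive statements appear first:

TKPROF ora_12345_my_trace.trc trace_report.txt explain=scott/tiger@orcl sys=no sort=prsela,exeela,fchela

The sort keys prsela, exeela and fchela order statements by elapsed time spent parsing, executing and fetching, which is usually what you want when hunting for the slowest SQL.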
The resulting output file contains the following type of information for all SQL statements processed, including the ALTER SESSION commands.
*****************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
******************************************************************************
SELECT COUNT(*)
FROM dual
call     count    cpu  elapsed    disk   query  current    rows
-------  -----  -----  -------  ------  ------  -------  ------
Parse        1   0.02     0.02       0       0        0       0
Execute      1   0.00     0.00       0       0        0       0
Fetch        2   0.00     0.00       0       1        4       1
-------  -----  -----  -------  ------  ------  -------  ------
total        4   0.02     0.02       0       1        4       1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 1
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
1 TABLE ACCESS FULL DUAL
Understanding the Trace File
Things to look out for in the trace include:
When tracing lots of statements at once, such as batch processes, you can quickly discard the statements with acceptable CPU times and focus on those consuming most of the CPU time.
Inefficient statements are mostly associated with a high number of block visits. The query column indicates block visits for read consistency, including all query and subquery processing. The current column indicates visits not related to read consistency, such as segment headers and blocks that are going to be updated.
The disk column shows the number of blocks read from disk. Since disk reads are slower than memory reads, you would expect this value to be significantly lower than the sum of the query and current columns. If it is not, you may have an issue with your buffer cache.
Locking problems and inefficient PL/SQL loops may lead to high cpu/elapsed values even when block visits are low.
Multiple parse calls for a single statement suggest a library cache issue.
Once you've identified your problem statements you can check the execution plan to see why the statement is performing badly.
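One common way to do that is with EXPLAIN PLAN and DBMS_XPLAN, shown here as a sketch against a hypothetical table called emp:

EXPLAIN PLAN FOR
SELECT *
FROM emp
WHERE deptno = 10;

-- Display the plan that was just explained.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Note that if you ran TKPROF with the explain= option, the report already includes an execution plan for each statement.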