SQL Tuning: Where to START?

July 7, 2014

I have found the following Oracle Support notes to be a good starting point when tuning SQL:

  • TROUBLESHOOTING: Tuning a New Query (Doc ID 372431.1)
  • Tuning Queries: ‘Quick and Dirty’ Solutions (Doc ID 207434.1)
  • TROUBLESHOOTING: Advanced Query Tuning (Doc ID 163563.1)
  • Troubleshooting: High CPU Utilization (Doc ID 164768.1)
  • Potentially Expensive Query Operations (Doc ID 162142.1)

Cache Buffer Chains

April 24, 2014

Recently, I have been seeing cache buffers chains (CBC) waits in our OLTP database. We have been working to resolve them for more than two weeks now. I came across the following interesting articles:

 

We have received some recommendations from Oracle. We are also trying hash partitioning on our hot indexes. Let's see how it goes.
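For anyone curious what that looks like: hash partitioning a hot index spreads its leaf blocks (and the cache buffers chains latches protecting them) across multiple segments. A minimal sketch, using a hypothetical ORDERS table and index name (not our actual schema):

    -- Hypothetical example: replace a hot single-segment unique index
    -- with a hash-partitioned one so the leaf blocks are spread
    -- across 16 segments, reducing contention on any single block.
    drop index orders_pk_idx;

    create unique index orders_pk_idx on orders (order_id)
      global partition by hash (order_id)
      partitions 16;

The partition count is a tuning choice; powers of two are the usual recommendation for hash partitioning.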

Oracle Parallelism: Handle with care

May 28, 2013

Recently, I have been investigating a performance issue in one of my production data warehouse environments and started reading about controlling and tuning parallelism on the database server.

Two very good links I hit during this process:

1. http://oracledoug.com/px.pdf

2. https://blogs.oracle.com/datawarehousing/entry/workload_management_statement
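In the meantime, a quick sketch of the standard knobs for reining in parallelism (table name and values are illustrative, not recommendations):

    -- Cap the degree of parallelism any single statement can get
    -- under Auto DOP.
    alter system set parallel_degree_policy = 'AUTO';
    alter system set parallel_degree_limit  = 8;

    -- Or pin the default DOP on a specific object.
    alter table sales parallel 4;

    -- Or override it per statement with a hint.
    select /*+ parallel(s, 4) */ count(*) from sales s;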

This post will be updated with more details and resources on parallel tuning.

High Redo Logs or Archives?

August 11, 2011

It is sometimes crazy to see the archive destination filling up fast while your RMAN job cannot keep up with the redo log generation rate. Cleaning up logs is not a solution if your data is critical and you require PITR (point-in-time recovery). The DBA has a responsibility to identify the root cause. Here I describe one such case and how to identify the culprit.

Approach

  • Identify the log generation rate and timing: how many logs are generated, and when? Use the script below to see the trend.
    select to_char(FIRST_TIME,'MM/DD') day,
    to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') h00,
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') h01,
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') h02,
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') h03,
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') h04,
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') h05,
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') h06,
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') h07,
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') h08,
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') h09,
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') h10,
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') h11,
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') h12,
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') h13,
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') h14,
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') h15,
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') h16,
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') h17,
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') h18,
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') h19,
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') h20,
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') h21,
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') h22,
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') h23,
    count(*) Total
    from v$log_history
    where first_time > sysdate - 8  -- last 7 days' trend
    group by to_char(first_time, 'MM/DD')
    order by to_char(first_time, 'MM/DD') desc;
    
  • For the given time period, take an AWR or Statspack report and identify the SQL generating the most redo. If AWR/Statspack is not configured, try searching v$sql for DML, e.g. where upper(sql_text) like '%INSERT%' or upper(sql_text) like '%DELETE%' or upper(sql_text) like '%UPDATE%', with rows_processed > 1000.
  • If you are not lucky enough to find it in the library cache, you still have the LogMiner option, which is guaranteed to identify the culprit SQL.
  • The most systematic approach is to use the LogMiner utility provided by Oracle, but it requires additional setup. It is described in detail in MOS Doc ID 300395.1.
  • You can also look at the top sessions doing the most block changes; more block changes mean more redo.
    -- to see which session is doing how many block changes
    SELECT
      s.sid,
      s.serial#,
      s.username,
      s.program,
      i.block_changes
    FROM v$session s, v$sess_io i
    WHERE s.sid = i.sid
    ORDER BY i.block_changes DESC;
    
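The v$sql search described above can be sketched as follows (the row threshold is illustrative; adjust to your workload):

    -- Look for DML in the library cache that processed many rows;
    -- such statements are the usual redo generators.
    select sql_id, executions, rows_processed,
           substr(sql_text, 1, 80) sql_text
    from   v$sql
    where (upper(sql_text) like 'INSERT%'
        or upper(sql_text) like 'UPDATE%'
        or upper(sql_text) like 'DELETE%')
      and  rows_processed > 1000
    order by rows_processed desc;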
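For reference, a minimal LogMiner session looks roughly like this (the log file path is hypothetical; this assumes the online catalog is usable as the dictionary, and Doc ID 300395.1 covers the full setup):

    -- Register an archived log and start LogMiner using the online catalog.
    begin
      dbms_logmnr.add_logfile(
        logfilename => '/arch/redo_1234.arc',   -- hypothetical path
        options     => dbms_logmnr.new);
      dbms_logmnr.start_logmnr(
        options => dbms_logmnr.dict_from_online_catalog);
    end;
    /

    -- Which users/segments generated the most redo records?
    select seg_owner, seg_name, operation, count(*)
    from   v$logmnr_contents
    group by seg_owner, seg_name, operation
    order by count(*) desc;

    -- Clean up.
    exec dbms_logmnr.end_logmnr;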

 

Update: using “Snapper”

I found an article by Tanel Poder on the same topic, and his Snapper script is quite useful for identifying the users generating huge amounts of redo. Quick tip: use the snapredo script he provides.

Related MOS Notes:

Note 188691.1  : How to Avoid Generation of Redolog Entries
Note 167492.1  : How to Find Sessions Generating Lots of Redo
Note 199298.1  : Diagnosing excessive redo generation

If you have any other idea, please share in comments. :-)

Full table scans internals

August 11, 2011

RAC Hacking Session by Riyaj Shamsudeen

August 10, 2011


RAC Hacking Session: Probing LMS Process Internals With Advanced UNIX Utilities from poderc seminars on Vimeo.
A deep-dive discussion about the LMS background process (global cache service). Riyaj shows the techniques he uses to study the behavior of RAC processes. In this session, he probes the internals of the LMS process using DTrace, truss, and other advanced UNIX utilities. After watching, you'll understand the inner workings of the LMS process better and be able to do your own probing at home too (try it at home, not on a production system! :)

E-Business Suite Performance Tuning

August 6, 2011

I was going through Steven Chan’s blog and found a very good presentation on EBS performance tuning. It lays out the approach, note references, and tools to use during an investigation, and covers all aspects of the system: client browser, middleware, network, and database. It is a must-have reference for all Oracle Apps DBAs.

Apart from the tools mentioned, I strongly recommend ASH Viewer and Tanel Poder’s Snapper when tuning the database.

Presentation (pdf 1.9M)

Source: http://blogs.oracle.com/stevenChan/entry/oaug_collaborate_recap_best_practices
