Monthly Archives: August 2011

High Redo Logs or Archives?

It is sometimes crazy to see the archive destination filling up fast while your RMAN job is not able to cope with the redo log generation rate. Cleaning up logs is not a solution if your data is critical and you require PITR (point-in-time recovery). The DBA has a responsibility to identify the root cause. Here I have taken one such case to show how to identify the culprit.

Approach

  • Identify the log generation rate and timing: how many logs are generated, and when? Use the script below to see the trend.
    select to_char(FIRST_TIME,'MM/DD') day,
    to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') h00,
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') h01,
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') h02,
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') h03,
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') h04,
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') h05,
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') h06,
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') h07,
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') h08,
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') h09,
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') h10,
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') h11,
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') h12,
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') h13,
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') h14,
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') h15,
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') h16,
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') h17,
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') h18,
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') h19,
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') h20,
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') h21,
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') h22,
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') h23,
    count(*) Total
    from v$log_history
    where
    FIRST_TIME > sysdate - 8 -- last 8 days' trend
    group by to_char(FIRST_TIME,'MM/DD') order by substr(to_char(FIRST_TIME,'MM/DD'),1,10) desc ;
    
  • For the given time period, take the AWR or statspack report and identify the SQL generating the most redo. If statspack/AWR is not configured, then try to find it in v$sql by searching for statements where upper(sql_text) like '%INSERT%' or '%DELETE%' or '%UPDATE%' and rows_processed > 1000 (see the v$sql sketch after this list).
  • If you are not lucky enough to get it from the library cache, then as a last resort you have the LogMiner option, which is guaranteed to identify the culprit SQL.
  • The most systematic approach is to use the LogMiner utility provided by Oracle, but it requires additional setup. It is described in detail in MOS Doc ID 300395.1; a minimal LogMiner sketch also follows this list.
  • You can also see the top sessions doing the most block changes. More block changes mean more redo.
    --  to see which session is doing how many block changes.
    SELECT
    s.sid,
    s.serial#,
    s.username,
    s.program,
    i.block_changes
    FROM v$session s, v$sess_io i
    WHERE s.sid = i.sid
    ORDER BY 5 desc;
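For the v$sql check mentioned above, a query along these lines works (a rough sketch only; rows_processed and executions are cumulative since the cursor was loaded, so treat large values as hints to be confirmed with AWR/statspack or LogMiner):

-- Library-cache scan for DML statements that have processed many rows.
SELECT sql_id,
       executions,
       rows_processed,
       substr(sql_text, 1, 80) sql_text
FROM v$sql
WHERE (upper(sql_text) LIKE '%INSERT%'
       OR upper(sql_text) LIKE '%UPDATE%'
       OR upper(sql_text) LIKE '%DELETE%')
AND rows_processed > 1000
ORDER BY rows_processed DESC;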
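For the LogMiner route, the bare minimum looks roughly like this (a sketch only; the archive log file name below is a placeholder, and the full setup is covered in MOS Doc ID 300395.1):

-- Load one archive log and mine it using the online catalog as the dictionary.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_1234.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Which segments and operations dominate that log?
SELECT seg_owner, seg_name, operation, count(*) cnt
FROM v$logmnr_contents
GROUP BY seg_owner, seg_name, operation
ORDER BY cnt DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR;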
    

 

Update: Using “Snapper”

I found an article by Tanel Poder on the same topic, and his Snapper script is quite useful in identifying the users generating huge amounts of redo. Quick tip: use the snapredo script he provides.
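If Snapper is not at hand, the underlying statistic can also be read directly (a simple sketch; 'redo size' in v$sesstat is cumulative since logon, so sample it twice and compare the values to see the current rate):

-- Redo generated per session since logon ('redo size' statistic).
SELECT s.sid, s.username, s.program, st.value redo_bytes
FROM v$sesstat st, v$statname sn, v$session s
WHERE sn.statistic# = st.statistic#
AND st.sid = s.sid
AND sn.name = 'redo size'
ORDER BY st.value DESC;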

Related MOS Notes:

Note 188691.1: How to Avoid Generation of Redolog Entries
Note 167492.1: How to Find Sessions Generating Lots of Redo
Note 199298.1: Diagnosing Excessive Redo Generation

If you have any other ideas, please share them in the comments. 🙂


RAC Hacking Session by Riyaj Shamsudeen


RAC Hacking Session: Probing LMS Process Internals With Advanced UNIX Utilities from poderc seminars on Vimeo.
A deep dive discussion about the LMS background process (global cache service). Riyaj will show you the techniques that he uses to study the behavior of RAC processes. For this session, he'll probe the internals of the LMS process using DTrace, truss, and other advanced UNIX utilities. After attending this session, you'll understand the inner workings of the LMS process better, and you'll learn to do your own probing at home too (try it at home, not on a production system! 🙂)

E-Business Suite Performance Tuning

I was going through Steven Chan’s blog and found a very good presentation on EBS performance tuning. It gives the approach, note references, and tools to be used during an investigation. It covers all aspects of the system: client browser, middleware, network, and database. It is a must-read for all Oracle Apps DBAs.

Apart from the tools mentioned, I strongly recommend using ASH Viewer and Tanel Poder’s Snapper when tuning the database.

Presentation (pdf 1.9M)

Source: http://blogs.oracle.com/stevenChan/entry/oaug_collaborate_recap_best_practices