
Tuesday, August 16, 2016

OCP 11gR2: Administration II Practice Notes (9)

Lesson 09
===============================================================================
01. Resource Manager (limits CPU usage, undo generation, number of active sessions, etc.)
- Database consolidation: cap CPU consumption per instance with instance caging
- Set the default plan with the resource_manager_plan parameter
- OTHER_GROUPS must appear in every resource plan, and the level-1 percentages must not total more than 100%
- The ADMINISTER_RESOURCE_MANAGER privilege can be granted so another user can manage Resource Manager
e.g.  exec dbms_resource_manager_privs.grant_system_privilege('SCOTT','administer_resource_manager',false);

- OLTP users: most resources; DSS users: fewer resources; batch users: least resources
        Elements managed through the "dbms_resource_manager" package:
        - resource_consumer_group
        - resource_plan
        - resource_plan_directives
        - resource_allocation_methods

e.g.  create a consumer group named "EmpGroup" and add the user SCOTT to it,
        then create a new resource plan named "EmpPlan" which restricts CPU usage to 50%, the undo tablespace to 2MB (under the undo retention policy) and the maximum number of active sessions to 1. This plan only affects the "EmpGroup" consumer group.

        EM > Server > Resource Manager > Consumer Group > Create > EmpGroup (*round robin) > add SCOTT.
        EM > Server > Resource Manager > Consumer Group Mappings > add rule for selected user > SCOTT.
        EM > Server > Resource Manager > Plans > select "default_plan" and action "Create Like" > Go > plan name
        "EmpPlan" > click Modify to add "EmpGroup" to the resource allocation list > select mode: percentage, enter 50
        For EmpGroup, session pool: 1, undo quota: 200 > OK.
        Activate the plan via the scheduler or select Activate (24x7) > Go
        select sid, username, resource_consumer_group from v$session;
        EM > Resource Manager > Statistics
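The same setup can be scripted without EM. A hedged PL/SQL sketch, using the EmpGroup/EmpPlan names from the notes (the quota values are illustrative, not the exact EM defaults):

```sql
-- Sketch: build EmpGroup/EmpPlan inside a pending area, then map SCOTT to the group.
BEGIN
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'EMPGROUP', comment => 'Employee group');
  dbms_resource_manager.create_plan(plan => 'EMPPLAN', comment => 'Employee plan');
  dbms_resource_manager.create_plan_directive(
    plan => 'EMPPLAN', group_or_subplan => 'EMPGROUP',
    comment => 'Limit EmpGroup', mgmt_p1 => 50,
    active_sess_pool_p1 => 1, undo_pool => 200);
  dbms_resource_manager.create_plan_directive(
    plan => 'EMPPLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everyone else', mgmt_p1 => 50);
  dbms_resource_manager.set_consumer_group_mapping(
    attribute => dbms_resource_manager.oracle_user,
    value => 'SCOTT', consumer_group => 'EMPGROUP');
  dbms_resource_manager.submit_pending_area();
END;
/
```

Note that every plan needs a directive for OTHER_GROUPS, matching the rule above that OTHER_GROUPS must appear in every resource plan.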

* Round robin: multiple sessions take turns running statements
* Run to completion: session 2 must wait until session 1 completes
- list resource plans with "select plan, num_plan_directives, status, mandatory from dba_rsrc_plans;"
- create a Scheduler window to activate a resource plan
        EM > Server > Oracle Scheduler > Windows > Create > select "EmpPlan" and set it to recur every day at 9:00 A.M.
- switch or kill a session when its consumer group reaches a resource limit
        begin
         dbms_resource_manager.create_plan_directive(
         plan=>'EmpPlan', group_or_subplan=>'EmpGroup', comment=>'EmpGroup', mgmt_p1=>50,
         switch_group=>'low_group', switch_io_reqs=>10000, switch_io_megabytes=>2500, switch_for_call=>true);
        end;
        /
* A consumer group's CPU ceiling can be set with max_utilization_limit
* Instance caging is configured with the cpu_count parameter (plus an active resource plan)
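Instance caging itself is just two parameter settings; a minimal sketch (plan name and CPU count illustrative):

```sql
-- Instance caging: cap this instance at 2 CPUs.
-- A resource plan must be active for the cap to be enforced.
ALTER SYSTEM SET resource_manager_plan = 'default_plan';
ALTER SYSTEM SET cpu_count = 2;
```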




02. Oracle Scheduler jobs
- normal job creation steps: create program > create schedule > create job
e.g.  create a program, a schedule and a job that inserts 1 record into scott.emp
        EM > Server > Scheduler > Programs > 'name', enable, type PL/SQL, insert into scott.emp values(……
        EM > Server > Scheduler > Schedules > 'name', do not repeat > submit
        EM > Server > Scheduler > Jobs >
        To schedule a job under a resource consumer group, create a job class:
        EM > Server > Scheduler > Job Classes > create 'name', assign a consumer group, then modify the job to use the new job class.
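The EM steps above map onto dbms_scheduler calls; a hedged sketch (the names ins_emp_prog/once_sched/ins_emp_job and the inserted values are illustrative):

```sql
-- Sketch: program > schedule > job, the same flow as the EM walkthrough.
BEGIN
  dbms_scheduler.create_program(
    program_name   => 'ins_emp_prog',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'INSERT INTO scott.emp(empno, ename) VALUES (9999, ''TEST'');',
    enabled        => TRUE);
  dbms_scheduler.create_schedule(
    schedule_name   => 'once_sched',
    start_date      => SYSTIMESTAMP,
    repeat_interval => NULL);          -- do not repeat
  dbms_scheduler.create_job(
    job_name      => 'ins_emp_job',
    program_name  => 'ins_emp_prog',
    schedule_name => 'once_sched',
    enabled       => TRUE);
END;
/
```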

Lightweight jobs (not supported in EM)
- easy to create and store less metadata in the database; a lightweight job is not a schema object and cannot be granted to other users.
- must be based on a program (not an inline procedure, executable, Java, etc.)
e.g.          begin
                 dbms_scheduler.create_job(job_name=>'name', program_name=>'name',
                 schedule_name=>'name', job_style=>'LIGHTWEIGHT');
                end;
                /

Job chains
- if job 1 succeeds run job 2; if it fails run job 3, etc.

        Create a few programs > create the job chain > define steps > define rules…
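The success/failure branching above can be sketched with dbms_scheduler chain calls (prog1/prog2/prog3 are assumed to be existing programs; all other names are illustrative):

```sql
-- Sketch: if step1 succeeds run step2, if it fails run step3, then end the chain.
BEGIN
  dbms_scheduler.create_chain(chain_name => 'emp_chain');
  dbms_scheduler.define_chain_step('emp_chain', 'step1', 'prog1');
  dbms_scheduler.define_chain_step('emp_chain', 'step2', 'prog2');
  dbms_scheduler.define_chain_step('emp_chain', 'step3', 'prog3');
  dbms_scheduler.define_chain_rule('emp_chain', 'TRUE', 'START step1');
  dbms_scheduler.define_chain_rule('emp_chain', 'step1 SUCCEEDED', 'START step2');
  dbms_scheduler.define_chain_rule('emp_chain', 'step1 FAILED',    'START step3');
  dbms_scheduler.define_chain_rule('emp_chain',
    'step2 COMPLETED OR step3 COMPLETED', 'END');
  dbms_scheduler.enable('emp_chain');
END;
/
```

A job pointing at the chain (job_type CHAIN) then runs the whole graph as one unit.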



Monday, August 15, 2016

OCP 11gR2: Administration II Practice Notes (7)

Lesson 07
===============================================================================
Managing Memory

- Three modes: fully automatic (AMM), semi-automatic (ASMM), manual
Full auto (AMM):
- memory_target                <- total memory the database starts with; individual components adjust automatically
  memory_max_target        <- hard limit: memory_target (e.g. 500M) can be raised dynamically up to memory_max_target (e.g. 800M), but never beyond it.

- buffer cache (SGA)
  Based on db_block_size (8K default); if not using automatic management, set db_cache_size manually

- buffer states
  Pinned: locked in memory by a server or background process; contention here shows up as latch waits.
  Clean: an unpinned buffer whose contents already match the datafile on disk
  Free/unused
  Dirty: an unpinned buffer that has not yet been written to disk (typically within a few seconds).

- Multiple block sizes are supported in the same database (per the course notes, Unix only)
        Try: "create tablespace 'name' datafile 'location' size 50M blocksize 32768;"
- if you create a tablespace with a non-default block size, you must first configure the matching db_nK_cache_size
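For example, before creating a 32K tablespace you would size the matching cache first (the path and sizes are illustrative):

```sql
-- A non-default block size needs its own buffer cache component.
ALTER SYSTEM SET db_32k_cache_size = 64M;
CREATE TABLESPACE ts32k DATAFILE '/u01/app/oradata/ts32k01.dbf'
  SIZE 50M BLOCKSIZE 32K;
```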

How large should db_cache_size be?
- check v$db_cache_advice (size_factor=1 is the current setting); estd_physical_reads is disk I/O (lower is better).
        select size_factor, estd_physical_reads from v$db_cache_advice;
 * calculate the buffer cache hit ratio (statements that did not need disk I/O)
          select name, value from v$sysstat where name in ('db block gets from cache', 'consistent gets from cache', 'physical reads cache');
  Ratio = (1 - 'physical reads cache' / ('consistent gets from cache' + 'db block gets from cache')) x 100; closer to 100% is better
select ((1-(select value from v$sysstat where name='physical reads cache')/
((select value from v$sysstat where name='consistent gets from cache')+
(select value from v$sysstat where name='db block gets from cache')))*100) as cache_ratio from dual;

- multiple buffer pools
  db_recycle_cache_size: objects age out of this pool quickly (fast in, fast out)
  db_keep_cache_size: objects are retained in this pool for as long as possible (hours rather than seconds)
  db_cache_size: the default pool; objects age out under the normal LRU policy after a short time
e.g.          create index cust_ids …. storage (buffer_pool keep);
                alter table oe.customers storage (buffer_pool recycle);
                alter index oe.cust_lname_ix storage (buffer_pool keep);

Shared pool (manual)
- large pool
  RMAN backup and parallel query operations
  Shared server, Oracle XA (X/Open) <- 2PC/3PC (two/three-phase commit), server-to-server commit and rollback
  Managed by AMM or ASMM, or manually with the "large_pool_size" parameter

- log_buffer
  Circular buffer in the SGA -> LGWR -> redo log files (size depends on redo I/O; per the course notes, smaller is better)

* The SGA should not be paged out to virtual memory: set lock_sga to true (not supported with AMM)
* For heavily repeated SQL, pin the objects in the keep pool, enlarge the buffer cache, or use the Memory Advisor

AMM = automatic PGA and SGA (roughly a 40%/60% split)
ASMM = semi-automatic SGA, manual PGA
AMM & ASMM off = everything manual
Oracle recommends AMM unless you have special requirements.

- if memory_target is set to a bad value
        create pfile from spfile;
        edit memory_target in the pfile
        create spfile from pfile;
                or
        restore the spfile from autobackup
                or
        rebuild it from the parameter listing in alert.log at startup (control_files, memory_target, db_name)

- if memory_max_target is not set, it defaults to memory_target
  If memory_target is enabled together with sga_target and pga_aggregate_target, the latter two act as minimum values for the SGA and PGA
- monitor AMM with "v$memory_dynamic_components", "v$memory_resize_ops", "v$memory_target_advice"

ASMM
- disable memory_target, then set sga_target and pga_aggregate_target to use ASMM.
  Use it when you need a fixed PGA but a dynamically sized SGA
  When AMM is disabled, pga_aggregate_target is a fixed size
- under ASMM sga_target is dynamic; ASMM relies on the MMON process, and MMON uses the Memory Advisor.
- get a minimum value for sga_target while the database is in manual mode:
        select ((select sum(value) from v$sga) - (select current_size from v$sga_dynamic_free_memory))
        "sga_target" from dual;

Disabling ASMM (back to manual mode)
- with memory_target disabled and sga_target set to 0, the database writes the current auto-tuned component sizes to the individual parameters.
- check with: show parameter "pool_size", show parameter "db_cache_size"

- select * from v$sga_dynamic_components;


Sunday, August 14, 2016

OCP 11gR2: Administration II Practice Notes (6)

Lesson 06
===============================================================================
01. Oracle Total Recall (flashback data archive)
- Flashback data archive is enabled at the table level with a specified retention period (in years).
- FDA contents cannot be modified directly.
- Used for audit or historical reporting
- views: "dba_flashback_archive", "dba_flashback_archive_ts", "dba_flashback_archive_tables" (or user_)
- When data is modified > recorded in undo tbs > when the undo expires > the transactions are moved into the FDA (compressed)
- Better to create a dedicated tablespace to hold the FDA
- Per the course notes, not supported for tables on Exadata
- An FDA tablespace cannot be transported to another server with transportable tablespaces (use RMAN backup & restore)
- You can grant flashback archive administer to a user
e.g   create tablespace FDA_DATA_1 datafile '/location' size 5g autoextend on;
        create flashback archive fda1 tablespace FDA_DATA_1 retention 1 year;
        (don't set a quota; if the quota fills up, new transactions will stop!)
        alter table 'name' flashback archive fda1;
        - if the FDA_DATA_1 tbs fills up, you may create another tbs and add it to fda1:
        create tablespace FDA_DATA_2 datafile 'location' size 5g autoextend on;
        alter flashback archive fda1 add tablespace FDA_DATA_2;

Schema evolution
- after enabling FDA, some DDL is not supported (drop/add column, truncate)
- disassociate and re-associate a table with its FDA via dbms_flashback_archive
- if the source table's DDL changes, the FDA history table must change too
e.g.  exec dbms_flashback_archive.disassociate_fba('schema','table');
        exec dbms_flashback_archive.reassociate_fba('schema','table');
* remember to reassociate
* use the SCN for the first queries
* before flashback, commit or rollback first



02. Flashback Drop and the recycle bin
- the recycle bin is on by default; dropping a table just renames it
- the database cleans the recycle bin when space runs short
- flashback drop also restores the table's dependent objects (indexes, triggers…)
e.g   flashback table 'name' to before drop;
                or
        flashback table 'name' to before drop rename to 'name';
- manually clear the recycle bin with the purge command: "purge table 'name';"
- to bypass the recycle bin: "drop table 'name' purge;"
- check the "dba_recyclebin" view (or user_)
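A short sketch tying these commands together (the table name emp and the restored name are illustrative):

```sql
-- See what is in the recycle bin, restore a dropped table under a new name,
-- then empty the current user's recycle bin.
SELECT object_name, original_name, droptime FROM user_recyclebin;
FLASHBACK TABLE emp TO BEFORE DROP RENAME TO emp_restored;
PURGE RECYCLEBIN;
```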



03. Flashback Database
- like rewinding the database (flashback logs are normally kept for 1 to 2 days)
- used for logical corruption caused by a user (e.g. truncate table)
- the alternative to flashback is "RMAN recover" (when you give up on flashback database)
- cannot be performed when the controlfile is corrupted
- a dropped tablespace cannot be flashed back
- flashback database may also use RMAN backupsets, archived logs, redo and undo.
- the flashback retention should not exceed the RMAN retention
- enabling flashback logging in the FRA adds roughly 10% performance overhead
e.g.  shutdown immediate;
        startup mount exclusive; <- exclusive = a single DBA connection
        alter system set db_flashback_retention_target=2880; <- 2 days (in minutes)
        alter database flashback on;
        alter database open;
        select flashback_on from v$database;
        create restore point 'name' guarantee flashback database;
        truncate table 'name';
        shutdown immediate;
        startup mount exclusive;
        <RMAN / SQL> in exclusive mode;
        flashback database to time to_date('20150505 23:00:00','yyyymmdd hh24:mi:ss');
                or
        flashback database to scn xxxxxxx;
                or
        flashback database to restore point 'name';
        alter database open read only;
        - check the data
        alter database open resetlogs;

- flashback database view by "v$flashback_database_log;"
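A sketch of the kind of query this view supports, showing how far back a flashback can go and the estimated log size (column names per the 11g data dictionary):

```sql
SELECT oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       estimated_flashback_size / 1024 / 1024 AS estimated_mb
FROM   v$flashback_database_log;
```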


Tuesday, August 9, 2016

OCP 11gR2: Administration II Practice Notes (5)

Lesson 05
============================================================================
01. Diagnosing the database
- Data Recovery Advisor (DRA)
           Per the course notes, cannot detect every data block corruption
           Single instance only; not supported with RAC
           Supports failover to a standby, but not analysis and repair of a standby database
           * Create the standby database using Data Guard

* RMAN> change failure = close a failure or change its priority, e.g. "change failure 5 priority low;"
* RMAN> list failure [all | critical | high | low | closed]; default is all [exclude failure failnum]
* RMAN> advise failure [all | critical | high | low | closed]; default is all [exclude failure failnum]
* RMAN> repair failure [using advise option failnum] [preview | noprompt];
DRA views: v$ir_failure, v$ir_manual_checklist, v$ir_repair, v$ir_failure_set
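Put together, a typical DRA session in RMAN might look like this sketch:

```sql
RMAN> list failure;                  -- show open failures recorded by Health Monitor
RMAN> advise failure;                -- generate repair options (manual and automated)
RMAN> repair failure preview;        -- show the repair script without running it
RMAN> repair failure;                -- run the generated repair script
```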

- HM runs automatically and saves reports; DRA reads them (via "list failure") to check database health
- Proactive check: use the validate command "validate database plus archivelog;" to find data block corruption




02. Data failure examples
- inaccessible components:
           missing datafile at the O/S level, incorrect permissions, offline or deleted tbs
- physical corruption
           checksum failure or an invalid block header (e.g. a dbf from a different platform)
- logical corruption
           related to schema objects: corrupt index, corrupt transaction, inconsistencies
- controlfile inconsistencies
- I/O failures, e.g. reaching the O/S limit on max open files




03. Detecting block corruption
- physical corruption: "validate database plus archivelog;" checks all data blocks in the database
- logical corruption: "validate database plus archivelog check logical;" also checks indexes and other schema objects
- ORA-01578 appears in alert.log and in v$database_block_corruption (populated by the validate command); usually caused by hardware issues.
- don't run any O/S defragmentation tool against Oracle datafiles; it can cause corruption
- the analyze command can check whether an error is permanent, and
           collects statistics for a table or index with "analyze [table|index] 'name' compute statistics;"

Parameters for automatic corruption detection
- db_block_checking  <- default is false; Oracle recommends "full" [off | false | low | medium | full/true], ~10% overhead
                                            (after update/insert it checks the block header and object contents, except for indexes)
- db_block_checksum  <- default is typical; Oracle recommends typical [OFF | TYPICAL | FULL]
- db_lost_write_protect <- default is none; for standby configurations: detects lost writes of committed transactions on the standby
- db_ultra_safe               <- sets all three of the above at once [OFF | DATA_ONLY | DATA_AND_INDEX]

db_ultra_safe           OFF         DATA_ONLY   DATA_AND_INDEX
---------------------   ---------   ---------   --------------
db_block_checking       OFF/FALSE   MEDIUM      FULL/TRUE
db_block_checksum       TYPICAL     FULL        FULL
db_lost_write_protect   TYPICAL     TYPICAL     TYPICAL
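In other words, setting db_ultra_safe drives the other three parameters; a hedged sketch:

```sql
-- db_ultra_safe is a static parameter: set it in the spfile and restart.
ALTER SYSTEM SET db_ultra_safe = 'DATA_ONLY' SCOPE = SPFILE;
-- After the restart the derived values should follow the table above:
-- db_block_checking = MEDIUM, db_block_checksum = FULL, db_lost_write_protect = TYPICAL
```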




04. Block media recovery example
- check v$database_block_corruption (populated by the validate command)
           01. offline the tbs, corrupt some blocks with a hex tool, then online the tbs (errors may appear)
           02. RMAN> validate tablespace 'name' check logical;
           03. read alert.log and select * from v$database_block_corruption;
           04. RMAN> recover corruption list;
           05. RMAN> recover tablespace 'name'; then online the tbs
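When the corrupt blocks are already known, a single-block recovery is also possible instead of recovering the whole corruption list; a sketch (file and block numbers are illustrative):

```sql
RMAN> recover datafile 4 block 37;   -- block media recovery of one known-bad block
```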





05. Automatic Diagnostic Repository
Critical error -> ADR; location precedence:
                                           1. diagnostic_dest
                                           2. $ORACLE_BASE
                                           3. $ORACLE_HOME/log

- 11g no longer uses the core_dump_dest / user_dump_dest parameters
- can be checked from the EM database home page
- or with adrci (locations in v$diag_info)
           e.g. show incident, show problem, show alert…





06. Health Monitor (HM)
- v$hm_check lists the checks available for diagnostics
- view HM reports via v$hm_run, dbms_hm, adrci or EM
- run HM via dbms_hm or EM
           Example:
           exec dbms_hm.run_check('check_name','run_name');
           select dbms_hm.get_run_report('run_name') from dual;
                     or
           adrci -> show hm_run






07. Flashback
- most flashback features depend on the undo tablespace (automatic undo management, retention, guarantee)
- parallel insert consumes more undo
- objects in the SYS schema cannot be flashed back

Object Level   Scenario example                               Flashback Technology   Depends on               Affects data
------------   --------------------------------------------   --------------------   ----------------------   ------------
Database       Truncate table, multi-table change             Database               Flashback logs           True
Table          Drop table                                     Drop                   Recycle bin              True
Table          Update with the wrong WHERE clause             Table                  Undo data                True
Table          Compare current data with the past             Query                  Undo data                False
Table          Compare versions of a row                      Version                Undo data                False
Table          Keep historical transaction data               Data Archive           Undo data                True
Transaction    Investigate and back out suspect transaction   Transaction            Undo/redo, archive log   True

- flashback query
        1. query all data as of a specified point in time
        2. DML history only; DDL is not supported
 e.g.         select * from emp as of timestamp to_timestamp('nnnnnnnn','YYYYMMDD:HH24:MI:SS');
                select * from emp as of scn xxxxxxxx;


- flashback version query (does not show uncommitted history)
        1. see all versions of a row between two times.
        2. see the transactions that changed the row.
        3. not supported for external tables, temp tables, fixed tables (sys./X$ tables) or views.
        4. DDL is not supported
 e.g.         select versions_xid, versions_startscn, versions_endscn, versions_operation, ename, sal from
                emp versions between scn xxxxx and xxxxx;
                        or
                select versions_xid, versions_startscn, versions_endscn, versions_operation, ename, sal from
                emp versions between scn minvalue and maxvalue; (enable supplemental logging)
                        restore a row to a past value:
                update emp set sal=(select sal from emp as of timestamp to_timestamp('nnnnnnnnn','yyddmm hh24miss')
                where empno=2000) where empno=2000;


- flashback table
        1. not supported for the sys schema
        2. EM -> Perform Recovery -> table -> recover
        3. enable row movement first
e.g.          alter table emp enable row movement;
                grant flashback any table to scott;
                grant select, insert, update, delete on emp to scott;
                flashback table emp to scn XXXXXXX;


- flashback transaction
        1. see all changes made by a transaction (query by versions_xid)
e.g.          select * from flashback_transaction_query where xid='xxxxxxxx';
        2. enable supplemental logging:
                alter database add supplemental log data;

                alter database add supplemental log data (primary key) columns;
        3. each transaction can only be backed out once
        4. consider the transaction_backout options <- transaction dependency.

        Let's understand a concept before using the options: transaction dependency.
        For example, with 2 transactions TX1 and TX2, TX2 depends on TX1 if any of the following holds:
          01. WAW (Write After Write): TX1 modifies some rows and TX2 modifies the same rows again.
          02. Primary key dependency: on a table with a primary key, TX1 deletes a row and TX2 inserts a new row with the same primary key.
          03. Foreign key dependency: TX1 inserts or updates new foreign key data, and TX2 inserts or updates rows using that new foreign key data.

          Understanding transaction dependency helps to resolve conflicts. In point 02, for example, if you reverse TX1 but ignore TX2, the table would end up with a duplicate primary key.

     - [ NOCASCADE ]: TX1 must have no dependent transactions; otherwise the operation is cancelled with an error.
     - [ CASCADE ]: TX1 and TX2 are reversed together.
     - [ NOCASCADE_FORCE ]: ignore TX2 and reverse TX1; if no constraint conflicts, the operation succeeds, otherwise the conflicting constraint raises an error and the operation fails.
     - [ NONCONFLICT_ONLY ]: reverse TX1 without influencing TX2. Unlike NOCASCADE_FORCE, it filters TX1's undo SQL first to make sure the changes do not affect TX2.

     Example details for point 01 (WAW), for a table with only 3 rows and no constraints:

        Case 1: Original   Case 2: TX1 updates 3 rows   Case 3: TX2 updates 2 rows
        ID                 ID                           ID
        ---------          ---------                    ---------
        1                  11                           11
        2                  22                           222
        3                  33                           333

This is a typical WAW example: TX2 depends on TX1.
Now reversing TX1 with different backout options gives different results:
[NOCASCADE] raises "ORA-55504: Transaction conflicts in NOCASCADE mode"; the table stays in Case 3
[CASCADE] reverses the table back to Case 1
[NOCASCADE_FORCE] leaves the TX2 transaction untouched, but TX1 is reversed, giving:

ID
---------
1
222
333
       
This seems strange given the rule of [ NOCASCADE_FORCE ]: the backout should ignore TX2 and be performed on all rows, so why were rows 2 & 3 not reversed? Because in this sample the generated undo SQL includes the ROWID in its WHERE clause as a constraint, since the database has supplemental logging enabled.

[NONCONFLICT_ONLY] gives the same result as [NOCASCADE_FORCE] here. However, even though the result is the same, the process differs: the undo SQL for rows related to TX2 is filtered out first.

Real world example:
        Under the HR schema, the manager wants to increase Michael's salary by 10%.
        SQL> select first_name, salary from hr.employees where employee_id=201;
        FIRST_NAME                    SALARY
        --------------------                            ----------
        Michael                           13000

        However, by user mistake, all staff salaries are increased 500% and committed:
        SQL> update hr.employees set salary=salary*5;
        106 rows updated.
        SQL> commit;

        Then HR proceeds with Michael's 10% increase:
        SQL> update hr.employees set salary=salary*1.1 where employee_id=201;
        1 row updated.
        SQL> commit;

        The manager just wants Michael's salary to go from 13000 to 14300; because of the mistake, it is now 71500.
        A few minutes later, HR notices that all staff salaries are abnormal and asks the DBA to investigate. The DBA decides to use flashback_transaction_query to find the problem within the last 15 minutes.
        SQL> select distinct xid, commit_scn from flashback_transaction_query where table_owner='HR' and table_name='EMPLOYEES' and commit_timestamp > systimestamp - interval '15' minute order by commit_scn;
        The DBA finds 2 transactions in the last 15 minutes:
        XID                            COMMIT_SCN
        ----------------                        ----------
        0300080081090000          3062978
        01001D00AE080000        3063594

        The DBA checks XID 0300080081090000 and finds the abnormal bulk salary update:
        SQL> select undo_sql from flashback_transaction_query where xid='0300080081090000';
        UNDO_SQL
        ----------------------------------------------------------------------------------------------------
        update "HR"."EMPLOYEES" set "SALARY" = '10000' where ROWID = 'AAAVTFAAFAAAADOAAJ';
        update "HR"."EMPLOYEES" set "SALARY" = '8300' where ROWID = 'AAAVTFAAFAAAADOAAI';
        update "HR"."EMPLOYEES" set "SALARY" = '10000' where ROWID = 'AAAVTFAAFAAAADOAAG';
        update "HR"."EMPLOYEES" set "SALARY" = '3000' where ROWID = 'AAAVTFAAFAAAADNABh';
        ..
        …
        Now the DBA tries transaction_backout with the nocascade option to roll back that transaction:
        exec dbms_flashback.transaction_backout(1,xid_array('0300080081090000'),dbms_flashback.nocascade);
        *
        ERROR at line 1:
        ORA-55504: Transaction conflicts in NOCASCADE mode
        The result shows a dependency, so the reasonable option is to roll back all changes with the cascade option:
        exec dbms_flashback.transaction_backout(1,xid_array('0300080081090000'),dbms_flashback.cascade);
        PL/SQL procedure successfully completed.
        After running the flashback transaction, the DBA checks the report and finds both transactions rolled back:
        SQL> select xid,dependent_xid,backout_mode from dba_flashback_txn_state;
        XID                    DEPENDENT_XID           BACKOUT_MODE
        ----------------                ----------------                        ----------------
        01001D00AE080000                                 CASCADE
        0300080081090000    01001D00AE080000 CASCADE
        Finally, the DBA confirms Michael's salary is back to the original value.


- parallel DML enhances performance
 e.g.         alter session enable parallel dml;
                insert /*+ parallel */ into emp select * from emp;
                commit;
                alter session disable parallel dml;
                        or

                select /*+ parallel */ count(*) from sys.all_objects;