Tuesday, August 16, 2016

OCP 11gR2: Administration II Practice Notes (Final)

Lesson 10
===============================================================================
- OMF (db_create_file_dest)
- Block space: in 11gR2, PCTFREE defaults to 10%
- Row chaining and row migration affect performance. PCTFREE 10% reserves space in each data block so UPDATE
 statements can grow rows in the same block; if the block has insufficient space, the row is migrated to a new
 data block and the original row header is updated to point to it, so each read costs an extra block access.
 The Segment Advisor can discover row migration.

 A segment (table, index, undo…) contains many extents (groups of data blocks). Each extent has bitmap blocks
 (BMBs) that track the space status of its data blocks; when a data block changes, Automatic Segment Space
 Management updates the bitmap records automatically. This saves the DBA time shrinking tables manually.

- Extent allocation is either UNIFORM or AUTOALLOCATE
        Check the extent map in EM > tbs > view tbs > show tbs contents

- Segment Advisor: checks the extents in a tablespace
        EM > Advisor Central > Segment Advisor > add tbs > submit

- Allocating space (when deferred_segment_creation=true)
 Creating a table without inserting data uses no storage in the tablespace (see the sketch below)
 Configure it in the spfile, with ALTER SESSION, or in the CREATE TABLE clause
* Unusable indexes likewise use no storage until they are rebuilt
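A minimal sketch of deferred segment creation (the table name demo_tab is illustrative):
e.g.  alter session set deferred_segment_creation=true;
        create table scott.demo_tab (id number) segment creation deferred;
        select segment_name from dba_segments where segment_name='DEMO_TAB'; <- no rows yet
        insert into scott.demo_tab values (1); <- the segment is created on the first insert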

- Compression (to use table compression for DML, compression must be enabled at the tablespace or table level)
 Options: {COMPRESS BASIC | COMPRESS FOR OLTP | NOCOMPRESS}
        "BASIC" is for bulk loads (SQL*Loader direct path), command: "create table ….compress basic;"
        "FOR OLTP" supports DML, command: "create table …. compress for OLTP;"
        - not supported for clustered tables or index-organized tables (IOT)
        - will not compress existing records (for existing records, use Data Pump export and import)
        - predict the compression ratio with the dbms_compression package
e.g.          set serveroutput on;
                declare
                 lv_cmp_ratio number;
                 lv_comptype_str varchar2(300);
                 lv_BLKCNT_CMP number;
                 lv_BLKCNT_UNCMP number;
                 lv_ROW_CMP number;
                 lv_ROW_UNCMP number;
                begin
                 dbms_compression.GET_COMPRESSION_RATIO(
                 SCRATCHTBSNAME=>'USERS',
                 OWNNAME=>'SCOTT',
                 TABNAME=>'BIGFILE',
                 PARTNAME =>null,
                 COMPTYPE =>2,
                 BLKCNT_CMP=>lv_BLKCNT_CMP,
                 BLKCNT_UNCMP=>lv_BLKCNT_UNCMP,
                 ROW_CMP =>lv_ROW_CMP,
                 ROW_UNCMP =>lv_ROW_UNCMP,
                 CMP_RATIO=>lv_cmp_ratio,
                 COMPTYPE_STR=>lv_COMPTYPE_STR);
                 dbms_output.put_line('==============================');
                 dbms_output.put_line('1. Compression Ratio      :'||lv_cmp_ratio);
                 dbms_output.put_line('2. Compression Type       :'||lv_comptype_str);
                 dbms_output.put_line('3. Blk Count Compressed   :'||lv_BLKCNT_CMP);
                 dbms_output.put_line('4. Blk Count Un-compressed:'||lv_BLKCNT_UNCMP);
                 dbms_output.put_line('5. Row Count Compressed   :'||lv_row_cmp);
                 dbms_output.put_line('6. Row Count Un-compressed:'||lv_row_uncmp);
                 dbms_output.put_line('==============================');
                end;
                /

- Proactive tablespace monitoring
        By thresholds: tablespace warning/critical alerts (raised by MMON)
        By AWR: compare snapshots between two points in time

- Segment shrink
        Find the high water mark with: "select blocks, empty_blocks from dba_tables where…;"
e.g.  alter table 'name' enable row movement;
        alter table 'name' shrink space compact;
        alter table 'name' shrink space [ cascade ];
* the cascade clause extends the shrink operation to all dependent segments (indexes)
                Or use the Segment Advisor:
        EM > Advisor Central > Segment Advisor > tbs > select all permanent tbs > submit
* the Segment Advisor supports permanent tablespaces only
* automatic maintenance runs the Segment Advisor every day; check the report in Scheduler Central >
 Automatic Maintenance Tasks to find any recommendations

- Resumable space allocation (if a tablespace fills up, the statement suspends instead of failing)
        Enable via the resumable_timeout parameter (spfile) or per session; see the sketch below
        A suspended statement resumes once the space problem is fixed; otherwise it aborts when the timeout expires.
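A minimal sketch (the 3600-second timeout is illustrative):
e.g.  alter system set resumable_timeout=3600; <- instance-wide, in seconds
        alter session enable resumable timeout 3600; <- or per session
        select name, status, error_msg from dba_resumable; <- monitor suspended statements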

- 4 KB-sector disks (older HDDs use 512 B sectors; newer large drives use 4 KB sectors)
 * 11gR2 auto-detects the sector size. If db_block_size is 4096 (4 KB), the database reads/writes a data block
  in a single I/O, improving performance. The BIOS must also be set to native mode (emulation mode adds
  overhead). In native mode, consider "blocksize 4096" for the redo logs (and hence archived logs) too; see
  the sketch below.
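A hedged sketch of adding a 4 KB-block redo log group (path, group number and size are illustrative; requires a 4 KB-sector disk):
e.g.  alter database add logfile group 4 ('/u01/app/oracle/oradata/orcl/redo04a.log') size 100m blocksize 4096;
        select group#, blocksize from v$log;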

- Transporting tablespaces
        SYSTEM and SYSAUX tablespaces cannot be transported
Flow:           make the tablespace read-only
                use Data Pump to extract the metadata
                check the platform endianness; if the endian formats differ, convert the datafiles with RMAN (better performed on the target)
                "select * from v$transportable_platform;" and "select platform_name from v$database;"
                copy the datafiles and the dump file to the target
                use Data Pump to import the metadata and make the tablespace read/write

e.g.          exec dbms_tts.transport_set_check('tbs',true);
                select * from transport_set_violations;
                alter tablespace 'name' read only;
                mkdir -p /tmp/transport
                create or replace directory transport as '/tmp/transport';
                expdp \'/ as sysdba \' logfile=trans.log dumpfile=trans.dmp directory=transport transport_tablespaces='name'
                copy tbs.dbf to /tmp/transport
                copy the whole /tmp/transport folder to the target server

                on the target server:
                move the dbf files to the same path as on the source server
                create or replace directory transport as '/tmp/transport';
                impdp \'/ as sysdba \' logfile=trans.log dumpfile=trans.dmp directory=transport transport_datafiles='/tmp/transport/tbs.dbf'
                alter tablespace 'name' read write;

                if conversion is needed (run on the target):
                RMAN> CONVERT DATAFILE '/tmp/transport/name.dbf'
                FROM PLATFORM="Microsoft Windows x86 64-bit"
                DB_FILE_NAME_CONVERT= '/tmp/transport/', '/tmp/transport/import';


OCP 11gR2: Administration II Practice Notes (9)

Lesson 09
===============================================================================
01. Resource Manager (CPU usage, undo generation, number of active sessions….)
- Database consolidation: cap per-instance CPU consumption with instance caging
- Set the default plan with the resource_manager_plan parameter
- OTHER_GROUPS must be included in the resource plan, and the level-1 percentages must not total more than 100%
- The administer_resource_manager privilege can be granted so other users can manage it
e.g.  exec dbms_resource_manager_privs.grant_system_privilege('SCOTT','administer_resource_manager',false);

- OLTP users: more resources, DSS users: fewer resources, batch users: least resources
        Managed through the "dbms_resource_manager" package:
        - resource consumer groups
        - resource plans
        - resource plan directives
        - resource allocation methods

e.g.  Create a consumer group named "EmpGroup" and add the user scott to "EmpGroup".
        Create a new resource manager plan named "EmpPlan" which restricts CPU usage to 50%, the undo tablespace
        quota to 2 MB (under the undo retention policy) and the maximum number of active sessions to 1. This plan
        only affects the "EmpGroup" consumer group.

        EM > Server > Resource Manager > Consumer Groups > Create > EmpGroup (*round robin) > add Scott.
        EM > Server > Resource Manager > Consumer Group Mapping > add rule for selected user > Scott.
        EM > Server > Resource Manager > Plans > select "default_plan" and action "create like" > Go > plan name
        "EmpPlan" > click modify to add "EmpGroup" to the Resource Allocations list > select mode: percentage, type 50.
        For EmpGroup, session pool: 1, undo quota: 2 MB > OK.
        Activate the plan via the scheduler or select activate (7x24) > Go
        select sid, username, resource_consumer_group from v$session;
        EM > Resource Manager > Statistics
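The same EmpGroup/EmpPlan setup can be scripted with dbms_resource_manager; a minimal sketch (the 2048 KB undo_pool matches the 2 MB quota above):
e.g.          begin
                 dbms_resource_manager.create_pending_area();
                 dbms_resource_manager.create_consumer_group(consumer_group=>'EmpGroup', comment=>'group for scott');
                 dbms_resource_manager.create_plan(plan=>'EmpPlan', comment=>'demo plan');
                 dbms_resource_manager.create_plan_directive(
                  plan=>'EmpPlan', group_or_subplan=>'EmpGroup', comment=>'cap EmpGroup',
                  mgmt_p1=>50, active_sess_pool_p1=>1, undo_pool=>2048); -- undo_pool is in KB
                 dbms_resource_manager.create_plan_directive(
                  plan=>'EmpPlan', group_or_subplan=>'OTHER_GROUPS', comment=>'everyone else', mgmt_p1=>50);
                 dbms_resource_manager.validate_pending_area();
                 dbms_resource_manager.submit_pending_area();
                 dbms_resource_manager_privs.grant_switch_consumer_group('SCOTT','EmpGroup',false);
                 dbms_resource_manager.set_initial_consumer_group('SCOTT','EmpGroup');
                end;
                /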

* Round robin: multiple sessions take turns to run statements
* Run to completion: session 2 must wait for session 1 to complete
- Review existing resource plans with "select plan, num_plan_directives, status, mandatory from dba_rsrc_plans;"
- Create a scheduler window for the resource manager
        EM > Server > Oracle Scheduler > Windows > create > select "EmpPlan" and set it to recur every day at 9:00 A.M.
- Switch or kill a session when its consumer group reaches a resource limit
        begin
         dbms_resource_manager.create_plan_directive(
         plan=>'EmpPlan', group_or_subplan=>'EmpGroup', comment=>'EmpGroup', mgmt_p1=>50,
         switch_group=>'low_group', switch_io_reqs=>10000, switch_io_megabytes=>2500, switch_for_call=>true);
        end;
        /
* A consumer group's CPU usage can be capped with max_utilization_limit
* Instance caging is set with the cpu_count parameter, as sketched below
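A minimal instance-caging sketch (the values are illustrative; a resource plan must be active for cpu_count to cage the instance):
e.g.  alter system set resource_manager_plan='default_plan';
        alter system set cpu_count=2;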




02. Oracle Scheduler jobs
- Create a normal job in steps: create program > create schedule > create job (scripted sketch below)
e.g.  create a program, a schedule and a job that inserts 1 record into scott.emp
        EM > Server > Scheduler > Programs > 'name', enabled, type PL/SQL block, insert into scott.emp values(……..
        EM > Server > Scheduler > Schedules > 'name', do not repeat > submit
        EM > Server > Scheduler > Jobs >
        To schedule a job with a resource consumer group, create a job class:
        EM > Server > Scheduler > Job Classes > create 'name', assign 'consumer group', then modify the previous job to use the class.
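The same program/schedule/job flow can be scripted with dbms_scheduler; a minimal sketch (the names prog1/sched1/job1 are illustrative):
e.g.          begin
                 dbms_scheduler.create_program(program_name=>'prog1', program_type=>'PLSQL_BLOCK',
                  program_action=>'begin insert into scott.emp (empno, ename) values (9999, ''TEST''); end;',
                  enabled=>true);
                 dbms_scheduler.create_schedule(schedule_name=>'sched1', start_date=>systimestamp,
                  repeat_interval=>null); -- null = do not repeat
                 dbms_scheduler.create_job(job_name=>'job1', program_name=>'prog1',
                  schedule_name=>'sched1', enabled=>true);
                end;
                /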

Lightweight jobs (not supported in EM)
- Easy to create and store less metadata in the database; they are not schema objects and cannot be granted to other users.
- Must be based on a saved program (not an inline procedure, executable, java……)
e.g.          begin
                 dbms_scheduler.create_job(job_name=>'name', program_name=>'name',
                  schedule_name=>'name', job_style=>'lightweight');
                end;
                /

Job chains
- when job 1 succeeds run job 2; if it fails run job 3 …..

        Create a few programs > create the job chain > define steps > create rules…. (sketch below)
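A minimal chain sketch via dbms_scheduler (chain, step and program names are illustrative; the programs must already exist):
e.g.          begin
                 dbms_scheduler.create_chain(chain_name=>'chain1');
                 dbms_scheduler.define_chain_step(chain_name=>'chain1', step_name=>'step1', program_name=>'prog1');
                 dbms_scheduler.define_chain_step(chain_name=>'chain1', step_name=>'step2', program_name=>'prog2');
                 dbms_scheduler.define_chain_step(chain_name=>'chain1', step_name=>'step3', program_name=>'prog3');
                 dbms_scheduler.define_chain_rule('chain1', 'TRUE', 'START step1');
                 dbms_scheduler.define_chain_rule('chain1', 'step1 SUCCEEDED', 'START step2');
                 dbms_scheduler.define_chain_rule('chain1', 'step1 FAILED', 'START step3');
                 dbms_scheduler.enable('chain1');
                end;
                /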



Monday, August 15, 2016

OCP 11gR2: Administration II Practice Notes (8)

Lesson 08
===============================================================================
01. Database performance
- Set performance planning goals
- Instance tuning
        Set well-defined goals (e.g. process 500 business sales transactions per minute).
        Memory tuning; HDD or network I/O may be the database bottleneck.
        Tune from the top down (1. check instance parameters, 2. check application code, 3. check design).
        Focus on the SQL using the most resources, the longest waits, the largest service times.
        Stop tuning when the goal is met.

- Performance monitoring
        By EM (may need the Flash Player plug-in) > Performance

- Performance tuning statistics via AWR (user session, SQL, service)
 * parameter statistics_level = TYPICAL (BASIC collects no timing columns; ALL adds more overhead, suggested only for big machines, e.g. 16+ CPU cores)

 SQL performance tuning depends on accurate statistics.
 Statistics affect the database's choice of access method (e.g. index vs full table scan).
 Optimizer statistics: object statistics (table size … column values …), O/S statistics
        exec dbms_stats.gather_table_stats('scott','emp');
 Collect statistics via the automatic maintenance task or manually with "dbms_stats" (after bulk operations)
        EM > Scheduler Central > Optimizer Statistics Gathering > schema.table
                or
        the dbms_stats package
- optimizer_dynamic_sampling defaults to 2: when a table with no statistics is found, the db collects them automatically at parse time.
- Skewed data distorts statistics; for skewed columns collect with the 100% sampling option.
- stale_percent = the percentage of the table that must change before statistics are regathered (sketch below).
- View table statistics in "dba_tab_statistics" or "dba_tab_col_statistics"
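A hedged sketch of both settings (schema and table names are illustrative):
e.g.  exec dbms_stats.set_table_prefs('SCOTT','EMP','STALE_PERCENT','5');
        exec dbms_stats.gather_table_stats('SCOTT','EMP', estimate_percent=>100, method_opt=>'for all columns size skewonly');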

Oracle wait events (idle waits, buffer busy waits, latches (RAM locks), I/O waits….)
        EM > Performance > CPU wait
        select sid, program from v$session where program like '%sqlplus%'; <- to find the sid
        select * from v$system_wait_class where wait_class like '%I/O%';
        EM > Performance > Top Activity

- SQL Trace (*.trc file): the sql_trace parameter defaults to false; when enabled it records all resource usage
        exec dbms_session.set_sql_trace(true); <- will slow down database performance a lot
        select * from 'table_name';
        select sid, sql_trace from v$session;

- Service-related statistics: check the workload connecting through each service
        EM > Performance > Top Activity > drill down to service
        Create services with "dbms_service.create_service(……)"; see the sketch below
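A minimal sketch (the service name batch_svc is illustrative):
e.g.  exec dbms_service.create_service(service_name=>'batch_svc', network_name=>'batch_svc');
        exec dbms_service.start_service('batch_svc');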

- Resource contention
        v$lock, v$undostat, v$waitstat, v$latch (immediate_gets, immediate_misses)

- AWR (Automatic Workload Repository)
 AWR lives in SYSAUX; take a manual snapshot (used by ADDM) with "dbms_workload_repository.create_snapshot()"; see the sketch below
        Check reports: EM > Server > Automatic Workload Repository > run compare periods report
        History views: "dba_hist_db_cache_advice", "dba_hist_dispatcher", "dba_hist_dyn_remaster_stats", …

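A minimal AWR sketch (interval and retention are in minutes; the values are illustrative):
e.g.  exec dbms_workload_repository.create_snapshot();
        exec dbms_workload_repository.modify_snapshot_settings(interval=>30, retention=>43200);
        select snap_id, begin_interval_time from dba_hist_snapshot order by snap_id;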
- Database Replay: capture a workload and replay it on another machine



02. SQL tuning: EM > Advisor Central > SQL Advisors
- SQL Tuning Advisor: identifies poorly tuned SQL statements (targets individual SQL)
        Analyses structure, indexes, materialized views, statistics, SQL profiling, access paths…
        Sources: Top Activity, a SQL tuning set (created between two times), historical SQL (AWR)

e.g.  import bigfile.csv
        EM > SQL Tuning Sets > create > name it tuningset1 > next
        Set the collection to duration 1 minute, frequency 5 seconds
        Set a filter like select * from 'table' > finish
        select * from table where…..
        select * from table where…..
        select * from table where…..
        …
        ..
        Back in SQL Tuning Sets, select tuningset1 and click Schedule SQL Tuning Advisor
        Set the time limit and the scope of analysis to comprehensive, then submit (the "limited" option will not create a SQL profile)
        * duplicate SQL means the SQL structure is the same but some values differ

- SQL Access Advisor: tunes an entire workload of SQL (bulk workload)
        Feed a bulk workload to the advisor to tune; it can verify access paths, or use a hypothetical workload (the DB simulates DML to tune itself).

e.g. EM > Advisor Central > SQL Access Advisor > recommend new access structures using existing tuningset1 >
        comprehensive, time 1 min, journaling Full (log) and submit.

- Automatic SQL Tuning results: check the findings; if a recommendation is OK, click implement
SQL profile:
 Stored access information; with the profile the database does not need to re-derive how to run the statement

- SQL Performance Analyzer: predicts potential performance problems caused by system changes
        Predict the impact of a different Oracle version

        Predict the impact of parameter changes…


OCP 11gR2: Administration II Practice Notes (7)

Lesson 07
===============================================================================
Managing Memory

- Managed fully automatically, semi-automatically, or manually
Fully automatic (AMM):
- memory_target                <- RAM the database starts with; the components adjust automatically
 memory_max_target        <- hard limit; with memory_target=500 and memory_max_target=800, the DB can grow to but never exceed 800.

- Buffer cache (SGA)
 Based on db_block_size (8K default); if not using automatic management, set db_cache_size manually

- Buffer states
 Pinned: latched by a server or background process (database latch); other sessions latch-wait on it.
 Clean: unpinned, consistent with disk, can be reused.
 Free/unused
 Dirty: unpinned buffer whose contents have not yet been written to disk.

- Multiple block sizes are supported in the same database
        Try: "create tablespace 'name' datafile 'location' size 50M blocksize 32768;"
- Before creating a tablespace with a non-default block size, you must configure the matching db_nK_cache_size; see the sketch below
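A minimal sketch (sizes and names are illustrative; 32 KB block support is platform-dependent):
e.g.  alter system set db_32k_cache_size=64m;
        create tablespace big_ts datafile '/u01/big_ts01.dbf' size 50m blocksize 32768;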

How large should db_cache_size be?
- Check v$db_cache_advice (size_factor=1 is the current size); estd_physical_reads is HDD I/O (lower is better).
        select SIZE_FACTOR, ESTD_PHYSICAL_READS from v$db_cache_advice;
 * Calculate the buffer cache hit ratio (reads that need no HDD I/O):
          select name, value from v$sysstat where name in ('db block gets from cache', 'consistent gets from cache', 'physical reads cache');
  Ratio = (1 - 'physical reads cache' / ('consistent gets from cache' + 'db block gets from cache')) x 100; closer to 100% is better
select ((1-(select value from v$sysstat where name='physical reads cache')/
((select value from v$sysstat where name='consistent gets from cache')+
(select value from v$sysstat where name='db block gets from cache')))*100) as cache_ratio from dual;

- Multiple buffer pools
 db_recycle_cache_size: objects stay in memory but first in, fast out (blocks are aged out of the pool quickly)
 db_keep_cache_size: objects are retained in the pool, keeping them in memory for a long time
 db_cache_size: the default pool; blocks age out under the normal LRU policy
e.g.          create index cust_ids …. storage (buffer_pool keep);
                alter table oe.customers storage (buffer_pool recycle);
                alter index oe.cust_lname_ix storage (buffer_pool keep);

Shared pool (manual)
- Large pool
 RMAN backups, parallel query operations
 Shared server, Oracle XA / X/Open <- 2PC/3PC (two/three-phase commit), server-to-server commit and rollback
 Managed by AMM/ASMM, or manually with the "large_pool_size" parameter

- log_buffer
 Circular buffer in the SGA -> LGWR -> redo logs (size depends on redo I/O; small is usually fine)

* The SGA should not be paged to virtual memory; set lock_sga to true (not supported with AMM)
* For heavily duplicated SQL, keep its objects in the keep pool, enlarge the buffer cache, or use the Memory Advisor

AMM = automatic PGA and SGA (roughly 40%/60% split)
ASMM = semi-automatic SGA, manual PGA
AMM & ASMM off = all manual
Oracle recommends AMM unless you have special requirements.

- If memory_target is set wrongly:
        create pfile from spfile;
        edit memory_target
        create spfile from pfile;
                or
        restore spfile from autobackup
                or
        rebuild it from the startup parameter listing in alert.log (control_files, memory_target, db_name)

- If memory_max_target is not set, memory_max_target = memory_target
 If memory_target, sga_target and pga_aggregate_target are all enabled, the PGA and SGA values act as minimums
- Monitor AMM with "v$memory_dynamic_components", "v$memory_resize_ops", "v$memory_target_advice"

ASMM
- Disable memory_target and set sga_target and pga_aggregate_target to use ASMM:
 useful if you need a fixed PGA but a dynamic SGA size
 (with AMM disabled, pga_aggregate_target is a fixed size)
- Under ASMM sga_target is dynamic; ASMM is driven by the MMON process, which uses the Memory Advisor.
- Get a minimum value for sga_target while the database is in manual mode:
        select ((select sum(value) from v$sga)-(select current_size from v$sga_dynamic_free_memory))
        "sga_target" from dual;

Disabling ASMM (back to manual mode)
- With memory_target disabled, setting sga_target=0 switches to manual; the database writes the current auto-tuned component sizes to the individual parameters.
- show parameter "pool_size", show parameter "db_cache_size"

- select * from v$sga_dynamic_components;


Sunday, August 14, 2016

OCP 11gR2: Administration II Practice Notes (6)

Lesson 06
===============================================================================
01. Oracle Total Recall (Flashback Data Archive; cf. the recycle bin)
- Flashback archiving is enabled at the table level with a specified retention period (in years).
- FDA contents cannot be modified directly.
- For audit or historical reporting.
- Views: "dba_flashback_archive", "dba_flashback_archive_ts", "dba_flashback_archive_tables" (or user_)
- When data is modified > recorded in the undo tbs > when the undo expires > the transactions are recorded to the FDA (compressed)
- Better to create a dedicated tablespace to hold the FDA
- Exadata tables are not supported
- An FDA tablespace cannot be transported to another server with Oracle transport (use RMAN backup & restore)
- The flashback archive administrator privilege can be granted to a user
e.g   create tablespace FDA_DATA_1 datafile '/location' size 5g autoextend on;
        create flashback archive fda1 tablespace FDA_DATA_1 retention 1 year;
        (don't set a quota; if the quota fills up, new transactions will stop!)
        alter table 'name' flashback archive fda1;
        - if the FDA_DATA_1 tbs fills up, you can create another tbs and add it to fda1:
        create tablespace FDA_DATA_2 datafile 'location' size 5g autoextend on;
        alter flashback archive fda1 add tablespace FDA_DATA_2;

Schema evolution
- After enabling FDA, some DDL is not supported (drop/add column, truncate)
- Disassociate and re-associate a table with its FDA via dbms_flashback_archive
- If the source table's DDL changes, the FDA table must change too
e.g.  exec dbms_flashback_archive.disassociate_FBA('schema','table');
        exec dbms_flashback_archive.reassociate_FBA('schema','table');
* remember to reassociate
* use SCN for the first queries
* commit or rollback before flashing back



02. Flashback Drop and the recycle bin
- The recycle bin is on by default; dropping a table just renames it
- The database cleans the recycle bin when space runs short
- Flashback drop also restores the table's dependent objects (indexes, triggers….)
e.g   flashback table 'name' to before drop;
                or
        flashback table 'name' to before drop [rename to 'name'];
- Manually empty the recycle bin with the purge command: "purge table 'name';"
- To bypass the recycle bin, use: "drop table 'name' purge;"
- Check the "dba_recyclebin" view (or user_)



03. Flashback Database
- Like rewinding the database (flashback logs normally keep 1 to 2 days)
- Used for logical data corruption made by users (e.g. truncate table)
- The opposite of flashback is "RMAN recover" (when you give up on flashing back the database)
- Cannot be performed when the controlfile is corrupted
- A dropped tablespace cannot be flashed back
- Flashback database also uses RMAN backupsets, archived logs, redo and undo.
- The flashback database retention should not exceed the RMAN retention
- Enabling flashback logging in the FRA adds roughly 10% performance overhead
e.g.  shutdown immediate;
        startup mount exclusive; <- exclusive = dba single connection
        alter system set db_flashback_retention_target=2880; <- 2 days
        alter database flashback on;
        alter database open;
        select flashback_on from v$database;
        create restore point 'name' guarantee flashback database;
        truncate table 'name';
        shutdown immediate;
        startup mount exclusive;
        <RMAN / SQL> in exclusive mode;
        flashback database to time "to_date('20150505 23:00:00','yyyymmdd hh24:mi:ss')";
                Or
        flashback database to scn xxxxxxx;
                or
        flashback database to restore point 'name';
        alter database open read only;
        - check data
        alter database open resetlogs;


- View flashback database status with "v$flashback_database_log"


Tuesday, August 9, 2016

OCP 11gR2: Administration II Practice Notes (5)

Lesson 05
============================================================================
01. Diagnosing the database
- Data Recovery Advisor (DRA)
           Cannot detect data block corruption
           Only for single instance; RAC is not supported
           Supports failover to a standby, but not analysis and repair of the standby database
           * Create the standby database using Data Guard

* RMAN> change failure = close a failure or change its priority, e.g. "change failure 5 priority low;"
* RMAN> list failure [ALL | CRITICAL | HIGH | LOW | CLOSED]; the default is ALL [exclude failure (failnum)]
* RMAN> advise failure [ALL | CRITICAL | HIGH | LOW | CLOSED]; the default is ALL [exclude failure (failnum)]
* RMAN> repair failure [using advise option failnum] [preview | noprompt];
DRA views: v$ir_failure, v$ir_manual_checklist, v$ir_repair, v$ir_failure_set

- Health Monitor runs automatically and saves reports; DRA reads them via "list failure" to check DB health status
- Proactive check: use the validate command "validate database plus archivelog;" to detect data block corruption; a typical flow is sketched below
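A typical DRA session in RMAN might look like this (a sketch; output omitted):
        RMAN> validate database;
        RMAN> list failure;
        RMAN> advise failure;
        RMAN> repair failure preview;
        RMAN> repair failure noprompt;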




02. Data failure examples
- Inaccessible components:
           missing datafile at the O/S level, incorrect permissions, tablespace offline or deleted
- Physical corruption:
           checksum failures or invalid block headers (e.g. a dbf from a different platform)
- Logical corruption:
           related to schema objects: corrupt indexes, inconsistent transactions
- Controlfile inconsistencies
- I/O failures, e.g. reaching the O/S limit on max open files




03. Detecting block corruption
- For physical corruption: "validate database plus archivelog;" checks every data block in the database
- For logical corruption: "validate database plus archivelog check logical;" also checks indexes and schema objects
- ORA-01578 appears in alert.log and in v$database_block_corruption (populated by the validate command); usually caused by hardware issues
- Never run defragmentation tools against an Oracle database; they can cause corruption
- The ANALYZE command can check whether an error is permanent
           it also collects statistics for tables, indexes ….. via "analyze [table|index] 'name' compute statistics;"

Parameters for automatic corruption detection
- db_block_checking  <- default FALSE; Oracle recommends FULL [OFF/FALSE | LOW | MEDIUM | FULL/TRUE], ~10% overhead
                                            (after update/insert it checks the block header and object contents, except indexes)
- db_block_checksum  <- default TYPICAL; Oracle recommends TYPICAL [OFF | TYPICAL | FULL]
- db_lost_write_protect <- default NONE; for standby configurations: committed transactions are also verified on the standby
- db_ultra_safe               <- sets the three parameters above in one go [OFF | DATA_ONLY | DATA_AND_INDEX], as the table below shows

db_ultra_safe            OFF           DATA_ONLY   DATA_AND_INDEX
db_block_checking        OFF / FALSE   MEDIUM      FULL / TRUE
db_block_checksum        TYPICAL       FULL        FULL
db_lost_write_protect    TYPICAL       TYPICAL     TYPICAL
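db_ultra_safe is a static parameter, so a restart is needed; a minimal sketch:
e.g.  alter system set db_ultra_safe=data_only scope=spfile;
        shutdown immediate;
        startup;
        show parameter db_block_checking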




04. Block media recovery example
- Check v$database_block_corruption (populated by the validate command)
           01. offline the tbs, corrupt some blocks with a hex tool, then online the tbs (may raise errors)
           02. RMAN> validate tablespace 'name' check logical;
           03. read alert.log and select * from v$database_block_corruption;
           04. RMAN> recover corruption list;
           05. RMAN> recover tablespace 'name'; and online the tbs





05. Automatic diagnostics (ADR)
Critical errors -> ADR. The ADR base location is resolved in this order:
           1. the diagnostic_dest parameter
           2. $ORACLE_BASE
           3. $ORACLE_HOME/log

- 11g no longer uses the core_dump / user_dump parameters
- Can be checked from the EM database home page
- Check with adrci (paths in v$diag_info):
           show incident, show problem, show alert ….. see the sketch below
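A minimal adrci session sketch (the ADR home path is illustrative):
e.g.  adrci
        adrci> show homes
        adrci> set home diag/rdbms/orcl/orcl
        adrci> show problem
        adrci> show incident
        adrci> show alert -tail 50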





06. Health Monitor (HM)
- v$hm_check lists which checks can be run
- View HM reports via v$hm_run, dbms_hm, adrci or EM
- Run HM with dbms_hm or EM
           Example:
           exec dbms_hm.run_check('check_name','name');
           select dbms_hm.get_run_report('name') from dual;
                     or
           adrci -> show hm_run






07. Flashback
- Most flashback features depend on the undo tablespace (automatic undo management, retention, guarantee)
- Parallel inserts consume more undo resources
- Objects in the SYS schema cannot be flashed back

Object Level   Scenario example                                 Flashback Technology   Depends on                 Affects data
Database       Truncate table, multi-table change               Database               Flashback logs             True
Table          Drop table                                       Drop                   Recycle bin                True
               Update with the wrong WHERE clause               Table                  Undo data                  True
               Compare current data with the past               Query                  Undo data                  False
               Compare versions of a row                        Version                Undo data                  False
               Keep historical transaction data                 Data Archive           Undo data                  True
Transaction    Investigate and back out a suspect transaction   Transaction            Undo / redo / archivelog   True

- Flashback query
        1. Query all data as of a specified point in time
        2. Only for DML; DDL is not supported
 e.g.         select * from emp as of timestamp to_timestamp('nnnnnnnn','YYYYMMDD:HH24:MI:SS');
                select * from emp as of scn xxxxxxxx;


- Flashback version query (does not show uncommitted history)
        1. See all versions of a row between two times.
        2. See the transactions that changed the row.
        3. Not supported for external tables, temp tables, fixed tables (sys.table / X$ tables), views.
        4. DDL is not supported.
 e.g.         select versions_xid, versions_startscn, versions_endscn, versions_operation, ename, sal from
                emp versions between scn xxxxx and xxxxx;
                        or
                select versions_xid, versions_startscn, versions_endscn, versions_operation, ename, sal from
                emp versions between scn minvalue and maxvalue; (with supplemental logging enabled)
                        to restore the row:
                update emp set sal=(select sal from emp as of timestamp to_timestamp('nnnnnnnnn','yyddmm hh24miss')
                where empno=2000) where empno=2000;


- Flashback table
        1. Not supported for the SYS schema
        2. EM -> Perform Recovery -> table -> recover
        3. Requires row movement to be enabled
e.g.          alter table emp enable row movement;
                grant flashback any table to scott;
                grant select, insert, update, delete on emp to scott;
                flashback table emp to scn XXXXXXX;


- Flashback transaction
        1. See all changes made by a transaction (query by versions_xid)
e.g.          select * from flashback_transaction_query where xid='xxxxxxxx';
        2. Requires supplemental logging:
                alter database add supplemental log data;

                alter database add supplemental log data (primary key) columns;
        3. Each transaction can only be backed out once
        4. Consider the transaction_backout options <- transaction dependency.

        Let's understand a concept before using these options: transaction dependency.
        For example, given 2 transactions TX1 and TX2, TX2 depends on TX1 if any of the following holds:
          01. WAW (Write After Write): TX1 modifies some rows and TX2 modifies the same rows again.
          02. Primary key dependency: on a table with a primary key, TX1 deletes a row and TX2 inserts a new row with the same primary key.
          03. Foreign key dependency: TX1 creates new foreign key data via insert or update, and TX2 inserts or updates rows using that new foreign key data.

        Understanding transaction dependency helps resolve conflicts. For case 02, if you reverse TX1 but ignore TX2, the table would end up with duplicate primary keys.

     - [ NOCASCADE ]: TX1 must not have dependent transactions; otherwise the operation is cancelled with an error.
     - [ CASCADE ]: TX1 and TX2 are reversed together.
     - [ NOCASCADE_FORCE ]: ignore TX2 and reverse TX1; if no constraint conflicts, the operation succeeds, otherwise the conflicting constraint raises an error and the operation fails.
     - [ NONCONFLICT_ONLY ]: reverse TX1 without influencing TX2. Unlike NOCASCADE_FORCE, TX1's undo SQL is filtered first to make sure the changes do not affect TX2.

     Example details for case 01 (WAW), on a table with only 3 rows and no constraints:

        Case 1: Original      Case 2: TX1 updates 3 rows      Case 3: TX2 updates 2 rows
        ID                    ID                              ID
        ---------------       ---------------                 ---------------
        1                     11                              11
        2                     22                              222
        3                     33                              333

        This is a typical WAW example: TX2 depends on TX1.
        Now reversing TX1 with different backout options gives different results:
        [NOCASCADE] raises "ORA-55504: Transaction conflicts in NOCASCADE mode"; the table stays as in Case 3
        [CASCADE] reverses the table back to Case 1
        [NOCASCADE_FORCE] leaves the TX2 transaction untouched, but TX1 is reversed as below:

        ID
        ---------------
        1
        222
        333

        This looks odd against the rule for [NOCASCADE_FORCE]: the reversal should ignore TX2 and apply to all rows, so why were rows 2 & 3 not reversed? Because in this sample the generated undo SQL includes ROWID in its WHERE clause as a constraint, since the database has supplemental logging enabled.

        [NONCONFLICT_ONLY] gives the same result as [NOCASCADE_FORCE], but the process differs: the undo SQL for rows related to TX2 is filtered out first.

Real-world example:
        Under the HR schema, the manager wants to increase Michael's salary by 10%.
        SQL> select FIRST_NAME, SALARY from hr.employees where employee_id=201;
        FIRST_NAME            SALARY
        --------------------  ----------
        Michael               13000

        However, by user mistake, all staff salaries are multiplied by 5 and committed:
        SQL> update hr.employees set salary=salary*5;
        106 rows updated.
        SQL> commit;

        Then HR proceeds to increase Michael's salary by 10%:
        SQL> update hr.employees set salary=salary*1.1 where employee_id=201;
        1 row updated.
        SQL> commit;

        The manager just wants Michael's salary to go from 13000 to 14300; because of the mistake it is now 71500.
        A few minutes later, HR discovers that all staff salaries are abnormal and contacts the DBA to investigate.
        The DBA decides to use flashback_transaction_query to find the problem in the last 15 minutes:
        SQL> select distinct xid, commit_scn from flashback_transaction_query where table_owner='HR' and
             table_name='EMPLOYEES' and commit_timestamp > systimestamp - interval '15' minute order by commit_scn;
        The DBA finds 2 transactions in the last 15 minutes:
        XID                 COMMIT_SCN
        ----------------    ----------
        0300080081090000    3062978
        01001D00AE080000    3063594

        The DBA checks XID 0300080081090000 and finds the abnormal bulk salary update:
        SQL> select undo_sql from flashback_transaction_query where xid='0300080081090000';
        UNDO_SQL
        ----------------------------------------------------------------------------------------------------
        update "HR"."EMPLOYEES" set "SALARY" = '10000' where ROWID = 'AAAVTFAAFAAAADOAAJ';
        update "HR"."EMPLOYEES" set "SALARY" = '8300' where ROWID = 'AAAVTFAAFAAAADOAAI';
        update "HR"."EMPLOYEES" set "SALARY" = '10000' where ROWID = 'AAAVTFAAFAAAADOAAG';
        update "HR"."EMPLOYEES" set "SALARY" = '3000' where ROWID = 'AAAVTFAAFAAAADNABh';
        ..
        …
        Now the DBA tries transaction_backout with the nocascade option to roll back that transaction:
        exec dbms_flashback.transaction_backout(1,xid_array('0300080081090000'),dbms_flashback.nocascade);
        *
        ERROR at line 1:
        ORA-55504: Transaction conflicts in NOCASCADE mode
        The error shows a dependency, so the reasonable option is to roll back all changes with the cascade option:
        exec dbms_flashback.transaction_backout(1,xid_array('0300080081090000'),dbms_flashback.cascade);
        PL/SQL procedure successfully completed.
        After running the flashback transaction, the DBA checks the report and finds both transactions rolled back:
        SQL> select xid, dependent_xid, backout_mode from dba_flashback_txn_state;
        XID                 DEPENDENT_XID       BACKOUT_MODE
        ----------------    ----------------    ----------------
        01001D00AE080000                        CASCADE
        0300080081090000    01001D00AE080000    CASCADE
        Finally, the DBA confirms Michael's salary is rolled back to the original value.


- Parallel DML enhances performance
 e.g.         alter session enable parallel dml;
                insert into emp select * from emp;
                commit;
                alter session disable parallel dml;
                        or

                select /*+ parallel */ count(*) from sys.all_objects;