Monday, 24 March 2014

DataPump II



DataPump export using Schema mode:-

$expdp test2/test2 dumpfile=test2.dmp logfile=test2.log schemas=test2

**Table Level:-
$expdp test2/test2 dumpfile=test2.dmp logfile=test.log directory=dump_dir1 tables=emp,dep,bonus
$expdp system/manager dumpfile=test2.dmp logfile=test2.log directory=dump_dir1 tables=test2.emp,test2.dep,test2.bonus


$impdp test/test dumpfile=test2.dmp logfile=imp.log directory=dump_dir1 tables=emp,dep,bonus

**Schema Level:-
$expdp dumpfile=schema.dmp logfile=schema.log directory=dump_dir1 schemas=test

$impdp system/manager dumpfile=schema.dmp logfile=imp.log directory=dump_dir1 remap_schema=test:test5

**Tablespace Level:-
$expdp system/manager dumpfile=ts.dmp logfile=expts.log tablespaces=test01 directory=dump_dir1

$impdp system/manager dumpfile=ts.dmp logfile=impts.log remap_tablespace=test01:test02 directory=dump_dir1      (remap_tablespace takes a source:target pair)


Transportable Tablespace:-
Make the tablespace read only
sql>alter tablespace zee read only;
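
Before exporting, it is worth verifying that the tablespace set is self-contained; a sketch using the DBMS_TTS package (zee is the tablespace from this example):

sql>execute dbms_tts.transport_set_check('zee', true);
sql>select * from transport_set_violations;
  (no rows means the set is self-contained and safe to transport)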

$expdp dumpfile=tts.dmp logfile=ttsh.log transport_tablespaces=zee directory=dump_dir2

copy datafiles to the location of database where you want to import Tablespace

$impdp dumpfile=tts.dmp logfile=imptts.log transport_datafiles='/oradata/oracle/db/test/zee01.dbf' directory=dump_dir2
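
The transported tablespace comes in read only; once the import finishes, switch it back if you need it writable:

sql>alter tablespace zee read write;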
File Size:-
The FILESIZE parameter is optional and specifies the maximum size of each dump file, in bytes by default. You may also express it in kilobytes, megabytes, or gigabytes. If you don't specify this parameter, there is no limit on the dump file size. If you set FILESIZE to, say, 10MB, the export stops when the dump file reaches that limit, and you can restart it after correcting the problem.

$expdp test2/test2 directory=dump_dir1 dumpfile=expdat1.dmp filesize=100KB schemas=test2

Directory:- 
You can spread the export dump files across multiple directories:
$expdp system/manager dumpfile=dump_dir1:schema.dmp,dump_dir2:schema2.dmp logfile=schema.log schemas=test
Exclude:-
The EXCLUDE parameter filters objects out of the export; here every table whose name starts with EMP is skipped.
$expdp test/test dumpfile=schema2.dmp logfile=schema.log schemas=test directory=dump_dir1 EXCLUDE=TABLE:\"LIKE \'EMP%\'\"

Compression:-
By default, all metadata is compressed before it’s written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here
$expdp test2/test2 directory=dump_dir1 dumpfile=expdat1.dmp compression=none schemas=test2

CONTENT:-
By using the CONTENT parameter, you can filter what goes into the export dump file. The CONTENT
parameter can take three values:
ALL exports both table data and table and other object definitions (metadata).
DATA_ONLY exports only table rows.
METADATA_ONLY exports only metadata.
$expdp test/test dumpfile=schema2.dmp logfile=schema.log schemas=test directory=dump_dir1 content=metadata_only

Wednesday, 19 March 2014

Manually Cloning a Database (Hot Backup Cloning)

Source database name = prod
Target database name = clone

1) On the target(clone) location, create all the directories for the various files:-
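
The exact directories depend on your layout; a minimal sketch, assuming the clone keeps its files under /u01 (these paths are illustrative, not fixed):

$mkdir -p /u01/datafiles
$mkdir -p /u01/admin/clone/udump /u01/admin/clone/bdump /u01/admin/clone/cdump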

2)Copy Oracle home from source(prod) database machine to target(clone) database machine:-

$tar -czvf home.tgz db_1
$scp home.tgz oracle@192.168.0.12:/u01
$tar -xzvf home.tgz


3)Create new Init.ora file for Target database (clone):-
* New changes in init.ora file:-

db_name= clone

* DB_FILE_NAME_CONVERT: This parameter transforms the target data file names to the duplicate database data file names.
*LOG_FILE_NAME_CONVERT: This parameter converts the target database redo log file names to the duplicate database redo log file names.
*.Db_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafiles'

*If you have multiple datafile locations at Source database then

*.Db_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafiles','/u01/datafiles/prod','/oradata/datafiles'
*.log_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafiles'
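
The clone also needs its own CONTROL_FILES setting; a sketch of the remaining initclone.ora entries, assuming the clone's files live under /u01/datafiles as in the later steps:

*.db_name='clone'
*.control_files='/u01/datafiles/control01.ctl'
*.db_file_name_convert='/oradata/oracle/datafiles/prod/','/u01/datafiles'
*.log_file_name_convert='/oradata/oracle/datafiles/prod/','/u01/datafiles'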


4)Take a Hot backup of Source(prod) datafile to Target(clone):-
sql> alter system switch logfile;
       System altered.

sql> alter database begin backup;
        Database altered.

sql> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/oradata/oracle/datafiles/prod/system01.dbf
/oradata/oracle/datafiles/prod/undotbs01.dbf
/oradata/oracle/datafiles/prod/sysaux01.dbf
/oradata/oracle/datafiles/prod/users01.dbf
/oradata/oracle/datafiles/prod/test01.dbf
/oradata/oracle/datafiles/prod/test03.dbf
/oradata/oracle/datafiles/prod/zee01.dbf
/oradata/oracle/datafiles/prod/zee02.dbf
/oradata/oracle/datafiles/prod/zee03.dbf
9 rows selected.

SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
/oradata/oracle/datafiles/prod/redo03.log
/oradata/oracle/datafiles/prod/redo02.log
/oradata/oracle/datafiles/prod/redo01.log

SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------------------
/oradata/oracle/datafiles/prod/control01.ctl
/oradata/oracle/datafiles/prod/control02.ctl
/oradata/oracle/datafiles/prod/control03.ctl

$ cd /oradata/oracle/datafiles/prod/

* Copy the following four sets of files from the Source (prod) database to the target database(clone):-
   control files, data files, and redo log files.
$scp -r * oracle@192.168.0.12:/u01/datafiles           (all datafiles, controlfiles, redolog files)

sql> alter database end backup;
       Database altered.

sql> alter system switch logfile;
       System altered.

5) Take a trace of control file:-

SQL> alter database backup controlfile to trace;
          Database altered.

* Go to udump directory of the source database.

$ ls -ltr

* Copy the last trace file with another name to Target(clone) database and edit it:-

$ cp prod_ora_22305.trc /u01/bkpctrl.sql

$vi bkpctrl.sql

* And change
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
                                                           to
CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG

* Change all the logfiles locations according to your directories:

GROUP 1 '/oradata/oracle/datafiles/prod/redo01.log' SIZE 50M,
                                              to
GROUP 1 '/u01/datafiles/redo01.log' SIZE 50M,

* Change all the datafiles locations according to your directories:

'/oradata/oracle/datafiles/prod/system01.dbf'
                              to
'/u01/datafiles/system01.dbf',

6) Take Backup of all Archivelogs from source(prod) to target(clone):-
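
No command is shown for this step; a sketch, assuming the archived logs sit under /oradata/oracle/arch/prod (the real path is whatever your log_archive_dest points to):

$scp -r /oradata/oracle/arch/prod/* oracle@192.168.0.12:/u01/arch/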

7)On Target database:-

$ export ORACLE_HOME=/oradata/oracle/oracle/product/10.2.0/db_1
$ export ORACLE_SID=clone

* Login as a sysdba
* Start database in nomount stage

SQL> startup nomount
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes

* Now run edited controlfiles trace:

SQL> @bkpctrl.sql
         Control file created.

8) Recover database and open with resetlogs:-
Sql>recover database using backup controlfile until cancel;
(apply all archives and specify all redo log files if required)

SQL> alter database open resetlogs;
           Database altered.

SQL> select status from v$instance;
STATUS
------------
OPEN

Tuesday, 18 March 2014

Cloning database with RMAN Catalog

 Source database name = prod
Catalog database name = catalog
Target database name = clone

1) Copy Oracle home from source(prod) database machine to target(clone) database machine:-

$ tar -czvf home.tgz db_1
$ scp home.tgz oracle@192.168.0.12:/u01
$ tar -xzvf home.tgz

2) Create new Init.ora file for auxiliary database (target database):-
* New changes in init.ora file:-

db_name= clone

DB_FILE_NAME_CONVERT: This parameter transforms the target data file names to the duplicate database data file names.
• LOG_FILE_NAME_CONVERT: This parameter converts the target database redo log file names to the duplicate database redo log file names.

*.Db_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafile'
If you have multiple datafile locations at Source database then:-
*.Db_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafile','/u01/datafiles/prod','/oradata/datafile'
*.log_file_name_convert='/oradata/oracle/datafiles/prod/','/oradata/datafile'

3) Create listener.ora on auxiliary(clone) database:-

CLONE =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.12)(PORT = 1521))
    )
  )

SID_LIST_CLONE =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = clone)
      (ORACLE_HOME = /oradata/oracle/oracle/product/10.2.0/db_1)
    )
  )

4) Create Password file on auxiliary(clone) database:-
$ orapwd file=orapwclone password=manager force=y

5) Configure tnsnames.ora on catalog database for auxiliary(clone) database:-
TO_PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.11)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = prod)
    )
  )

TO_CLONE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.12)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = clone)
    )
  )

6) Export the Oracle Home on auxiliary(clone) database:-

$export ORACLE_HOME=/oradata/oracle/oracle/product/10.2.0/db_1
$export ORACLE_SID=clone


7) Start the Auxiliary database in nomount stage:-

$sqlplus / as sysdba
SQL> startup nomount
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes

8) Take RMAN backup of Source database(prod):-
* Login to Catalog database and connect to source database(prod)

$ rman catalog rman/rman target test/test@to_prod

Recovery Manager: Release 10.2.0.1.0 - Production on Tue Mar 18 19:46:40 2014
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: PROD (DBID=687831768)
connected to recovery catalog database

RMAN> backup database plus archivelog;

RMAN>backup current controlfile;

9) After backing up Source(prod) database SCP the backup to target(clone)database:-

$scp -r /u01/rman_backp oracle@192.168.0.12:/u01/

10) Connect catalog to Source(prod) and Target(clone) database:-
* Login to the Catalog database
$rman catalog rman/rman target test/test@to_prod auxiliary sys/manager@to_clone

Recovery Manager: Release 10.2.0.1.0 - Production on Tue Mar 18 19:53:33 2014
Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: PROD (DBID=687831768)
connected to recovery catalog database
connected to auxiliary database: CLONE (DBID=1041372533)

11) After connecting issue the following command to clone Source(prod) database to Target(clone):-

RMAN>duplicate target database to clone;                    (here clone is the SID of the auxiliary database)

Starting Duplicate Db at 18-MAR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: sid=155 devtype=DISK
... (output trimmed) ...
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 18-MAR-14

12) Go to Cloned database and check the status:-

SQL> select status from v$instance;

STATUS
------------
OPEN

Friday, 7 March 2014

Oracle Tables

• Heap-organized tables: 
A heap-organized table is nothing but the normal Oracle table, where data is stored in no particular order.

• Index-organized tables:
 An index-organized table stores data sorted in a B-tree indexed structure.

• Clustered tables:
 A clustered table is part of a group of tables that shares the same data blocks, because columns of the clustered tables are often requested together.

• Partitioned tables:
 A partitioned table lets you divide a large amount of data into smaller pieces, called partitions, according to various criteria. Partitioning is especially useful in a data warehouse environment.
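
As an illustration, a range-partitioned table split by date (the table and column names here are made up for the example):

sql>create table sales (
       sale_id   number,
       sale_date date,
       amount    number)
    partition by range (sale_date)
      (partition p2013 values less than (to_date('01-JAN-2014','DD-MON-YYYY')),
       partition p2014 values less than (maxvalue));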

Create Table:-

sql>create table test3 (
       numb number(5) constraint test3_pkey primary key,
       name varchar2(20) not null,
       addr varchar2(30))
       tablespace test01;

                  OR
sql>create table test as select * from emp;

Add Column to the Table:-

sql>alter table test add (job varchar2(20));    ('job' is the new column name)

Droping a Column from Tables:-

sql>alter table test drop (job);
Where 'Job' is Column name.

Rename a Column:-

sql>alter table test rename column job to designation;

Insert Rows in a Table:-

sql>insert into test values (3,'manager','local','business');

Rename a Table:-

sql>alter table test rename to final;

Removing all data from Table:-

sql>truncate table test;          (truncated data cannot be rolled back)

Move Table to new Tablespace:-

sql>alter table test move tablespace <new tablespace>;

Dropping a Table:-

sql>drop table emp;            (It will move table to recycle bin)

sql>drop table emp Purge;       (It will permanently drop table)

sql>drop table emp cascade constraints;             (It will also drop referential constraints, e.g. foreign keys referencing the table)

Bring back a dropped table (not purged):-

sql>flashback table emp to before drop;

Recyclebin:-

sql>select * from recyclebin;
sql>select owner, object_name, ts_name from dba_recyclebin;
sql>select * from "BIN$8sL/jMU2hOrgQKjACQADrQ==$0";
sql>purge table "BIN$8sL/jMU2hOrgQKjACQADrQ==$0";
  (It will delete table from Recycle bin)
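
To empty the whole recycle bin rather than a single object:

sql>purge recyclebin;          (current user's bin)
sql>purge dba_recyclebin;      (all users' bins; needs SYSDBA privilege)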