Thursday, 31 December 2015

Oracle Row Chaining and Migration



If you notice poor performance in your Oracle database, row chaining and row migration may be among the causes.

Row migration and row chaining are two potential problems that can be prevented. With suitable diagnosis, we can improve database performance.

The main considerations are:

* What is Row Migration and Row Chaining?
* How to identify Row Migration and Row Chaining?
* How to avoid Row Migration and Row Chaining?

Migrated rows affect OLTP systems that use indexed reads to fetch single rows. In the worst case, every such read incurs an extra I/O, which would be really bad. Truly chained rows affect both index reads and full table scans.

OLTP: On-line Transaction Processing is a class of software programs capable of supporting transaction-oriented applications on the internet. Typically, OLTP systems are used for order entry, financial transactions, customer relationship management (CRM) and retail sales.

Oracle Block:

-------------

The Operating System Block Size is the minimum unit of operation (Read/Write) by the OS and is a property of the OS file system.
While creating an oracle database we have to choose the <<Database Block Size>> as a multiple of the Operating System Block size.
The minimum unit of operation (Read/Write) by the Oracle Database would be this <<Oracle Block>>, and not the OS block.
Once set, the <<Database Block Size>> cannot be changed for the life of the database (although from Oracle 9i onward, individual tablespaces can use a non-standard block size).
To decide on a suitable block size for the database, we take into consideration factors like the size of the database and the concurrent number of transactions expected.
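For reference, the block size an existing database was created with can be checked with a simple query (a quick sketch; 8192 bytes is a common choice):

```sql
-- Show the standard database block size, in bytes
SELECT value AS block_size_bytes
  FROM v$parameter
 WHERE name = 'db_block_size';
```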

The database block has the following structure (Within the whole database structure).





Header:
The header contains general information about the block, i.e. the block address and the type of segment (table, index, etc.).
It also contains information about the table and the addresses of the actual rows that hold the data.

Freespace:
Space reserved for future update/insert operations, generally governed by the values of the PCTFREE and PCTUSED parameters.

Data:
Actual row data.

FREELIST, PCTFREE and PCTUSED:

While creating or altering any table or index, Oracle uses the following storage parameters for space control.
PCTFREE - The percentage of space reserved for future updates of existing data.
PCTUSED - The minimum percentage of used space; the value determines when the block goes back into the FREELIST structure.
FREELIST - The structure in which Oracle maintains the list of available free blocks.

    Oracle first searches the FREELIST for a free block and then inserts the data into that block. Whether a block remains on the FREELIST is decided by the PCTFREE value. Initially an empty block is listed in the FREELIST structure, and it remains there until its free space falls to the PCTFREE threshold.

    Oracle uses the FREELIST to improve performance: for every insert operation, Oracle needs to search for free blocks only in the FREELIST structure instead of scanning all blocks.
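These parameters translate directly into the storage clause of a CREATE TABLE statement. A minimal sketch (the table name and values are illustrative; note that PCTUSED is honoured only in manual segment space management tablespaces, not under ASSM):

```sql
-- Reserve 20% of each block for future updates of existing rows;
-- put the block back on the FREELIST once used space drops below 40%.
CREATE TABLE emp_history (
  emp_id NUMBER,
  notes  VARCHAR2(2000)
)
PCTFREE 20
PCTUSED 40;
```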


ROW MIGRATION:
--------------
We migrate a row when an update to that row would cause it to no longer fit on the block (together with all of the other data that currently exists there).
A migration means that the entire row moves and we just leave behind a <<forwarding address>>.
So the original block holds only the rowid (address) of the new block, and the entire row is moved there.



Full table scans are not affected by migrated rows:
---------------------------------------------------
The forwarding addresses are ignored. We know that as we continue the full scan, we'll eventually get to that row so we can ignore the forwarding address and just process the row when we get there. Hence, in full scan migrated rows don't cause us to really do any extra work.

Index Read will cause additional IO's on migrated rows:
-------------------------------------------------------
When we index-read into a table, a migrated row causes additional I/Os. That is because the index tells us <<go to file x, block y, slot z to find this row>>, but when we get there we find a message that says <<really, go to file a, block b, slot c to find this row>>. We have to do another I/O (logical or physical) to find the row.
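One common way to identify migrated and chained rows is ANALYZE ... LIST CHAINED ROWS, which writes the offenders into the CHAINED_ROWS table created by the utlchain.sql script (the EMP_HISTORY table here is just an illustrative name):

```sql
-- CHAINED_ROWS is created by $ORACLE_HOME/rdbms/admin/utlchain.sql
ANALYZE TABLE emp_history LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid
  FROM chained_rows
 WHERE table_name = 'EMP_HISTORY';

-- The session statistic 'table fetch continued row' also counts
-- fetches that had to follow a migrated or chained row piece.
SELECT s.value
  FROM v$statname n
  JOIN v$mystat  s ON n.statistic# = s.statistic#
 WHERE n.name = 'table fetch continued row';
```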

ROW CHAINING:
-------------
A row is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert an 8KB row, Oracle will use three blocks and store the row in pieces. Some conditions that cause row chaining are:

* Tables whose row size exceeds the block size.
* Tables with LONG and LONG RAW columns, which are prone to having chained rows.
* Tables with more than 255 columns, since Oracle breaks such wide rows up into pieces.

So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.



Chained rows affect us differently. Here, it depends on the data we need. If we had a row with two columns that was spread over two blocks, the query:

select col1 from table;

where col1 is in block 1, would not cause any <<table fetch continued row>>. It would not actually have to get col2, it would not follow the chained row all of the way out. On the other hand, if we ask for:

select col2 from table;

and col2 is in block2 due to row chaining, then you would in fact see a <<table fetch continued row>>.
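To fix migrated rows, one common approach is to rebuild the table, which rewrites every row into fresh blocks; the indexes must then be rebuilt because MOVE changes all rowids (names below are illustrative):

```sql
ALTER TABLE emp_history MOVE;        -- rewrites rows, eliminating migration
ALTER INDEX emp_hist_idx REBUILD;    -- indexes become UNUSABLE after MOVE
```

Raising PCTFREE before the rebuild helps prevent future migration; true chaining, by contrast, can only be addressed with a larger block size or a narrower row.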






Maintaining the RMAN Backups and the Repository:



Tuesday, 29 December 2015

Performing Tablespace Point-in-Time Recovery


In the following post we'll see how to recover a tablespace to a point in time (TSPITR). Some of the young DBAs may be wondering why we would ever need TSPITR, since it's an incomplete recovery. To those I would like to introduce a fraternity known as the Developer (referred to as "the application guy"). If you are a DBA, these guys' job is to make sure that every penny the organisation pays you is earned. There are n number of ways in which they do that (Did I hear the F-word...). Well, you'll come across a scenario where a developer thought they were in a test environment and issued commands to delete data from several tables in one tablespace, when it turns out they were in the production environment (it might seem stupid, but I don't know a single DBA who hasn't faced such a situation).

So this post describes how to use TSPITR to restore the tablespace in question to the point in time just before the DELETE (or any other erroneous DML/DDL) was issued.

For ease of discussion, let's name the tablespace for which we have to perform TSPITR as USERS.

STEP 1:- Determine and resolve dependencies between the objects in the tablespace of interest (USERS) and objects related to them via referential integrity constraints.

To find these dependencies Oracle provides a view called TS_PITR_CHECK, which can be used as:-

SELECT * FROM sys.ts_pitr_check WHERE (ts1_name = 'USERS' AND ts2_name != 'USERS') OR (ts1_name != 'USERS' AND ts2_name = 'USERS');
If there are any dependencies, the above query gives you all the details required, and accordingly you can disable the constraints and make a note of them. If we don't disable the constraints, TSPITR will not succeed.
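For example, a foreign key dependency reported by TS_PITR_CHECK could be handled like this (the constraint and table names are purely illustrative):

```sql
-- Disable the dependency before TSPITR...
ALTER TABLE hr.order_items DISABLE CONSTRAINT fk_items_orders;

-- ...and re-enable (and re-validate) it after recovery completes.
ALTER TABLE hr.order_items ENABLE VALIDATE CONSTRAINT fk_items_orders;
```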

STEP 2:- Determine Which Objects Will Not Be Recovered

We are doing a TSPITR to a point in time before that erroneous SQL was executed, so what about objects which were created after that time? Here comes the introduction to a new fraternity known as Database Administrators: no matter how poorly the SQL is written, no matter what edition the database is, no matter what infrastructure the database runs on, DBAs have to ensure magical performance and recovery of almost everything... EVERYTHING... (Did I hear the Awwww expression...). Hence STEP number 2.

For this special case Oracle has the TS_PITR_OBJECTS_TO_BE_DROPPED view, which can be used as:-

SELECT owner, name, tablespace_name FROM ts_pitr_objects_to_be_dropped WHERE tablespace_name = 'USERS' AND creation_time > to_date('<Time of TSPITR>', 'dd-mon-rrrr hh24:mi:ss');
The above query gives you all the objects that were created after the point in time to which you'll restore the tablespace (<Time of TSPITR>).

Once you get the objects you can use datapump to export the objects in concern, for example:-

expdp <username/password> tables=<Comma separated list of table name> directory=DATA_PUMP_DIR dumpfile=TSPITR_after_objects.dmp logfile=TSPITR_after_objects.log
You might have to amend the expdp statement above; you'll get the directory information via the DBA_DIRECTORIES view, and if using OS authentication and connecting as SYSDBA, use (\"/ as sysdba\") on UNIX.

Step 3: Create an Auxiliary Destination

Several steps are performed internally for which RMAN needs an auxiliary destination, so we need space approximately equal to the size of the datafiles in the tablespace to be recovered. Once we ensure there is enough space on a filesystem, we create an auxiliary destination there.

mkdir <Path where you have sufficient space>/aux

STEP 4:- Recover the tablespace.

Now RMAN is ready to recover your tablespace. You can use the UNTIL TIME, UNTIL SCN, or UNTIL SEQUENCE clause to recover your tablespace.

RMAN> connect target /
RMAN> recover tablespace users until time "to_date('12-nov-2006 16:00:00','dd-mon-rrrr hh24:mi:ss')" auxiliary destination '<Path where you have sufficient space>/aux';
Now the most important thing comes up: THE OUTPUT. From the output you can see that there is a hell of a lot of tasks that RMAN does for you. I have listed them below in case you haven't personally seen the beauty:-

Creates an auxiliary instance, starts it, and connects to it
Takes offline tablespaces that are involved with TSPITR
Restores the backup control file that corresponds to the target restore time
Restores datafiles to a destination specified by your AUXILIARY DESTINATION
Recovers restored datafiles in an auxiliary location
Opens an auxiliary database with the open resetlogs command
Exports the auxiliary data dictionary metadata about recovered tablespaces and shuts down the auxiliary database
Issues a switch command on the target database to update the control file to point at the recovered auxiliary datafiles
Imports objects associated with recovered tablespaces into the target database
Deletes the auxiliary datafiles
See, isn't it a beauty..!! All these tasks are performed while the DBA just sits and explains WHAT/WHEN/HOW/WHERE to senior managers and stakeholders, which I think is more difficult than the work RMAN does in the background; if it were up to me, I'd ask RMAN to switch tasks :)...

I always wondered, if RMAN is doing all this stuff, why not export/import the tables too so that we could also recover individual tables from RMAN? That was not possible in 11g; however, studying the code helped me figure out the glitch. The great news is this has been addressed in 12c, and YES, you can even recover a table via RMAN in 12c.

STEP 5:- Importing the objects which were determined during STEP 2.

You guys thought it's over, right :)... Well it isn't, neither with the explanation to senior management nor with the TSPITR.

impdp <username/password> tables=<Comma separated list of table name> directory=DATA_PUMP_DIR dumpfile=TSPITR_after_objects.dmp logfile=TSPITR_after_objects_imp.log
The above command imports the tables that were exported in STEP 2, so that even with TSPITR no objects created after the erroneous query are lost.

STEP 6:- Back Up the Tablespace and Alter It Online

Gotcha..!! Why are you guys in so much of a hurry? It's not SQL Server or Sybase or DB2, it's Oracle, and you are an Oracle DBA; we come with a guarantee that every mistake made by the first fraternity (Developers) gets fixed ;)

So back up your tablespace, because Oracle ensures everything except a Developer (I am pretty sure this time I heard the F-word) and his cute little ANNOYING practices.

RMAN> backup tablespace <Tablespace Name on which we did TSPITR>;
RMAN> sql 'alter tablespace <Tablespace Name on which we did TSPITR> online';
Now it's done... No... Oh wait... those senior managers on the bridge call; well, I wish I had a STEP and a GUN for the same...

I hope this post will be helpful in scenarios where you know the recovery process, but since there is a lot of stress on DBAs during recovery, even googling for the syntax at that time is an overhead. Hence the idea is to build a repository that covers all possible recovery scenarios step by step, with the command syntax, in a single place. I will try to cover all the scenarios in my subsequent posts. Please comment if any curious readers are interested in the internals of the recovery process, such as which process does what, and what information in the redo is applied to the datafiles to recover them to a point in time.


Interview QA:
-------------

1. How to perform the user managed backup manually?

To take a user-managed backup, we first put the database in begin backup mode, then copy all the required datafiles to the backup location, and finally take the database out of backup mode.

> ALTER DATABASE BEGIN BACKUP;

$ cp *.dbf /u02/backup/

> ALTER DATABASE END BACKUP;

2. Why is more redo generated after putting the database in begin backup mode?

Two things happen when a tablespace is in backup mode:

1.) The datafile headers associated with the tablespace in backup mode are frozen. Oracle stops updating the datafile headers, but it continues to write into the datafiles.

2.) More redo is generated. This is because Oracle writes the entire changed block into the redo log (the first time each block is changed while in backup mode); normally only the change vectors are logged.

Datafiles are not frozen; only the datafile headers are frozen!! Simply imagine that when you put the tablespace in backup mode, Oracle takes a checkpoint and updates the datafile headers with the checkpoint SCN, and thereafter the headers are frozen until we take the tablespace out of backup mode.
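As a quick check, the V$BACKUP view shows which datafiles are currently in backup mode (STATUS is ACTIVE while the tablespace is in begin backup mode):

```sql
-- Datafiles whose tablespace is currently in backup mode
SELECT file#, status, change#, time
  FROM v$backup
 WHERE status = 'ACTIVE';
```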

3. CACHE FUSION?



Oracle 12c Architecture

Oracle 12c Architecture:
-----------------------




With 12c you will have heard of the multitenant architecture and container & pluggable databases, so we'll start with understanding these. A 12c database can be configured with the following options:

* MULTITENANT CONFIGURATION: A CDB consists of zero, one, or more PDBs. You need a license for the Oracle Multitenant option.

* SINGLE-TENANT CONFIGURATION: A CDB with a single PDB; doesn't require the licensed Oracle Multitenant option.

* NON-CDB: The same as the pre-12c database architecture.

So it doesn't mean that 12c is only about the multitenant configuration; it can be configured the same way as your beloved 11g.

Multitenant Architecture:
-------------------------
For those of you who have worked with SQL Server, Sybase, etc., this architecture won't be new. Basically, until 11g we had one instance per database (excluding RAC, for simplicity), so even if you had a very small application you needed a separate instance for that database; a separate instance means memory, processes and everything (but Oracle was designed to handle large & critical databases). So with changing requirements Oracle has changed its architecture so that you can have multiple databases with a single instance. To build a little perspective on CDB (CDB$ROOT) and PDB, think of this single instance as the CDB and the multiple databases as PDBs.














  Now it's time to understand what a CDB & PDB are, how memory is allocated to the different PDBs, how a CDB maintains its PDBs, where the REDO is, where the UNDO is, etc.

CDB & PDB:




A multitenant container database has three types of containers:

* The Oracle-supplied container is called the root container (CDB$ROOT) and consists of just Oracle metadata (and maybe a little bit of user data) and common users. Each CDB has one root.

* The seed container is named PDB$SEED, and there is one per CDB. The purpose of the seed container isn't to store user data; it's there as a template for you to create PDBs.

* The user container, which is actually called a pluggable database (or PDB), consists of user metadata and user data.
 
Each of these - the root, the seed, and the PDBs - is called a container, and each container has a unique container ID (CON_ID) and container name (CON_NAME). 

Each PDB also has a globally unique identifier (GUID).
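These identifiers are easy to see from the root container; a quick sketch (the output will of course differ per CDB):

```sql
-- All containers: root (CON_ID 1), seed (CON_ID 2), and the PDBs
SELECT con_id, name, open_mode FROM v$containers;

-- PDB-level details, including the globally unique identifier
SELECT con_id, name, guid FROM v$pdbs;
```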

The idea behind the concept of a container is to separate oracle metadata and user data by placing the two types of data into separate containers. 

That is, the SYSTEM and user data are separated. There's a SYSTEM tablespace in both the root container and the PDB containers; however, they contain different types of data.

The root container's SYSTEM tablespace consists of Oracle metadata, whereas the PDB container's SYSTEM tablespace contains just user metadata. The Oracle metadata isn't duplicated by storing it in each PDB; it's stored in a central location for use by all the PDBs that are part of that CDB.

The PDBs contain pointers to the Oracle metadata in the root container, thus allowing the PDBs to access these system objects without duplicating them.

A CDB has background processes and files similar to a normal non-CDB database. However, some of the processes and files are common to both a CDB and its member PDBs, and some aren't.

Common entities between CDB and PDBs:
-------------------------------------

* Background processes: There's a single set of background processes for the CDB. The PDBs don't have any background processes attached to them.

* Redo log files: These are common for the entire CDB, with Oracle annotating the redo data with the identity of the specific PDB associated with each change. There's one active online redo log for a single-instance CDB, or one active online redo log per instance of an Oracle RAC CDB. A CDB also has a single set of archived redo log files.

* Memory: You allocate memory only to the CDB, because that's the only instance you need in a multitenant database.

* Control files: These are common for the entire CDB and contain information that reflects the changes in each PDB.

* Oracle metadata: All Oracle-supplied packages and related objects are shared.

* Temporary tablespace: There's a common temporary tablespace for the entire CDB. Both the root and all the PDBs can use this temporary tablespace, which acts as the default TEMP tablespace. In addition, each PDB can also have a separate temporary tablespace for its local users.

* Undo tablespace: All PDBs use the same undo tablespace. There's one active undo tablespace for a single-instance CDB, or one active undo tablespace per instance of an Oracle RAC CDB.
 
A CDB contains a set of system data files for each container and a set of user-created data files for each PDB. A CDB also contains a CDB resource manager plan that allows resource management among the PDBs in that CDB.

Entities Exclusive for PDBs:

  • Tablespaces for the application's tables and indexes: These application tablespaces are specific to a PDB and aren't shared with other PDBs or with the central CDB. The data files that are part of these tablespaces constitute the primary physical difference between a CDB and a non-CDB. Each data file is associated with a specific container.
  • Local temporary tablespaces: Although the temporary tablespace of the CDB is common to all containers, each PDB can also create and use its own temporary tablespaces for its local users.
  • Local users and local roles: Local users can connect only to the PDB where they were created. A common user can connect to all the PDBs that are part of a CDB.
  • Local metadata: The local metadata is specific to each application running in a PDB and therefore isn't shared with other PDBs.
  • PDB resource manager plan: These plans allow resource management within a specific PDB. There is separate resource management at the CDB level.
The PDB containers have their own SYSTEM and SYSAUX tablespaces. However, they store only user metadata in the SYSTEM tablespace, not Oracle metadata. Data files are associated with a specific container. A permanent tablespace can be associated with only one container; when you create a tablespace in a container, it will always be associated with that container.
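To tie the pieces together, this is roughly how a new PDB is cloned from the PDB$SEED template (the paths, names, and password are illustrative, and assume the CDB does not use Oracle Managed Files):

```sql
CREATE PLUGGABLE DATABASE salespdb
  ADMIN USER salesadm IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/salespdb/');

-- A newly created PDB starts in MOUNTED mode; open it for use.
ALTER PLUGGABLE DATABASE salespdb OPEN;
```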

  

Monday, 28 December 2015

Making backup with RMAN


First connect to the target database before backing it up.

The database must be in mount or open state.

Open state, if the database is running in archivelog mode.

Mount state, if the database is running in noarchivelog mode.

* The backup command lets you make two types of RMAN backups:
   
    - Backup Sets.
    - Image Copies.

By default, all RMAN backups are in the form of backup sets.

Each backup set contains one or more backup pieces, which are files in an RMAN-specific format.
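The two backup types are selected with the AS clause of the BACKUP command; a minimal sketch:

```sql
RMAN> BACKUP DATABASE;                  -- backup set (the default)
RMAN> BACKUP AS BACKUPSET DATABASE;     -- backup set, stated explicitly
RMAN> BACKUP AS COPY DATABASE;          -- image copies of the datafiles
```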




RMAN Catalog database


RECOVERY CATALOG:

A recovery catalog is an optional database schema consisting of tables and views, and RMAN uses it to store its repository data.


By default, RMAN uses the target database's control file to store its metadata.
However, the recovery catalog provides larger storage capacity, thus enabling access to a longer history of backups, and it is an ideal solution when dealing with a large number of databases.
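Without a catalog, how long RMAN records survive in the control file is bounded by the CONTROL_FILE_RECORD_KEEP_TIME parameter (it defaults to 7 days):

```sql
SQL> SHOW PARAMETER control_file_record_keep_time

-- Keep reusable RMAN records in the control file for 30 days instead
SQL> ALTER SYSTEM SET control_file_record_keep_time = 30;
```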

We can create and store RMAN scripts in the recovery catalog.
Recovery-catalog-based stored scripts differ from operating system scripts that invoke RMAN, or RMAN command files, which are available only if the RMAN client can access the filesystem.

NOTE: Even when you choose to use a recovery catalog, backup information will continue to be stored in the control file as well by default. 
The recovery catalog contains information about both RMAN backups and the target database:

* RMAN Configuration settings.
* RMAN stored scripts that you create.
* Target database tablespace and datafile information.
* Information pertaining to datafile and archived redo log backup sets and backup pieces, as well as datafile and archived redo log copies.

--------------------------------------------------------------

We must create the recovery catalog manually. Since the recovery catalog instance is a regular Oracle database like any other, you must also regularly back up this critical database.

Sometimes you may have to export and import, or restore and recover, the recovery catalog.

Recovery Catalog Creation:

Below are the steps for creating and configuring a recovery catalog for our database, which might be production, test, or development:

Step 1: Create a new database for RMAN, the recovery catalog database 'CATDB', using DBCA.

Note: We can create a small database with minimal tablespace sizes, and we name the database CATDB as a naming convention to avoid confusion between our production and RMAN databases.

Step 2: Create a new tablespace in the new database (CATDB)

$ sqlplus /nolog

CONNECT SYS/welcome@catdb AS SYSDBA;

CREATE TABLESPACE rman
DATAFILE 'D:\rafi\catdb\rman\rman01.dbf' size 200m;


Step 3: Create the Recovery Catalog Owner in the new database (CATDB)

CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;

Step 4: Grant the necessary privileges to the schema owner

SQL> GRANT recovery_catalog_owner TO rman;

Here the role RECOVERY_CATALOG_OWNER provides the user with all privileges required to maintain and query the recovery catalog. The role also includes the CREATE SESSION privilege, so there is no need to grant it separately.

Step 5: Creating the Recovery Catalog

Connect to the database which will contain the catalog as the catalog owner.
For example, the 'rman' user is the catalog owner in our example.

On Linux(UNIX):
------------------
Run the 'CREATE CATALOG' command to create the catalog

$ rman target / catalog rman/rman@catdb

RMAN> CREATE CATALOG;

recovery catalog created

On Windows:
--------------------

C:\Windows\system32>rman target / catalog rman/rman@catdb

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Jun 21 00:44:02 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to recovery catalog database

RMAN> CREATE CATALOG;

recovery catalog created


Step 6: Registering a Database in the Recovery Catalog

Connect to the target database and recovery catalog database.

$ export ORACLE_SID=ORCL
$ rman target / catalog rman/rman@catdb

Recovery Manager: Release 11.2.0.1.0 - Production on Sun July 1 14:25:30 2007
Copyright (c) 1982, 2010, Oracle. All rights reserved.
connected to target database: ORCL (DBID=3677528376)
connected to recovery catalog database

windows:
-------------
C:\Windows\system32>set ORACLE_SID=ORCL

C:\Windows\system32> rman target / catalog rman/rman@catdb

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Jun 21 01:03:19 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1280115002)
connected to recovery catalog database

Step 7: Register and verify after connection

RMAN> REGISTER DATABASE;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete


Register the target Database using 'REGISTER DATABASE' command as seen above.

Make sure that the registration is successful by running REPORT SCHEMA:

RMAN> REPORT SCHEMA;


Report of database schema for database with db_unique_name ORCL

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- ---------- ------- -------------
1    690      SYSTEM     YES     D:\APP\RAFIALVI\ORADATA\ORCL\SYSTEM01.DBF
2    570      SYSAUX     NO      D:\APP\RAFIALVI\ORADATA\ORCL\SYSAUX01.DBF
3    55       UNDOTBS1   YES     D:\APP\RAFIALVI\ORADATA\ORCL\UNDOTBS01.DBF
4    6144     USERS      NO      D:\APP\RAFIALVI\ORADATA\ORCL\USERS01.DBF
5    100      EXAMPLE    NO      D:\APP\RAFIALVI\ORADATA\ORCL\EXAMPLE01.DBF

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- ---------- ----------- -------------
1    20       TEMP       32767       D:\APP\RAFIALVI\ORADATA\ORCL\TEMP01.DBF

Note:

In Windows:
---------------
For Connecting to recovery catalog Database 'CATDB' with rman for Target Database 'ORCL':
C:\Windows\system32>set ORACLE_SID=ORCL
C:\Windows\system32>rman target / CATALOG rman/rman@CATDB

In Linux:
-----------------
For Connecting to recovery catalog Database 'CATDB' with rman for Target Database 'ORCL':
$export ORACLE_SID=ORCL
$rman target / CATALOG rman/rman@CATDB

The rest of the steps are pretty much the same in both environments.


***************************************************************

How the output of LIST BACKUP SUMMARY differs depending on whether it comes from the target database control file or the catalog database:

From target database controlfile:

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
13      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T230953
14      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235112
15      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235119
16      B  0  A DISK        24-DEC-15       1       1       NO         TAG20151224T235230
17      B  0  A DISK        24-DEC-15       1       1       NO         TAG20151224T235230
18      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235457
19      B  A  A DISK        24-DEC-15       1       1       NO         TAG20151224T235502
20      B  A  A DISK        24-DEC-15       1       1       NO         TAG20151224T235502
21      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235503
22      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235505
23      B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235832
24      B  F  A DISK        25-DEC-15       1       1       NO         TAG20151225T000007


From catalog database repository:

RMAN> list backup summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
131     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T230953
132     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235112
133     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235119
134     B  0  A DISK        24-DEC-15       1       1       NO         TAG20151224T235230
135     B  0  A DISK        24-DEC-15       1       1       NO         TAG20151224T235230
136     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235457
137     B  A  A DISK        24-DEC-15       1       1       NO         TAG20151224T235502
138     B  A  A DISK        24-DEC-15       1       1       NO         TAG20151224T235502
139     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235503
140     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235505
141     B  F  A DISK        24-DEC-15       1       1       NO         TAG20151224T235832
142     B  F  A DISK        25-DEC-15       1       1       NO         TAG20151225T000007

RMAN> 

********************************************************************************

How to unregister a database from the catalog:

Connect to the target database with the catalog:

$rman target / catalog rmancat/rmancat@euro

RMAN> unregister database;

********************************************************************
Resync catalog from target database controlfile:

export ORACLE_SID=prod

$rman target / catalog rmancat/rmancat@catdb

RMAN>startup mount;

RMAN>resync catalog;

To update the recovery catalog with the current control file information, RMAN first creates a snapshot control file. It then compares the contents of the recovery catalog with the contents of the snapshot control file and updates the recovery catalog, adding missing information and modifying changed backup and schema records.

*******************************************************************

DROPPING THE RECOVERY CATALOG:

$rman catalog rmancat/rmancat@catdb

RMAN>drop catalog;

RMAN> drop catalog; -- issue the command a second time to confirm.

When you drop the base recovery catalog, you lose the backup information for all databases registered in it.

******************************************************************

Viewing backup information from the catalog DB:

Connect to the catalog database:

export ORACLE_SID=catdb

sqlplus "/as sysdba"

SQL> connect rmancat/rmancat@euro

SQL>

SQL> select dbid from rc_database where dbid='&dbid';
Enter value for dbid: 1322113438
old   1: select dbid from rc_database where dbid='&dbid'
new   1: select dbid from rc_database where dbid='1322113438'

      DBID
----------
1322113438

SQL> select db_key from rc_database where dbid='&dbid';
Enter value for dbid: 1322113438
old   1: select db_key from rc_database where dbid='&dbid'
new   1: select db_key from rc_database where dbid='1322113438'

    DB_KEY
----------
         1

SQL> 

Some important RC views:

RC_STORED_SCRIPT - Lists information about RMAN scripts stored in the recovery catalog.

RC_UNUSABLE_BACKUPFILE_DETAILS - Shows the unusable backup files recorded in the recovery catalog.

RC_RMAN_STATUS - Similar to the V$RMAN_STATUS view; shows the status of all RMAN operations. This view doesn't contain information about any operations that are currently executing.

RC_RMAN_CONFIGURATION - Provides information about persistent configuration settings.

RC_DATAFILE - Shows all datafiles registered in the recovery catalog.

RC_DATABASE - Shows the databases registered in the recovery catalog.

RC_ARCHIVED_LOG - Provides historical information on both archived and unarchived redo logs.
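As a hedged sketch of how these views fit together (the query below is illustrative, assuming the rmancat catalog schema used earlier on this page): the RC_ views join on the DB_KEY column, so we can list each registered database along with how many datafiles the catalog knows about.

```sql
-- Run as the catalog owner (rmancat in this page's examples).
-- Joins RC_DATABASE to RC_DATAFILE on DB_KEY to count datafiles per database.
SELECT d.name, d.dbid, COUNT(f.file#) AS datafiles
FROM   rc_database d
JOIN   rc_datafile f ON f.db_key = d.db_key
GROUP  BY d.name, d.dbid;
```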

********************************************************************

RMAN Preparation (IQ):

We can check whether all of the registered archived log files still exist; if any are missing, RMAN changes their status in the metadata to EXPIRED:

RMAN> crosscheck archivelog all;

[OR]  RMAN> change archivelog all crosscheck;

We can then delete the records of all expired archived logs by issuing the following command:

RMAN> delete expired archivelog all;

CMDFILE : Used to execute a .rcv command file when invoking RMAN from the operating system prompt, as in the following example:

$ rman target / cmdfile script.rcv

Or else we can execute the script as follows:

$rman target / @script.rcv

We can connect to all three types of databases (target, catalog, and auxiliary) in a single command from the operating system prompt. From the RMAN prompt, however, we have to connect to each one individually. For example:

When we are at the operating system prompt:

$rman target / catalog sys/passwd@catdb auxiliary sys/passwd@aux

When we are at the RMAN prompt:

RMAN> connect target /
RMAN> connect catalog sys/passwd@catdb
RMAN> connect auxiliary sys/passwd@aux

We can use a run block when we want to execute a set of RMAN commands as one unit. For example:

run
{
allocate channel ch1 device type disk format '/u02/backup/%U.bak';
backup database;
backup archivelog all;
delete archivelog all completed before 'sysdate-1';
release channel ch1;
}

We can't issue a SELECT command from within RMAN client.

We can, however, issue the CREATE command from within RMAN, and we can issue PL/SQL blocks from within the RMAN client as well.

We can issue STARTUP and SHUTDOWN commands for the database from within the RMAN client, as shown below.

RMAN> startup
RMAN> startup upgrade
RMAN> startup mount
RMAN> startup nomount

RMAN> Shut immediate
RMAN> Shut abort
RMAN> alter database open;
RMAN> alter database open resetlogs;

By using the CHECKSYNTAX argument we can check the syntax of the RMAN commands in a script without executing them:

$rman checksyntax @/tmp/scripts.cmd


NOTE: We can't use the CHECKSYNTAX argument after starting the RMAN client.

Identifying RMAN Server sessions:

RMAN performs all its backup and recovery tasks by using server sessions.

Number of sessions = C + N + 2

Here, C is the number of channels allocated, and N is the number of CONNECT options used in the ALLOCATE CHANNEL commands (if no CONNECT options are used, N has the value of 1, meaning the default connection to the target database).

If we connect to a catalog database as well as the target database, that accounts for two of the sessions.
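As a quick sketch of the arithmetic above (the function name is my own illustration, not an Oracle API), the session count can be computed like this:

```python
# Estimate the number of Oracle server sessions an RMAN job creates.
# Formula from the text: sessions = C + N + 2, where C is the number of
# allocated channels and N the number of CONNECT options used
# (N defaults to 1 when no CONNECT options are given).

def rman_sessions(channels: int, connect_options: int = 0) -> int:
    """Return the estimated RMAN server-session count."""
    n = connect_options if connect_options > 0 else 1  # default target connection
    return channels + n + 2

# Two disk channels, no explicit CONNECT options: 2 + 1 + 2 = 5 sessions.
print(rman_sessions(2))  # 5
```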

We can find out exactly who is currently running the RMAN client by issuing the following command.

$ps -ef|grep rman

oracle 9255 9012 0 mar18 pts/4 00:00:01 rman target /
oracle 6068 6032 0 mar18 pts/2 00:00:01 rman target /

Having a list of RMAN client sessions like this, you can pick the one in which you're interested. Say that you're interested in the session for process ID 9255.
You can then issue the following command, which will find all the child processes associated with that instance of the client:

$ps -ef|grep 9255

To identify the Oracle session ID of the RMAN session, look for the following type of message in the RMAN log:

channel ch2: sid=12 devtype=SBT_TAPE

Once we have identified the RMAN sessions, we can kill an unwanted session from SQL*Plus by using the ALTER SYSTEM KILL SESSION command.
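A hedged sketch of finding and killing an RMAN session from SQL*Plus (this assumes RMAN has populated V$SESSION.CLIENT_INFO for its channel sessions, which it normally does):

```sql
-- Find the SID and SERIAL# of RMAN channel sessions.
SELECT sid, serial#, client_info
FROM   v$session
WHERE  client_info LIKE '%rman%';

-- Kill the chosen session, substituting the values returned above.
ALTER SYSTEM KILL SESSION 'sid,serial#';
```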

How to drop a database from the RMAN prompt?

We can drop a database from the SQL*Plus prompt, but we can't drop the database together with its backups from SQL*Plus.

Steps to drop a database within RMAN prompt:

RMAN> startup mount;

RMAN> drop database;

Alternatively, to remove the database's backups as well, issue the following command:

RMAN> drop database including backups;

NOTE: RMAN ensures that all datafiles, redo log files and control files belonging to the database are removed from the operating system file system.

Optionally, we can specify that all the archived logs, backups and copies belonging to the database be removed as well.


CONFIGURING THE RMAN ENVIRONMENT:

Configuration settings are stored in the target database CONTROL FILE regardless of whether you are using a RECOVERY CATALOG database.

If you are using a recovery catalog database, it also contains the configurations for all databases registered in it.

If you are not using a catalog database, the configuration settings are stored only in the target databases' control files.

RMAN> show all;

Issuing the SHOW command queries the target database control file to retrieve the RMAN configuration settings.

Your database must be MOUNTED or OPEN when issuing SHOW commands, since the RMAN configuration settings are stored in the control file.

controlfile autobackup - Indicates whether the control file backup is automated or not.

Ex: configure controlfile autobackup on/off # default OFF

retention policy - Shows the policy governing which datafile and control file backups and copies RMAN marks as obsolete.

Note: By using the V$RMAN_CONFIGURATION view we can find the nondefault RMAN configuration settings.

SQL> select * from V$RMAN_CONFIGURATION;

* CONFIGURE RETENTION POLICY TO REDUNDANCY 1 - Means that RMAN retains only one set of backup copies.

* CONFIGURE BACKUP OPTIMIZATION OFF - Means that by default RMAN won't skip backing up files that are identical to files it has already backed up.

* CONFIGURE DEFAULT DEVICE TYPE TO DISK - Means that by default RMAN sends backup output to a disk drive.

* CONFIGURE CONTROLFILE AUTOBACKUP OFF - Means that by default RMAN doesn't automatically backup the control files when it performs a backup task.

* CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET - Means that the default RMAN backup type is a backup set (and not an image copy) and the degree of parallelism is 1.

Here, parallelism 1 means that RMAN allocates only one channel, so backup pieces are written one at a time rather than in parallel.

* CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1 - Means that by default RMAN doesn't make multiple copies of a backup file.

* CONFIGURE MAXSETSIZE TO UNLIMITED - Means that there's no limit on the size of a backup set by default.

* CONFIGURE ENCRYPTION FOR DATABASE OFF - Means that by default RMAN backups aren't encrypted.


If we want to reset a parameter to its default value, we can do so by using the following command:

RMAN> configure ... clear;

Example :

RMAN> configure backup optimization clear;

NOTE: 
Once you configure automatic control file backup, RMAN will automatically back up your target database control file, as well as the current binary server parameter file (SPFILE) if you're using one, when any of the following events occurs:

* Successful completion of either a BACKUP or COPY command.
* After a CREATE CATALOG command from the RMAN prompt is successfully completed.
* Any structural change to the database that modifies the contents of the control file.

For the first two events, after the recovery catalog (if you are using one) is successfully updated, RMAN will back up the control file into its own backup piece.

For the third event, any change to the physical structure of your database, even one made through SQL*Plus, will trigger a control file autobackup.

For example, the following actions will trigger an autobackup of the control file:

Adding a tablespace or datafile, dropping a datafile, placing a tablespace offline or online, adding an online redo log, and renaming a datafile.

When an automatic backup is triggered by a structural change, an Oracle server process (not an RMAN process) will automatically create the autobackup of your control file.

NOTE: Automatic control file backups that occur after a datafile backup can be created on disk or tape.
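For example, autobackup could be enabled and pointed at a disk location like this (the path is illustrative; the disk format string must include the %F substitution variable):

```
RMAN> configure controlfile autobackup on;
RMAN> configure controlfile autobackup format for device type disk to '/u02/backup/cf_%F';
```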

*****************************************************************
RMAN can recover a database if you have an autobackup of the control file, even if the current control file, the recovery catalog, and the server parameter file all turn out to be inaccessible.

The following are the steps RMAN takes in recovering the database:

* RMAN will first restore the SPFILE from the location to which it was automatically backed up.

* RMAN will start the instance with the help of the SPFILE it restored in the first step.

* RMAN will restore the control file from the same autobackup.

* Once the control file is mounted, RMAN will connect to the target database in NOCATALOG mode and use the RMAN repository available in the control file to restore the datafiles and then recover the database.

* At this point, you may re-create a new recovery catalog and register your target databases in it.

* Finally, RMAN will copy all the RMAN repository records from the target database control files to the new recovery catalog.
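The steps above can be sketched as the following RMAN session (the autobackup location is assumed to be the default; the DBID is the one recorded earlier on this page, and SET DBID is needed because no mounted control file is available to identify the database):

```
$ rman target /

RMAN> set dbid 1322113438;
RMAN> startup nomount;                      -- starts with a dummy parameter file if none exists
RMAN> restore spfile from autobackup;
RMAN> startup force nomount;                -- restart the instance with the restored SPFILE
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
```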


NOTE: Difference between the CATALOG and NOCATALOG arguments.

The default for the rman command is NOCATALOG, which is used when the catalog database is not available.

If a catalog database has been created, RMAN will store all its repository data in the catalog database schema.

Whenever we need to restore and recover a database, we connect the RMAN client with the CATALOG argument; RMAN then connects to the target database along with the catalog database.

********************************************************************
Snapshot control file:

We can configure the location of the snapshot control file by using the following command:

RMAN> configure snapshot controlfile name to '/backup/snct.ctl';

The default location of the snapshot control file is $ORACLE_HOME/dbs.
We can find the current location of the snapshot control file by issuing the following command:

RMAN> show snapshot controlfile name;

The snapshot control file is a temporary copy of the control file,
which RMAN uses to synchronize its information with the current control file.

A snapshot control file is created when resynchronizing with the recovery catalog and when making a backup of the current control file.


Basically, the snapshot control file is created whenever Oracle needs a "read-consistent version" of the control file.

*******************************************************************

Retention period of the RMAN Backup details (RMAN Repository in controlfile):


If you are using only the control file to store the RMAN repository, with no recovery catalog database, you may want to change the length of time for which the Oracle server retains historical data in the control file before overwriting it. To do this, follow the steps below.

We can set the retention period by using the CONTROL_FILE_RECORD_KEEP_TIME parameter:

SQL> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=15;

Now the control file retains 15 days of historical RMAN backup and recovery activity.

By default, the control file retains 7 days of details; the valid range is 0 to 365 days.
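The current setting can be confirmed with a quick query (shown here as a sketch; SHOW PARAMETER in SQL*Plus works too):

```sql
-- Returns the current CONTROL_FILE_RECORD_KEEP_TIME value in days (default 7).
SELECT value
FROM   v$parameter
WHERE  name = 'control_file_record_keep_time';
```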

If you set the retention time to zero, the reusable sections of the control file will not expand; when there are no more empty reusable records, the database starts overwriting the existing records as and when it needs them.
The control file contains two types of sections:

Reusable and nonreusable.

The CONTROL_FILE_RECORD_KEEP_TIME parameter applies only to the reusable sections of the control file.

If RMAN needs to add new backup and recovery-related records to the control file, any records that expired as per the parameter are overwritten.

If there are no expired records to be overwritten, the reusable section of the controlfile (and therefore the control file itself) expands.

The V$CONTROLFILE_RECORD_SECTION view provides information about the control file record sections.
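For example, the space used by the backup-related record sections can be checked like this:

```sql
-- Shows record usage per control file section; the BACKUP% sections hold RMAN history.
SELECT type, records_total, records_used
FROM   v$controlfile_record_section
WHERE  type LIKE 'BACKUP%';
```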

********************************************************************************

CONFIGURE BACKUP OPTIMIZATION ON;

With this setting enabled, RMAN skips backing up files that are identical to files it has already backed up.
*************************************************************

Creating a backup retention policy:

We create a retention policy to optimize storage space and the other expenses involved in retaining backups.

Backup retention policy based on a Recovery Window:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

After the specified number of days, RMAN will mark the backups as obsolete, making them eligible for deletion.

Backup retention policy based on redundancy:

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

By default, RMAN keeps a single copy of each backed-up datafile and control file. However, we can specify that RMAN retain more than a single copy by using the REDUNDANCY parameter of the CONFIGURE RETENTION POLICY command.

RMAN marks any backups that fail to meet the backup retention policy constraints as OBSOLETE BACKUPS.

We can find the obsolete files by issuing the following command.

RMAN> report obsolete;

and we can delete them by issuing the following command.

RMAN> delete obsolete;

We can also query the V$BACKUP_FILES views to check on obsolete backups.

************************************************************

Configure archived log deletion policy:

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO SBT;

******************************************************************