Wednesday, 23 December 2015

Oracle 12c New Features


1. New background processes - LREG (Listener Registration), SA (SGA Allocator), RM.


Listener Registration Process (LREG)

The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net listener (see "The Oracle Net Listener"). When an instance starts, LREG polls the listener to determine whether it is running. If the listener is running, then LREG passes it relevant parameters. If it is not running, then LREG periodically attempts to contact it.

Note:

In releases before Oracle Database 12c, PMON performed the listener registration.

2. In addition to SYSDBA, SYSOPER and SYSASM, we have new administrative privileges in Oracle 12.1.0:


SYSBACKUP for backup operations
SYSDG for Data Guard operations
SYSKM for key management
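
A minimal sketch of putting one of these to use (the user name dg_admin is hypothetical):

SQL> CREATE USER dg_admin IDENTIFIED BY password;

SQL> GRANT SYSDG TO dg_admin;   -- dg_admin can now perform Data Guard operations without SYSDBA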

3. Running SQL commands and executing PL/SQL procedures in RMAN without the SQL keyword, for example:


RMAN> select * from v$session;

4. No need to shut down the database to change the archive log mode.


5. Table-level restoration, i.e. object-level recovery from RMAN backups.


6. Oracle Database 12c introduces a new redo transport method called "Fast Sync" (SYNC NOAFFIRM): the standby acknowledges the primary as soon as it receives the redo in memory, without waiting for the redo to be written to the standby redo logs.


7. Data Guard Broker commands have been extended. The "validate database" command checks whether the database is ready for a role transition or not.


8. In 12c it is possible to cascade a standby database in real time, that is, the first standby database can send redo from its standby redo logs to the cascaded standby database.


9. Pluggable Database:


In Oracle 12c, in a pluggable database environment, we can create a single container database and plug multiple databases into it. All these databases then share the same Oracle server/background processes and memory, unlike previous versions where each database had its own background processes and shared memory. This helps with database consolidation and reduces the overhead of managing multiple disparate databases.


Container Database (CDB): holds the core data dictionary objects that come with an Oracle database installation.

Pluggable Database (PDB): holds the data dictionary objects and data related to the application. We can have many PDBs plugged into a single CDB.

10. New views in Oracle 12c Release 1


dba_pdbs
v$pdbs
cdb_data_files 
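
A quick sketch of using them from the root container (column names as documented for 12c):

SQL> SELECT name, open_mode FROM v$pdbs;

SQL> SELECT pdb_name, status FROM dba_pdbs;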

********************************************************************

http://allthingsoracle.com/oracle-database-12c-new-features-part-i/


https://nitishanandsrivastava.wordpress.com/oracle-12c-miscellaneous/12c-new-features-global-temporary-table-undo/

********************************************************************
1. Online rename and relocation of an active data file:

-------------------------------------------------------

Unlike in previous releases, renaming or migrating a data file in Oracle Database 12c R1 no longer requires a number of steps, i.e. putting the tablespace in READ ONLY mode followed by taking the data file offline. In 12c R1, a data file can be renamed or moved online simply using the ALTER DATABASE MOVE DATAFILE SQL statement. While the data file is being transferred, end users can perform queries, DML and DDL tasks. Additionally, data files can be migrated between storage types, e.g. from non-ASM to ASM and vice versa.


Rename a data file:


ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cora12c/test.dbf' TO '/u01/app/oracle/oradata/cora12c/test_01.dbf';


Migrate a data file from non-ASM to ASM:


ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '+DG_DATA';


Migrate a data file from one ASM disk group to another:


ALTER DATABASE MOVE DATAFILE '+DG_DATA/DBNAME/DATAFILE/users_01.dbf' TO '+DG_DATA_02';


Overwrite the data file with the same name, if it exists at the new location:


ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/users_01.dbf' REUSE;


Copy the file to a new location whilst retaining the old copy in the old location:


ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/users_01.dbf' KEEP;


You can monitor the progress while a data file is being moved by querying the v$session_longops dynamic view. Additionally, you can refer to the alert.log of the database, where Oracle writes the details of the action taking place.
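
A minimal monitoring sketch (the OPNAME filter is an assumption; check the actual text in your own v$session_longops output):

SQL> SELECT sid, serial#, opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) AS pct_done
     FROM   v$session_longops
     WHERE  opname LIKE '%data file move%'
     AND    totalwork > 0;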


By default, Oracle automatically deletes the old data file after moving it and prevents the user from overwriting
an existing file.

When we move a data file, Oracle first makes a copy of it.
Then, when the file is successfully copied, the pointers to the data file are updated and the old file is removed from the file system.
This is why the operation requires free space equal to the size of the file being copied, i.e. the file temporarily occupies twice its size.

We can use the KEEP clause to retain the old data file that was used to make the copy, and the REUSE clause
to overwrite an existing file at the destination. The online move works in all directions:

  * From file system to file system
  * From file system to ASM
  * From ASM to file system
  * From ASM to ASM

********************************************************************
2. Multiple indexes on the same column:

---------------------------------------

Before Oracle 12c, you couldn't create multiple indexes on the same column or set of columns in any form. For example, if you had an index on column {a} or columns {a,b}, you couldn't create another index on the same column or set of columns in the same order. In 12c, you can have multiple indexes on the same column or set of columns as long as the index type is different. However, only one type of index is usable/visible at a given time. In order to test the invisible indexes, you need to set optimizer_use_invisible_indexes=true.


SQL> CREATE INDEX emp_ind1 ON EMP(ENO,ENAME);

SQL> CREATE BITMAP INDEX emp_ind2 ON EMP(ENO,ENAME) INVISIBLE;
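
Only one of the two can be visible at a time; a minimal sketch of switching between them using the indexes created above:

SQL> ALTER INDEX emp_ind1 INVISIBLE;

SQL> ALTER INDEX emp_ind2 VISIBLE;

SQL> ALTER SESSION SET optimizer_use_invisible_indexes=TRUE;  -- lets the optimizer test indexes still marked invisible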

********************************************************************
3. DDL logging:

---------------

There was no direct option available to log DDL actions in previous releases. In 12c R1, you can now log DDL actions into xml and log files. This is very useful for finding out when a drop or create command was executed and by whom. The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature. The parameter can be set at the database or session level. When this parameter is enabled, all DDL commands are logged in an xml and a log file under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location. The xml file contains information such as the DDL command, IP address, timestamp etc. This helps to identify when a user or table was dropped or when a DDL statement was triggered.


To enable DDL logging


SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;


The following DDL statements are likely to be recorded in the xml/log file:


CREATE|ALTER|DROP|TRUNCATE TABLE

DROP USER
CREATE|ALTER|DROP PACKAGE|FUNCTION|VIEW|SYNONYM|SEQUENCE
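
A minimal sketch of the feature in action (ddl_demo is a hypothetical table; the log location is the diag path mentioned above):

SQL> ALTER SESSION SET ENABLE_DDL_LOGGING=TRUE;

SQL> CREATE TABLE ddl_demo (id NUMBER);   -- this statement is now recorded in the xml/log file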

********************************************************************
4. Backup specific user privilege:

----------------------------------

In 11g R2, the SYSASM privilege was introduced to perform ASM-specific operations. Similarly, the backup-and-recovery-specific privilege SYSBACKUP has been introduced in 12c to execute backup and recovery commands in Recovery Manager (RMAN). Therefore, you can create a local user in the database and grant it the SYSBACKUP privilege to perform any backup and recovery related tasks in RMAN, without granting the SYSDBA privilege.


$ ./rman target "username/password as SYSBACKUP"
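
A minimal sketch of creating such a local backup user (bkp_admin is a hypothetical name):

SQL> CREATE USER bkp_admin IDENTIFIED BY password;

SQL> GRANT SYSBACKUP TO bkp_admin;

$ ./rman target "bkp_admin/password as SYSBACKUP"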


********************************************************************
5. How to execute SQL statement in RMAN:

----------------------------------------

In 12c, you can execute any SQL and PL/SQL command directly from the RMAN prompt, without the SQL prefix that earlier releases required. For example:


RMAN> SELECT username,machine FROM v$session;

RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;

********************************************************************
6. Table or partition recovery in RMAN:
---------------------------------------
Oracle database backups are mainly categorized into two types: logical and physical. Each backup type has its own pros and cons. In previous releases, it was not feasible to restore a table or partition using existing physical backups; in order to restore a particular object, you had to have a logical backup. With 12c R1, you can recover a particular table or partition to a point-in-time or SCN from RMAN backups in the event of a table drop or truncate.

When a table or partition recovery is initiated via RMAN, the following actions are performed:


Required backup sets are identified to recover the table/partition

An auxiliary database is temporarily configured to a point-in-time in the process of recovering the table/partition
The required tables/partitions are then exported to a dump file using Data Pump
Optionally, you can import the tables/partitions into the source database
Optionally, the table can be renamed during recovery
An example of a table point-in-time recovery via RMAN (ensure you already have a full database backup from earlier):

RMAN> connect target "username/password as SYSBACKUP";

RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'
AUXILIARY DESTINATION '/u01/tablerecovery'
DATAPUMP DESTINATION '/u01/dpump'
DUMP FILE 'tablename.dmp'
NOTABLEIMPORT    -- this option avoids importing the table automatically.
REMAP TABLE 'username.tablename': 'username.new_table_name';    -- can rename table with this option.

Important notes:



Ensure sufficient free space is available under the /u01 filesystem for the auxiliary database and for the Data Pump file

A full database backup must exist, or at least backups of the SYSTEM-related tablespaces
The following limitations/restrictions apply to table/partition recovery in RMAN:

SYS user tables/partitions can't be recovered

Tables/partitions stored in the SYSAUX and SYSTEM tablespaces can't be recovered
Recovery of a table is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints

********************************************************************
7. Restricting PGA size:

-----------------------

Before Oracle 12c R1, there was no option to limit and control the PGA size. Although you could set a certain size with the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase/reduce the size of the PGA dynamically based on the workload and requirements. In 12c, you can set a hard limit on the PGA by enabling automatic PGA management and setting the new PGA_AGGREGATE_LIMIT parameter, thereby avoiding excessive PGA usage.


SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; --disables the hard limit

Important notes:


When the PGA limit is exceeded, Oracle automatically terminates/aborts the sessions or processes that are holding the most untunable PGA memory.
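
You can watch instance-wide PGA consumption against the limit with the documented v$pgastat view, for example:

SQL> SELECT name, ROUND(value/1024/1024) AS mb
     FROM   v$pgastat
     WHERE  name IN ('total PGA allocated', 'maximum PGA allocated');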


*******************************************************************
8. Data Pump enhancements:

--------------------------
This part of the section focuses on the important enhancements introduced in Data Pump. There are quite a few useful additions, such as converting a view into a table while exporting and turning off redo logging during import.

Turn off redo log generation

The new TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for objects during the course of an import. When the DISABLE_ARCHIVE_LOGGING value is specified with the TRANSFORM option, redo generation for the objects in scope is turned off for the entire import. This feature provides great relief when importing large tables, reducing excessive redo generation and resulting in quicker imports. This attribute applies to tables and indexes.

This example demonstrates this feature:


$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
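
The view-to-table conversion mentioned at the start of this section is driven by the new VIEWS_AS_TABLES export parameter; a minimal sketch (scott.emp_v is a hypothetical view):

$ ./expdp directory=dpump dumpfile=emp_v.dmp views_as_tables=scott.emp_v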


******************************************************************
9. Gathering statistics concurrently on multiple tables:

---------------------------------------------------------

In previous Oracle database releases, whenever you executed a DBMS_STATS procedure to gather table, index, schema or database level statistics, Oracle collected stats one table at a time. If the table was big enough, increasing the parallelism was recommended. With 12c R1, you can now collect stats on multiple tables, partitions and subpartitions concurrently. Before you start using it, you must set the following at the database level to enable the feature:


SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAINTENANCE_PLAN';

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');


SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
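
To confirm the preference, or to fall back to the serial behaviour, a quick sketch:

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM dual;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'OFF');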


******************************************************************

GLOBAL TEMPORARY TABLE UNDO:
----------------------------

We all know that operations such as INSERT, UPDATE, MERGE, and DELETE on global temporary tables generate redo. Just to clarify this for newbies:-

A Global Temporary Table stores its data in a Temporary Tablespace where the data is retained for the duration of either a single transaction or the lifetime of a session. Performing DML on a Global Temporary Table doesn’t generate redo, because the data is stored in a temporary tablespace, but it does generate undo which in turn will generate redo since the undo is stored in a permanent tablespace.

Now the question is: as this redo isn't required for recovery purposes, what's the point of writing all that stuff to the redo logs, i.e. why do changes in the temporary tablespace generate undo? If Oracle wrote a huge amount of code to log these changes there would be some reason, right? Any guesses?

The reason is that undo for global temporary tables is required because Oracle needs it in case an application issues a rollback, and to provide a read-consistent image. For situations in which a developer inserts some information into a global temporary table and then issues a SELECT statement against it, followed by an UPDATE or a DELETE, the rules of read consistency state that the SELECT statement cannot see the effects of the UPDATE or DELETE. To make that possible, the database needs that undo.

So a modification of a global temporary table needs to generate undo, and the undo tablespace must be protected by redo.

From 12c onwards, temporary undo can be stored in the temporary tablespace and undo for permanent objects can be stored in the undo tablespace. What this effectively means is that operations on temporary tables will no longer generate redo. If you have large batch operations that utilize global temporary tables, you may well discover that the amount of redo you generate decreases by a large amount. Furthermore, you’ll be generating less undo in your undo tablespace. And that means you’ll be able to support a longer undo_retention time with a smaller undo tablespace.

Now that we understand the benefits of having separate undo for changes to global temporary tables, let's dig in to see how we can use this feature:-

How undo is generated in Oracle Database 12c for global temporary tables is controlled by a new init.ora parameter: temp_undo_enabled. It has two settings, TRUE and FALSE. By default, this parameter is set to FALSE and undo is generated in the same fashion as in the past. For example:-
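
(The demo below assumes a global temporary table named gtt; a minimal setup sketch:)

SQL> create global temporary table gtt
     on commit preserve rows
     as select * from all_objects where 1 = 0;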

SQL> alter session set temp_undo_enabled = false;

Session altered.

SQL> insert into gtt select * from all_objects;
87310 rows created.

Statistics
----------------------------------------------------------

     566304  redo size


SQL> update gtt set object_name = lower(object_name);
87310 rows updated.

Statistics
----------------------------------------------------------
…
    8243680  redo size

As you can see, the INSERT generates about half a megabyte of redo (566,304 bytes) while the UPDATE generates upwards of 8 MB of redo (8,243,680 bytes).

Now if I enable temporary undo:-

SQL> alter session set temp_undo_enabled = true;

Session altered.

SQL> insert into gtt select * from all_objects;

87310 rows created.

Statistics
----------------------------------------------------------

        280  redo size


SQL> update gtt set object_name = lower(object_name);
87310 rows updated.
Statistics
----------------------------------------------------------

          0  redo size

The redo generated is either trivial or nonexistent.

Hence you can see that this feature can bring significant improvements, especially in the case of UPDATEs and DELETEs.

*******************************************************************
