Wednesday, December 30, 2009

Shared Pool

The Shared Pool contains three important memory structures:


a) Library Cache


b) Data Dictionary Cache


c) Control Structures


The Shared Pool can be sized using the SHARED_POOL_SIZE initialization parameter, but it cannot be greater than SGA_MAX_SIZE.


a) Library Cache :



Stores information about the most recently used SQL and PL/SQL statements


The Library Cache is divided into namespaces (or libraries). These can be listed by querying V$LIBRARYCACHE:


SQL> select namespace from v$librarycache;

NAMESPACE
---------------
SQL AREA
TABLE/PROCEDURE
BODY
TRIGGER
INDEX
CLUSTER
OBJECT
PIPE
JAVA SOURCE
JAVA RESOURCE
JAVA DATA

11 rows selected.


The Library Cache consists of two structures:


1) Shared SQL Area:


    Stores the execution plan and parse tree for each SQL statement that is run, so that later executions can take advantage of the already prepared execution plan. To reuse an execution plan in the Shared SQL Area, the statement text, schema and bind variables must be exactly the same.
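
As a sketch (the emp table and its columns are hypothetical), the first two statements below would share a single entry in the Shared SQL Area, while the third is parsed as a new statement because its text differs in case:

SQL> select ename from emp where empno=7839;  -- parsed; plan stored in the Shared SQL Area
SQL> select ename from emp where empno=7839;  -- identical text: the stored plan is reused
SQL> SELECT ename FROM emp WHERE empno=7839;  -- different case: treated as a new statement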


2) Shared PL/SQL Area:


 Stores the most recently executed PL/SQL statements.


b) Data Dictionary Cache :



The dictionary cache stores “metadata” (data about your tables and indexes) and it’s also known as the row cache. It is used to cache data dictionary related information in RAM for quick access. The dictionary cache is like the buffer cache, except it’s for Oracle data dictionary information instead of user information.
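
The row cache can be monitored through the V$ROWCACHE view; as a sketch, comparing GETS with GETMISSES per dictionary item gives an idea of how well the cache is performing:

SQL> select parameter,gets,getmisses from v$rowcache;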


c) Control Structures - Common control structure information, for example, lock information

Monday, December 21, 2009

/etc/passwd and /etc/shadow file in UNIX

With shadow passwords, the ``/etc/passwd'' file contains account information, and looks like this:

devenv:x:571:571:verma deven:/home/devenv:/bin/bash

Each field in a passwd entry is separated with ":" colon characters, and are as follows:

* Username, up to 8 characters. Case-sensitive, usually all lowercase

* An "x" in the password field. Passwords are stored in the ``/etc/shadow'' file.

* Numeric user id. This is assigned by the ``adduser'' script. Unix uses this field, plus the following group field, to identify which files belong to the user.

* Numeric group id. Red Hat uses group ids in a fairly unique manner for enhanced file security. Usually the group id will match the user id.

* Full name of user. I'm not sure what the maximum length for this field is, but try to keep it reasonable (under 30 characters).

* User's home directory. Usually /home/username (eg. /home/smithj). All user's personal files, web pages, mail forwarding, etc. will be stored here.

* User's "shell account". Often set to ``/bin/bash'' to provide access to the bash shell (my personal favorite shell).
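
As a minimal sketch, the sample entry above can be split into its seven fields with any scripting language; here in Python:

```python
# Split the /etc/passwd sample entry shown above into its seven fields.
line = "devenv:x:571:571:verma deven:/home/devenv:/bin/bash"

username, password, uid, gid, full_name, home_dir, shell = line.split(":")

print(username)  # devenv
print(uid, gid)  # 571 571
print(shell)     # /bin/bash
```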


The ``/etc/shadow'' file contains password and account expiration information for users, and looks like this:

devenv:Ep6mckrOLChF.:10063:0:99999:7:::

As with the passwd file, each field in the shadow file is also separated with ":" colon characters, and are as follows:

* Username, up to 8 characters. Case-sensitive, usually all lowercase. A direct match to the username in the /etc/passwd file.

* Password, stored as a 13-character encrypted string. A blank entry (eg. ::) indicates a password is not required to log in (usually a bad idea), and a ``*'' entry (eg. :*:) indicates the account has been disabled.

* The number of days (since January 1, 1970) since the password was last changed.

* The number of days before password may be changed (0 indicates it may be changed at any time)

* The number of days after which the password must be changed (99999 indicates the user can keep his or her password unchanged for many, many years)

* The number of days to warn user of an expiring password (7 for a full week)

* The number of days after password expires that account is disabled

* The number of days since January 1, 1970 that an account has been disabled

* A reserved field for possible future use
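
The day-count fields can be decoded with ordinary date arithmetic; a minimal sketch in Python, using the sample shadow entry shown above:

```python
from datetime import date, timedelta

# Decode the numeric fields of the sample /etc/shadow entry shown above.
entry = "devenv:Ep6mckrOLChF.:10063:0:99999:7:::"
fields = entry.split(":")

# Field 3 is the number of days since January 1, 1970 (the Unix epoch)
# on which the password was last changed.
last_changed = date(1970, 1, 1) + timedelta(days=int(fields[2]))

print(last_changed)    # 1997-07-21
print(int(fields[4]))  # 99999: the password effectively never expires
```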

Monday, December 14, 2009

ARCHIVELOG AND NOARCHIVELOG MODE

All transactions are recorded in the online redo log file.
If the database is configured for NOARCHIVELOG mode, no redo history is saved to archived log files; recovery operations are limited and a loss of transaction work may occur.

Configure your database in ARCHIVELOG mode, so that a history of redo information is maintained in archived files. By default the database is in NOARCHIVELOG mode.

Changing the Archiving Mode:

Alter database [archivelog | noarchivelog]

To change the archiving mode, follow these steps:
1) shutdown immediate
2) startup mount
3) alter database archivelog;
4) alter database open;
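
After opening the database, the change can be verified; as a sketch (output will vary by database):

SQL> archive log list;
SQL> select log_mode from v$database;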

After the database is set in ARCHIVELOG mode you can choose the mode of archiving, i.e. automatic or manual archiving. In automatic archiving, the ARCn processes are enabled and copy the redo log files as they are filled. In manual archiving you have to use SQL*Plus or Oracle Enterprise Manager to copy the files.

You can specify up to 10 ARCn processes with the

LOG_ARCHIVE_MAX_PROCESSES parameter

Start or Stop Additional Processes

Alter system set log_archive_max_processes=4;

Enabling Automatic Archiving at Instance Startup

LOG_ARCHIVE_START= [true | false]

Enabling Automatic Archiving After Instance Startup

Alter system archive log start;

Disabling Automatic Archiving

Alter system archive log stop;

Manually Archiving Online Redo Log Files

Alter system archive log current;

Archive Log Destination

Use LOG_ARCHIVE_DEST_n to specify up to ten archival destinations.

Additional Options

Log_archive_dest_1=”LOCATION=/archive/ MANDATORY REOPEN”
Log_archive_dest_2=”SERVICE=standby_mydb MANDATORY REOPEN=600”
Log_archive_dest_3=”LOCATION =/archive001/ OPTIONAL”

MANDATORY implies that archiving to this destination must complete successfully before an online redo log file can be overwritten.

OPTIONAL implies that an online redo log file can be reused even if it has not been successfully archived to this destination. This is default.

REOPEN defines whether archiving to a destination must be reattempted in case of failure. The default is 300 seconds. If REOPEN is not specified, errors at an optional destination are recorded and ignored.

Log_archive_dest_state_1= DEFER

To temporarily stop archiving to a mandatory location when an error has occurred, the state of that destination can be set to DEFER.
To enable it

Log_archive_dest_state_1= ENABLE
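
The status of each destination can be checked through the V$ARCHIVE_DEST view; for example:

SQL> select dest_name,status,destination from v$archive_dest;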

Monday, December 7, 2009

Creating New User in Oracle 9i

example:

CREATE USER user1
IDENTIFIED BY pass
DEFAULT TABLESPACE mytab
TEMPORARY TABLESPACE temp1
QUOTA 10M ON mytab
QUOTA 5M ON temp1
PASSWORD EXPIRE;


where:

user1: name of the user
IDENTIFIED BY pass: pass is set as the password
DEFAULT TABLESPACE: sets the tablespace where segments created by the user are stored
QUOTA: defines the maximum space allowed for objects owned by the user in the tablespace
PASSWORD EXPIRE: forces the user to reset the password when the user logs on to the database
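
Note that a newly created user cannot connect until it is granted the CREATE SESSION privilege; as a sketch:

GRANT CREATE SESSION TO user1;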

Changing User Quota

ALTER USER user1
QUOTA 0 ON mytab;


Dropping a User

DROP USER user1;

Use the CASCADE clause to drop all objects in the schema if the schema contains objects.

DROP USER user1 CASCADE;


User Information

To obtain user information, query the following views:

A) DBA_USERS
B) DBA_TS_QUOTAS

Saturday, December 5, 2009

Creating Views

View
A view is a logical table based on a table or another view. A view contains no data of its own but is like a window through which data from tables can be viewed or changed. The tables on which a view is based are called base tables.


Advantages of Views
1) To restrict data access
2) To make complex queries easy
3) To provide data independence
4) To present different views of the same data

Creating a View
CREATE [OR REPLACE] [FORCE|NOFORCE] VIEW view
[(alias[, alias...])]
AS subquery
[WITH CHECK OPTION [CONSTRAINT constraint]]
[WITH READ ONLY [CONSTRAINT constraint]];

OR REPLACE re-creates the view if it already exists
FORCE creates the view regardless of whether or not the base tables exist
NOFORCE creates the view only if the base tables exist (this is the default)
view is the name of the view
WITH CHECK OPTION specifies that only rows accessible to the view can be inserted or updated
WITH READ ONLY ensures that no DML operation can be performed on the view

Creating a view:
create view myview
as select emp_id,emp_name,emp_sal
from employee
where dept_id=10;


Retrieving Data from a View

select * from myview

Data Access Using Views

When you access data through a view, the Oracle Server performs the following operations:

1) It retrieves the view definition from the data dictionary table USER_VIEWS.
2) It checks access privileges for the view base table.

Rules for Performing DML Operations on a View
1) You can perform DML operations on simple views.

2) You cannot remove a row if the view contains the following:
a) GROUP functions
b) A GROUP BY clause
c) The DISTINCT keyword
d) The pseudocolumn ROWNUM keyword

3) You cannot modify data in a view if it contains:
a) GROUP functions
b) A GROUP BY clause
c) The DISTINCT keyword
d) The pseudocolumn ROWNUM keyword
e) Columns defined by expressions

4) You cannot add data through a view if the view includes:
a) GROUP functions
b) A GROUP BY clause
c) The DISTINCT keyword
d) The pseudocolumn ROWNUM keyword
e) Columns defined by expressions
f) NOT NULL columns in the base tables that are not selected by the view.

5) WITH CHECK Option

The WITH CHECK OPTION clause specifies that INSERTs and UPDATEs performed through the view cannot create rows which the view cannot select; it therefore allows integrity constraints and data validation checks to be enforced on data being inserted or updated.
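
As a sketch extending the myview example above (the added dept_id column and the emp_id value are illustrative), an update that would move a row outside the view's WHERE condition fails:

create or replace view myview
as select emp_id,emp_name,emp_sal,dept_id
from employee
where dept_id=10
WITH CHECK OPTION CONSTRAINT myview_ck;

update myview
set dept_id=20
where emp_id=101;
-- fails with ORA-01402: view WITH CHECK OPTION where-clause violation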

6) Denying DML Operations
You can ensure that no DML operations occur on your view by creating it with the WITH READ ONLY option.

7) Removing a View

DROP VIEW view;

DROP VIEW myview;

Monday, November 30, 2009

GROUP BY clause

You can use the GROUP BY clause to divide the rows in a table into groups

syntax:

select column,group_function(column)

from table

[where condition]

[GROUP BY group_by_expression]

[ORDER BY column];

Guidelines:


* Using a WHERE clause, you can exclude rows before dividing them into groups.

* You cannot use a column alias in the GROUP BY clause.

* By default, rows are sorted in ascending order of the columns included in the GROUP BY list. You can override this by using the ORDER BY clause.


example:

select department_id,AVG(salary)

from employees

group by department_id;


* The GROUP BY column does not have to be in the select clause


example:

select AVG(salary)

from employees

group by department_id;

* Grouping by more than one column

* Sometimes we need to see results for groups within groups.

For example, if you want to see the total salary being paid to each job title within each department:


select department_id,job_id,SUM(salary)

from employees

GROUP BY department_id,job_id;

Illegal Queries Using Group Functions


* Any column or expression in the select list that is not an aggregate function must be in the GROUP BY clause.


select department_id,COUNT(last_name)

from employees;

select department_id,COUNT(last_name)
                     *
ERROR at line 1:
ORA-00937: not a single-group group function

* Whenever you use a mixture of individual items(DEPARTMENT_ID) and group functions(COUNT) in the same SELECT statement,you must include a GROUP BY clause that specifies the individual items(in this case, DEPARTMENT_ID).


select department_id,count(last_name)

from employees

group by department_id;



* You cannot use group functions in the WHERE clause


* Use the HAVING clause to restrict groups


select department_id,avg(salary)
from employees
group by department_id
having avg(salary)>8000;

Thursday, November 26, 2009

Oracle Proprietary Joins Part I

Use a join to query data from more than one table. Rows in one table can be joined to rows in another table according to common values existing in corresponding columns; these are usually primary and foreign key columns.


select table1.column,table2.column


from table1, table2


where table1.column1 = table2.column2;


1)Equijoin


Equijoins are also called simple joins or inner joins.


for example there are two tables



  1. employees with columns employee_id,first_name,last_name,department_id

  2. departments with columns department_id,department_name,location_id

retrieving records with Equijoin


select e.employee_id,e.last_name,e.department_id,d.department_id,d.location_id


from employees e, departments d


where e.department_id=d.department_id;



  1. The SELECT clause specifies the column names to retrieve:

  • employee id, last name and department id, which are columns in the employees table

  • department id and location id, which are columns in the departments table

  2. The FROM clause specifies the two tables that the database must access:

  • employees table

  • departments table

  3. The WHERE clause specifies how the tables are to be joined:

e.department_id=d.department_id

Because the department_id column is common to both tables, it must be prefixed by the table alias to avoid ambiguity.

Monday, November 16, 2009

Undo Tablespace in Oracle 9i

Oracle Database keeps records of the actions of transactions before they are committed, and Oracle needs this information to roll back or undo changes to the database. These records are called rollback or undo records. They are used to roll back transactions when a ROLLBACK statement is issued, during recovery of a database, or to provide a read-consistent view of data.

Starting with 9i, the rollback segment method is referred to as Manual Undo Management Mode and the new Undo Tablespace method as Automatic Undo Management Mode. Although both rollback segments and Undo Tablespaces are supported in Oracle 9i, the two modes cannot be used at the same time. The System rollback segment exists in both modes. Since we are all familiar with the manual mode, let's look at the features of Automatic Undo Management (Undo Tablespaces).

Init.ora Parameters for Automatic Undo Management

UNDO_MANAGEMENT : This parameter sets the mode in which oracle manages the Undo Information.

The default value for this parameter is MANUAL so that all your old init.ora files can be used without any changes. To set the database in an automated mode, set this value to AUTO.

( UNDO_MANAGEMENT = AUTO)

UNDO_TABLESPACE : This parameter defines the tablespace that is to be used as the Undo Tablespace. If no value is specified, Oracle grabs the first available Undo Tablespace, or if none is present, Oracle will use the System rollback segment to start up. This value is dynamic and can be changed online ( UNDO_TABLESPACE = undo_tbs1 )

UNDO_RETENTION : This value specifies the amount of time undo information is kept in the tablespace. It applies to both committed and uncommitted transactions, since the Flashback Query feature introduced in Oracle needs this information to create a read-consistent copy of the data in the past. The default value is 900 seconds ( UNDO_RETENTION = 500 )

UNDO_SUPPRESS_ERRORS : This is a good thing to know about in case your code has ALTER TRANSACTION commands that perform manual undo management operations. Set this to TRUE to suppress the errors generated when manual management SQL operations are issued in automated management mode.

Creating and Managing Undo Tablespaces :

Undo tablespaces use syntax that is similar to regular tablespaces except that they use the keyword UNDO. These tablespaces can be created during the database creation time or can be added to an existing database using the create UNDO Tablespace command

Create DATABASE uday controlfile ...........
UNDO Tablespace undo_tbs0 datafile '/vol1/data/uday/undotbs0.dbf' ...

Create UNDO Tablespace undo_tbs1
datafile '/vol1/data/uday/undotbs1.dbf' size 25m autoextend on;

All operations like renaming a data file, adding a data file, taking a file online or offline, or the Begin Backup / End Backup switch can be made using the regular ALTER TABLESPACE command. All other operations are managed by Oracle in the automated management mode.

Monitoring :

V$UNDOSTAT : This view contains statistics for monitoring the effects of transaction execution on undo space in the current instance. These are available for space usage, transaction concurrency and length of query operations. This view contains information that spans a 24-hour period, and each row in this view contains data for a 10-minute interval specified by the BEGIN_TIME and END_TIME columns.
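
For example, a sketch that shows the undo blocks consumed and transactions executed in each 10-minute interval:

SQL> select begin_time,end_time,undoblks,txncount from v$undostat;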


Reference: DBAsupport.com


Oracle Managed Files(OMF) oracle 9i

Oracle Managed Files (OMF), a new feature introduced in Oracle9i, enables the Oracle Server to automatically create and delete database files using standard operating system interfaces. This
feature hugely simplifies the job of the DBA, as the DBA doesn't need to interact with the underlying operating system to create and delete files.

For OMF to work, the DBA can specify a single directory to hold data and temp files, and up to 5 directories for multiplexed redo log and control files. All files will be created with unique names, 100 MB in size, with properties AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED. The creation and removal of trace files, audit files, alert files, and core files are not affected by this feature.

Advantages:

  • DBAs don't need to specify file names, locations and sizes when creating a tablespace or database
  • Automatic removal of files when a tablespace or log file is dropped
  • Increased script portability as OS specific file names do not need to be hard coded
  • Simplified creation of test and development systems

Disadvantages:

  • Can only be used with file systems, not with RAW Volumes
  • Generated file names and locations might not be according to the site's naming standards
  • Limited scope for file placement and I/O tuning may impact performance (although locations can be altered dynamically)

Enabling Oracle Managed Files:

Set the following initialization parameters in your INIT.ORA or SPFILE and restart the database, or set them on-line using the "ALTER SYSTEM SET parm = value" command:

DB_CREATE_FILE_DEST =

specifies where data files and temp files should be created. On-line redo logs and control files will also be created here if db_create_online_log_dest_n is not set.

DB_CREATE_ONLINE_LOG_DEST_n =

specifies where on-line redo logs and control files should be created. You can specify up to 5 locations (n=1..5).

STANDBY_FILE_MANAGEMENT =

can be set to MANUAL or AUTO. If AUTO is specified, Oracle will automatically create files and drop tablespaces on the standby database as the redo log stream is applied to the standby database.

Naming Standard Used for Generated File Names:

The Oracle Server will generate file names according to the following rules.

  • OMF file names start with "o1_mf_"; and
  • OMF file names will have one of the following extensions ".dbf", ".tmp", ".log" or ".ctl"; and
  • OMF file names will have a "_" character immediately preceding the extension.

File name formats for different types of files:

  • Format for Data Files: o1_mf_%t_%u_.dbf (Example: o1_mf_system_wo94n2xi_.dbf)
  • Format for Temp Files: o1_mf_%t_%u_.tmp (Example: o1_mf_tempts_4n2xwo9i_.tmp)
  • Format for Control Files: o1_mf_%u_.ctl (Example: o1_mf_94n2xiwo_.ctl)
  • Format for Log Files: o1_mf_%g_%u_.log (Example: o1_mf_1_dygh80r6_.log)

Legend:

  • %u = An 8-character string used to guarantee uniqueness
  • %g = Online Redo Log File Group number
  • %t = Tablespace Name

To check what directories and file names are used by a given database, execute the following SQL statements:

SQL> select * from sys.v_$controlfile;
SQL> select * from sys.v_$logfile;
SQL> select * from sys.v_$datafile;
SQL> select * from sys.v_$tempfile;

Managing Data and Temp Files using OMF:

Data and TEMP files will be created in the directory specified by the DB_CREATE_FILE_DEST parameter. Data files will have a *.dbf extension and TEMP files a *.tmp extension. Some examples:

-- Create a new tablespace with 1 file, 100M in size and autoextend on
SQL> create tablespace ts1;

-- Specify non-default file size
SQL> create tablespace ts2 datafile size 400m autoextend off;

-- Limit file size to 800M
SQL> create tablespace ts3 datafile autoextend on maxsize 800m;

-- Create tablespace with 3 files - 60M
SQL> create tablespace ts4 datafile size 20m, size 20m, size 20m;
-- Create TEMPORARY tablespace with 1 x TEMP file
SQL> create temporary tablespace temp_ts;

-- Add second TEMP file to tablespace
SQL> alter tablespace temp_ts add tempfile;
-- Create UNDO tablespace
SQL> create undo tablespace undo_ts;

Look at this CREATE DATABASE example:

sqlplus /nolog
SQL> CONNECT SYS/password AS SYSDBA
SQL> startup nomount pfile=init.ora;
SQL> create database ORCL
datafile size 400m -- size of datafile for the SYSTEM tablespace
default temporary tablespace temp_ts tempfile size 10m
undo tablespace undo_ts datafile size 10m;

Managing Redo Log Files using OMF:

One can specify up to 5 directories for multiplexed redo log files. See parameter DB_CREATE_ONLINE_LOG_DEST_n. Example:

SQL> alter database add logfile;

Managing Control Files using OMF:

Oracle will create a control file in each of the log file directories specified (DB_CREATE_ONLINE_LOG_DEST_n). If none was specified, Oracle will create a single control file in the data file destination (DB_CREATE_FILE_DEST).

When using SPFILEs, Oracle will even set the CONTROL_FILES= parameter. One needs to set this parameter manually when using a normal INIT.ORA parameter file.

Removing files:

When removing a tablespace or log file, Oracle will automatically delete the associated OMF operating system files. All non-OMF files will remain on disk and must be manually deleted by the DBA. For example, these operations will delete the related objects, with their OMF data files, from the operating system:

SQL> drop tablespace tbs_1;

-- TEMP files are dropped by file name (the name below is illustrative):
SQL> alter database tempfile '/oradata/o1_mf_temp_ts_4n2xwo9i_.tmp' drop;

SQL> alter database drop logfile member '/oradata/o1_mf_3_wo94n2xi_.log';
SQL> alter database drop logfile group 3;

Note, one can also request Oracle to delete non-OMF files when specifying the "including contents and datafiles" option. Look at this example:

SQL> drop tablespace NON_OMF_TS including contents and datafiles;

Convert OMF-files to NON-OMF Files and vice versa:

One can switch data files between OMF and non-OMF mode by renaming them. Look at this example converting a non-OMF file to OMF:

define f=/oradata/x.dbf
define omf_f=/oradata/o1_mf_test_00000000_.dbf

-- Create a new tablespace using non-OMF files...
create tablespace test datafile '&&f' SIZE 1M;

-- Rename non-OMF file to OMF standard name...
alter database datafile '&&f' offline;
! mv &&f &&omf_f
alter database rename file '&&f' to '&&omf_f';
recover datafile '&&omf_f';
alter database datafile '&&omf_f' online;

-- Drop tablespace - Oracle will delete the tablespace's files!!!
drop tablespace test;

OMF Demonstration 1: Create a tablespace and add files to it

This example will create a new tablespace called TEST, and add a second datafile in another directory:

alter system set db_create_file_dest='/oradata1';
create tablespace test;
alter system set db_create_file_dest='/oradata2';
alter tablespace test add datafile;

OMF Demonstration 2: Create a new database

This example will create a tiny Oracle database for testing purposes. Before starting, set the ORACLE_SID environment variable to the database name ("test" in the example below). After that, create an INIT.ORA file with a single entry: "DB_NAME=test".

connect / as sysdba

-- Startup instance
startup nomount pfile=./init.ora

-- Create SPFILE from the INIT.ORA file...
create spfile from pfile='./init.ora';

-- Restart the Database with new SPFILE...
startup nomount force

-- Set DB location in SPFILE and MEMORY...
alter system set DB_CREATE_FILE_DEST="/oradata";

-- Create new database, SPFILE will be updated with CONTROL_FILES...
create database;



Reference: Oracle9i Release 2 (9.2) Database Administrator's Guide, Chapter 3: Using Oracle-Managed Files