Oracle9i RAC on HP-UX (downloaded from METALINK)


Oracle9i Real Application Clusters (RAC) on HP-UX

Authors:        Rebecca Kühn, Rainer Marekwia, HP/Oracle Cooperative Technology Center

Co-Authors:  Sandy Gruver, HP Alliance Team US; Troy Anthony, Oracle RAC Development

This module focuses on what Oracle9i Real Application Clusters (RAC) is and how it can be properly configured on HP-UX to tolerate failures with minimal downtime. Oracle9i Real Application Clusters is an important Oracle9i feature that addresses high availability and scalability issues.

Upon completion of this module, you should be able to:

Understand what Oracle9i Real Application Clusters is and how it can be used

Understand the hardware requirements

Understand how to configure the HP cluster and create the raw devices

Examine and verify the cluster configuration

Understand the creation of the Oracle9i Real Application Clusters database using DBCA

2. Overview: What is Oracle9i Real Application Clusters?

Oracle9i Real Application Clusters is a computing environment that harnesses the processing power of multiple, interconnected computers. Oracle9i Real Application Clusters software, together with a collection of hardware known as a "cluster," unites the processing power of each component into a single, robust computing environment. A cluster generally comprises two or more computers, or "nodes."

In Oracle9i Real Application Clusters (RAC) environments, all nodes concurrently execute transactions against the same database. Oracle9i Real Application Clusters coordinates each node's access to the shared data to provide consistency and integrity.

Oracle9i Real Application Clusters serves as an important component of robust high availability solutions. A properly configured Oracle9i Real Application Clusters environment can tolerate failures with minimal downtime.

Oracle9i Real Application Clusters is also applicable for many other system types. For example, data warehousing applications accessing read-only data are prime candidates for Oracle9i Real Application Clusters. In addition, Oracle9i Real Application Clusters successfully manages increasing numbers of online transaction processing systems as well as hybrid systems that combine the characteristics of both read-only and read/write applications.

Harnessing the power of multiple nodes offers obvious advantages. If you divide a large task into sub-tasks and distribute the sub-tasks among multiple nodes, you can complete the task faster than if only one node did the work. This type of parallel processing is clearly more efficient than sequential processing. It also provides increased performance for processing larger workloads and for accommodating growing user populations. Oracle9i Real Application Clusters can effectively scale your applications to meet increasing data processing demands. As you add resources, Oracle9i Real Application Clusters can exploit them and extend their processing powers beyond the limits of the individual components.

From a functional perspective, RAC is equivalent to single-instance Oracle. What the RAC environment offers is significant improvements in terms of availability, scalability and reliability.

In recent years, the requirement for highly available systems able to scale on demand has fostered the development of more and more robust cluster solutions. Prior to Oracle9i, HP and Oracle, with the combination of Oracle Parallel Server and HP ServiceGuard OPS edition, provided cluster solutions that led the industry in functionality, high availability, management and services. Now, with the release of Oracle9i Real Application Clusters (RAC) and the new Cache Fusion architecture based on an ultra-high-bandwidth, low-latency cluster interconnect technology, RAC cluster solutions have become more scalable without the need for data and application partitioning.

 

The information contained in this document covers the installation and configuration of Oracle Real Application Clusters in a typical environment: a two-node HP cluster running the HP-UX operating system.

Oracle 9i cache fusion utilizes the collection of caches made available by all nodes in the cluster to satisfy database requests. Requests for a data block are satisfied first by a local cache, then by a remote cache before a disk read is needed. Similarly, update operations are performed first via the local node and then the remote node caches in the cluster, resulting in reduced disk I/O. Disk I/O operations are only done when the data block is not available in the collective caches or when an update transaction performs a commit operation.

Oracle 9i cache fusion thus provides Oracle users an expanded database cache for queries and updates with reduced disk I/O synchronization which overall speeds up database operations.

However, the improved performance depends greatly on the efficiency of the inter-node message passing mechanism, which handles the data block transfers between nodes.

The efficiency of inter-node messaging depends on three primary factors:

The number of messages required for each synchronization sequence. Oracle 9i's Global Cache Manager (GCM), comprising the Global Cache Services (GCS) and Global Enqueue Services (GES), coordinates the fast block transfer between nodes with two inter-node messages and one intra-node message. If the data is in a remote cache, an inter-node message is sent to the Lock Manager Services (LMS) on the remote node. The GCM and Cache Fusion processes then update the in-memory lock structure and send the block to the requesting process.

The frequency of synchronization (the less frequent the better). The Cache Fusion architecture reduces the frequency of inter-node communication by dynamically migrating locks to a node that shows a frequent access pattern for a particular data block. This dynamic lock allocation increases the likelihood of local cache access, thus reducing the need for inter-node communication. At a node level, a cache fusion lock controls access to data blocks from other nodes in the cluster.

The latency of inter-node communication. This is a critical component in Oracle 9i RAC as it determines the speed of data block transfer between nodes. An efficient transfer method must use minimal CPU resources and support high availability as well as highly scalable growth without bandwidth constraints.

   HyperFabric

HyperFabric is a high-speed cluster interconnect fabric that supports both the industry-standard TCP/UDP over IP and HP's proprietary Hyper Messaging Protocol (HMP). HyperFabric extends the scalability and reliability of TCP/UDP by providing transparent load balancing of connection traffic across multiple network interface cards (NICs) and transparent failover of traffic from one card to another without invocation of MC/ServiceGuard. The HyperFabric NIC incorporates a network processor that implements HP's Hyper Messaging Protocol and provides lower latency and lower host CPU utilization for standard TCP/UDP benchmarks over HyperFabric when compared to gigabit Ethernet. Hewlett-Packard released HyperFabric in 1998 with a link rate of 2.56 Gbps over copper. In 2001, Hewlett-Packard released HyperFabric 2 with a link rate of 4.0 Gbps over fiber and compatibility with the copper HyperFabric interface. Both HyperFabric products support clusters of up to 64 nodes.

HyperFabric Switches

Hewlett-Packard provides the fastest cluster interconnect via its proprietary HyperFabric switches, the latest product being HyperFabric 2, a new set of hardware components with fiber connectors that enable a low-latency, high-bandwidth system interconnect. With fiber interfaces, HyperFabric 2 provides faster speeds (up to 4 Gbps in full duplex) over longer distances (up to 200 meters). HyperFabric 2 also provides excellent scalability by supporting up to 16 hosts via point-to-point connectivity and up to 64 hosts via fabric switches. It is backward compatible with previous versions of HyperFabric and is available on IA-64 and PA-RISC servers.

  Hyper Messaging Protocol (HMP)

Hewlett-Packard, in cooperation with Oracle, has designed a cluster interconnect product specifically tailored to meet the needs of enterprise-class parallel database applications. HP's Hyper Messaging Protocol significantly expands on the feature set provided by TCP/UDP by providing a true Reliable Datagram model for both remote direct memory access (RDMA) and traditional message semantics. Coupled with OS bypass capability and the hardware support for protocol offload provided by HyperFabric, HMP provides high bandwidth, low latency and extremely low CPU utilization with an interface and feature set optimized for business-critical parallel applications such as Oracle 9i RAC.

5. HP/Oracle Hardware and Software Requirements

For additional information and the latest updates, please refer to the Oracle9i Release Notes Release 1 (9.0.1) for HP 9000 Series HP-UX (Part Number A90357-01).

Each node uses the HP-UX 11.x operating system software. Issue the command "$ uname -r" at the operating system prompt to verify the release being used. Oracle9i RAC is only available in a 64-bit flavour. To determine if you have a 64-bit configuration on an HP-UX 11.0 installation, enter the following command:

$ /bin/getconf KERNEL_BITS

Oracle9i 9.0.1 RAC is supported by ServiceGuard OPS Edition 11.09 and 11.13. Starting with ServiceGuard OPS Edition 11.13, a 16-node 9i RAC cluster is supported with SLVM. Software mirroring with HP-UX MirrorDisk/UX with SLVM is supported in a 2-node configuration only. Support for the HP HyperFabric product is provided. A total of 127 RAC instances per cluster is supported.

RAM memory allocation: Minimum 256 MB. Use the following command to verify the amount of memory installed on your system:

$ /usr/sbin/dmesg | grep "Physical:"

Swap Space: Minimum 2 x RAM or 400 MB, whichever is greater. Use the following command to determine the amount of swap space installed on your system:

$ /usr/sbin/swapinfo -a

CD-ROM drive: A CD-ROM drive capable of reading CD-ROM disks in the ISO 9660 format with RockRidge extensions.

Temporary Disk Space: The Oracle Universal Installer requires up to 512 MB of space in the /tmp directory.

Operating System: HP-UX version 11.0 or 11i (11.11). To determine if you have a 64-bit configuration on an HP-UX 11.0 or HP-UX 11i installation, enter the following command:

$ /bin/getconf KERNEL_BITS

To determine your current operating system information, enter the following command:

$ uname -a

JRE: Oracle applications use JRE 1.1.8.

JDK: Oracle HTTP Server Powered by Apache uses JDK 1.2.2.05.

Due to a known HP bug (Doc. ID KBRC00003627), the default HP-UX 64-bit operating system installation does not create a few required X-library symbolic links. These links must be created manually before starting the Oracle9i installation. To create these links, you must have superuser privileges, as the links are to be created in the /usr/lib directory. After enabling superuser privileges, run the following commands to create the required links:
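Note that the ln commands below create the links in the current working directory, so change to /usr/lib first (this step is implied by the text above but not shown in the original command list):

$ cd /usr/lib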

$ ln -s /usr/lib/libX11.3 libX11.sl

$ ln -s /usr/lib/libXIE.2 libXIE.sl

$ ln -s /usr/lib/libXext.3 libXext.sl

$ ln -s /usr/lib/libXhp11.3 libXhp11.sl

$ ln -s /usr/lib/libXi.3 libXi.sl

$ ln -s /usr/lib/libXm.4 libXm.sl

$ ln -s /usr/lib/libXp.2 libXp.sl

$ ln -s /usr/lib/libXt.3 libXt.sl

$ ln -s /usr/lib/libXtst.2 libXtst.sl

X Server system software. (Refer to the installation guide for more information on the X Server System and emulator issues.)

  

5.3 HP-UX Operating System Patches

Sep 2001 HP-UX patch bundle (HP-UX 11.0)

HyperFabric driver 11.00.12 (HP-UX 11.0; required only if your system has an older HyperFabric driver version)

Sep 2001 HP-UX patch bundle (HP-UX 11i)

Optional Patch: For DSS applications running on machines with more than 16 CPUs, we recommend installation of the HP-UX patch PHKL_22266. This patch addresses performance issues with the HP-UX Operating System.

HP provides patch bundles at

Individual patches can be downloaded from

To determine which operating system patches are installed, enter the following command:

$ /usr/sbin/swlist -l patch

To determine if a specific operating system patch has been installed, enter the following command:

$ /usr/sbin/swlist -l patch patch_number

To determine which operating system bundles are installed, enter the following command:

$ /usr/sbin/swlist -l bundle

The following HP-UX kernel parameters must be configured for Oracle9i; each parameter is listed with its purpose:

ksi_alloc_max: Defines the system-wide limit of queued signals that can be allocated.

maxdsiz: Refers to the maximum data segment size for 32-bit systems. Setting this value too low may cause the processes to run out of memory.

maxdsiz_64bit: Refers to the maximum data segment size for 64-bit systems. Setting this value too low may cause the processes to run out of memory.

maxssiz: Defines the maximum stack segment size in bytes for 32-bit systems.

maxssiz_64bit: Defines the maximum stack segment size in bytes for 64-bit systems.

maxswapchunks: Defines the maximum number of swap chunks, where SWCHUNK is the swap chunk size (1 KB blocks). SWCHUNK is 2048 by default.

maxuprc: Defines the maximum number of user processes.

msgmap: Defines the maximum number of message map entries.

msgmni: Defines the number of message queue identifiers.

msgseg: Defines the number of segments available for messages.

msgtql: Defines the number of message headers.

ncallout: Defines the maximum number of pending timeouts.

ncsize: Defines the Directory Name Lookup Cache (DNLC) space needed for inodes: ((8 * NPROC + 2048) + VX_NCSIZE), where VX_NCSIZE is 1024 by default.

nfile: Defines the maximum number of open files.

nflocks: Defines the maximum number of file locks available on the system.

ninode: Defines the maximum number of open inodes.

nkthread: Defines the maximum number of kernel threads supported by the system.

nproc: Defines the maximum number of processes.

semmap: Defines the maximum number of semaphore map entries.

semmni: Defines the maximum number of semaphore sets in the entire system.

semmns: Sets the number of semaphores in the system. The default value of SEMMNS is 128, which is, in most cases, too low for Oracle9i software.

semmnu: Defines the number of semaphore undo structures.

semvmx: Defines the maximum value of a semaphore.

shmmax: Defines the maximum allowable size of one shared memory segment. The SHMMAX setting should be large enough to hold the entire SGA in one shared memory segment; a low setting can cause creation of multiple shared memory segments, which may lead to performance degradation.

shmmni: Defines the maximum number of shared memory segments in the entire system.

shmseg: Defines the maximum number of shared memory segments one process can attach.

vps_ceiling: Defines the maximum System-Selected Page Size in kilobytes.

Note: These are minimum kernel requirements for Oracle9i. If you have previously tuned your kernel parameters to levels equal to or higher than these values, continue to use the higher values. A system restart is necessary for kernel changes to take effect.
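To review the current values of a few of these tunables before making changes, the HP-UX 11.x kmtune utility can be used; a minimal sketch (the parameters queried here are just examples):

$ /usr/sbin/kmtune -q shmmax
$ /usr/sbin/kmtune -q semmns
$ /usr/sbin/kmtune -q nproc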

1. Create the /dev/async character device

$ /sbin/mknod /dev/async c 101 0x0

2. Configure the async driver in the kernel using SAM

=> Kernel Configuration

    => Kernel

         => the driver is called 'asyncdsk'

Generate new kernel

Reboot

3. Set HP-UX kernel parameter max_async_ports using SAM. max_async_ports limits the maximum number of processes that can concurrently use /dev/async. Set this parameter to the sum of the 'processes' value from init.ora plus the number of background processes. If max_async_ports is reached, subsequent processes will use synchronous I/O.
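For example, if processes is set to 200 in init.ora and each instance starts roughly 20 background processes (both figures are illustrative, not taken from this document), max_async_ports should be set to at least 220.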

 

4. Set HP-UX kernel parameter aio_max_ops using SAM. aio_max_ops limits the maximum number of asynchronous I/O operations that can be queued at any time. Set this parameter to the default value (2048), and monitor it over time using Glance.

6. Configure the HP/Oracle 9i Real Application Cluster

6.1 Hardware configuration (Hardware planning, Network and disk layout)

In order to provide a high level of availability, a typical cluster uses redundant system components, for example two or more systems and two or more independent disk subsystems. This redundancy eliminates single points of failure.

The nodes in an Oracle9i RAC cluster are HP 9000 systems with similar memory configuration and processor architecture. A node can be any Series 800 model. It is recommended that both nodes be of similar processing power and memory capacity. A typical configuration includes:

Redundant high-speed interconnect between the nodes (e.g. HyperFabric, HyperFabric switches)

Redundant network components (primary and standby LAN)

Redundant disk storage or RAID 0/1 configuration for disk mirroring

A dedicated heartbeat LAN (heartbeat traffic is also carried on the primary and standby LAN)

Draw a diagram of your cluster using information gathered from the following two sets of commands. You'll use this information later when configuring the system, the logical volumes and the cluster.

1. Use the network commands

        $ lanscan

        $ ifconfig lanX, and

        $ netstat

to determine the number of LAN interfaces on each node and the names and addresses of each LAN card and subnet information.

2. Use the IO command

        $ ioscan -fnCdisk

to find the disks connected to each node. Note the type of disks installed. List the hardware addresses and device file names of each disk. Also note which are shared between nodes.

Minimally, a 9i RAC cluster requires three distinct subnets:

Dedicated cluster heartbeat LAN

Dedicated Global Cache Management (GCM) LAN

User/Data LAN, which will also carry a secondary heartbeat

Because the GCM is now integrated into the Oracle9i kernel, the GCM will use the IP address associated with the default host name.

The network should be configured in the /etc/rc.config.d/netconf file. Any time you change the LAN configuration, you need to stop the network and re-start it again:

        $ /sbin/rc2.d/S340net stop

        $ /sbin/rc2.d/S340net start
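As a sketch, entries in /etc/rc.config.d/netconf for one interface look like the following (the interface name and addresses are hypothetical examples, not values from this document):

        INTERFACE_NAME[0]="lan0"
        IP_ADDRESS[0]="192.168.10.1"
        SUBNET_MASK[0]="255.255.255.0"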

GCM requires a high speed network to handle high bandwidth network traffic. In the Oracle literature this is referred to as the host interconnect. We recommend using either Hyperfabric or Gigabit Ethernet for this network.

Remote copy (rcp) needs to be enabled for both the root and oracle accounts on all nodes to allow remote copying of cluster configuration files.

There are two ways to enable rcp for root. You can choose the one that fits your site's security requirements. Include the following lines in either the .rhosts file in root's home directory or in the /etc/cmcluster/cmclnodelist file:

        node1name root

        node2name root

To enable remote copy (rcp) for oracle, include the following lines in the .rhosts file in the oracle user's home directory:

        node1name oracle

        node2name oracle

where node1name and node2name are the names of the two systems in the cluster and oracle is the user name of the Oracle owner. Note that rcp only works if a password has been set for the respective user (root and oracle).

6.2 Configure logical volumes

When disk drives were at most 1 or 2 GB in size, the usual wisdom was to do the following:

· Place redo logs and database files onto different drives

· Ensure that data and indexes were on separate spindles

· Spread the I/O load across as many disk devices as possible

Today, with the greatly increased capacity of a single disk mechanism (up to 181 GB drives on an XP512) and much faster I/O rates and transfer speeds, these rules must be revisited.

The real reason for these rules of thumb was to make sure that the I/O load resulting from an Oracle database would wind up being fairly well spread across all the disk mechanisms.  Before the advent of large capacity disk drives housed in high performance storage systems, if the same disk drive wound up hosting two or more fairly active database objects, performance could deteriorate rapidly, especially if any of these objects needed to be accessed sequentially.

Today, in the era of huge disk arrays, the concept of "separate spindles" is a bit more vague, as the internal structure of the array is largely hidden from the view of the system administrator. The smallest independent unit of storage in an XP array is substantially larger than 1 or 2 GB, which means you have far fewer "spindles" to play with, at a time when there are more database objects (tables, indexes, etc.) to "spread", so it won't be possible to keep all the objects separate. The good news is that the architecture of the XP array is much more tolerant of multiple simultaneous I/O streams to/from the same disk mechanism than the previous generation of individual small disks.

Given all these advances in the technology, we have found it best to use a simple method for laying out an Oracle database on an XP array (under HP-UX), with volume manager striping of all of the database objects across large numbers of disk mechanisms. The result is to average out the I/O to a substantial degree. This method does not guarantee the avoidance of disk hotspots, but we believe it to be a reasonable "first pass" which can be improved upon with tuning over time. It's not only a lot faster to implement than a customized one-object-at-a-time layout, but we believe it to be much more resistant to the inevitable load fluctuations which occur over the course of a day, month, or year.

The layout approach that we are advocating might be described as "Modified Stripe-Everything-Across-Everything". Our goal is to provide a simple method which will yield good I/O balance, yet still provide some means of manual adjustment. Oracle suggests the same strategy; their name for it is SAME (Stripe and Mirror Everything).

XP basics: an XP512 can be configured with one to four pairs of disk controller modules (ACPs). Each array group is controlled by only one of these ACP pairs (it is in the domain of only one ACP pair). Our suggestion is that you logically "separate" the XP's array groups into four to eight sets. Each set should have array groups from all the ACP domains. Each set of array groups would then be assigned to a single volume group. All LUNs in the XP array will have paths defined via two distinct host-bus adapters; the paths should be assigned within each volume group in such a fashion that their primary path alternates back and forth between these two host-bus adapters. The result of all this: each volume group will consist of space which is "stripable" across multiple array groups spread across all the ACP pairs in the array, and any I/O done to these array groups will be spread evenly across the host-bus adapters on the server.

1. Disks need to be properly initialized before being added into volume groups by the pvcreate command. Do the following step for all the disks (LUNs) you want to configure for your 9i RAC volume group(s):

$ pvcreate -f /dev/rdsk/cxtydz (where x=instance, y=target, and z=unit)

2. Create the volume group directory with the character special file called group:

        $ mkdir /dev/vg_rac

$ mknod /dev/vg_rac/group c 64 0x060000

Note: The minor numbers for the group file should be unique among all the volume groups on the system.

3. Create PV-LINKs and extend the volume group:

$ vgcreate /dev/vg_rac /dev/dsk/c0t1d0 /dev/dsk/c1t0d0

$ vgextend /dev/vg_rac /dev/dsk/c1t0d1 /dev/dsk/c0t1d1

Continue with vgextend until you have included all the needed disks for the volume group(s).

4. Create logical volumes for the 9i RAC database with the command

$ lvcreate -i 10 -I 1024 -L 100 -n Name /dev/vg_rac

-i: number of disks to stripe across

-I: stripe size in kilobytes

-L: size of logical volume in MB
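For example, to create a 400 MB logical volume for the SYSTEM tablespace and a 120 MB logical volume for one redo log, striped across 10 disks with a 1024 KB stripe size (the names and sizes here are illustrative assumptions, not values prescribed by this document):

$ lvcreate -i 10 -I 1024 -L 400 -n rac_raw_system_400m /dev/vg_rac

$ lvcreate -i 10 -I 1024 -L 120 -n rac_raw_log11_120m /dev/vg_rac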

5. Logical Volume Configuration

It is necessary to define raw devices for each of the following categories of files. The Oracle Database Configuration Assistant (DBCA) will create a seed database expecting the following configuration:

An undo tablespace per instance

Two ONLINE redo log files per instance 

Note: Automatic Undo Management requires an undo tablespace per instance; therefore you require a minimum of 2 undo tablespaces, as described above.

By following the naming convention described in the table above, raw partitions are identified with the database and the raw volume type (the data contained in the raw volume). Raw volume size is also identified using this method. Note : In the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumb is the log number within a thread.

It is recommended best practice to create symbolic links for each of these raw files on all systems of your RAC cluster.
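A minimal sketch of such a symbolic link, assuming a hypothetical link directory /oracle/RAC and the logical volume names used in the example above (note that the raw character device for a logical volume is the r-prefixed name):

$ ln -s /dev/vg_rac/rrac_raw_system_400m /oracle/RAC/system

$ ln -s /dev/vg_rac/rrac_raw_log11_120m /oracle/RAC/log11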

6. Check to see if your volume groups are properly created and available:
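For example, using the standard HP-UX LVM display command:

$ vgdisplay -v /dev/vg_rac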

 

7. Change the permissions of the database volume group vg_rac to 777, then change the permissions of all raw logical volumes to 660 and the owner to oracle:dba.

$ chmod 777 /dev/vg_rac

$ chmod 660 /dev/vg_rac/r*

$ chown oracle:dba /dev/vg_rac/r*

 

8. Export the volume group:

De-activate the volume group

Create the volume group map file:

Copy the mapfile to all the nodes in the cluster:

$ rcp mapfile system_name:target_directory

$ rcp map_ops nodeB:/tmp/scripts

$ chown oracle:dba /dev/vg_rac/r*
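A minimal sketch of the full export sequence, assuming the map file /tmp/scripts/map_ops and the node name nodeB used in the example above:

$ vgchange -a n /dev/vg_rac

$ vgexport -p -s -m /tmp/scripts/map_ops /dev/vg_rac

$ rcp /tmp/scripts/map_ops nodeB:/tmp/scripts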

9. Import the volume group on the second node in the cluster

Create a volume group directory with the character special file called group:

    $ mkdir /dev/vg_rac

$ mknod /dev/vg_rac/group c 64 0x060000

Note: The minor number has to be the same as on the other node.
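Then import the volume group using the map file copied from the first node; a sketch assuming the path used earlier:

$ vgimport -s -m /tmp/scripts/map_ops /dev/vg_rac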

Check to see if devices are imported:

$ strings /etc/lvmtab

6.3 Configure HP ServiceGuard Cluster

After all the LAN cards are installed and configured, and all the OPS volume groups and the cluster lock volume group(s) are configured, you can start the cluster configuration. The following sequence is very important. However, if the RAC volume groups are unknown at this time, you should still be able to configure the cluster minimally with a lock volume group.

At this time, the cluster lock volume group should have been created. Since we only configured one volume group, vg_rac, for the entire RAC cluster, we used vg_rac for the lock volume as well.

1. Create a cluster configuration template:
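A minimal sketch using the ServiceGuard cmquerycl command, with hypothetical node names node1 and node2:

$ cmquerycl -v -C rac.asc -n node1 -n node2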

2. Edit the cluster configuration file (rac.asc).

In the cluster configuration file, set both the DLM and GMS enabled clauses to NO, since neither needs to be configured with Oracle9i RAC. Global Cache management is now handled transparently by the Oracle kernel. For compatibility with older versions of Oracle, the cluster configuration file still contains a section for DLM and GMS.

Make the necessary changes to this file for your cluster. For example, change the ClusterName, and adjust the heartbeat interval and node timeout to prevent unexpected failovers due to GCM traffic.

3. Check the cluster configuration:

$ cmcheckconf -v -C rac.asc

4. IMPORTANT! Activate the lock disk on the configuration node ONLY. Lock volume can only be activated on the node where the cmapplyconf command is issued so that the lock disk can be initialized accordingly.
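For example, activating vg_rac exclusively on the configuration node (a sketch; the volume group name follows the examples above):

$ vgchange -a y /dev/vg_rac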

5. Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster:

$ cmapplyconf -v -C rac.asc

Note: the cluster is not started until you run cmrunnode on each node or cmruncl.

6. De-activate the lock disk on the configuration node after cmapplyconf

$ vgchange -a n /dev/vg_rac

7. Start the cluster and view it to be sure it is up and running. See the next section for instructions on starting and stopping the cluster. After testing the cluster, shut it down in order to make changes later to the HP-UX kernel parameters.

Start the cluster from any node in the cluster
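For example, using the ServiceGuard cmruncl command mentioned above:

$ cmruncl -v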

Make all RAC volume groups and Cluster Lock volume groups sharable and cluster aware (not packages) from the cluster configuration node:
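A sketch of marking a volume group shareable and cluster aware, run from the cluster configuration node while the cluster is running:

$ vgchange -S y -c y /dev/vg_rac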

On all the nodes, activate the volume group in shared mode in the cluster:

$ vgchange -a s /dev/vg_rac

Check the cluster status:
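The ServiceGuard cmviewcl command shows this; for example:

$ cmviewcl -v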

Shut down the 9i RAC instances (If up and running)

On all the nodes, deactivate the volume group in shared mode in the cluster:
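For example (the same vgchange deactivation used earlier):

$ vgchange -a n /dev/vg_rac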

Halt the cluster from any node in the cluster

$ cmhaltcl -v

Check the cluster status again with cmviewcl, as above.

6.4 Create a user who will own the Oracle RAC software

Complete root user set-up tasks:

1. Log in as the root user.

2. Create database administrator groups by using the System Administration Manager (SAM).

The OSDBA group, typically dba.

The optional OSOPER group, typically oper.

The optional ORAINVENTORY group, typically oinstall.

Grant the OSDBA group RTSCHED, RTPRIO and MLOCK privileges.

A new HP scheduling policy called SCHED_NOAGE enhances Oracle9i's performance by scheduling Oracle processes so that they do not increase or decrease in priority, or become preempted.

The RTSCHED and RTPRIO privileges grant Oracle the ability to change its process scheduling policy to SCHED_NOAGE and also tell Oracle what priority level it should use when setting the policy. The MLOCK privilege grants Oracle the ability to execute asynch I/Os through the HP asynch driver. Without this privilege, Oracle9i generates trace files with the following error message: "Ioctl ASYNCH_CONFIG error, errno = 1".

If it does not already exist, create the /etc/privgroup file. Add the following line to the file:

        dba  MLOCK RTSCHED RTPRIO

Use the following command syntax to assign these privileges:

        $ setprivgrp -f /etc/privgroup

Here dba is the group that receives the privileges, and MLOCK, RTSCHED and RTPRIO are the privileges granted to that group by the /etc/privgroup entry shown above.
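To verify the privileges granted to the group, the getprivgrp command can be used; for example:

$ /usr/bin/getprivgrp dba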

3. Set system environment variables.

If it does not already exist, create a local bin directory, such as /usr/local/bin or /opt/bin. Set and verify that this directory is included in each user's PATH statement, and that users have execute permissions on the directory.

Determine if your X Window system is working properly on your local system. On the system where you will run the Oracle Universal Installer, set DISPLAY to that system's name, or the IP address, X server, and screen.

Use the database server's name, or the IP address, X server, and screen only if you are performing the installation from your database server's X Window console. If you are not sure what the X server and screen should be set to, use 0 (zero) for both.

Set a temporary directory path for the TMPDIR variable with at least 512 MB of free space where the installer has write permission. Example: /var/tmp.
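A minimal sketch, assuming a hypothetical workstation name and the /var/tmp example above:

$ export DISPLAY=workstation1:0.0

$ export TMPDIR=/var/tmp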

4. Set Oracle environment variables by adding an entry similar to the following example to each user startup .profile file for the Bourne or Korn shells, or .login file for the C shell.

# Oracle Environment

export ORACLE_BASE=/opt/

export ORACLE_HOME=$ORACLE_BASE/oracle/product/9.0.1

export ORACLE_SID=rac2

export ORACLE_TERM=xterm

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib

export LD_LIBRARY_PATH

SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32

export SHLIB_PATH

# Set shell search paths:

export PATH=$PATH:$ORACLE_HOME/bin

#CLASSPATH must include the following JRE locations:

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib

Create the /var/opt/oracle directory and make it owned by the oracle account. After installation, this directory will contain a few small text files that briefly describe the Oracle software installations and databases on the server. These commands will create the directory and give it appropriate permissions:

        $ mkdir /var/opt/oracle

        $ chown oracle:dba /var/opt/oracle

        $ chmod 755 /var/opt/oracle

A user needs to be created, on all nodes, to manage the installation and administration of the Oracle software. The username oracle is used in this example, but it need not be. The user account and associated group entries must be defined on all nodes of the cluster
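A minimal sketch of creating the groups and the oracle user on one node (the home directory and shell are assumptions; repeat the commands on all nodes so the account and group entries match):

$ /usr/sbin/groupadd dba

$ /usr/sbin/groupadd oinstall

$ /usr/sbin/useradd -g oinstall -G dba -m -d /home/oracle -s /usr/bin/sh oracle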

Oracle9i is supplied on multiple CD-ROM disks. During the installation process it is necessary to switch between the CD-ROMs. OUI will manage the switching between CDs; however, if the working directory is set to the CD device, OUI cannot unmount it. To avoid this problem, do NOT change directory to the CD-ROM device prior to starting the OUI process. Alternatively, the information contained on the CD-ROMs can be copied to a temporary staging area prior to starting the OUI process. For example, using a directory structure as follows allows OUI to detect the contents of each CD without prompting to change CDs:

    /Disk1/<contents of disk1>

    /Disk2/<contents of disk2>

    /Disk3/<contents of disk3>
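A sketch of copying the CD contents into such a staging area, assuming a hypothetical staging directory /stage and using the document's <cdrom_mount_point> placeholder for each mounted CD:

$ mkdir -p /stage/Disk1 /stage/Disk2 /stage/Disk3

$ cp -R /<cdrom_mount_point>/* /stage/Disk1

Repeat the copy for Disk2 and Disk3 after mounting each CD.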

To install the Oracle Software, perform the following:

Start up the cluster and activate the volume groups in shared mode, as described in section 6.3.

Then start the Oracle Universal Installer by issuing the command:

      $ <cdrom_mount_point>/runInstaller

At the OUI Welcome screen, click Next. 

A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. The Oracle Inventory definition can be found in the file /var/opt/oracle/oraInst.loc. Click OK.

Verify the UNIX group name of the user who controls the installation of the Oracle9i software. If an instruction to run /tmp/orainstRoot.sh appears, the pre-installation steps were not completed successfully. Typically, the /var/opt/oracle directory does not exist or is not writable by oracle. Run /tmp/orainstRoot.sh to correct this, forcing the Oracle Inventory files, and others, to be written to the ORACLE_HOME directory. Once again, this screen only appears the first time Oracle9i products are installed on the system. Click Next.

The File Location window will appear. Do NOT change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next. 

Select the Products to install. In this example, select the Oracle9i Server then click Next. 

Select the installation type. Choose the Enterprise Edition option. The selection on this screen refers to the installation operation, not the database configuration. The next screen allows for a customized database configuration to be chosen. Click Next. 

Select the configuration type. In this example you choose the Advanced Configuration as this option provides a database that you can customize, and configures the selected server products. Select Customized and click Next . 

Select the other nodes on to which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running. Click Next. 

Specify the raw partition into which the Oracle9i Real Application Clusters (RAC) configuration information will be written. It is recommended that this raw partition, called srvmconfig, be a minimum of 100 MB in size.

Enter the Database Identification Global Database Name. 

Choose the JDK home directory. 

The Summary screen will be presented. Confirm that the RAC database software will be installed and then click Install. The OUI will install the Oracle9i software on to the local node, and then copy this information to the other nodes selected. 

If you would like to create the database later using the Database Configuration Assistant (DBCA), you have to select Utilities/Assistants.

Deselect OLAP services; otherwise an additional tablespace will be created and, in addition, a long-running script (cwmlite.sql) will be executed.

Once Install is selected, the OUI will install the Oracle RAC software on to the local node, and then copy software to the other nodes selected earlier. This will take some time. During the installation process, the OUI does not display messages indicating that components are being installed on other nodes - I/O activity may be the only indication that the process is continuing.

6.6 Create a Database using the Oracle Database Configuration Assistant (DBCA)

With Oracle 9.0.1 we recommend the creation of the database using SQL scripts (for example created by the DBCA) instead of using DBCA only.

DBCA provides three primary processing phases:

Verification of the shared disk configuration (for non-cluster file system platforms)

Creation of the cluster database

Configuration of the Oracle network services

When installing the Oracle software with the SQL scripts, remember the following:

Run the script $ORACLE_HOME/rdbms/admin/catclust.sql after creating the database (see the sketch after this list).

During the creation of the database, set the parameter UNDO_MANAGEMENT = FALSE.

Specify a tablespace name as the UNDO_TABLESPACE, not a filename.
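A sketch of running catclust.sql after the database has been created, connecting as SYSDBA (the connection method shown is an assumption):

$ cd $ORACLE_HOME/rdbms/admin

$ sqlplus "/ as sysdba"

SQL> @catclust.sql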

To install the Oracle Software with the DBCA, perform the following:

DBCA will launch as part of the installation process, but can be run manually by executing the command dbca from the $ORACLE_HOME/bin directory on UNIX platforms. Choose Oracle Cluster Database option and select Next. 

The Operations page is displayed. Choose the option Create a Database and click Next. 

The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. 

The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next . 

The Show Details button provides information on the database template selected. 

DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID) . The Global Database Name is typically of the form name.domain.

Note: Please specify a tablespace name as the UNDO_TABLESPACE, not a file name. During database creation, set UNDO_MANAGEMENT=FALSE.

The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen. 

The Additional database Configurations button displays additional database features. Make sure both are checked and click OK. 

Select the connection options desired from the Database Connection Options page. Note: If you did not choose New Database from the Database Template page, you will not see this screen. Click Next . 

DBCA now displays the Initialization Parameters page. This page comprises a number of Tab fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the Initialization Parameters filename and location. Then click Next . 

The option Create persistent initialization parameter file is selected by default. Enter raw device name for the location of the server parameter file (spfile). Then click Next . 

The File Location Variables... button displays variable information. Click OK.

The All Initialization Parameters... button displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates, through the Included (Y/N) check box, whether each parameter is to be included in the spfile to be created. Instance-specific parameters have an instance value in the Instance column. Complete the entries in the All Initialization Parameters page and select Close. Ensure all entries in the Initialization Parameters page are complete and select Next.

DBCA now displays the Database Storage Window. This page allows you to enter file names for each tablespace in your database. 

The file names are displayed in the Datafiles folder, but are entered by selecting the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any names displayed here can be changed. Complete the database storage information and click Next. 

The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish . 

The DBCA Summary window is displayed. Review this information and then click OK. 

Once the Summary screen is closed using the OK option, DBCA begins to create the database according to the values specified. 

A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to work with an Oracle RAC database.
