This note is about upgrading a virtualized ODA from 12.2.1.4 to 18.3.
#1 Cleanup from previous upgrade.
MOS note 2502972.1 describes a potential issue if the system had previously been upgraded from 12.1 to 12.2.
Look for the xml files located in /u01/app/grid/crsdata/<node>/crsconfig and make sure the CRS_HOME variables point to the 12.2 clusterware instead of 12.1. If they do not, it is suggested to rename these files.
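As a quick pre-check, a sketch like the one below lists the crsconfig files that still reference the old home. The directory and the 12.1 home path are assumptions; adjust them to your layout.

```shell
#!/bin/sh
# Sketch: flag crsconfig files still pointing at the old 12.1 grid home.
# CRSCONFIG_DIR and OLD_HOME are assumptions -- adjust to your environment.
CRSCONFIG_DIR=${CRSCONFIG_DIR:-/u01/app/grid/crsdata/$(hostname -s)/crsconfig}
OLD_HOME=${OLD_HOME:-/u01/app/12.1.0.2/grid}
for f in "$CRSCONFIG_DIR"/*; do
  [ -f "$f" ] || continue
  if grep -q "$OLD_HOME" "$f"; then
    # candidate for renaming, e.g. mv "$f" "$f.bak"
    echo "stale CRS_HOME reference: $f"
  fi
done
```

Files flagged here are the candidates MOS 2502972.1 suggests renaming before the upgrade.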
Otherwise, the upgrade would fail with the error below:
2019/03/01 12:11:03 CLSRSC-697: Failed to get the value of environment variable 'TZ' from the environment file '/u01/app/12.1.0.2/grid/crs/install/s_crsconfig_<node>_env.txt'
#2 Ensure ora.cvu is running
To prevent the clusterware upgrade from failing with the error below, make sure ora.cvu is enabled before running the upgrade. If it is not, add then enable the resource on both nodes.
PRCR-1001 : Resource ora.cvu does not exist
2019/03/01 14:14:21 CLSRSC-180: An error occurred while executing the command '/u01/app/12.2.0.1/grid/bin/srvctl disable cvu'
2019/03/01 14:14:27 CLSRSC-564: Failed to disable the CVU resource during upgrade.
Died at /u01/app/18.0.0.0/grid/crs/install/crsupgrade.pm line 2095.
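A minimal sketch for checking and, if needed, recreating the resource could look like this. It assumes the 12.2 grid home is /u01/app/12.2.0.1/grid; run it as the grid owner on both nodes.

```shell
#!/bin/sh
# Sketch: make sure ora.cvu exists and is enabled before the upgrade.
# The grid home path is an assumption -- adjust it to your installation.
SRVCTL=${SRVCTL:-/u01/app/12.2.0.1/grid/bin/srvctl}
if [ -x "$SRVCTL" ]; then
  # recreate the resource if it is missing, then enable and start it
  "$SRVCTL" status cvu >/dev/null 2>&1 || "$SRVCTL" add cvu
  "$SRVCTL" enable cvu
  "$SRVCTL" start cvu
  "$SRVCTL" status cvu
else
  echo "srvctl not found at $SRVCTL -- adjust the grid home path"
fi
```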
#3 Download the update patch 28864520
#4 Upload the patch to the two ODA_BASE servers in /tmp/patch
#5 From both nodes, run the oakcli commands below
# oakcli unpack -package /tmp/patch/p28864520_183000_Linux-x86-64_1of3.zip
# oakcli unpack -package /tmp/patch/p28864520_183000_Linux-x86-64_2of3.zip
# oakcli unpack -package /tmp/patch/p28864520_183000_Linux-x86-64_3of3.zip
Make sure there is enough disk space before proceeding: 12GB is a strict minimum, 15GB preferred, on /, /tmp and /u01.
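The free-space check above can be sketched as a small script; the 15GB target comes from this note, and the mount-point list is the one given here.

```shell
#!/bin/sh
# Sketch: warn if any of the patch-critical filesystems has less than
# 15GB free (12GB being the strict minimum per the note above).
MIN_KB=$((15 * 1024 * 1024))
for mnt in / /tmp /u01; do
  avail=$(df -Pk "$mnt" 2>/dev/null | awk 'NR==2 {print $4}')
  [ -n "$avail" ] || { echo "$mnt: not mounted, skipping"; continue; }
  if [ "$avail" -lt "$MIN_KB" ]; then
    echo "$mnt: only $((avail / 1024 / 1024)) GB free -- below the 15GB target"
  else
    echo "$mnt: OK ($((avail / 1024 / 1024)) GB free)"
  fi
done
```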
#6 Remove the bundle zip files from both nodes, and make sure there is enough space on /tmp
#7 Validate the system from the first oda_base node:
# oakcli validate -a
#8 From both ODA_BASE nodes, verify the OS patch
# oakcli validate -c ospatch -ver 18.3.0.0.0
#9 From both ODA_BASE nodes verify the list of components to be patched
# oakcli update -patch 18.3.0.0.0 --verify
#10 From the first ODA_BASE node, apply the server patch. This patch will also upgrade the clusterware.
# oakcli update -patch 18.3.0.0.0 --server
This operation should last about 2 hours and reboots the servers.
This is the step most likely to cause trouble. There is no magic bullet if things start going south, especially since in this release the --local option does not seem to be supported anymore. If the issue occurs during this step, one remaining option is to restart the grid upgrade by running /u01/app/18.0.0.0/grid/rootupgrade.sh.
#11 From the first ODA_BASE node, apply the storage patch:
# oakcli update -patch 18.3.0.0.0 --storage
Note: this should not be necessary when moving from 12.2 to 18.3.
#12 From the first ODA_BASE node, apply the database patch:
# oakcli update -patch 18.3.0.0.0 --database
The upgrade will probably not work. If it fails, download and install the latest dbhome patch, for example 19520042 for 12cR1, then move all existing databases to the new ORACLE_HOME.
# oakcli unpack -package /tmp/p19520042_183000_Linux-x86-64.zip
# oakcli create dbhome -version 12.1.0.2.180717
Warning: before installing a new ORACLE_HOME, it is wise to stop the emagent that may otherwise be locking the /u01/app/oraInventory/locks directory.
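A guarded sketch of that pre-step, assuming a typical agent home location (AGENT_HOME is a placeholder; adjust it to where your EM agent is installed):

```shell
#!/bin/sh
# Sketch: stop the EM agent before creating the new dbhome so it cannot
# hold locks under /u01/app/oraInventory/locks. AGENT_HOME is an assumption.
AGENT_HOME=${AGENT_HOME:-/u01/app/oracle/agent/agent_inst}
if [ -x "$AGENT_HOME/bin/emctl" ]; then
  "$AGENT_HOME/bin/emctl" stop agent
else
  echo "emctl not found under $AGENT_HOME -- adjust AGENT_HOME"
fi
# Any leftover lock files listed here would block the dbhome creation
ls /u01/app/oraInventory/locks 2>/dev/null || true
```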
#13 From both ODA_BASE nodes, verify the component versions again:
# oakcli update -patch 18.3.0.0.0 --verify
# oakcli show version -detail
#14 Once the upgrade has completed, it is a good idea to:
- Rename the old grid_home, restart the oda_base, then plan to delete it in a few days
- Migrate all databases to the newly patched ORACLE_HOME, if applicable
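Migrating a database to the new home can be sketched with srvctl as below. The database name and the new home path are placeholders, not values from this note; adjust both, and remember to update the corresponding /etc/oratab entry and run datapatch from the new home afterwards.

```shell
#!/bin/sh
# Hedged sketch: repoint one database to the newly created dbhome.
# DB and NEW_HOME are placeholders -- adjust to your environment.
DB=${DB:-mydb}
NEW_HOME=${NEW_HOME:-/u01/app/oracle/product/12.1.0.2/dbhome_2}
SRVCTL="$NEW_HOME/bin/srvctl"
if [ -x "$SRVCTL" ]; then
  "$SRVCTL" modify database -d "$DB" -o "$NEW_HOME"  # point the CRS resource at the new home
  "$SRVCTL" stop database -d "$DB"
  "$SRVCTL" start database -d "$DB"
else
  echo "srvctl not found at $SRVCTL -- adjust NEW_HOME"
fi
```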