
I’ve had an issue with the 19.26 release update (RU) on Oracle Database 19c, and I thought I would mention it here in case it helps anyone else.
This is a summary of the process…
Not every database
Let me start by saying we have not had this issue on every database. Just the databases on three servers. They all happen to be running Oracle Linux 9 (OL9), but we have other OL9 installations that haven’t had this issue. So far I don’t have an exact pattern, but I can tell you we have successfully patched 19c databases running on OL7, OL8 and OL9, so I’m not saying 19.26 breaks everything.
With that out of the way, let’s get to the interesting stuff.
First signs of trouble
We had patched a load of systems with no drama, but then I got to a group of three servers that weren’t happy once the 19.26 RU was applied. Loads of objects in the user-defined PDBs were left in an invalid state, which meant datapatch was unable to complete the patching. The root container and the seed PDB were fine. I tried to recompile the invalid objects using “utlrp.sql”, but they wouldn’t recompile.
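If you want to see the scale of the problem for yourself, a quick count of invalid objects per container from the root gives you an idea of which PDBs are affected. This isn’t part of any official process, just the sort of sanity check I run, so adjust to taste.

# Run from CDB$ROOT with the PDBs open. Counts invalid objects per container.
sqlplus / as sysdba <<EOF
select con_id, count(*) as invalid_count
from   cdb_objects
where  status = 'INVALID'
group by con_id
order by con_id;
exit;
EOF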
Later I tried running the “catalog.sql” and “catproc.sql” files in the PDBs, hoping it would resolve things, but it didn’t.
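For reference, running them across the user-defined PDBs can be done with catcon.pl, something along these lines. The log directory and base names below are just ones I’ve picked for illustration, so treat this as a sketch rather than gospel.

# Run catalog.sql, then catproc.sql, in all containers except the root and seed.
$ORACLE_HOME/perl/bin/perl \
-I$ORACLE_HOME/perl/lib \
-I$ORACLE_HOME/rdbms/admin \
$ORACLE_HOME/rdbms/admin/catcon.pl \
-l /tmp/ \
-b catalog_rerun \
-C 'CDB$ROOT PDB$SEED' \
$ORACLE_HOME/rdbms/admin/catalog.sql

$ORACLE_HOME/perl/bin/perl \
-I$ORACLE_HOME/perl/lib \
-I$ORACLE_HOME/rdbms/admin \
$ORACLE_HOME/rdbms/admin/catcon.pl \
-l /tmp/ \
-b catproc_rerun \
-C 'CDB$ROOT PDB$SEED' \
$ORACLE_HOME/rdbms/admin/catproc.sql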
Manual recompilations of some of the invalid objects gave weird errors like this.
ORA-04020: deadlock detected while trying to lock object SYS.AQ$_REG_INFO

This was not the only object where this happened. After banging my head against a brick wall for a while I opened a service request (SR) on Oracle Support.
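To give you an idea of what I mean by a manual recompile, it was nothing fancier than connecting to the affected PDB and compiling the object directly, something like this. The PDB name here is made up.

# Quoted heredoc so the shell doesn't expand the $ in the type name.
sqlplus / as sysdba <<'EOF'
alter session set container = MY_PDB;
alter type sys.aq$_reg_info compile;
exit;
EOF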
Service Request 1
I went through the normal process of opening a service request, which involved a lot of repeating things I had already tried.
- Use “utlrp.sql” to recompile everything. Failed.
- Run the “catalog.sql” and “catproc.sql” files in the PDBs. Failed.
- Manually recompile objects and check for failure messages. No change.
The SR was opened as a patching issue, since the invalid objects only appeared after applying the 19.26 patch. If I recovered the VM everything was fine and working as expected on 19.25. Apply the 19.26 patch again and loads of invalid objects appeared in the PDBs again.
Eventually the person handling the SR spawned a new SR against the RDBMS Data dictionary team.
Service Request 2
The second SR started with a bit of fact finding. Eventually they suggested running “catupgrd.sql” in the PDBs, recompiling everything, then running datapatch again. That’s basically what I did. Notice I’m excluding the root and the seed from the upgrade.
sqlplus / as sysdba <<EOF
alter pluggable database all close;
alter pluggable database all open upgrade;
exit;
EOF

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -d \
$ORACLE_HOME/rdbms/admin -C 'CDB$ROOT PDB$SEED' -l $ORACLE_BASE catupgrd.sql

sqlplus / as sysdba <<EOF
alter pluggable database all open read write;
exit;
EOF

$ORACLE_HOME/perl/bin/perl \
-I$ORACLE_HOME/perl/lib \
-I$ORACLE_HOME/rdbms/admin \
$ORACLE_HOME/rdbms/admin/catcon.pl \
-l /tmp/ \
-b postpatch_${ORACLE_SID}_recompile \
-C 'PDB$SEED' \
$ORACLE_HOME/rdbms/admin/utlrp.sql

cd $ORACLE_HOME/OPatch
./datapatch -verbose

$ORACLE_HOME/perl/bin/perl \
-I$ORACLE_HOME/perl/lib \
-I$ORACLE_HOME/rdbms/admin \
$ORACLE_HOME/rdbms/admin/catcon.pl \
-l /tmp/ \
-b postpatch_${ORACLE_SID}_recompile \
-C 'PDB$SEED' \
$ORACLE_HOME/rdbms/admin/utlrp.sql

sqlplus / as sysdba <<EOF
shutdown immediate;
startup;
show pdbs;
EOF

Running “catupgrd.sql” in the user-defined PDBs seemed to fix the compilation issue, which meant I was able to run datapatch without errors.
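Once datapatch completes cleanly, it’s worth confirming the RU shows as applied in each PDB. This is just the sort of check I’d do, not part of the fix itself, and the PDB name below is made up.

# Check the SQL patch history in a PDB. Repeat for each user-defined PDB.
sqlplus / as sysdba <<EOF
alter session set container = MY_PDB;
select patch_id, action, status, description
from   dba_registry_sqlpatch
order by action_time;
exit;
EOF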
Job done.
So what happened?
I have a workaround for these three servers now, but I really don’t know why they had a problem in the first place.
My first thought was that I must have screwed up something during a previous upgrade, but that felt unlikely as previous patches had applied without a hitch.
I tried running the “catupgrd.sql” script against one of the PDBs on 19.25 and it said it didn’t need upgrading, so it feels like this wasn’t an issue with a previous upgrade. When I applied the 19.26 patch it failed in the same way, and was only fixed by running catupgrd.sql again.
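If you want to do a similar sanity check on one of your own PDBs before reaching for catupgrd.sql, the component versions and statuses in the registry are a reasonable place to start. This is just a sketch with a made-up PDB name, not a substitute for what catupgrd.sql itself checks.

# List the registry components for a PDB, so you can spot anything not VALID.
sqlplus / as sysdba <<EOF
alter session set container = MY_PDB;
select comp_id, version, status
from   dba_registry
order by comp_id;
exit;
EOF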
Conclusion
We use the same build scripts to build almost everything, and the patching is automated, so it’s one size fits all. It’s unlikely I’ve introduced human error, but never say never.
I thought I would share this in case something weird like this happens to you during any patching. Hopefully this was caused by something stupid I did and you’ll never see it, but I have no idea what that could be. 🙂
Cheers
Tim…