Ok, management wisely decided to pay for the Oracle 11g training for me and my primary support colleagues (thanks, thanks, thanks!), so this week we find ourselves sitting in a meeting room with Juergen Kirschner (yes, THE J. Kirschner), both as victims and witnesses of his very own special way of making sure every attendee understands how Oracle works.
Of course, it’s not meant to be purely an entertaining treat for the database support team. Instead, we are expected to actually remember a thing or two after the course so that we can provide some real support for Oracle 11g databases.
Rumor has it that customers are starting to use it… 🙂
Thus, I decided to do as Napoleon did and write down the stuff I want to remember. And while I’m at it, I might as well post it.
And here we go:
The first thing I remember from today was that Oracle again managed to increase the number of database parameters.
Don’t ask me for exact numbers, but including the undocumented parameters, 11g easily reaches more than 2,500 parameters.
Combined with the approach of releasing bug fixes behind an on/off switch that defaults to off (that is, you have to set a parameter, e.g. a _fix_control setting, to activate the fix), this all but guarantees that the list of required/recommended parameters will grow even longer.
A nice change in that area is the work they did on the “event” parameter – you no longer have to specify the miraculous ‘trace name context forever, level xyz’ part. Instead, you just write the event number and, if required, the level.
event='10053' would be entirely sufficient to start the CBO trace. Nice, isn’t it?
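For illustration, here are the old and new notations side by side (a sketch only; the exact pfile wording may vary slightly between patch levels):

```sql
-- Pre-11g pfile syntax, with the full incantation:
--   event = "10053 trace name context forever, level 1"
-- 11g shorthand, the level being optional:
ALTER SYSTEM SET EVENT = '10053' SCOPE = SPFILE;
-- or, with an explicit level:
ALTER SYSTEM SET EVENT = '10053 level 1' SCOPE = SPFILE;
```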
Using SPFILES also got a bit handier.
When resetting a parameter, you can now omit the “SID='*'” part that was required with earlier versions.
And it is even possible to change the order of the keywords in the ALTER SYSTEM statement.
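A quick sketch of the difference (the parameter name is chosen arbitrarily for illustration):

```sql
-- Pre-11g: the SID clause was mandatory when resetting in an SPFILE
ALTER SYSTEM RESET optimizer_index_cost_adj SCOPE = SPFILE SID = '*';
-- 11g: the SID='*' part can simply be omitted
ALTER SYSTEM RESET optimizer_index_cost_adj;
```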
As usual with new database versions, some parameters get outdated and deprecated.
Now, what does Oracle do when you specify such parameters in your instance profile and start up the database?
It starts up the instance, but not without:
- printing out the nasty “ORA-32004: obsolete or deprecated parameter(s) specified for string instance” message, and
- changing the return code of SQLPLUS to a non-zero value.
If that does not make you nervous (well, I was quite relaxed up to this point…), then you surely don’t expect to find parameters like REMOTE_OS_AUTHENT on the list of deprecated parameters.
SAP NetWeaver heavily relies on the OPS$ logon feature, and the standard scripts’n’tools like “startsap” and “brconnect” tend to interpret non-zero return codes from sqlplus as errors … that’s guaranteed to produce some fun!
You’ll already find SAP Notes on this one, and a proper solution is currently being worked on – so hopefully this won’t be an issue for too long.
Ah – while we’re at it: the new parameter recommendation note is
1431798 Oracle 11.2.0: Database Parameter Settings
Currently, searching for ‘11g + parameter‘ doesn’t find it, but I guess that can easily be fixed.
While the Oracle 10g feature presentation slides explained the advantages of automatic SGA tuning (sga_target, sga_max_size), the Oracle 11g slides now advertise automatic tuning of the whole Oracle memory, SGA AND PGA (memory_target, memory_max_target).
However, to be on the safe side in your production system, you would still need to set reasonable minimum sizes for SGA and PGA to prevent your database from going down due to some extremely memory-consuming statement.
And as soon as you do that, all the nice automatic management advantages go out of the window, as you have to decide on reasonable memory sizing yet again.
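A minimal sketch of what such a ‘safe’ setup might look like (the sizes are, of course, made up; with memory_target in place, sga_target and pga_aggregate_target act as minimum values):

```sql
-- Let Oracle distribute memory between SGA and PGA automatically...
ALTER SYSTEM SET memory_target = 4G SCOPE = SPFILE;
-- ...but pin down minimums for SGA and PGA as a safety net,
-- which brings back much of the manual sizing work:
ALTER SYSTEM SET sga_target = 2G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE = SPFILE;
```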
Datafile Space Management
Quite the same is true for some datafile space management features.
There’s a new background process SMCO (Space Management Coordinator Process) that extrapolates the space requirements in your database and automatically extends the datafiles for you.
That way, if you’re using autoextend data files, your data-loading transactions won’t have to wait until the file extension has been made.
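Just to be clear about which files SMCO takes care of – a datafile with autoextend enabled might be set up like this (path and sizes invented for the example):

```sql
-- SMCO can proactively grow datafiles that have AUTOEXTEND enabled
ALTER DATABASE DATAFILE '/oracle/C11/sapdata1/data_1.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
```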
Good or evil feature?
Well, if you hand the allocation of filesystem space over to some automatic feature – remember, this filesystem space has to be paid for (twice or more … for your QA and test systems as well!) – then you might get into trouble sooner or later.
Besides, there’s no automatic “drop just that one datafile” feature yet – so reclaiming the filesystem space might become a bit more work.
Unrelated side note: the space management bitmap of all newly created datafiles has now been enlarged to the size used for bigfile datafiles. This will lead to a bit more ‘wastage’ in your data files; consider yourselves informed, you bean-counters out there!
Also unrelated: it’s now supported to choose a larger block size for the online redo log files. While this might improve write I/O performance for your storage solution, it might be worth a second thought, since the redo log block size is the minimal container for writing out redo data.
You change a row, generate 12 bytes of LOG data and commit?
You just wrote out 4K to the online redo logs (assuming you set the maximum block size).
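For the curious, a redo log group with a non-default block size could be created like this in 11.2 (group number, path, and size invented; depending on your storage, Oracle may refuse a block size the disks don’t actually report):

```sql
-- Online redo log group with 4K blocks instead of the classic 512 bytes
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/oracle/C11/origlogA/log_g4m1.dbf') SIZE 512M
  BLOCKSIZE 4096;
```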
Another new background process is the “Virtual Keeper of Time Process” (VKTM).
As a demonstration via strace easily showed, this process does two things:
- sleeping and
- issuing gettimeofday() system calls.
Now, what’s this for, you might ask – as I did as well.
The idea behind it goes something like this:
gettimeofday() is a rather expensive system call, and in some cases its precision is not required.
Sometimes the current second would do.
In such cases, the other Oracle processes can now just read out the result of VKTM’s last system call and use this instead of calling the system API themselves.
Where and when this is used?
No bloody idea… (a.k.a. confidential!).
Back to datafile space management: there are some minor changes that only partly make sense.
You can now provide a maximum segment size on tablespace level.
So if your DBA goes nuts and decides that segments larger than 20 MB are evil and thus not allowed, then you cannot create such segments.
Not even resumable space allocation gets you out of this, because… yeah, well, because it just doesn’t (the real reason is even more Homer-Simpson-DUH!-like, so don’t ask!).
The next feature, deferred segment creation, is important to know about, since it’s enabled by default.
The effect is that when you create a table, no segment will be created for it as long as you don’t insert any data. The same is true for indexes on this table.
Given that you typically find thousands of empty tables in SAP systems, this feature alone might save you quite some space, as until now even empty tables allocated at least 8 blocks, plus another 8 blocks for each index, LOB index, LOB column, etc.
Currently, this feature does not work for partitioned tables (that should change ‘soon’) and will mess up the output of your monitoring scripts if you just join DBA_TABLES and DBA_SEGMENTS without an OUTER JOIN.
In the worst case, you’ll look at your script output and find thousands of tables missing… In that case, breathe in, relax, and check the DBA_TABLES column SEGMENT_CREATED to figure out whether there should be a segment at all.
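A sketch of both points – explicitly controlling segment creation, and a monitoring query that survives the new default (column list trimmed for readability):

```sql
-- Deferred creation is the 11.2 default; it can also be requested explicitly
CREATE TABLE empty_demo (id NUMBER)
  SEGMENT CREATION DEFERRED;

-- Monitoring: OUTER JOIN to DBA_SEGMENTS and check SEGMENT_CREATED,
-- so segment-less tables don't silently drop out of the result
SELECT t.owner, t.table_name, t.segment_created, s.bytes
  FROM dba_tables t
  LEFT OUTER JOIN dba_segments s
    ON  s.owner        = t.owner
    AND s.segment_name = t.table_name;
```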
The CBO, however, will treat these tables as if they had their segments.
So it just keeps on relying on the CBO stats and does not play any clever tricks with the knowledge that the tables are really empty. 🙁
While digging into this feature it came up that the famous Oracle dictionary scripts that helped us Oracle hackers so much to learn about the inner structures in the past have changed as well.
SQL.BSQ is now split up into multiple sub-*.BSQ files – but these are still worth the dive from time to time!
Before I get too tired to remember anything more, let’s move on to tablespace encryption.
Just like with the TDE that was available with Oracle 10g, it’s required to create an encryption key in the form of a ‘wallet‘.
This can either be manually opened after instance startup or an automatic opening can be configured.
As the encryption setting is now done on tablespace level, the handling got a lot easier than before.
Still, the major security impact is on the backup situation.
Of course, the production database will have the wallet open all the time and automatically – so there’s little additional security on your production instance. The backups, however, even the ones done via filesystem snapshots, are now secure and useless to anybody who does not have the correct wallet.
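To give an idea of how simple the handling has become, a sketch (wallet password, names, and sizes made up; the wallet location must be configured in sqlnet.ora beforehand):

```sql
-- Create the master key / wallet (11g syntax):
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "some_wallet_password";

-- From then on, encryption is just a tablespace attribute:
CREATE TABLESPACE secure_ts
  DATAFILE '/oracle/C11/sapdata2/secure_ts.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```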
Ok, that’s all I can wring out of my short-term memory today.
(note to me: maybe I should consider taking notes… ?) 😉
See you tomorrow, so long