2021-04 PBG Partition Pain

One of the things I dearly love about Db2 is that I never, ever stop learning. I have been working with Db2 since V1.3, and I was still completely surprised by what I have just learnt!

PBG – What is it?

For those that do not know: PBG is the Partitioned-by-Growth Universal Table Space (UTS) that came into existence in Db2 V9, along with its twin, Partitioned-by-Range (PBR). IBM have repeatedly said that all new development will focus on tables, indexes etc. that are backed by UTS, and so all the other types (Simple, Segmented, Classic Partitioned) are deprecated. When you create a PBG you specify how many partitions it can grow to using the MAXPARTITIONS clause, and it starts by allocating just the very first partition.
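A minimal PBG definition might look like this (all object names here are hypothetical, just for illustration):

```sql
-- Hypothetical database and tablespace names.
-- MAXPARTITIONS caps the growth; only partition 1 is allocated at first.
CREATE TABLESPACE TS1 IN DB1
  MAXPARTITIONS 10
  DSSIZE 4 G
  SEGSIZE 32
  BUFFERPOOL BP0;
```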

First Part – First Problem

Quite quickly after GA, the first problems with this data model started appearing. First up were the people who used 4096 as MAXPARTITIONS and then started running out of memory, as internal control blocks were being allocated for thousands of partitions that would most probably never exist!

Next problem – Copy problems

Then users of DSN1COPY and other cloning methods hit the next really nasty problem: what happens if subsystem A has three partitions but subsystem B only one, and you use DSN1COPY to copy the data? This problem was unsolvable until another new DDL parameter was quickly rolled out: NUMPARTS. It controls how many partitions are physically allocated at DDL creation time. Naturally, it should *only* be used if you intend to cross-clone datasets where the number of partitions does not match.
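With NUMPARTS added, the physical partitions exist from the start, which keeps both sides of a DSN1COPY clone in step (again, names are hypothetical):

```sql
-- Hypothetical names. NUMPARTS 3 pre-allocates partitions 1-3 at
-- CREATE time, so a DSN1COPY target can match the source layout.
CREATE TABLESPACE TS1 IN DB1
  MAXPARTITIONS 10
  NUMPARTS 3
  SEGSIZE 32
  BUFFERPOOL BP0;
```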

Running out of space?

Various people have written monitoring software over the years that intercepts IFCIDs from Db2 and audits the data within. Our very own SpaceAssuranceExpert (SAX) does this to warn you if you are about to hit any one of the many Db2 space-management limits. Naturally, one of these limits is when you are nearing, or have allocated, the last partition – in other words, you are approaching or have indeed reached MAXPARTITIONS. This means you are about to run out of partitions and, once in the last partition, SAX monitors how full it is to warn the DBA group of possible -904s coming up. All well and good so far!

Newest problem – Death by descending key

It then came to my attention that Db2 12 does not, in fact, work that way in one special case. If the DDL has the same number for MAXPARTITIONS and NUMPARTS (so all possible partitions are allocated at creation time), and you have a CLUSTER-defined index where all columns are DESCENDING, then the following happens:

First, data is inserted into partition one and extents are taken as required until the partition is full (so DSSIZE has been reached). Then the inserts are targeted at the *last* partition! Once that partition is full, Db2 moves to partition MAXPARTITIONS – 1 and fills that one up, and so on backwards!
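The fill sequence described above can be sketched in a few lines (plain Python, purely a model of the order the text describes – nothing Db2 actually executes):

```python
def pbg_descending_fill_order(max_parts: int) -> list[int]:
    """Model the insert target sequence when MAXPARTITIONS = NUMPARTS
    and the clustering index is entirely DESCENDING: partition 1 fills
    first, then the last partition, then backwards down to partition 2."""
    return [1] + list(range(max_parts, 1, -1))

print(pbg_descending_fill_order(4))  # [1, 4, 3, 2]
```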

This naturally kills any attempt to see online how full the dataset is. Obviously you can do a spacemap analysis to review exactly how much free space there is, or hope that the Real-time Statistics tables can help you, but doing this instantly for every IFCID is just crazy!

Is that a must?

Thankfully, you can toggle off this very strange behaviour by setting MAXPARTITIONS to one more than NUMPARTS. Then Db2 behaves itself and goes back to filling partitions 1, 2, 3, etc.
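So the workaround at CREATE time is simply to leave one spare logical partition (hypothetical names again):

```sql
-- Hypothetical names. MAXPARTITIONS is one more than NUMPARTS, so the
-- descending insert behaviour described above does not kick in.
CREATE TABLESPACE TS1 IN DB1
  MAXPARTITIONS 4
  NUMPARTS 3
  SEGSIZE 32
  BUFFERPOOL BP0;
```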

An ALTER to the rescue?

But what if you already have such a definition? If you already have MAXPARTITIONS 3 and NUMPARTS 3, what about simply ALTERing MAXPARTITIONS up to 4?
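The original listing is not reproduced here; the ALTER in question would presumably be something like (tablespace name hypothetical):

```sql
-- Hypothetical name; raises the logical partition limit from 3 to 4.
ALTER TABLESPACE DB1.TS1 MAXPARTITIONS 4;
```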


Nope. Then Db2 gets *very* dumb! It inserts into partition one until it is full, then skips to the *old* MAXPARTITIONS – partition three – and fills it up, then works backwards until partition two is full, before finally jumping to partition four! Whoever thought this up must have been crazy! In a space with three partitions which is then ALTERed to have a MAXPARTITIONS of four, you fill partitions 1, 3, 2 and 4 in that order… In my opinion this is a bug in Db2, as this space should be handled exactly as if it had been defined with MAXPARTITIONS 4 and NUMPARTS 3!
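The post-ALTER sequence can be modelled the same way (again a plain Python sketch of the order described in the text, not anything Db2 runs):

```python
def post_alter_fill_order(old_max: int, new_max: int) -> list[int]:
    """Model the insert target sequence after ALTERing MAXPARTITIONS up
    from a space created with MAXPARTITIONS = NUMPARTS = old_max:
    partition 1 first, then the old last partition backwards to 2,
    then the newly added partitions in ascending order."""
    return ([1]
            + list(range(old_max, 1, -1))             # old last back to 2
            + list(range(old_max + 1, new_max + 1)))  # then the new ones

print(post_alter_fill_order(3, 4))  # [1, 3, 2, 4]
```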

What about an Index ALTER?

BINGO! An ALTER of the clustering index gets the inserts going "back" into the normal partitioning sequence. All very odd, confusing and misleading!
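The exact index ALTER is not shown in the original; one plausible form, assuming the fix is to take the all-DESCENDING index out of the clustering role, might look like this (index name hypothetical):

```sql
-- Hypothetical index name; removing the CLUSTER attribute from the
-- all-DESCENDING index is one way such an ALTER could look.
ALTER INDEX IX1 NOT CLUSTER;
```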

There must be a reason!

I have thought long and hard about "Why?" and I cannot think of one answer that makes *any* sense! If Db2 started inserting in the last partition I could sort of get it – it might reduce the unclustered insert counts – but only doing this *after* filling the first partition makes it all seem so pointless… and then after an ALTER it just gets worse!

One "pointer" found

While looking around the web for any help I did stumble across this little gem:

The new V12 looping through partition function, can be either ascending or descending order. If the search is in descending order, then it is possible to skip a partition between the original target and last partition of the table space. Therefore, a sanity check of 04E-00C90101 LOC=DSNIDM DSNISGNS ERQUAL50C4 will be encountered, to catch this situation.

In APAR PI68087 from 2016.

I think this undocumented behaviour should either be corrected or, even better, removed as soon as possible!

As usual, any questions or comments please feel free to mail me!


Roy Boxwell