2019-12 Fun with Db2 12 PBR RPN

I have recently enhanced our SpaceAssuranceExpert (SAX) product to automatically help out if partitions, or partitioned indexes, are getting too big for their boots in a production system.

SAX – What is it?

The idea behind SAX is to stop any and all chances of getting a dreaded SQLCODE -904 in production, especially “out of the blue”.

Our idea was that, in Db2 12 with the new PBR RPN, SAX could do an on-the-fly ALTER of the DSSIZE, just like it does now with the SECQTY to avoid running out of extents.

RTFM Time

A quick look in the manual tells you that:


it is an immediate ALTER (as long as you make the new DSSIZE larger than the old one!) and there are no package invalidations or REORGs required.


This is fantastic!

So I created a nice little PBR and then ran a horrible Cartesian join SPUFI to flood the first partition with data. This join had a TIME card of (,1) to limit it to one second of CPU before getting an Abend S322.
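If you want to play along, a minimal sketch of such a PBR RPN tablespace could look like this (database, tablespace and sizes are made-up test values – the table then just needs a PARTITION BY RANGE clause):

-- Hypothetical test object: a small PBR with relative page numbering
CREATE TABLESPACE TESTRPN
  IN TESTDB
  NUMPARTS 4
  DSSIZE 1 G
  PAGENUM RELATIVE
  BUFFERPOOL BP0
;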

SAX Monitor

The SAX monitor reacted perfectly and did the TP alter to 513 GB DSSIZE (I had an increment size of 512 as a test), and got an SQLCODE -666 (I just love that SQLCODE…), as the INSERT was still running.

This ALTER was then internally queued to be attempted later, like in any other failure case.
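For reference, the partition-level change in question is nothing more exotic than this (names are, again, hypothetical):

-- Immediate for a PBR RPN in Db2 12, as long as the new DSSIZE is larger
ALTER TABLESPACE TESTDB.TESTRPN
  ALTER PARTITION 1 DSSIZE 513 G
;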


All well and good.

Wham! Nasty errors !

Then I did the same for a DPSI on my PBR RPN… Oh dear! I got a nasty I/O error and then an even nastier ROLLBACK loop, meaning I had to cancel the IRLM to stop Db2… (There is an APAR for this problem: PH18977.)

Rollback loop?

The ROLLBACK loop was caused by my choice of NOT LOGGED as a tablespace attribute. We have a finite amount of log space, and when the transaction was S322’d after one second of CPU, the rollback could not find one of the archive logs – and then we had to cold start Db2. Not pretty!

Proper test!

I then wrote a couple of little test programs that actually COMMITted after 5000 inserts and then the ALTERs all worked as designed.


Do I worry too much about extended format and extended addressability?

Next, I worried about the ominous “extended format and extended addressability“ attributes in the DATACLASS for a PBR RPN, and wondered what would happen if a customer has SAX running and it happily ALTERs a TP to, say, 6 GB when they can only address 4 GB…


IBM to the rescue!

Luckily for us, Db2 development had thought about this!

  • If you attempt to create a PBR RPN (even with a very small size) and your DATACLASS does not have the two attributes set, you get an error message 00D70008 telling you this detail.

  • If you ALTER an existing tablespace to be a PBR RPN and your DATACLASS does not have the two attributes set, then the ALTER works fine. But remember, this is a pending alter and you *must* do a REORG at the TS level with inline TP level copies. This REORG then fails – also with 00D70008.

So, in other words, SAX cannot hurt you here!


What about PBGs?

After all this, we also considered PBGs. They have a limit as well – MAXPARTITIONS in their case. So we added the ability to ALTER MAXPARTITIONS too. Here you must be more careful though, as these ALTERs are still immediate *but* they invalidate any referring packages!
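The ALTER itself is trivial – something along these lines (hypothetical names and numbers):

-- Immediate change, but beware: it invalidates any referring packages
ALTER TABLESPACE MYDB.MYPBG
  MAXPARTITIONS 36
;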

ABIND YES or NO?

If you work with ABIND set to YES, all is good: the ALTER comes in, Db2 invalidates your packages, and the auto rebind happens so fast that you do not even notice it. However, if you work with ABIND NO, then any packages – even the package actually running and doing the inserts – will fail! If you are only using dynamic SQL then it is 100% OK; otherwise – buyer beware!

Aha!

I have opened an Analytics Idea (DB24ZOS-I-1057) to try and get this loophole closed, as I cannot see how any access path could be affected by going from 32 to, say, 36 MAXPARTITIONS.

Db2 keeps adding the parts dynamically and nothing happens then… Go figure… Anyway, if you would like it – Please vote for it!


There are already some nice comments attached to it:

  • DP commented

    this is just a limit in the catalog.  So how could access path be affected?  The actual growth of a partition doesn’t invalidate the package so how just changing the limit do so ?
  • BW commented

    I opened a Case on this asking why packages are being invalidated in this situation and it is still open waiting for a reply.
  • BD commented

    With only impact to catalog, not sure how Access Path would be impacted.  Seems wasteful and counter productive to invalidate packages.

Remember – You never stop learning!

As always I would be pleased to hear from you!

TTFN,
Roy Boxwell
Senior Architect

2019-09 A DBA has got to know his limitations

Sorry, but I could not resist the Dirty Harry quote there…and yes I know it really should be “his/her” but that would not then be a quote, so please forgive me!

I have spoken/blogged about limitations in Db2 a few times over the years and we have a nice little tool called Space AssuranceExpert (SAX) that stops a whole bunch of nasty events (read -904’s) at sites around the world.

It grows and grows

SAX started out “just” monitoring the size and number of Extents/Linear Datasets that Db2 was allocating. It then either did a dynamic ALTER of the SECQTY to allow the dataset to get to its physical maximum size before running out of extents, or it warned people that the number of physical datasets was running out.


First enhancements – SMS Storage group monitoring

We pretty quickly enhanced it to also monitor SMS Storage groups, as it can be a tad embarrassing when your copy pool runs out of space!

At the same time, we changed the logic to work with the -1 -1 style allocations when IBM brought out their sliding scale logic.

Universal support

When PBG Tablespaces came out, we again enhanced the product due to the special processing limits of these Universal Tablespaces.

Think MAXPARTITIONS here!

Db2 12

Along came Db2 12 with its new RPN format and the ability to have a different DSSIZE for each partition, and each index got its own DSSIZE.

Sequences and Identities

Then we were asked if we could also monitor Sequences and Identities that are defined as non-cyclic. A lot of these beasts are defined so that they just “hit the buffers” at a certain number. So we added that to the system as well.

Numeric Primary Key support

Next came a request to monitor numeric Primary Key columns. These were actually sequences before sequences were even invented! You have, typically, a SMALLINT column that keeps being incremented – fine for the first 32,767 values, but after that?


No time to rest on our Laurels!

Are we now finished? No! There is *always* more to do!

  • What if you are nearing the column limit, or worse yet, the Index column limit?
  • What if the size of your index is getting dangerously close to the absolute physical maximum?
  • Talking about absolute physical maximums – What if you are running out of DBIDs?
  • Or, even worse, running out of OBIDs within a database – Do you even check these?
  • What happens when you try to ALTER it? Or create that new index?
  • Or even just a view with an INSTEAD of function?

BOOM! Is what happens!

SAX saves the day again!

We have enhanced SAX yet again, so that you can now run a batch job to review all of these limits.

This gives you plenty of time to take corrective action before the bridges are burning…


What are the limits?

The appendix “Limits in Db2 for z/OS” in the Db2 SQL Reference lists them all out, but here are the important ones for you all:


750 Columns in a table.

Actually it can be less, depending on the complexity of any VIEWs, but to keep it simple this value is used as the upper limit.


64 Columns in an index. This includes INCLUDE columns.

For a partitioning index (That is an “old style” index that is actually used to partition the data) you get a maximum size for PADDED indexes of 255 – n bytes and, if NOT PADDED, then 255 – n – 2m – 2d bytes.

For any other indexes you get a maximum size for PADDED indexes of 2000 – n bytes and, if NOT PADDED, then 2000 – n – 2m – 2d bytes.

Where:

  • n is the number of columns which are NULLable
  • m is the number of varying length columns
  • d is the number of DECFLOAT columns
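To see the formulas in action: a NOT PADDED index (of the “any other” kind) with two NULLable columns (n = 2), one varying length column (m = 1) and no DECFLOAT columns (d = 0) can have a maximum key size of 2000 – 2 – 2 – 0 = 1996 bytes.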

An FTB (see News from the Labs 2019-07) is only possible for indexes whose key is 64 bytes or less. If you ALTER ADD a one-byte column to a 64-byte index, then you have broken this limit and that index is no longer eligible for FTB processing.

65,217 Databases may be defined. This includes any widow databases of course (typically DSNnnnnn-style empty databases where the implicitly created tablespace was dropped when the table was dropped, but the database was “forgotten”).

32,767 OBIDs within a single database. Remember that the number of objects (OBIDs) within a database is not a simple 1:1 relationship. Each tablespace, index or referential relationship takes two, whereas each table, check constraint, LOB auxiliary object, XML auxiliary object, trigger or view with INSTEAD OF takes one.
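If you want a rough feel for how close a database is to that OBID limit, a deliberately simplified catalog query (this is just a sketch, not the SAX algorithm) can show the highest OBID handed out so far per database:

-- Highest OBID/ISOBID/PSID seen per database - a rough usage indicator
SELECT DBNAME
      ,MAX(HIGHOBID) AS HIGHEST_OBID
FROM (SELECT DBNAME, MAX(OBID)   AS HIGHOBID FROM SYSIBM.SYSTABLES     GROUP BY DBNAME
      UNION ALL
      SELECT DBNAME, MAX(OBID)   AS HIGHOBID FROM SYSIBM.SYSINDEXES    GROUP BY DBNAME
      UNION ALL
      SELECT DBNAME, MAX(ISOBID) AS HIGHOBID FROM SYSIBM.SYSINDEXES    GROUP BY DBNAME
      UNION ALL
      SELECT DBNAME, MAX(OBID)   AS HIGHOBID FROM SYSIBM.SYSTABLESPACE GROUP BY DBNAME
      UNION ALL
      SELECT DBNAME, MAX(PSID)   AS HIGHOBID FROM SYSIBM.SYSTABLESPACE GROUP BY DBNAME
     ) AS T
GROUP BY DBNAME
ORDER BY 2 DESC
WITH UR
;

Anything creeping up towards 32,767 deserves a closer look.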


Stay up to date

If Db2 development brings any new limits out – think Agile here – then SAX will be quickly enhanced to check for them!

What is the plan?

Simply schedule the SAX limit checker to run once a week or so.

  • It will report all its findings and, if required,
  • issue WTOs to alert you to any dangers before hitting any of these limits.

And you?

What do you do at your shop now? Do you monitor these hard limits? Does anyone care?

Go ahead, make my day and install a SAX trial!


For a SAX trial, please email: techsupport@seg.de

Consult our Space AssuranceExpert (SAX) page


As always I would be pleased to hear from you!

TTFN,
Roy Boxwell
Senior Architect

Machine Learning for Db2 z/OS: Artificial Intelligence – Hype or Reality?

Now that the Db2 Optimizer has gone all Artificial Intelligent (AI from now on) on us, I thought it would be interesting to review where we are in this brave new world!

Keeps you out of a BIND

My firm created our BindImpactExpert (BIX) software many moons ago to compare access paths, typically before a non-SQL program change, or before a REBIND after RUNSTATS etc. – just to warn you not to do the REBIND, or whatever you were about to do, if it would change the access plan to something you may well not really want to have. This means not just no-brainers like index access turning into a tablespace scan, but also different index usage or table access order changes.

Apples and Oranges

So, what has that got to do with AI?
Well, we used a marketing slogan of trying to compare oranges and apples to make it clear:

how tough it really is to try and match SQL statements across different program releases and to compare the results.

ML and AI – The new buzzwords

Now in Db2 12, the optimizer has got machine learning (ML from now on) to help it study which host variables are used the most and how many rows are usually returned by a given query.

Do you “know” your data?

When the optimizer “knows” this info it can of course make much better “guesses” at good repeatable access plans that help everyone! Now the usage of AI and ML always gets me interested because of all the fun you can have with them.


There was recently a great article in my favourite science magazine “New Scientist” all about how to fool image recognition AI:

https://www.newscientist.com/article/mg24232270-200-machine-mind-hack-the-new-threat-that-could-scupper-the-ai-revolution/


Now you should see the connection back to our BindImpactExpert slogan. So much for oranges and apples – AI cannot even see that an orange is an orange… This is not good, or even perhaps “suboptimal”…

Adversarial Images and the fun you can have with them…


From 2018, there is this article:

https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial

Within this article is a great paper all about graffiti-altered road signs! Here is a direct link to that paper: https://arxiv.org/pdf/1707.08945.pdf


Fooling Deep Learning

Another great paper where Deep Learning AIs jump off the deep end:

http://www.evolvingai.org/fooling


Disappearing Humans

And finally, this joyous article with a link to a YouTube video:

https://www.theverge.com/2019/4/23/18512472/fool-ai-surveillance-adversarial-example-yolov2-person-detection


But do not wear this picture as a T-Shirt down the street anytime soon!!!

Would you sit in a driverless car?

What this all boils down to is that I, for sure, will never sit in a driverless car within the next 40 years!
Why?

  • Firstly, I believe a true AI does not exist yet and will not exist for at least another 40 years.
  • Secondly, when I see an Orange I see an Orange because I “know” what an Orange looks like! As you have seen AIs have no concept of what an Orange *is* and so can easily see a drill instead or mistake skiers for dogs…
  • Thirdly, I am a programmer and someone somewhere is writing and testing – I hope! – the code… need I say more? Don’t even get me started with Agile or KANBAN.

Artificial Intelligence - Machine Learning for Db2 z/OS
Extract from © 2019 – “We need to talk, AI” – Dr. Julia Schneider and Lena Kadriye Ziyal
A Comic Essay on Artificial Intelligence

https://weneedtotalk.ai


Room full of CTOs

Have you heard the hopefully apocryphal story of a room full of CTOs at a software conference? When they were asked “Would you fly in a computer-controlled aircraft?” all their hands went up. Then, when they were asked “Would you fly if it was your software teams that had written the flight control software?”, all their hands went down!

Flying is strangely different

While we are talking about flying: my answer there would be different.

Why?
Because in the air you have no-one else, apart from other highly automated devices, anywhere near you! This is the one hope that I can see for the whole driverless car stuff – driving on the Interstate, Motorway or Autobahn. In the air, the environment is much more rigid: no lights, no crossings, no oncoming traffic, no foot traffic, no children chasing a ball across the road, etc.

They will come…

I am sure that driverless cars will come soon and I am equally sure that people will die because of them – However, I am also sure that lots of accidents will be avoided as well. After all the AI will always react faster than a human.

With this comes the ethics of AI and ML, of course, which will also play a major role. If the car has to decide in a crash scenario:

  • Who gets hurt – should it protect itself and its cargo above all others?
  • What about a group of playing children, or a bus full of nuns?
  • How can it decide who lives and who dies?

Looking on the bright side at least it will not get drunk, or high, and drive! I would also seriously hope that AI does not get road rage!

But even with these “bonus” points, I still have massive doubts and serious worries about the quality of the software and the true image recognition potential of current AIs.

AI everywhere

We were recently asked by a customer to add AI and ML to our SpaceAssuranceExpert (SAX) software. This has been happily running for years, capturing dataset allocations and extent usage in Db2 and making sure they never ran out of space or extents without getting told about it well beforehand.

I had to do a real double take and think “How on earth could AI help here?”

You are approaching 32 Linear Datasets with your segmented tablespace. You have to ALTER and REORG before you use up all the space in the 32nd. Where can AI help here?
We already have extrapolation from current usage…

Don’t get me wrong

I am not against AI and ML… but I think we *all* need to get a bit more skeptical about what can actually be delivered right now.

On the one hand, host variable contents and row counts – yes indeed, though even here outliers will cause grief. On the other hand, the complete and utter science fiction of self-driving cars in our towns and cities.

The money being spent is also amazing: Syncsort estimates that the 2019 AI budget will be $35.8 billion – 44% more than in 2018.


What do you think?


I am sure that autonomous cars will be a plague upon us – are you?

TTFN,
Roy Boxwell
Senior Architect

PS: I am looking forward to the first Adversarial Images for Db2 z/OS !!!

2018-11 Db2 Numeric Primary Keys: Space, the final frontier?

What do Db2 numeric Primary Keys have to do with space management ?

SMALLINT  –  INTEGER  –  DECIMAL  –  BIGINT…

How to find out which sequences are nearing their physical limit ?

Recently I was at a customer site and was showing them our SpaceAssuranceExpert (SAX) software. It basically does all sorts of space management and checking, and has recently been extended to now also check for any numeric Primary Keys that are running out of space.

Traffic is needed

The simple way to show that the started task is up and running, and working as designed, is to generate some traffic. To do this, I normally just do a really awful Cartesian join insert into a dummy table.

First I create my target like so:

CREATE TABLE ROY.SYSTABLESPACE LIKE SYSIBM.SYSTABLESPACE ;

And then I simply do an INSERT from SELECT with no WHERE predicate in a batch SPUFI job:

INSERT INTO ROY.SYSTABLESPACE           
(SELECT A.* FROM SYSIBM.SYSTABLESPACE A 
                ,SYSIBM.SYSTABLESPACE B 
                ,SYSIBM.SYSTABLESPACE C 
                ,SYSIBM.SYSTABLESPACE D 
                ,SYSIBM.SYSTABLESPACE E 
                ,SYSIBM.SYSTABLESPACE F 
                ,SYSIBM.SYSTABLESPACE G 
                ,SYSIBM.SYSTABLESPACE H 
                ,SYSIBM.SYSTABLESPACE I 
                ,SYSIBM.SYSTABLESPACE J)
;

CCSID Problems

Now at this site, the first CREATE actually failed due to some odd sort of CCSID problem. The lead DBA said “No problem I have a little test table where we can do the same sort of thing.” This little table was used for the INSERT and we huddled over the keyboard waiting for the started task to start reporting on EXTENTS and other space data…

Nothing happened

We waited and waited and waited, and still a big zilch!

Trust is everything

Now, I know my software and I know that this works fine, so I was a little perplexed to say the least!

I thought and thought and thought and then reviewed their SQL again:

INSERT INTO ROY.DSN_COLDIST_TABLE       
(SELECT A.* FROM ROY.DSN_COLDIST_TABLE A
                ,ROY.DSN_COLDIST_TABLE B
                ,ROY.DSN_COLDIST_TABLE C
                ,ROY.DSN_COLDIST_TABLE D
                ,ROY.DSN_COLDIST_TABLE E
                ,ROY.DSN_COLDIST_TABLE F
                ,ROY.DSN_COLDIST_TABLE G
                ,ROY.DSN_COLDIST_TABLE H
                ,ROY.DSN_COLDIST_TABLE I
                ,ROY.DSN_COLDIST_TABLE J
                ,ROY.DSN_COLDIST_TABLE K
                ,ROY.DSN_COLDIST_TABLE L
                ,ROY.DSN_COLDIST_TABLE M
                ,ROY.DSN_COLDIST_TABLE N
                ,ROY.DSN_COLDIST_TABLE O)
;

Ouch! That hurt!  😯

And then it hit me!

Look at the SQL and you can see that it is trying to INSERT into itself…

What does Db2 do in this situation?

It inserts into the work database until the INSERT is finished and then it actually moves all the data into the table…

*duh*

Waiting for Godot

If we had waited long enough we would have seen messages like:

O2RT-SU04-006I: Extent activity for
DC10.DSNDBD.DSNDB07.DSN32K00.I0001.A001
                Number of extents : 00032 - EXTS threshold is    1        
O2RT-SU04-011I: 08:11:35 - Datasets will be processed now                 
O2RT-SU04-006I: Extent activity for 
DC10.DSNDBD.DSNDB07.DSN32K00.I0001.A001
                Number of extents : 00034 - EXTS threshold is    1

With hindsight everything is clear!

But, of course, we didn’t see that until much later… Then I noticed another little detail that I thought was quite funny – the tablespace in question was in COPY Pending:

DSNT360I -DC10 *********************************** 
DSNT361I -DC10 * DISPLAY DATABASE SUMMARY        * 
               *       GLOBAL                    *
DSNT360I -DC10 *********************************** 
DSNT362I -DC10 DATABASE = DSN00201 STATUS = RW 
DBD LENGTH = 4028 
DSNT397I -DC10 
NAME     TYPE PART  STATUS       PHYERRLO PHYERRHI CATALOG  PIECE
-------- ---- ----- ------------ -------- -------- -------- -----
DSNRCOLD TS   0001  RW,COPY 
DSNRCOLD TS 
******* DISPLAY OF DATABASE DSN00201 ENDED **********************
DSN9022I -DC10 DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION 
***

It appears that Db2 does not actually even check if it can complete the INSERT at the beginning…

Get it right   😀

The moment we changed the INSERT to target a different table (no work file usage) that was not in COPY Pending status, all worked as usual:

O2RT-SU04-006I: Extent activity for 
DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00002 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:40 - Datasets will be processed now                  
O2RT-SU04-006I: Extent activity for DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00003 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:41 - Datasets will be processed now                  
O2RT-SU04-006I: Extent activity for DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00004 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:42 - Datasets will be processed now

Much better data!

Why Primary Keys?

If you are wondering what numeric Primary Keys have to do with space management just think back to the good ol’ days before Sequences and Identity columns. You created keys with numeric types

  • SMALLINT,
  • INTEGER,
  • DECIMAL and
  • possibly even with the newer BIGINT.

These were, basically, “sequences”.


The challenge is to find out which of these is nearing its physical limit. With SMALLINT the limit is not that big, and with DECIMAL it can be very, very small indeed!


Catalog or Data?

Now, this data is fetched from a couple of catalog tables and so is very dependent on the quality of your RUNSTATS. So, in the next stage, it will be extended to actually read the User Data to see what the current value really is.
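As a small illustration of that “read the User Data” idea (the table and column names here are purely hypothetical), the check boils down to comparing the current maximum key value with the physical limit of its data type:

-- How much headroom is left in a SMALLINT primary key?
SELECT MAX(ORDER_ID)         AS CURRENT_MAX
      ,32767 - MAX(ORDER_ID) AS VALUES_LEFT
FROM ROY.ORDERS
WITH UR
;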

Cool stuff, huh?

Feel free to send me your comments and ask questions.

TTFN,
Roy Boxwell
Senior Architect

2017-02 Why SIZE still matters in Db2 12

What has changed for space management in Db2 12?

How to avoid SIZE limits in Db2 12 like in previous Db2 Releases?

How to monitor the maximum possible SIZE of table and index spaces and table and index partitions?

SIZE in Db2 12: Now that Db2 12 has gone GA, I thought it would be nice to do a quick re-recap of Space management and its problems over the releases. My “old” newsletter 2014-05: Why SIZE matters for Db2 still receives a lot of hits on our website, so I know that this is a big topic of interest for many of you. Some nifty things have been introduced in Db2 12 to make space a lot easier to use and manage.

In the beginning…secondary allocation for tablespaces and indexes since Db2 V7, Db2 V8…

Since Db2 V8, the DBAs of this world have all had the ability to forget about PQTY and SQTY in the DDL for Tablespaces and Indexes. At first, nearly no-one trusted the sliding scale algorithm, and SOFTWARE ENGINEERING’s product Space AssuranceExpert (aka SAX) monitored and reacted instantly to secondary allocations.

However, we now have Db2 12, and I thought it would be interesting to review what was done in Db2 V7 (when our SAX was launched), and the difference nowadays in the Db2 12 world.

IFCID issuing for space extents

Every time a secondary allocation is done in Db2, it can be made to spit out an IFCID. SAX runs as a started task, active 24×7, from Db2 start up until just before Db2 shut down. It catches all of these IFCIDs thrown by Db2, and performs an analysis with six basic questions:

  1. Can this dataset reach its maximum physical size *before* running out of physical extents? (The actual size is dependent on the “geometry” of the object of course!)
  2. Will this object run out of datasets? (The number of datasets an object can have is, once again, dependent on the “geometry” of the object.)
  3. Is this partition nearing its maximum size?
  4. Did Db2 ask for one extent but got more back?
  5. Are any of my SMS disk storage pools running out of space?
  6. Are there any SEQUENCES that are about to hit the wall?

(Numbers five and six are actually triggered by a timer, naturally.)

Can this dataset reach its maximum size before running out of extents?

Remember, back in those old days of Db2 V7? We only had 255 extents and 254 partitions, but datasets could still get pretty big pretty fast.

The problem lots of shops had, was that an important dataset would “hit the buffers” of maximum number of extents *way* before it ever ran out of physical space. Thus causing grief, wailing and gnashing of teeth! SAX stopped all this by giving WTO “heads-up” style messages in two flavors. First, a warning message, and then a critical message. This gave DBAs and space managers much needed time to plan for the outage and the, inevitably, long running REORG to actually action the required ALTER, or perhaps even any DROP/RECREATE that had to be done.

IBM also noticed this problem and so introduced in Db2 V8 the “sliding scale” of secondary allocations, as long as the ZPARM OPTIMIZE EXTENT SIZING field (MGEXTSZ) was set to YES (this is the default from Db2 9, by the way).

Of course, to really use this, you then had to ALTER all of the existing spaces’ PQTY and SQTY to be -1, then remember to delete all PRIQTY and SECQTY lines in your DDL, and also rely on the TSQTY and IXQTY ZPARMs giving a big enough “first default”. (By the way, the defaults for these two ZPARMs are 0, which is actually translated to be 720k or one cylinder for normal spaces and 7200k or 10 cylinders for LOB spaces.)

This all probably explains why the take-up of this great feature has not been that spectacular and, in fact, Listserv *still* gets questions about “How good is this feature?” It also explains why the primary reason for having SAX is still valid at most shops today!
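For anyone who never made that switch, the ALTER itself is a one-liner per object (or per partition for partitioned spaces) – something along these lines with hypothetical names:

-- Hand space management over to the sliding scale
ALTER TABLESPACE MYDB.MYTS  PRIQTY -1 SECQTY -1 ;
ALTER INDEX ROY.MYINDEX     PRIQTY -1 SECQTY -1 ;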

However, most shops these days tend to ignore the extents problem and only REORG when over 1000 extents have been allocated. This is no problem for SAX, as it knows the SECQTY and the MGEXTSZ ZPARM settings and can decide to “ignore” an IFCID for extent and ALTER SECQTY processing if the SECQTY is -1 and the MGEXTSZ is YES.

Will this object run out of datasets?

Now the problem of running out of datasets is very, very evil indeed… For a non-partitioned space, you can have up to 32 datasets.  Db2 will happily allocate away and you will never know, or even be informed, if, and when, the last possible dataset has just been allocated and, of course, you will not know that the 33rd one cannot be allocated until you get a -904 unavailable resource error! By definition this is “not good”, as you must do a HUGE REORG with a bunch of managers breathing down your neck and *not* make any mistakes with the new allocations. (Again, this is a very good reason to have SAX doing all the monitoring and triggering early warning “heads-up” style messages!)

Is this partition nearing its maximum size?

A partition running out of space is rare, but when it does happen it is, of course, a disaster! The idea in SAX, is to warn when “the end is near” for any of the partitions in use and thus, as before, allow time for the ALTER etc.

Did Db2 ask for one extent but got more back?

Degenerated extents are annoying as well. You have only 255 or 7,257 extents, Db2 requests one but gets up to five back! This is “wasting” your precious supply of extents and so SAX can also warn you if this starts happening. Remedial action can again be planned to correct the problem, (normally a volume defrag in this case). Now in z/OS 1.7 “Extent Constraint Removal” was introduced for the DATACLAS which, if set to “Y”, allows 7,257 extents but still limits you to 123 extents per volume and 59 volumes. So watch out if you are using huge “virtual” disks (E.g. MOD 54 or EAV), as you can end up wasting space because you still cannot exceed 123 extents per volume.

SAX also takes care of duplicate recording – this is where an extent is registered but SMS consolidates it into the primary/existing extent. This would normally get logged as an extent, but SAX sees this and does not report it as one.

Are any of my SMS disk storage pools running out of space?

When an SMS Pool runs out of space, either for sort/work or image copy, it is *not* good! The idea here is to also give a “heads-up” style alert. The DBA can then trigger the space management people to have a look at the state and size of the SMS storage groups, this time alerted by percentage used or GBs of space free.

Are there any SEQUENCES that are about to hit the wall?

The usage of SEQUENCES has taken off. Nowadays shops can run into the problem of SEQUENCES hitting the maximum/minimum number for a NOCYCLE defined sequence. SAX tests sequences at the same time as the SMS groups to warn about any encroaching problem with WTO/MSG and reporting.


What was new in Db2 V8?

Db2 V8 introduced a big change – Partitions went up to a maximum of 4,096, and the calculation about how many pieces your NPI can have got “a little bit complex” (see also my previous newsletter: “2014-04 Are you going to pieces”).


What was new in Db2 9?  PBG and UTS spaces

In Db2 9 the next major advance came with UTS spaces. The one that caused the most grief was, of course, PBG. Why? Well, the first problem was that some people went mad and used MAXPARTITIONS 4096 right from the get-go. They then found out this could not simply be changed and ended up being a huge problem. IBM came out with a bunch of fixes for these people, but the recommendation is still true today: “Use the number you expect to use!”

PBGs, however, came with a new set of space management problems:

  1. By definition every partition is full, and so a TP REORG is “dangerous” – especially if you have VARCHAR, and even more so if compressed.
  2. ALTER at TP level is not supported for PBG.
  3. Getting rid of empty partitions was not supported.
  4. Adding partitions dynamically (by command) was not supported.
  5. What to do if the partition that is “in use” is growing and either:
     a – it is the last allowed partition, or
     b – MAXPARTITIONS is set to one?

Now these are “non trivial”, because the Db2 catalog itself is defined like this and you would not want an alert every time someone created a table or index!

The trick here is to treat these conditions as if it were a normal space and so, instead of warning that you are using the last part, it waits until you are using, e.g. 80% of that part. Then, e.g. at 90%, comes the critical threshold warning.

Big changes happened here in Db2 12.


What was new in Db2 10?

With Db2 10 came the ability to ALTER PBGs to add parts which made using DSN1COPY to clone data around a lot better!


What was new in Db2 11?

In Db2 11 the REORG utility can be used to remove any empty parts in PBGs by the use of the ZPARM REORG_DROP_PBG_PARTS being set to ENABLE (DISABLE is the default).


What is new in Db2 12?

Now in Db2 12 there is partition independence for DSSIZE. Before, all partitions had to have the same maximum size (DSSIZE). Now you can have different sizes for different parts. This requires either creating a new tablespace (UTS Relative Page Numbering), or an ALTER and TS-level REORG of an existing UTS space. The tablespace switches to relative page numbering and the RID increases in size to seven bytes, hence the need for a TS-level REORG. The partitioning indexes also get a DSSIZE, so they can vary in size as well. Once you are there, all of the partitions can then be ALTERed up in size with no outage! This is really, really good!
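For an existing UTS PBR, the migration path just described is, in outline, something like this (hypothetical names; the TS-level REORG is the step that materializes the pending change):

-- Pending definition change: switch the space to relative page numbering
ALTER TABLESPACE MYDB.MYPBR PAGENUM RELATIVE ;
-- ...then REORG the whole tablespace to materialize it.
-- After that, individual partitions can be grown immediately:
ALTER TABLESPACE MYDB.MYPBR ALTER PARTITION 7 DSSIZE 64 G ;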

REORG of a PBG can now “spill” into a new partition. This is also really good, as it was the major problem with PBG TP-level REORGs. The chance of LOB data going into COPY Pending during the log apply phase has been stopped – thankfully! Finally, deletion of empty partitions is controlled with a utility DROP PART syntax.


The SAX way for Space monitoring

The SAX tool way of processing all this info is neatly summarised in the help panel of the tool itself:

SUPERVISE LPS

Supervise linear pagesets. If specified, a warning is issued
in case of high allocated reaches this percentage of the
maximum data set size for partitioned objects.

For non-partitioned objects, a warning is issued for every
newly allocated data set as soon as the data set number
reaches this percentage of the maximum number of data sets:

Two different values may be entered for warning and critical
values with different message ids. This may be useful for
automation reasons (see below).

Object type: TABLESPACE      ! Maximum number of data sets
-----------------------------+----------------------------
LOB tablespaces              ! 254
-----------------------------+----------------------------
Non-partitioned tablespaces  ! 32
-----------------------------+----------------------------
Partitioned tablespaces      ! 1 (Percent used check)
-----------------------------+----------------------------
Partitioned By Growth        ! MAXPARTITIONS. LPS check if
tablespaces                  ! more than one. If on last
                             ! partition then percent used.
-----------------------------+----------------------------
Object type: INDEX           ! Maximum number of data sets
-----------------------------+----------------------------
Non-partitioned indexes on   ! MIN ( 4096 , 2 power 32 /
tablespace with LARGE,       !      ( DSSIZE / TS PGSIZE))
DSSIZE, or more than 64      ! Eg: 128 GB DSSIZE with
Partitions                   !       8 KB Tablespace Page
                             ! gives 256 Pieces (datasets)
                             ! Or    4 GB DSSIZE with
                             !       4 KB Tablespace Page
                             ! gives 4096 Pieces (datasets)
-----------------------------+----------------------------
Non-partitioned indexes      ! 32
otherwise                    !
-----------------------------+----------------------------
Partitioned indexes          ! 1 (Percent used check)
-----------------------------+----------------------------
To support automation based on WTO ids two different
thresholds may be specified:
Field (1) specifies a warning threshold using WTO ids
O2RTSU04 - 12W  (non-partitioned spaces)
O2RTSU04 - 14W  (partitioned spaces)
O2RTSU04 - 16W  (partition by growth spaces)
Field (2) specifies a critical threshold using WTO ids
O2RTSU04 - 13W  (non-partitioned spaces)
O2RTSU04 - 15W  (partitioned spaces)
O2RTSU04 - 17W  (partition by growth spaces)

AUDIT DEGENERATED XTS
Audit secondary quantity for de-generated extents. If
specified, a warning is issued in case of the last extent
does not reach this percentage of the SECQTY specified
in the Db2 catalog. If this field is left blank, no
auditing is performed.

AUDIT SMS STOGROUPS
Should the Space AssuranceExpert audit SMS stogroups. Y/N
If Y is entered, a pop-up window will allow you to enter up
to 24 SMS storage groups which will be audited.
If WARN IF % ALLOC > or WARN IF GB FREE < is specified and
exceeded, a warning (WTO) will be issued.

CHECK SYSSEQUENCES
Should the Monitor also check for SYSIBM.SYSSEQUENCES that
are running out of room every PING minutes?

N  - do nothing.  This is the default.
I  - check Identity Columns and Doc Ids for XML.
S  - check User Defined Sequences.
B  - do both.

PERCENT USED
If checking of SEQUENCES is desired then a threshold
percentage must be given from 1 to 99. If this percentage of
the available sequences is exceeded then an action is
triggered.

EXCEEDED ACTION
When a percentage is exceeded this specifies what type and
and which style of message should be externalized.

N  - do nothing.  This is the default.
W  - to write out a WTO.
M  - to write a message to the job log.
B  - do both.

To support automation based on WTO ids the following
messages are output:

O2RTS000 - 20W  (SEQUENCES MAXVALUE)
O2RTS000 - 21W  (SEQUENCES MINVALUE)
O2RTS000 - 22W  (IDENTITY MAXVALUE)
O2RTS000 - 23W  (IDENTITY MINVALUE)

So now you know why size still matters for Db2 12! The big question now is: “Are your space management and monitoring tools up-to-date, or are they still Db2 V7?”

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

Roy Boxwell

2016-10 Discovering hidden recovery problems in the SYSLGRNX

A query to read the SYSLGRNX

This month I want to have a look inside the SYSLGRNX – Not just for fun but actually to see if I, and you, have a serious RECOVER problem.

The Problem began with too many TP Image Copy steps

The problems all began when one of our customers noticed that our DB2 Utility generation software (RTDX) was generating Image Copy jobs for “old” PBG partitions that were no longer being updated, and they asked us why we were recommending an Image Copy.

RTDX and Image Copy

RTDX (RealTime DBAExpert for z/OS and SAX – Space AssuranceExpert) is threshold based and uses the Real-time Statistics (RTS) tables as the major driver of decisions about REORG, RUNSTATS, and IMAGE COPY. The normal rules for Image Copy are “If the RTS COPY counters have not been updated then there has not been an update and we do not need a Copy”. This is normally extended with a special copy rule “But generate a Full Image Copy every xx days even if no updates”. This is done to guarantee that you can perform a recovery even if your non-DB2 migration/back-up software automatically deletes datasets older than xx days.

Everything green?

So you think you are covered? Not so fast! When the RTS were first introduced back in DB2 V7 they were not that stable and, sometimes, the RTS DB was actually stopped. This meant that the numbers were sometimes “not that reliable” – Now for REORG and RUNSTATS it does not really matter because, at some point, the counters start to count again and all is OK. But for IMAGE COPY it would be a disaster to “miss” an update! RTDX handles this with an extra “belt and braces” or, for the Americans amongst us, “belt and suspenders” check: “CHECK SYSLGRNX? Y/N” setting. If set to “Y” then RTDX generates an extra job that extracts the SYSLGRNX data and loads it into a DB2 table. This can then be queried by RTDX to see when objects were opened for update (Just like DB2 does in RECOVER processing to get the required log ranges.)

Something Old Something New

So now you know how the system works I can explain what happened at the customer site. They use Partition By Growth tablespaces (PBGs) a lot as “containers” to just insert records into. They had PBGs with hundreds of partitions and they wondered why RTDX generated Image Copy steps for partitions 1 – 136 when they only held “old non-changed data”.

Check the code

First thing I did was to check the code for any silly mistakes. Everything was fine. I then got the SYSLGRNX extract file and saw that the Partitions were indeed in there… I then worked with the customer directly using log prints and tracing until we finally saw what was happening.

PBG doing the ripple!

When a new partition was added, all of the previous partitions’ header pages got an update to the “max no. of partitions”. Now, this update counts as a system page and so is *not* in the RTS counters, as they only count data pages! RECOVER, of course, does not care! It would demand the log ranges – and what would happen if your logs have been deleted? Yep – unrecoverable data!

APAR PI60104 fixes the problem

So you can apply the APAR and all is good? Not really. You still have to go and check if you have any potentially unrecoverable PBG datasets out there. How do you do that? Well – you ask me!

Reading SYSLGRNX is not so easy

The problem with SYSIBM.SYSLGRNX is that the data is nearly unusable when you want to join it to the DB2 catalog. E.g. two-byte character fields that actually contain SMALLINT values and, sometimes, with the high bit set…

Just creating a table and then a dumb INSERT INTO SELECT FROM dies because of the DSNDB01 Cursor rules. So first you’ll need to create your new version of SYSLGRNX:

-- CREATE NEW VERSION OF SYSLGRNX WITH SMALLINTS --
CREATE TABLE BOXWELL.SYSLGRNX2
   (LGRDBID          SMALLINT          NOT NULL
   ,LGRPSID          SMALLINT          NOT NULL
   ,LGRUCTS          TIMESTAMP         NOT NULL
   ,LGRSRBA          CHAR(10)          FOR BIT DATA NOT NULL
   ,LGRSPBA          CHAR(10)          FOR BIT DATA NOT NULL
   ,LGRPART          SMALLINT          NOT NULL
   ,LGRSLRSN         CHAR(10)          FOR BIT DATA NOT NULL
   ,LGRELRSN         CHAR(10)          FOR BIT DATA NOT NULL
   ,LGRMEMB          SMALLINT          NOT NULL
   ,LGRNEGPSID       CHAR(1)           NOT NULL)
;
COMMIT ;

Cross loader to the rescue!

After trying various tricks to get at the data I hit on the idea of abusing the cross loader to do it for me. Here is my JCL that you must tailor for your site to do the dirty deed:

//LOAD EXEC PGM=DSNUTILB,REGION=32M,PARM='DC10,LOAD' 
//STEPLIB  DD DISP=SHR,DSN=DSNC10.SDSNEXIT.DC10 
//         DD DISP=SHR,DSN=DSNC10.SDSNLOAD 
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(9,9)) 
//SORTOUT  DD UNIT=SYSDA,SPACE=(CYL,(9,9)) 
//SYSERR   DD UNIT=SYSDA,SPACE=(CYL,(9,9)) 
//SYSMAP   DD UNIT=SYSDA,SPACE=(CYL,(9,9)) 
//SYSDISC  DD UNIT=SYSDA,SPACE=(CYL,(9,9)) 
//UTPRINT  DD SYSOUT=* 
//SYSPRINT DD SYSOUT=* 
//SYSIN    DD * 
EXEC SQL 
DECLARE C1 CURSOR FOR 
SELECT COALESCE( 
       CASE WHEN ASCII(SUBSTR(A.LGRDBID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRDBID, 2, 1)) > 32767 THEN  
                (ASCII(SUBSTR(A.LGRDBID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRDBID, 2, 1))) - 32768      
            ELSE ASCII(SUBSTR(A.LGRDBID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRDBID, 2, 1))               
       END , 0)         AS LGRDBID                            
      ,COALESCE(                                              
       CASE WHEN ASCII(SUBSTR(A.LGRPSID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRPSID, 2, 1)) > 32767 THEN  
                (ASCII(SUBSTR(A.LGRPSID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRPSID, 2, 1))) - 32768      
            ELSE ASCII(SUBSTR(A.LGRPSID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRPSID, 2, 1))               
       END , 0)         AS LGRPSID                            
      ,TIMESTAMP(                                             
           CASE WHEN SUBSTR(A.LGRUCDT, 5 , 2 ) > '83' THEN '19'
                ELSE '20'                                     
           END CONCAT                                         
           SUBSTR(A.LGRUCDT, 5 , 2 ) CONCAT '-' CONCAT        
           SUBSTR(A.LGRUCDT, 1 , 2 ) CONCAT '-' CONCAT        
           SUBSTR(A.LGRUCDT, 3 , 2 ) CONCAT '-' CONCAT        
           SUBSTR(A.LGRUCTM, 1 , 2 ) CONCAT '.' CONCAT        
           SUBSTR(A.LGRUCTM, 3 , 2 ) CONCAT '.' CONCAT        
           SUBSTR(A.LGRUCTM, 5 , 2 ) CONCAT '.' CONCAT        
           SUBSTR(A.LGRUCTM, 6 , 2 ) CONCAT '0000'            
                 )      AS LGRUCTS                            
      ,A.LGRSRBA                                              
      ,A.LGRSPBA                                              
      ,A.LGRPART                                              
      ,A.LGRSLRSN                                             
      ,A.LGRELRSN                                             
      ,ASCII(SUBSTR(A.LGRMEMB, 2, 1)) AS LGRMEMB              
      ,CASE WHEN ASCII(SUBSTR(A.LGRPSID, 1, 1)) * 256         
               + ASCII(SUBSTR(A.LGRPSID, 2, 1)) > 32767 THEN 'Y'
            ELSE 'N'
       END              AS LGRNEGPSID                         
FROM SYSIBM.SYSLGRNX A                                        
ENDEXEC                                                               
LOAD DATA INCURSOR(C1) RESUME NO SHRLEVEL NONE REPLACE                    
  INTO TABLE BOXWELL.SYSLGRNX2                                        
/*

The use of ASCII is to “translate” the character data into SMALLINT and also to detect whether we have negative numbers. This happens if you run out of DBIDs and also if you use CLONE objects. The requirement for the two COALESCEs is down to the cross loader syntax check on the CASE statements, by the way…

This could load a lot of data of course!

Do not forget the INDEX

Now the Index that we need to speed up the processing (and the RUNSTATS of course!)

-- CREATE INDEX ON THE COLUMNS WE NEED --
CREATE INDEX BOXWELL.SYSLGRNX_IX
          ON BOXWELL.SYSLGRNX2
       (LGRDBID,LGRPSID,LGRPART,LGRMEMB)
        USING STOGROUP SYSDEFLT
             PRIQTY -1
             SECQTY -1
        CLUSTER
        CLOSE YES
;
COMMIT ;

And the RUNSTATS

RUNSTATS  TABLESPACE <yourdatabase>.<yourtablespace>
          TABLE (ALL)
          INDEX(ALL)
          FREQVAL NUMCOLS 1 COUNT 100 BOTH
          FREQVAL NUMCOLS 2 COUNT 100 BOTH
          FREQVAL NUMCOLS 3 COUNT 100 BOTH
          SHRLEVEL CHANGE REPORT NO UPDATE ALL HISTORY NONE

Now we can SQL

So finally we have the SYSLGRNX data in a format that we can actually use! Here is my SQL that lists out any PBG spaces that have, according to the RTS, no updates, but where the last Image Copy timestamp is earlier than the SYSLGRNX entry:

-- FINALLY THE QUERY TO SEE IF A PROBLEM EXISTS OR NOT --
--
-- SELECT ALL PBGS IN SYSTABLESPACE THAT HAVE DATA IN THE
-- SYSLGRNX WHICH ALSO HAVE AN ENTRY IN THE RTS WHERE THE
-- COUNTERS ARE NULL OR ZERO BUT THE COPYLASTTIME IS EARLIER
-- THAN THE SYSLGRNX TIMESTAMP.
--  THIS MEANS THAT THE RTS THINKS "NO REASON TO COPY"
--  BUT SYSLGRNX THINKS "SOMETHING WAS UPDATED"
--

SELECT A.DBNAME, A.NAME, B.LGRPART, A.DBID, A.OBID, A.PSID
      ,B.LGRUCTS, B.LGRSRBA, B.LGRSPBA, B.LGRSLRSN
      ,B.LGRELRSN, HEX(B.LGRMEMB) AS MEMBER
      ,C.COPYLASTTIME
      ,MAX(D.START_RBA) AS HIGHEST_IC_RBA
      ,MAX(D.TIMESTAMP) AS HIGHEST_IC_TS
FROM SYSIBM.SYSTABLESPACE A
INNER JOIN
    BOXWELL.SYSLGRNX2    B
 ON  A.DBID     = B.LGRDBID
 AND A.PSID     = B.LGRPSID
 --AND A.INSTANCE = B.LGRINST (SYSLGRNX2 as created above has no LGRINST column)
INNER JOIN
     SYSIBM.SYSTABLESPACESTATS C
 ON  C.DBID     = A.DBID
 AND C.PSID     = A.PSID
 AND C.PARTITION= B.LGRPART
 AND C.INSTANCE = A.INSTANCE
LEFT OUTER JOIN
     SYSIBM.SYSCOPY       D
  ON D.DBNAME    = A.DBNAME
 AND D.TSNAME    = A.NAME
 AND D.DSNUM   IN ( 0 , B.LGRPART)
WHERE A.TYPE   = 'G'
 AND (C.COPYUPDATEDPAGES = 0
  OR C.COPYUPDATEDPAGES IS NULL)
 AND (C.COPYCHANGES = 0
  OR C.COPYCHANGES IS NULL)
 AND (C.COPYLASTTIME < B.LGRUCTS
  OR C.COPYLASTTIME IS NULL)
 AND (D.ICTYPE = 'F'
  OR D.ICTYPE IS NULL)
GROUP BY A.DBNAME, A.NAME, B.LGRPART, A.DBID, A.OBID, A.PSID
     ,B.LGRUCTS, B.LGRSRBA, B.LGRSPBA, B.LGRSLRSN
     ,B.LGRELRSN, HEX(B.LGRMEMB)
     ,C.COPYLASTTIME
ORDER BY 1 , 2 , 3 , 7
WITH UR
;

And the winner is?

I hope you have zero records returned? Even if you do have some records, it is relatively easy to quickly get them image copied so that you can sleep at night. If zero, you still have a great copy of the SYSLGRNX that you can use for other purposes!
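As one small example of such a purpose – just a sketch against the table built above – you can ask when each space (per data sharing member) was last opened for update:

SELECT LGRDBID, LGRPSID, LGRPART, LGRMEMB
      ,MAX(LGRUCTS) AS LAST_UPDATE_OPEN
FROM BOXWELL.SYSLGRNX2
GROUP BY LGRDBID, LGRPSID, LGRPART, LGRMEMB
ORDER BY 5 DESC
WITH UR
;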

A customer replies

Here’s some real data back from a customer:

[Screen shot: the customer’s results – a COUNT of SYSLGRNX entries per database and tablespace group]

The COUNT is the number of SYSLGRNX entries they have – so they have nine DB & TS groups and, from a further analysis (removing the data sharing member duplicates), they have 1,926 distinct DB, TS and partition combinations!

They scheduled image copy processing for all of these and were then done.

I hope you liked this month’s topic.

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

2016-09 Solving SEQUENCE or space problems in Db2

What do you do if your critical production tablespace reaches its maximum of 32 datasets on a Saturday?

Could you actually get the REORG through before prime time starts on Monday morning?

I have recently been involved with trialing and testing our space monitor software (SpaceAssuranceExpert or SAX) on Db2 z/OS after some enhancements had been added. It was originally designed—many moons ago—to monitor the size of the secondary extents that Db2 was using and to dynamically issue an ALTER SECQTY to guarantee that the maximum size of the object was reached *before* users ran out of physical extents.

MGEXTSZ to the Rescue?

Now you might be wondering “What’s that got to do with the price of beef?” because, as we all know, Db2 V8 introduced a “sliding scale” to the size of the secondary extents so that it could also guarantee that a dataset hit its maximum size *before* you ran out of extents.

Extents are not everything

The “problem” is that extents are not everything. In fact, one major area of concern is the number of datasets. If it is 01:00 on a Saturday morning and your critical production tablespace has reached its maximum of 32 datasets – what are you going to do? Could you actually get the REORG through before prime time starts on Monday morning? Or what happens when Partition 26 completely fills up?

SAX to the rescue!

This is where our SAX tool saves the day. It is an STC that runs 24×7 catching the IFCIDs that Db2 throws whenever it issues an extent request for a dataset. Using the Db2 Catalog, SAX then determines the exact make-up (geometry) of the object being extended and can use two levels of warning percentages to start triggering alarm bells way, way before it all goes pear-shaped!

Here is my little “ready-reckoner” for Linear Dataset Allocations:

Object type: TABLESPACE        !   Maximum number of data sets 
-------------------------------+------------------------------- 
LOB tablespaces                !   254                         
-------------------------------+------------------------------- 
Non-partitioned tablespaces    !   32                          
-------------------------------+------------------------------- 
Partitioned tablespaces        !    1 (Percent used check)      
-------------------------------+------------------------------- 
Partitioned By Growth          !   MAXPARTITIONS. LPS check if 
tablespaces                    !   more than one. If on last   
                               !   partition then percent used.
-------------------------------+------------------------------- 
                                                                  
Object type: INDEX             !   Maximum number of data sets 
-------------------------------+------------------------------- 
Non-partitioned indexes on     !   MIN ( 4096 , 2 power 32 /   
tablespace with LARGE,         !       ( DSSIZE / TS PGSIZE)) 
DSSIZE, or more than 64        !   Eg: 128 GB DSSIZE with      
Partitions                     !         8 KB Tablespace Page  
                               !   gives 256 Pieces (datasets) 
                               !   Or    4 GB DSSIZE with      
                               !         4 KB Tablespace Page  
                               !   gives 4096 Pieces (datasets)
-------------------------------+------------------------------ 
Non-partitioned indexes        !   32                          
otherwise                      !                             
-------------------------------+------------------------------ 
Partitioned indexes            !   1 (Percent used check)      
-------------------------------+------------------------------

Here you can see that it is not as easy to calculate how many datasets are allowed as it used to be. You must also make sure you understand PBG space definitions. SAX allows two percentages and uses them in two different ways:

  1. The number of datasets that have been allocated
  2. The used space within a linear dataset

The second is also used if it is a PBG with MAXPARTITIONS 1, (e.g. The Db2 Catalog), or if the Partition being extended is the last allowable Partition.

How big can my PARTITION get?

There is a full description in the SQL guide all about the maximum size of a partition. Here is a little summary of this info:

Use the DSSIZE parameter to control how big a partition is (or for LOB spaces how big the LOB space can get):

1G      1 Gigabyte
2G      2 Gigabytes
4G      4 Gigabytes
8G      8 Gigabytes
16G    16 Gigabytes
32G    32 Gigabytes
64G    64 Gigabytes
128G  128 Gigabytes
256G  256 Gigabytes

To specify a value greater than 4G, the data sets for the table space must be associated with a DFSMS data class that has been specified with extended format and extended addressability.

How does the number of partitions affect my size?

If NUMPARTS is used along with DSSIZE then the maximum size of each partition depends on the value of NUMPARTS, as shown in the following list. Otherwise, the maximum size of each partition defaults to 4G.

Value of NUMPARTS    Maximum partition size (default for DSSIZE)

1 to 16              4GB (4G)
17 to 32             2GB (2G)
33 to 64             1GB (1G)
65 to 254            4GB (4G)

How does my size affect the number of partitions?

If NUMPARTS is greater than 254, the maximum partition size (and the default for DSSIZE) then depends on the actual page size of the table space.

Page size    Maximum partition size (default for DSSIZE)
4K           4GB (4G)
8K           8GB (8G)
16K          16GB (16G)
32K          32GB (32G)

If DSSIZE is explicitly specified, the maximum number of partitions that can be specified, or is the default, is limited by the maximum table space size. For example:

  • For a partitioned table space with a 4K page size, if DSSIZE 64GB is specified, the maximum NUMPARTS value is 256.
  • For a partitioned table space with an 8K page size, if DSSIZE 64GB is specified, the maximum NUMPARTS value is 512.
  • For a partitioned table space with a 32K page size, if DSSIZE 128GB is specified, the maximum NUMPARTS value is 1024.

Special rules for LOBs

For LOB table spaces, if DSSIZE is not specified, the default for the maximum size of each data set is 4GB. The maximum number of data sets is 254.

What about PBGs?

To use these UTS types you must specify the MAXPARTITIONS clause. It specifies that the table space is a partition-by-growth table space. The data set for the first partition is allocated unless the DEFINE NO clause is specified for the partition. The data sets for additional partitions are not allocated until they are needed. (Unless you use the NUMPARTS clause)

You specify the maximum number of partitions to which the table space can grow, which must be in the range of 1 to 4096, also depending on the corresponding values of the DSSIZE and page size clauses.

How does my MAXPARTITIONS affect my size?

The maximum value for MAXPARTITIONS is a function of DSSIZE and table space page size:

DSSIZE value    4K page    8K page    16K page    32K page
1G – 4G         4096       4096       4096        4096
8G              2048       4096       4096        4096
16G             1024       2048       4096        4096
32G              512       1024       2048        4096
64G              256        512       1024        2048
128G             128        256        512        1024
256G              64        128        256         512
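The pattern behind these numbers is, in effect, the overall table space size limit (16 TB with 4 KB pages, doubling with each page size, and capped at 4,096 partitions) divided by the DSSIZE. For example, with an 8 KB page and DSSIZE 64G: 32 TB / 64 GB = 512 partitions.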

WTO to Job Ticket

These warnings are issued as WTOs and can easily be picked up by system automation tools to open job tickets or send e-mails to alert DBAs—days or weeks before the system stops working.

For a warning SAX issues WTO ids:

O2RTSU04 - 12W (non-partitioned spaces)

O2RTSU04 - 14W (partitioned spaces)

O2RTSU04 - 16W (partition by growth spaces)

For a critical SAX issues WTO ids:

O2RTSU04 - 13W (non-partitioned spaces)

O2RTSU04 - 15W (partitioned spaces)

O2RTSU04 - 17W (partition by growth spaces)

This is not all that SAX does, in fact, it covers all of these problems:

  1. Can this data set reach its maximum physical size *before* running out of physical extents? (The actual size is dependent on the “geometry” of the object of course!)
  2. Will this dataset run out of datasets? (Again, how many datasets an object can actually have is dependent on the “geometry” of the object)
  3. Is this partition nearing its maximum size?
  4. Do I have a SEQUENCE/IDENTITY problem coming up?
  5. Did Db2 ask for one extent but got more back?
  6. Are any of my SMS disk storage pools running out of space?

New in SAX – SEQUENCE Support

Number four on that list is brand new. We have a customer who got bitten by a nasty problem: they had a SEQUENCE defined with the NO CYCLE parameter, so it could not loop around, and they finally hit the last available number. Not good! They asked if SAX could be modified to also take care of this hidden nasty and we readily agreed, so that all customers can benefit. The parameters panel got this set of new parameters:
[Screen shot: the SAX online help panel describing the new SEQUENCE check parameters]
As mentioned in the screen shot above from our online help panel, it will check SYSSEQUENCES every PING minutes which, by default, is 30. When the WARN SUPP INTVL is set, you can reduce the number of warnings issued so as not to overload your problem ticket system!
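Outside of SAX, a rough one-off health check along these lines can be done straight from the catalog. This is just a sketch, not the SAX logic: it only looks at ascending, non-cycling definitions and assumes the MAXASSIGNEDVAL column (the highest value handed out so far) is maintained on your release:

-- Which ascending NO CYCLE sequences/identities are closest to the wall?
SELECT SCHEMA
      ,NAME
      ,MAXASSIGNEDVAL
      ,MAXVALUE
FROM SYSIBM.SYSSEQUENCES
WHERE CYCLE     = 'N'
  AND INCREMENT > 0
  AND MAXASSIGNEDVAL IS NOT NULL
ORDER BY MAXVALUE - MAXASSIGNEDVAL
WITH UR
;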

Degenerated Extent support

There is another WTO that can be issued by degenerated extents (Number five in the list)

O2RTSU04 – 10W (Audit SECQTY)

These occur when Db2 requests one extent but gets back, say, five. This is ok, but it eats through the number of extents quite quickly and it implies the need for a disk defragmentation to be scheduled.

Let’s talk about Extents

While talking about extents, what I have also seen, is that the number of extents is sometimes getting very large indeed. At one customer site they had numerous datasets with over 4,000 extents! Now we all know that no-one knows where data is really stored on the modern disk sub-systems, but still… I would schedule a reorg at say 1,000 extents. The number of extents changed a *long* time ago in z/OS 1.7 to raise it from 255 to 7,257, spread over 59 volumes, *but* still limited to 123 extents per volume. This little nugget of information is *very* important if you are thinking of going down the “one huge EAV volume for all my data” road, (these disks can have up to 262,668 cylinders or about 223GB), as the extents per volume limit is still there.

In comparison, the good old MOD-3s had 3,339 cylinders and 3GB of space.

SMS Storage Group Checks

Finally, SAX can also check and alert if your SMS storage groups start getting full. This is especially handy for your Db2 Catalog, Copy Pools and Work Pool SMS Storage groups. Depending on the thresholds you define you can get either

O2RTSU05 - 05W SMS STOGROUP XXXXXXXX: % ALLOC = XXXX

Or

O2RTSU05 - 05W SMS STOGROUP XXXXXXXX: GB FREE = XXXXXXXXXXXX

WTOs being issued. Not really normal DBA work, but very handy nevertheless!

The Future?

SAX will continue to be updated and enhanced for the new features and functionality that Db2 brings in future releases. For example: Relative Page Numbers in Db2 12, where all partitions can get their own DSSIZE with seven byte RIDs.

Of course, you could do all of this on your own too—but then you’d have to maintain it! And that’s enough of me being a sales guy. Back to what I really love the most: solving Db2 problems.

As usual any questions or comments are welcome,

TTFN Roy Boxwell