Midwest Database Users Group – March 2020

MWDUG – Midwest Database Users Group – IL, USA – March 2020

SEGUS & SOFTWARE ENGINEERING present:

Db2 for z/OS Know your Limits!

In this presentation I will show and explain the current limits within Db2 on z/OS.

We start with relatively basic things like DSSIZE or PIECESIZE and then move on to Linear Dataset limits (32 for segmented spaces, 254 for LOBs). We take a quick look at how the COPYPOOL is doing before diving into the world of PBGs, where you must be aware of MAXPARTITIONS among other things. We then review SEQUENCES and Numeric Primary Key usage before finally delving into the more abstract limits within the Db2 engine itself, including DBATs etc.

1. DSSIZE or PIECESIZE, among other basic things
2. Linear Dataset limits (32 for segmented spaces, 254 for LOB)
3. COPYPOOL
4. PBGs & MAXPARTITIONS
5. SEQUENCES and Numeric Primary Key usage
6. Abstract limit of Db2 (DBATs etc.)


Our SpaceAssuranceExpert for Db2 z/OS monitors (critical) Db2 limits in real time.


Speaker biography

Roy Boxwell has more than 34 years of experience in MVS, OS/390, and z/OS environments – 31 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real-time database maintenance solutions. He is also an active participant, speaker, and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

Wisconsin Db2 Users Group – March 2020

WDUG – Wisconsin Db2 Users Group, WI, USA – March 2020

SEGUS & SOFTWARE ENGINEERING present:

Db2 for z/OS Know your Limits!

In this presentation I will show and explain the current limits within Db2 on z/OS.

We start with relatively basic things like DSSIZE or PIECESIZE and then move on to Linear Dataset limits (32 for segmented spaces, 254 for LOBs). We take a quick look at how the COPYPOOL is doing before diving into the world of PBGs, where you must be aware of MAXPARTITIONS among other things. We then review SEQUENCES and Numeric Primary Key usage before finally delving into the more abstract limits within the Db2 engine itself, including DBATs etc.

1. DSSIZE or PIECESIZE, among other basic things
2. Linear Dataset limits (32 for segmented spaces, 254 for LOB)
3. COPYPOOL
4. PBGs & MAXPARTITIONS
5. SEQUENCES and Numeric Primary Key usage
6. Abstract limit of Db2 (DBATs etc.)


Our SpaceAssuranceExpert for Db2 z/OS monitors (critical) Db2 limits in real time.


Speaker biography

Roy Boxwell has more than 34 years of experience in MVS, OS/390, and z/OS environments – 31 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real-time database maintenance solutions. He is also an active participant, speaker, and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

Saint Louis Db2 Users Group – March 2020

STLDUG – Saint Louis Db2 Users Group, MO, USA – March 2020

SEGUS & SOFTWARE ENGINEERING present:

Db2 for z/OS Know your Limits!

In this presentation I will show and explain the current limits within Db2 on z/OS.

We start with relatively basic things like DSSIZE or PIECESIZE and then move on to Linear Dataset limits (32 for segmented spaces, 254 for LOBs). We take a quick look at how the COPYPOOL is doing before diving into the world of PBGs, where you must be aware of MAXPARTITIONS among other things. We then review SEQUENCES and Numeric Primary Key usage before finally delving into the more abstract limits within the Db2 engine itself, including DBATs etc.

1. DSSIZE or PIECESIZE, among other basic things
2. Linear Dataset limits (32 for segmented spaces, 254 for LOB)
3. COPYPOOL
4. PBGs & MAXPARTITIONS
5. SEQUENCES and Numeric Primary Key usage
6. Abstract limit of Db2 (DBATs etc.)


Our SpaceAssuranceExpert for Db2 z/OS monitors (critical) Db2 limits in real time.


Speaker biography

Roy Boxwell has more than 34 years of experience in MVS, OS/390, and z/OS environments – 31 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real-time database maintenance solutions. He is also an active participant, speaker, and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

New England DB2 Users Group – December 2019

NEDB2UG – Old Sturbridge Village – MA, USA

SEGUS & SOFTWARE ENGINEERING present:

Db2 for z/OS Know your Limits!


In this presentation I will show and explain the current limits within Db2 on z/OS.

We start with relatively basic things like DSSIZE or PIECESIZE and then move on to Linear Dataset limits (32 for segmented spaces, 254 for LOBs). We take a quick look at how the COPYPOOL is doing before diving into the world of PBGs, where you must be aware of MAXPARTITIONS among other things. We then review SEQUENCES and Numeric Primary Key usage before finally delving into the more abstract limits within the Db2 engine itself, including DBATs etc.

1. DSSIZE or PIECESIZE, among other basic things
2. Linear Dataset limits (32 for segmented spaces, 254 for LOB)
3. COPYPOOL
4. PBGs & MAXPARTITIONS
5. SEQUENCES and Numeric Primary Key usage
6. Abstract limit of Db2 (DBATs etc.)


Our SpaceAssuranceExpert for Db2 z/OS monitors (critical) Db2 limits in real time.


Speaker biography

Roy Boxwell has more than 33 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real-time database maintenance solutions. He is also an active participant, speaker, and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

2019-12 Fun with Db2 12 PBR RPN

I have recently enhanced our SpaceAssuranceExpert (SAX) product to automatically help out when partitions, or partitioned indexes, are getting too big for their boots in a production system.

SAX – What is it?

The idea behind SAX is to stop any and all chances of getting a dreaded SQLCODE -904 in production, especially “out of the blue”.

Our idea was that, in Db2 12 with the new PBR RPN, SAX could do an on-the-fly ALTER of the DSSIZE, just like it does today with the SECQTY to avoid running out of extents.
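
A minimal sketch of such a partition-level DSSIZE ALTER (the database, tablespace, partition number, and size here are invented for illustration; with PBR RPN the DSSIZE can be altered per partition):

-- Immediate ALTER as long as the new DSSIZE is larger than the old one
ALTER TABLESPACE MYDB.MYTS
      ALTER PARTITION 1
      DSSIZE 513 G ;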

RTFM Time

A quick look in the manual tells you that:


it is an immediate ALTER (as long as you make the new DSSIZE larger than the old one!) and there are no package invalidations or REORGS required.


This is fantastic!

So I created a nice little PBR RPN and then ran a horrible Cartesian join in SPUFI to flood the first partition with data. The job had a TIME card of (,1) to limit it to one second of CPU before abending with an S322.
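
For reference, a minimal sketch of the kind of PBR RPN tablespace used here (all names and sizes are invented; PAGENUM RELATIVE is what makes it relative page numbering):

-- A small range-partitioned UTS with relative page numbering (RPN)
CREATE TABLESPACE MYTS IN MYDB
       NUMPARTS 4
       PAGENUM RELATIVE
       DSSIZE 1 G
       SEGSIZE 32
       LOCKSIZE ANY ;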

SAX Monitor

The SAX monitor reacted perfectly and did the TP ALTER to a 513 GB DSSIZE (I had set an increment size of 512 as a test), but got an SQLCODE -666 (I just love that SQLCODE…) because the INSERT was still running.

This ALTER was then internally queued to be attempted later, like in any other failure case.


All well and good.

Wham! Nasty errors !

Then I did the same for a DPSI on my PBR RPN… Oh dear! I got a nasty I/O error and then an even nastier ROLLBACK loop, meaning I had to cancel the IRLM to stop Db2… (There is an APAR for this problem: PH18977.)

Rollback loop?

The ROLLBACK loop was caused by my choosing to use NOT LOGGED as a tablespace attribute. We have a finite amount of log space, and when the transaction was S322’d after one second of CPU, the rollback could not find one of the archive logs, and we then had to cold start Db2 – not pretty!

Proper test!

I then wrote a couple of little test programs that actually COMMITted after 5000 inserts, and then the ALTERs all worked as designed.


Do I worry too much about extended format and extended addressability?

Next, I worried about the ominous “extended format and extended addressability” attributes in the DATACLASS for a PBR RPN, and wondered what would happen if a customer has SAX running and it happily ALTERs a TP to, say, 6 GB when they can only address 4 GB…


IBM to the rescue!

Luckily for us, Db2 development had thought about this!

  • If you attempt to create a PBR RPN (even with a very small size) and your DATACLASS does not have the two attributes set, you get an error with reason code 00D70008 telling you this.

  • If you ALTER an existing tablespace to be a PBR RPN and your DATACLASS does not have the two attributes set, then the ALTER works fine. But remember, this is a pending ALTER and you *must* do a REORG at the TS level with inline TP-level copies (a sketch of such a REORG follows below). This REORG then fails – also with 00D70008.
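
As a hedged sketch of that REORG (the tablespace name, HLQ, and template dataset-name pattern are invented; the partition variable in the template is what gives you the partition-level inline copies), the utility input could look something like this:

TEMPLATE TCOPY
         DSN 'MYHLQ.&DB..&TS..P&PART..IC'
REORG    TABLESPACE MYDB.MYTS
         SHRLEVEL REFERENCE
         COPYDDN(TCOPY)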

So, in other words, SAX cannot hurt you here!


What about PBGs?

After all this we also considered PBGs. They have a limit as well – MAXPARTITIONS in their case. So we added the ability to ALTER MAXPARTITIONS as well. Here you must be more careful though, as these ALTERs are still immediate *but* they invalidate any referring packages!
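
A minimal sketch of such an ALTER (object name and new value invented for illustration):

-- Immediate ALTER, but it invalidates any packages that refer to the PBG
ALTER TABLESPACE MYDB.MYPBGTS
      MAXPARTITIONS 36 ;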

ABIND YES or NO?

If you work with ABIND set to YES, all is good: the ALTER comes in, Db2 invalidates your packages, and the auto rebind happens so fast that you do not even notice it. However, if you work with ABIND NO, then any packages (even the package actually running and doing the inserts) will fail! If you are just using dynamic SQL then it is 100% OK; otherwise – buyer beware!
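
If you do run with ABIND NO, one hedged way to pick up the pieces afterwards is to list the invalidated packages from the catalog and rebind them (collection and package names below are placeholders):

-- Anything not 'Y' needs attention; rebind each hit with, for example,
--   REBIND PACKAGE(MYCOLL.MYPKG)
SELECT COLLID, NAME, VALID
  FROM SYSIBM.SYSPACKAGE
 WHERE VALID <> 'Y' ;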

Aha!

I have opened an Analytics Idea (DB24ZOS-I-1057) to try and get this loophole closed, as I cannot see how any access path could be affected by going from, say, 32 to 36 MAXPARTITIONS.

Db2 keeps adding the parts dynamically and nothing happens then… Go figure… Anyway, if you agree, please vote for it!


There are already some nice comments attached to it:

  • DP commented

    this is just a limit in the catalog. So how could the access path be affected? The actual growth of a partition doesn’t invalidate the package, so how does just changing the limit do so?
  • BW commented

    I opened a Case on this asking why packages are being invalidated in this situation and it is still open waiting for a reply.
  • BD commented

    With only impact to catalog, not sure how Access Path would be impacted. Seems wasteful and counterproductive to invalidate packages.

Remember – You never stop learning!

As always I would be pleased to hear from you!

TTFN,
Roy Boxwell
Senior Architect

2018-11 Db2 Numeric Primary Keys: Space, the final frontier?

What do Db2 numeric Primary Keys have to do with space management?

SMALLINT  –  INTEGER  –  DECIMAL  –  BIGINT…

How do you find out which sequences are nearing their physical limit?

Recently I was at a customer site showing them our SpaceAssuranceExpert (SAX) software. It basically does all sorts of space management and checking, and has recently been extended to also check for any numeric Primary Keys that are running out of space.

Traffic is needed

The simple way to show that the started task is up and running, and working as designed, is to generate some traffic. To do this, I normally just do a really awful Cartesian join insert into a dummy table.

First I create my target like so:

CREATE TABLE ROY.SYSTABLESPACE LIKE SYSIBM.SYSTABLESPACE ;

And then I simply do an INSERT from SELECT with no WHERE predicate in a batch SPUFI job:

INSERT INTO ROY.SYSTABLESPACE           
(SELECT A.* FROM SYSIBM.SYSTABLESPACE A 
                ,SYSIBM.SYSTABLESPACE B 
                ,SYSIBM.SYSTABLESPACE C 
                ,SYSIBM.SYSTABLESPACE D 
                ,SYSIBM.SYSTABLESPACE E 
                ,SYSIBM.SYSTABLESPACE F 
                ,SYSIBM.SYSTABLESPACE G 
                ,SYSIBM.SYSTABLESPACE H 
                ,SYSIBM.SYSTABLESPACE I 
                ,SYSIBM.SYSTABLESPACE J)
;

CCSID Problems

Now at this site, the first CREATE actually failed due to some odd sort of CCSID problem. The lead DBA said, “No problem, I have a little test table where we can do the same sort of thing.” This little table was used for the INSERT, and we huddled over the keyboard waiting for the started task to start reporting on EXTENTS and other space data…

Nothing happened

We waited and waited and waited, and still a big zilch!

Trust is everything

Now, I know my software and I know that this works fine, so I was a little perplexed to say the least!

I thought and thought and thought and then reviewed their SQL again:

INSERT INTO ROY.DSN_COLDIST_TABLE       
(SELECT A.* FROM ROY.DSN_COLDIST_TABLE A
                ,ROY.DSN_COLDIST_TABLE B
                ,ROY.DSN_COLDIST_TABLE C
                ,ROY.DSN_COLDIST_TABLE D
                ,ROY.DSN_COLDIST_TABLE E
                ,ROY.DSN_COLDIST_TABLE F
                ,ROY.DSN_COLDIST_TABLE G
                ,ROY.DSN_COLDIST_TABLE H
                ,ROY.DSN_COLDIST_TABLE I
                ,ROY.DSN_COLDIST_TABLE J
                ,ROY.DSN_COLDIST_TABLE K
                ,ROY.DSN_COLDIST_TABLE L
                ,ROY.DSN_COLDIST_TABLE M
                ,ROY.DSN_COLDIST_TABLE N
                ,ROY.DSN_COLDIST_TABLE O)
;

Ouch! That hurt!  😯

And then it hit me!

Look at the SQL and you can see that it is trying to INSERT into itself…

What does Db2 do in this situation?

It inserts into the work database until the INSERT is finished and then it actually moves all the data into the table…

*duh*

Waiting for Godot

If we had waited long enough we would have seen messages like:

O2RT-SU04-006I: Extent activity for
DC10.DSNDBD.DSNDB07.DSN32K00.I0001.A001
                Number of extents : 00032 - EXTS threshold is    1        
O2RT-SU04-011I: 08:11:35 - Datasets will be processed now                 
O2RT-SU04-006I: Extent activity for 
DC10.DSNDBD.DSNDB07.DSN32K00.I0001.A001
                Number of extents : 00034 - EXTS threshold is    1

With hindsight everything is clear!

But, of course, we didn’t see that until much later… Then I noticed another little detail that I thought was quite funny – the tablespace in question was in COPY Pending:

DSNT360I -DC10 *********************************** 
DSNT361I -DC10 * DISPLAY DATABASE SUMMARY        * 
               *       GLOBAL                    *
DSNT360I -DC10 *********************************** 
DSNT362I -DC10 DATABASE = DSN00201 STATUS = RW 
DBD LENGTH = 4028 
DSNT397I -DC10 
NAME     TYPE PART  STATUS       PHYERRLO PHYERRHI CATALOG  PIECE
-------- ---- ----- ------------ -------- -------- -------- -----
DSNRCOLD TS   0001  RW,COPY 
DSNRCOLD TS 
******* DISPLAY OF DATABASE DSN00201 ENDED **********************
DSN9022I -DC10 DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION 
***

It appears that Db2 does not even check at the beginning whether it can complete the INSERT…

Get it right   😀

The moment we changed the INSERT to target a different table (no work file usage, and not in COPY Pending status), everything worked as usual:

O2RT-SU04-006I: Extent activity for 
DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00002 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:40 - Datasets will be processed now                  
O2RT-SU04-006I: Extent activity for DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00003 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:41 - Datasets will be processed now                  
O2RT-SU04-006I: Extent activity for DC10.DSNDBD.DSN00200.SYSTABLE.I0001.A001
                Number of extents : 00004 - EXTS threshold is    1         
O2RT-SU04-011I: 08:38:42 - Datasets will be processed now

Much better data!

Why Primary Keys?

If you are wondering what numeric Primary Keys have to do with space management, just think back to the good ol’ days before Sequences and Identity columns. You created keys with numeric types:

  • SMALLINT,
  • INTEGER,
  • DECIMAL and
  • possibly even the newer BIGINT.

These were, basically, “sequences”.


The challenge is to find out which of these is nearing its physical limit.
With SMALLINT the limit is not that big (a maximum of 32,767), and a DECIMAL defined with a small precision can run out very quickly indeed!


Catalog or Data?

Now, this data is fetched from a couple of catalog tables, so it is very dependent on the quality of your RUNSTATS.

So, in the next stage, it will be extended to actually read the user data to see what the current value really is.
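
As a hedged illustration of the catalog side (this is not the SAX code itself; the tables and columns are from the Db2 catalog, but the predicate choices are mine), you could list the numeric primary key columns together with the RUNSTATS high value like this. Note that HIGH2KEY is stored in an internal format, so it needs decoding before you can compare it with the data type’s maximum:

SELECT CO.TBCREATOR, CO.TBNAME, CO.NAME
     , CO.COLTYPE, CO.LENGTH, CO.SCALE
     , HEX(CO.HIGH2KEY) AS HIGH2KEY_HEX
  FROM SYSIBM.SYSCOLUMNS CO
     , SYSIBM.SYSINDEXES IX
     , SYSIBM.SYSKEYS    KY
 WHERE IX.UNIQUERULE = 'P'              -- primary key index
   AND IX.TBCREATOR  = CO.TBCREATOR
   AND IX.TBNAME     = CO.TBNAME
   AND KY.IXCREATOR  = IX.CREATOR
   AND KY.IXNAME     = IX.NAME
   AND KY.COLNAME    = CO.NAME
   AND CO.COLTYPE IN ('SMALLINT', 'INTEGER', 'DECIMAL', 'BIGINT')
 ;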

Cool stuff, huh?

Feel free to send me your comments and ask questions.

TTFN,
Roy Boxwell
Senior Architect