2024-01 Happy New SQLCODE!

Aren’t I supposed to be wishing you a Happy New Year there? Not this month! One of my readers asked me a question about SQLCODEs and it opened up quite a fascinating can of worms!

What is the SQLCODE?

Anyone reading this blog should already know exactly what the SQLCODE is. However, for those of you out there using Google in 2045, here is the definition, at least in COBOL:

01 SQLCA.
   05 SQLCAID     PIC X(8).
   05 SQLCABC     PIC S9(9) COMP-5.
   05 SQLCODE     PIC S9(9) COMP-5.
   05 SQLERRM.
      49 SQLERRML PIC S9(4) COMP-5.
      49 SQLERRMC PIC X(70).
   05 SQLERRP     PIC X(8).
   05 SQLERRD     OCCURS 6 TIMES
                  PIC S9(9) COMP-5.
   05 SQLWARN.
      10 SQLWARN0 PIC X.
      10 SQLWARN1 PIC X.
      10 SQLWARN2 PIC X.
      10 SQLWARN3 PIC X.
      10 SQLWARN4 PIC X.
      10 SQLWARN5 PIC X.
      10 SQLWARN6 PIC X.
      10 SQLWARN7 PIC X.
   05 SQLEXT.
      10 SQLWARN8 PIC X.
      10 SQLWARN9 PIC X.
      10 SQLWARNA PIC X.
      10 SQLSTATE PIC X(5).

The third field within the SQLCA is the SQLCODE, a four-byte signed integer that is filled after every SQL call that a program/transaction makes.

All Clear So Far?

So far so good! The IBM documentation contains this paragraph about how to handle and process SQLCODEs:

SQLCODE

Db2 returns the following codes in SQLCODE:

• If SQLCODE = 0, execution was successful.

• If SQLCODE > 0, execution was successful with a warning.

• If SQLCODE < 0, execution was not successful.

SQLCODE 100 indicates that no data was found.

The meaning of SQLCODEs, other than 0 and 100, varies with the particular product implementing SQL.

Db2 Application Programming and SQL Guide

So, every programmer I have ever talked to checks the SQLCODE: if it is 0 – green! Everything is fine. If it is negative – error message and ROLLBACK. If it is +100 – end of cursor, or not found by a direct select/update/delete – normally 100% OK. Everything else issues a warning and is naturally very dependent on the application and business logic!

What’s Wrong With This Picture?

Well, sometimes zero is not really zero… I kid you not, dear readers! The sharp-eyed amongst you will have noticed that the last bytes of the SQLCA contain the SQLSTATE as five characters. Going back to the documentation:

An advantage to using the SQLCODE field is that it can provide more specific information than the SQLSTATE. Many of the SQLCODEs have associated tokens in the SQLCA that indicate, for example, which object incurred an SQL error. However, an SQL standard application uses only SQLSTATE.

So, it still seems fine, but then it turns out that SQLCODE 0 can be paired with SQLSTATEs that are not all zeros!

New in the Documentation

For at least a few people, this is a bit of a shock. From the SQLCODE 000 documentation:

SQLSTATE

  • 00000 for unqualified successful execution.
  • 01003, 01004, 01503, 01504, 01505, 01506, 01507, 01517, or 01524 for successful execution with warning.

Say What?

Yep, this is simply stating that you have got an SQLCODE 0, but any of nine different warning SQLSTATEs could have come along with it… This is not good! Most error handling is pretty bad, but now having to theoretically add SQLSTATE into the mix makes it even worse!
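
If you do want to catch these cases, the check has to look at the SQLSTATE as well as the SQLCODE. Here is a minimal, hedged SQL PL sketch of what that could look like (the procedure name and the '75001' signal value are made up by me, and ROY1 is the little test table recreated in the update at the end of this newsletter):

-- Minimal sketch only: check the SQLSTATE even when SQLCODE is zero
CREATE PROCEDURE CHECK_AVG ()
LANGUAGE SQL
BEGIN
  DECLARE V_AVG   INTEGER;
  DECLARE V_NUM   INTEGER DEFAULT 0;
  DECLARE V_STATE CHAR(5) DEFAULT '00000';
  -- AVG over a nullable column can complete with SQLCODE 0
  -- but SQLSTATE 01003 (null values eliminated)
  SELECT AVG(VALUE1) INTO V_AVG FROM ROY1;
  GET DIAGNOSTICS V_NUM = NUMBER;
  IF V_NUM > 0 THEN
    GET DIAGNOSTICS CONDITION 1 V_STATE = RETURNED_SQLSTATE;
    IF V_STATE <> '00000' THEN
      -- React to the "successful" warning instead of ignoring it
      SIGNAL SQLSTATE '75001'
         SET MESSAGE_TEXT = 'WARNING SQLSTATE RETURNED WITH SQLCODE 0';
    END IF;
  END IF;
END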

What are the Bad Guys Then?

01003 Null values were eliminated from the argument of an aggregate function.

01004 The value of a string was truncated when assigned to another string data type with a shorter length.

01503 The number of result columns is larger than the number of variables provided.

01504 The UPDATE or DELETE statement does not include a WHERE clause.

01505 The statement was not executed because it is unacceptable in this environment.

01506 An adjustment was made to a DATE or TIMESTAMP value to correct an invalid date resulting from an arithmetic operation.

01507 One or more non-zero digits were eliminated from the fractional part of a number used as the operand of a multiply or divide operation.

01517 A character that could not be converted was replaced with a substitute character.

01524 The result of an aggregate function does not include the null values that were caused by evaluating the arithmetic expression implied by the column of the view.
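
Some of these are trivially easy to trigger. As a quick sketch (again using the ROY1 test table recreated in the update at the end of this newsletter), a plain UPDATE with no WHERE clause should complete with SQLCODE 0 but SQLSTATE 01504:

-- Expect SQLCODE = 000 but SQLSTATE = 01504
-- (no WHERE clause on an UPDATE or DELETE)
UPDATE ROY1 SET VALUE1 = 0 ;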

Not Good!

From this list, the 01004, 01503, 01506 and especially 01517 just jump right out and scream at you! Here in Europe, we have a right to have our names and addresses correctly written and, in Germany with all its umlauts, it can get difficult if you end up with a 01517 but SQLCODE 0 result!

I hope you don’t find this newsletter too unsettling as, after all, Db2 and SQL normally work fine, but I do think that these SQLSTATEs should really have warranted a positive SQLCODE when they were first created…

What do you all think?

TTFN,

Roy Boxwell

Update:

One of my readers wondered how practical these “errors” are. A good point, and so here is a nice and easy recreate of the 01003 problem:

CREATE TABLE ROY1 (KEY1   CHAR(8) NOT NULL,                 
                   VALUE1 INTEGER ,                         
                   VALUE2 INTEGER )                         
;                                                           
INSERT INTO ROY1 (KEY1)               VALUES ('A') ;        
INSERT INTO ROY1 (KEY1)               VALUES ('AA') ;       
INSERT INTO ROY1 (KEY1,VALUE1)        VALUES ('B', 1) ;     
INSERT INTO ROY1 (KEY1,VALUE1,VALUE2) VALUES ('BB', 1 , 1) ;
INSERT INTO ROY1 (KEY1,VALUE1)        VALUES ('C', 2) ;     
INSERT INTO ROY1 (KEY1,VALUE1,VALUE2) VALUES ('C', 2 , 2) ; 
SELECT * FROM ROY1                                          
;                                                           
CREATE VIEW ROYVIEW1 AS                                     
(SELECT AVG(VALUE1) AS AVGVAL1, AVG(VALUE2) AS AVGVAL2      
 FROM ROY1)                                                 
;                                                           
SELECT * FROM ROYVIEW1                                      
;                                                           

This set of SQL ends up with these outputs:

---------+---------+---------+--------
KEY1           VALUE1       VALUE2    
---------+---------+---------+--------
A         -----------  -----------    
AA        -----------  -----------    
B                   1  -----------    
BB                  1            1    
C                   2  -----------    
KEY1           VALUE1       VALUE2    
---------+---------+---------+--------
C                   2            2    
DSNE610I NUMBER OF ROWS DISPLAYED IS 6

---------+---------+---------+---------+---------+---------+-----
    AVGVAL1      AVGVAL2                                                
---------+---------+---------+---------+---------+---------+-----
          1            1                                                
DSNT400I SQLCODE = 000,  SUCCESSFUL EXECUTION                           
DSNT418I SQLSTATE   = 01003 SQLSTATE RETURN CODE                        
DSNT415I SQLERRP    = DSN SQL PROCEDURE DETECTING ERROR                 
DSNT416I SQLERRD    = 0 0  0  -1  0  0 SQL DIAGNOSTIC INFORMATION       
DSNT416I SQLERRD    = X'00000000'  X'00000000'  X'00000000'  X'FFFFFFFF'
         X'00000000'  X'00000000' SQL DIAGNOSTIC INFORMATION
DSNT417I SQLWARN0-5 = W,,W,,, SQL WARNINGS                              
DSNT417I SQLWARN6-A = ,,,,   SQL WARNINGS                               
DSNE610I NUMBER OF ROWS DISPLAYED IS 1                                  

This is not so good if you ask me…

2023-11 Are we really living in an agile world?

This month, I want to run through all the purely SQL changes that we have received in Db2 12 and in Db2 13, right up until Db2 13 FL504. I was reading about agile development and how fast the deliveries are, and so I wondered: how many purely SQL changes have we, as SQL users, actually received over the last six to seven years?

Back to the facts…

Ok, in Db2 12 FL501 we got LISTAGG, which was very cool indeed, apart from the fact that you could not use ORDER BY, which irritated all the SQL developers I know quite a bit! IBM also created a whole bunch of Accelerator-only “pass-thru” functions that I cannot ever use, as I do not know whether or not any of my customers actually has an Accelerator… so, for me, they do not really count. In total, 28 BiFs either got pass-thru or extra Accelerator support, so if you *have* an Accelerator, good news indeed! This was all enabled over Db2 12 FL Levels 504 and 507 as well as APAR PH48480.
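
As a refresher, here is a minimal LISTAGG sketch against the IBM sample employee table (assuming you have DSN8C10.EMP, or similar, available), written FL501-style without any ORDER BY:

-- One output row per department with a comma-separated name list
SELECT WORKDEPT
     , LISTAGG(LASTNAME, ', ') AS EMPLOYEES
FROM   DSN8C10.EMP
GROUP BY WORKDEPT ;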

Uniwhat?

A bunch of, for me, weird things were the UNI_60 and UNI_90 support added to LOWER, TRANSLATE and UPPER for both Db2 12 and 13. There must be a use case out there but I am lucky enough not to have found it yet!

Super MERGE

MERGE got a major overhaul in Db2 12 with the addition of DELETE support, but the story of MERGE did not end there! It still had a major performance problem if any of the index columns got updated, as it was then forced to fall back to a tablespace scan. With Db2 13 FL504 (APAR PH47581) this problem was solved – nearly…

The following conditions must be met to enable Db2 to use an index for a MERGE operation when index key columns are being updated:
– The MERGE statement contains a corresponding predicate in one of the following forms, for each updated index key column:
  • index-key-column = literal-value, where literal-value is a constant or any expression that can be treated as a literal, including a host variable, parameter marker, or non-column expression.
  • index-key-column IS NULL
– If a view is involved, WITH CHECK OPTION is not specified.

Db2 SQL Reference
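
To make the discussion concrete, here is a minimal sketch of the Db2 12 style MERGE (the table and column names are hypothetical) with UPDATE, DELETE and INSERT branches all in one statement:

-- Hypothetical target table EMP_T; source rows come from VALUES
MERGE INTO EMP_T T
USING (VALUES ('000010', 'BOXWELL', 0))
      AS S (EMPNO, LASTNAME, BONUS)
   ON T.EMPNO = S.EMPNO
WHEN MATCHED AND S.BONUS = 0 THEN
  DELETE                              -- the Db2 12 addition
WHEN MATCHED THEN
  UPDATE SET T.LASTNAME = S.LASTNAME
           , T.BONUS    = S.BONUS
WHEN NOT MATCHED THEN
  INSERT (EMPNO, LASTNAME, BONUS)
  VALUES (S.EMPNO, S.LASTNAME, S.BONUS) ;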

MERGE has basically become one of the most powerful SQL statements out there, and you can actually cause terrible trouble if you use DRDA with VALUES clauses and hard-coded “FOR 10 ROWS” styles of SQL. It is all very well documented and worth a read under the heading:

DRDA considerations when NOT ATOMIC CONTINUE ON SQLEXCEPTION is specified (or the NOT ATOMIC CONTINUE ON SQLEXCEPTION clause is not specified and source-values (VALUES) is specified)

Db2 SQL Reference

This Db2 13 APAR also opened up the chance of getting List Prefetch as an access path, which is, as far as I can tell, the only “new” access path in Db2 13.

Pagination anyone?

The use of OFFSET was a great innovation for Db2, but the “other” pagination was better! I mean data-dependent pagination, which changed this old chestnut of an SQL:

WHERE (LASTNAME = 'SMITH' AND FIRSTNAME > 'JOHN') 
   OR (LASTNAME > 'SMITH')

Into this modern SQL:

WHERE (LASTNAME, FIRSTNAME) > ('SMITH', 'JOHN')

Much, much better, and for online generated dynamic SQL (I am talking to you, CICS!) a fantastic win! To verify it has kicked in, look for a range-list index scan – “NR” in your PLAN_TABLE ACCESSTYPE column – when you EXPLAIN the statement.
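
Put together with ORDER BY and FETCH FIRST, a typical “next page” SQL looks something like this (the table and host variable names are just illustrative):

-- Fetch the next 20 rows after the last name/first name shown
SELECT LASTNAME, FIRSTNAME, PHONENO
FROM   EMP
WHERE  (LASTNAME, FIRSTNAME) > (:LAST-LASTNAME, :LAST-FIRSTNAME)
ORDER BY LASTNAME, FIRSTNAME
FETCH FIRST 20 ROWS ONLY ;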

One at a time please

Piece-wise DELETE using the FETCH FIRST nnnn ROWS ONLY clause was also a really good idea, avoiding possible lock escalations and/or timeouts. A simple loop around the DELETE statement, as sketched below, and Bob’s your uncle!
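
As a sketch, the statement in that loop is nothing more than this (table and predicate are hypothetical), repeated, with a COMMIT in between, until you get SQLCODE +100:

-- Delete in chunks of 5000 to keep the locks and the log happy
DELETE FROM EMP_HISTORY
WHERE  HIRE_DATE < '2000-01-01'
FETCH FIRST 5000 ROWS ONLY ;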

Db2 13 – What’s New?

The big stuff here was the extension of the PROFILE table (I keep talking about it, don’t I?) as now it also handles local things – this is a game changer! It starts with CURRENT_LOCK_TIMEOUT and DEADLOCK_RESOLUTION_PRIORITY, but I am sure this list will grow and grow. The PROFILE table is just way too good not to use these days!
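
As a sketch of how a local profile gets set up, it is just two inserts and a -START PROFILE command. Be warned: the keyword spelling and the attribute columns used here are my assumptions – always verify against the current documentation first!

-- Hedged sketch: give one AUTHID a ten-second lock timeout
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (AUTHID, PROFILEID, PROFILE_ENABLED)
VALUES ('BATCHUSR', 1701, 'Y') ;

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE2)
VALUES (1701, 'CURRENT_LOCK_TIMEOUT', 10) ;

-- Then activate with the Db2 command -START PROFILE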

AI got a major boost

We got a bunch of AI stuff in Db2 13 (SQL Data Insights), but the first new “agile” one came in FL504 when AI_COMMONALITY was released. It will hopefully enable shops to find outliers in the data which were not there at training time.

Db2 V8 finally done!

Finally, the last thing that was started way back in DB2 V8 was completed! The maximum length of a column name has been expanded from 30 up to 128 bytes. However, do not do this! The SQLDA is *not* designed for it, so it might look nice on paper but, depending on how you interface to these columns, it might cause serious grief!

Lower cadence higher quality

IBM have announced a cadence of two FLs per year, down from the three to four per year when Agile all started, and so I am happy that the list of changes will keep getting longer and the quality of the code higher.

Just SQL!

Please remember, all I am talking about here is SQL-relevant enhancements – there are tons of others as well – just think about Utilities or FTB etc. For a full list, always download and read the latest “What’s New?” guide.

Did I miss anything? Drop me a line if you think so!

TTFN,

Roy Boxwell

2023-09 Toronto visit

Ahhh! What a great time I just had in sunny ol’ Toronto. I was extremely happy to be back at the Central Canada Db2 Users Group (CCDUG) meeting for 2023, held at the BMO Institute For Learning. It has always been a fantastic venue for presenters and attendees, and this year we had the added bonus of a “Coyote” warning… fun fun fun!

The CCDUG is a two-day, three-track mini-IDUG, if you like, and the organizers added an AppDev track onto the second day as well. So you had Db2 for z/OS, Db2 for LUW, IMS and AppDev – a very nice mix of things indeed!

Just the Facts, Ma’am!

I am mainly a z/OS style of guy (old school!) and so went for the z/OS track, of course! It all started, as normal, with a keynote from Michael Kwok that was all about “accidental discoveries” and how these have often had a tremendous impact on the world. Examples included penicillin and microwave cookers. It also included Db2!

He highlighted the environmental factors required for innovation to succeed:

01 Have an open mind

02 Use Strategy as a filter

03 Commit

Open Minded?

For the first point, penicillin: having an open mind was important when Alexander Fleming, in September 1928, noticed the effect that a mold had on his old petri dishes. It was having an open mind that enabled him to really see the very first antibiotic in action. Sadly, it took another 14 years before it was actually used in a pure form, and yet another year before mass production started in the Second World War.

Strategy is Everything!

Using strategy as a filter was highlighted by looking at how Bill Gates took advantage of the failure of Digital Research, the creators of CP/M, to meet and agree a deal with IBM to discuss a new OS for the IBM 5150 PC. IBM then contacted Microsoft, who agreed to write PC-DOS, which evolved into MS-DOS, which basically started the modern PC world we all know today.

Commit is Not Just a Db2 Verb!

Commit was all about the invention of microwave cookers! Percy Spencer, working at Raytheon in 1945 on radar sets using high-energy magnetrons (a British invention, again from the Second World War), noticed that a chocolate bar in his pocket melted. It took an awful lot of commitment to eventually get the product to market; it first appeared in 1947 as the “Radarange” – $5,000, 3 kW and water-cooled!

Oh oh! IBM at Work…

Into these three ideas comes IBM! Back in the ’60s and ’70s, data centers started storing more and more data, and Ted Codd wrote down the rules of the relational data model in his seminal paper “A Relational Model of Data for Large Shared Data Banks” in June 1970. Now, back in those days, IBM was not exactly well known for being open minded, strategic or committed!

Then, a few extraordinary individuals pulled it all together to actually get what was called System R, the granddaddy of all relational databases, into existence, starting in 1974 and gaining its first customer in 1977. It still took another six years to actually arrive at DB2 in 1983 and, ten years later, we got DB2 for LUW as well. The actual GA date of DB2 1.1 was the 2nd of April 1985.

So, basically, Db2 was an accidental discovery!

Then it Begins!

The sessions at CCDUG then all kicked off, so here is my list of the ones I either attended or read up on afterwards (naturally, I cannot attend every session!). All are linked directly to the CCDUG website, and I thank them for allowing public access!

Database Administration Enhancements of Db2 13 for z/OS from Robert Catterall. An excellent run through everything you need to know about Db2 13 including more PROFILE support. This is probably one of the most under-used Db2 features out there.

Db2 for z/OS 101: Buffer Pools and Group Buffer Pools from Tori Felt and Keziah Knopp. Was a highly entertaining 101 level intro to the complex world of Buffer Pools, Managers and disk usage within Db2 – Highly recommended as these little details easily get lost nowadays!

It’s AI Jim, but not as we know it! from me. Obviously, a fantastic presentation all about AI – truth or hype? 😄

Analysis of DEADLOCK IFCID 172 in SMF with no special tool and low cost from Xiaodong Ma. This was all about reading the bits and bytes from the IFCID to find *all* the blockers and waiters – not just the first two you get in the DSNT375I messages.

Who is afraid of DDF? from Toine Michielse was all about controlling your DDF workload simply and easily. Use of PROFILE is also recommended …

End of Day One

Then we had a very pleasant evening with the Vendor Expo and prize draws before retiring to the bar to drink some cold beverages and play pool!

Next day, bright-eyed and bushy-tailed we started off again.

Securing Db2 for z/OS Data: Encryption and Much More from Robert Catterall, which contained a ton of great information all about encrypting and securing your data and communications – great stuff!

An Audit a day keeps the lawyers at bay! from my colleague Ulf Heinrich listed all the requirements for auditing a Db2 system from the auditor’s perspective, including real-world examples and ending with a “how it looks in Zowe” bonus preview!

Db2 for z/OS Administrative and Developer tools strategy: Customer feedback, roadmap, and future direction from Sowmya Kameswaran was all about the ongoing project to unify the user experience in Db2. This includes the IBM Unified Management Server for z/OS, IBM Db2 Administration Foundation for z/OS, IBM Db2 DevOps Experience for z/OS, IBM Query Workload Tuner for z/OS, IBM Db2 Analytics Accelerator Administration services for z/OS, Zowe and IBM Db2 for z/OS Developer Extension! A ton of stuff here!

Partition By Range (PBR) and Relative Page Number (RPN) explained from Frank Rhodes drilled down into the nitty-gritty details of PBG, PBR and the seven-byte RID!

Db2 For z/OS and Unicode – What you need to know from Chris Crone went into details about what exactly you need to know about UNICODE and how it affects you, your data, and your SQL applications! COLLATION_KEY is the really good one here!

Db2 for z/OS System Profile Monitoring: Overview and Db2 13 Enhancements from Mark Rader contained all the great new stuff in Db2 13 regarding PROFILE usage. Once again well worth a read as it is the best way to control DDF. Slide 15 is the highlight for me!

On the Application development front there were also a whole bunch of great presentations:

Db2 SQL and SQL PL – A Journey Through Db2 12 and 13 Functions Levels from Chris Crone where he runs through the new SQL and SQLPL stuff from Db2 12 before showing all the new stuff in Db2 13 as well.

Db2 for z/OS: REST and Hybrid Cloud from Tori Felt and Keziah Knopp was a huge presentation all about RESTful APIs and all the fun and games involved with them… Covered here was all the stuff on GET, POST, PUT etc. etc., also JSON, z/OS Connect et al!

Db2 Hot topics from Progressive Insurance from Bob Vargo and Dustin Ratliff. They discussed the challenges of how to get SQL Data Insights up and running and then got into FTB and PROFILE usage again (I love PROFILEs, have I mentioned this before?)

Explain explained from Toine Michielse. Does what it says on the side of the can! He shows, and explains, all the available access paths so that everyone is “on the same page” afterwards.

Bringing Db2 for z/OS-Based Applications Into the Modern Age from Robert Catterall where he showed the modern new way to access the data using REST services with plus and minus points of them. Then he went on to highlight the „new features“ in Db2 for z/OS like System and Business time or XML columns and tables and finished off with a review of the advantages that SQL PL and Native SQL Procedures can give you.

DB2 SQL – go beyond the usual – My current TOP 40 SQL tips, tricks, and opinions from Brian Laube where he discussed all of the tricks and secrets he has learned over the years. Very worthy of a good read!

That’s all folks! I did not even mention all of the LUW or the IMS ones – Feel free to look for them as well if you are interested.

As always, a fantastic conference and I am already looking forward to the next one in 2024!

TTFN,

Roy Boxwell

2023-05: ZPARMs never stop changing part II

This month, I want to go through some of the absolutely most important ZPARMs that control how your Db2 systems behave in a very significant manner. All of the following ZPARMs have a performance impact of some sort. We are always trying to squeeze the last drop of performance out of our Db2 sub-systems, aren’t we?

Db2 13 and Some Db2 12 Updates Ahead!

Since this newsletter topic first came out, in March 2022, *five* of the ten ZPARMs listed have got new defaults! I have highlighted all these changed defaults. I have also added three new “usual suspects” to the list of ZPARMs that must be checked…

Starting with the Easy Stuff…

CACHEDYN. YES/NO, default YES. Should always be set to YES – unless you do not care about dynamic SQL performance. A few decades ago, the recommendation was to have NO as the default! Hard to believe these days, when most shops run 80% – 90% dynamic SQL during the day!

Now we Get to the Numerics!

OUTBUFF. 400 – 400,000, default 102,400. This is *extremely* important and you really should set it to the highest possible value you can afford in real memory! As a minimum, it should be 102,400 KB (100MB). This is the buffer that Db2 uses to write log records before they are „really“ written to disk. The larger the buffer, the greater the chance that, in case of a ROLLBACK, the data required is in the buffer and not on disk.

Skeletons in the Closet?

EDM_SKELETON_POOL. 5,120 – 4,194,304, default 81,920. This is one of my personal favorites (I wrote a newsletter solely on this a few years ago). I personally recommend at least 150,000 KB, and even more if you can back it with real memory. Just like OUTBUFF, pour your memory in here, but keep an eye on paging! If Db2 starts to page, you are in serious trouble! Raising this can really help with keeping your DSC under control.

DBDs are Getting Bigger…

EDMDBDC. 5,000 – 4,194,304, default 40,960. The DBD Cache is getting more and more important as, due to UTS usage, the size of DBDs is increasing all the time.

DSC is Always Too Small!

EDMSTMTC. 5,000 – 4,194,304, default 113,386. The EDM Statement Cache (really the Dynamic Statement Cache) is where Db2 keeps a copy of the prepared statements that have been executed. So, when the exact same SQL statement with the exact same set of flags and qualifiers is executed, Db2 can avoid the full prepare and just re-execute the statement. This is basically a no-brainer and should be set to at least 122,880 KB. Even going up to 2 GB is perfectly OK. Remember: a read from here is *much* faster than a full prepare, so you get a very quick ROI and great value for the memory invested! Keep raising the value until your DSC flushing rates drop down to just hundreds per hour, if you can! Remember to cross-check with the EDM_SKELETON_POOL ZPARM as well. It always takes two to tango…

How Many SQLs?

MAXKEEPD. 0 – 204,800, default 5,000. The Max Kept Dyn Stmts parameter is how many prepared SQLs to keep past commit or rollback. It should be set to a minimum of 8,000 or so. Raising this might well cause a large memory demand in the ssidDBM1 address space so care must be taken.

RIDs Keep Getting Longer…

MAXRBLK. 0, 128 – 2,000,000, default 1,000,000. RID POOL SIZE is the maximum amount of memory to be available for RID Block entries. It should be at least 1,000,000 and, if you can, push it to the maximum of 2,000,000. Unless you want to switch off all RID Block access plans, in which case you set it to zero – Obviously not really recommended!

Sorts Always Need More Space

MAXSORT_IN_MEMORY. 1,000 to SRTPOOL, default 2,000. The maximum in-memory sort size is the largest available space for completing ORDER BY, GROUP BY, or both, SQL clauses. Remember that this is per thread, so you must have enough memory for lots of these in parallel. The number should be between 1,000 and 2,000 but, whatever value you choose, it must be less than or equal to the SRTPOOL size.

Sparse or Pair-wise Access?

MXDTCACH. 0 – 512, default 20. Max data caching is the maximum size of the sparse index or pair-wise join data cache in megabytes. If you do not use sparse index, pair-wise join, or you are not a data warehouse shop, then you can leave this at its default. Otherwise, set it to be 41 MB or higher. If it is a data warehouse subsystem, then you could set this as high as 512 MB. (This ZPARM replaced the short-lived SJMXPOOL, by the way.)

Sort Node Expansion

SRTPOOL. 240 – 128,000, default 20,000. SORT POOL SIZE is the available memory that is needed for the sort pool. IFCID 96 can really help you size this parameter. Remember that the number of sort nodes leapt up from 32,000 in Db2 11 to 512,000 nodes for non-parallelism sorts and 128,000 nodes for a sort within a parallel child task in Db2 12. This means raising this ZPARM can have an even greater positive effect than before.

The Three New Guys on the Block!

To the MAX!

DSMAX used to be limited to around 20,000 and can now be between 1 and 400,000. Remember that you will never actually reach this maximum limit, as it is 31-bit memory-constrained.

Thrashing Around…

NPGTHRSH. Valid values are 0 or 1 – 2147483647. The default up to Db2 11 was 0; from Db2 12 the default is now 1. SAP systems use a default of 10. The big change here came in Db2 12, when the “no statistics ever ran” value of -1 was forced to be the “optimizer default” of 501 instead of the real value of -1. This is also why the default is now 1, so that this ZPARM has a normal use! Setting it to 0 means that the access path chosen will always be purely cost based.

Lock ‚em Up and Throw Away the Key!

NUMLKUS. 0 – 104857600, with a default of 20,000. Just be careful raising this value too high, as each lock will take 540 bytes of storage in the IRLM!

Your „Top Ten List“ + Three

These thirteen ZPARMs really influence how your Db2 system works and so must always be checked and changed with great care and attention to detail. Always do a before-and-after appraisal to see whether changing them helped or hindered your system!

If you have any comments, or other ZPARMs you think are also important for performance, feel free to drop me a line!

IDUG 2023 NA

IDUG is nearly upon us again. I will be there in Philadelphia at the SEGUS booth and doing a fair bit of moderating as well. Drop on by, have a chat, pick up some of our swag, and join me at the “Roy reviews AI with our WorkloadExpert” PSP on Thursday for a chance to win some cool stuff.

Hope to see some of you there!

TTFN,

Roy Boxwell

2021-05 Soundex and other cool features part 7 for Db2 12

In this, obviously, never-ending series of new features, I will roll up all the new ones since my “SOUNDEX and other ‘cool’ features – Part six – All new for Db2 12” newsletter from 2018-08, where I first covered the new stuff in Db2 12.

What was new in FL100

At that level, a new table function called BLOCKING_THREADS was introduced, which is primarily used by the command DISPLAY BLOCKERS to help in looking for bad guys who are ready to BLOCK your change to FLxxx.

FL500 Changed Scalars

Two scalar BiFs got changed: TRANSLATE and UPPER can now both use UNI_60 as a locale.

FL500 New Scalars

In FL500, we got a nice bunch of new Scalar functions:

GENERATE_UNIQUE_BINARY (like GENERATE_UNIQUE except it returns a BINARY(16) value)
HASH_CRC32
HASH_MD5
HASH_SHA1
HASH_SHA256
PERCENTILE_CONT
PERCENTILE_DISC
WRAP
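
Just to show one of these in action, here is a one-liner sketch with HASH_MD5 (the HEX is only there to make the BINARY(16) result readable):

SELECT HEX(HASH_MD5('ROY BOXWELL')) AS MD5_HASH
FROM   SYSIBM.SYSDUMMY1 ;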

FL500 Changed Aggregates

One aggregate BiF got updated: ARRAY_AGG can now be used for associative arrays.

FL501?

It brought in just one new aggregate BiF, which was the very famous LISTAGG. This is famous as it was the “proof of concept” that Agile actually worked and could function in the wild.

FL502?

Two changes to Scalar BiFs in this release:

GRAPHIC The first argument now accepts numeric data types, including SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE, FLOAT, and DECFLOAT.
VARGRAPHIC The first argument now accepts numeric data types, including SMALLINT, INTEGER, BIGINT, DECIMAL, REAL, DOUBLE, FLOAT, and DECFLOAT.

FL503?

Nothing new for us in that release!

FL504?

It brought in a brand new concept for Db2 on z/OS: the concept of “pass-through BiFs”. These are just checked for valid syntax before being passed on down to the Accelerator that will actually execute them. Obviously, you must be careful about their usage!

CUME_DIST
CUME_DIST (aggregate)
FIRST_VALUE
LAG
LAST_VALUE
LEAD
NTH_VALUE
NTILE
PERCENT_RANK
PERCENT_RANK (aggregate)
RATIO_TO_REPORT
REGEXP_COUNT
REGEXP_INSTR
REGEXP_LIKE
REGEXP_REPLACE
REGEXP_SUBSTR

FL505?

Brought in a whole bunch of encryption and decryption BiFs:

ENCRYPT_DATAKEY converts a block of plain text to a block of encrypted text using a specified algorithm and key label.
DECRYPT_DATAKEY_datatype functions return the block of encrypted text from ENCRYPT_DATAKEY as a block of the datatype specified by the function used. List of datatypes: DECRYPT_DATAKEY_INTEGER, DECRYPT_DATAKEY_BIGINT, DECRYPT_DATAKEY_DECIMAL, DECRYPT_DATAKEY_VARCHAR, DECRYPT_DATAKEY_CLOB, DECRYPT_DATAKEY_VARGRAPHIC, DECRYPT_DATAKEY_DBCLOB, and DECRYPT_DATAKEY_BIT.

FL506?

Here IBM came up with a slew of “alternative names” for existing BiFs and a new method of calling HASH routines. The background here was to make porting to Db2 on z/OS that little bit easier!

CHAR_LENGTH → CHARACTER_LENGTH, which returns the length of its argument in the number of string units that are specified
COVAR_POP → COVARIANCE or COVAR, which return the population covariance of a set of number pairs
HASH → HASH_MD5, HASH_SHA1, or HASH_SHA256, which return the result of applying a hash algorithm to an input argument, depending on the value specified for the second argument of the HASH function:

        0 (default) HASH_MD5    returns VARBINARY(16) instead of BINARY(16)
        1           HASH_SHA1   returns VARBINARY(20) instead of BINARY(20)
        2           HASH_SHA256 returns VARBINARY(32) instead of BINARY(32)

POW → POWER, which returns the value of one argument raised to the power of a second argument
RANDOM → RAND, which returns a double precision floating-point random number
STRLEFT → LEFT, which returns a string that consists of the specified number of leftmost bytes or the specified string units
STRPOS → POSSTR, which returns the position of the first occurrence of an argument within another argument
STRRIGHT → RIGHT, which returns a string that consists of the specified number of rightmost bytes or specified string units
TO_CLOB → CLOB, which returns a CLOB representation of the first argument
TO_TIMESTAMP → TIMESTAMP_FORMAT, which returns a timestamp for a character string expression, using a specified format to interpret the string
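
A quick sketch showing two of these synonyms side by side with their originals (each pair should return identical values from FL506 on):

SELECT POW(2, 10)            AS POW_RESULT      -- synonym
     , POWER(2, 10)          AS POWER_RESULT    -- original
     , STRLEFT('BOXWELL', 3) AS STRLEFT_RESULT  -- synonym
     , LEFT('BOXWELL', 3)    AS LEFT_RESULT     -- original
FROM   SYSIBM.SYSDUMMY1 ;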

FL507?

Here IBM development added another slew of “pass-through” BiFs:

ADD_DAYS
BTRIM
DAYS_BETWEEN
NEXT_MONTH
REGR_AVGX
REGR_AVGY
REGR_COUNT
REGR_INTERCEPT or REGR_ICPT
REGR_R2
REGR_SLOPE
REGR_SXX
REGR_SXY
REGR_SYY
ROUND_TIMESTAMP if invoked with a date expression

FL508?

None that I have read of yet!

FL509?

None yet…

Naturally I will be keeping this newsletter up-to-date, as necessary.

Any questions or ideas, do not hesitate to drop me a line,

TTFN,

Roy Boxwell

2019-12 Fun with Db2 12 PBR RPN

I have recently enhanced our SpaceAssuranceExpert (SAX) product to automatically help out if partitions, or partitioned indexes, are getting too big for their boots in a productive system.

SAX – What is it?

The idea behind SAX is to stop any and all chances of getting a dreaded SQLCODE -904 in production, especially “out of the blue”.

Our idea was, in Db2 12 with the new PBR RPN, to do an on-the-fly ALTER of the DSSIZE, just like SAX does now with the SECQTY to avoid running out of extents.

RTFM Time

A quick look in the manual tells you that:


it is an immediate ALTER (as long as you make the new DSSIZE larger than the old one!) and there are no package invalidations or REORGS required.


This is fantastic!
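
As a sketch, the immediate partition-level ALTER looks like this (the database, tablespace and sizes are from my test; yours will differ):

-- Immediate for a PBR RPN, as long as the new DSSIZE is larger:
ALTER TABLESPACE MYDB.MYPBRRPN
  ALTER PARTITION 1
  DSSIZE 513 G ;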

So I created a nice little PBR and then ran a horrible Cartesian join SPUFI to flood the first partition with data. This join had a TIME card of (,1) to limit it to one second of CPU before getting an Abend S322.

SAX Monitor

The SAX monitor reacted perfectly and did the TP ALTER to a 513 GB DSSIZE (I had an increment size of 512 as a test), and got an SQLCODE -666 (I just love that SQLCODE…), as the INSERT was still running.

This ALTER was then internally queued to be attempted later, like in any other failure case.


All well and good.

Wham! Nasty errors !

Then I did the same for a DPSI on my PBR RPN… Oh dear! I got a nasty I/O error and then an even nastier ROLLBACK loop, meaning I had to cancel the IRLM to stop Db2… (There is an APAR for this problem: PH18977.)

Rollback loop?

The ROLLBACK loop was caused by me choosing to use NOT LOGGED as a tablespace attribute. We have a finite amount of log space and, when the transaction was S322’d after one second of CPU, the rollback could not find one of the archive logs, and then we had to cold start Db2 – not pretty!

Proper test!

I then wrote a couple of little test programs that actually COMMITted after 5000 inserts and then the ALTERs all worked as designed.


Do I worry too much about extended format and extended addressability?

Next, I worried about the ominous “extended format and extended addressability” attributes in the DATACLASS for a PBR RPN, and wondered what would happen if a customer has SAX running and it happily ALTERs a TP to, say, 6 GB when they can only address 4 GB…


IBM to the rescue!

Luckily for us, Db2 development had thought about this!

  • If you attempt to create a PBR RPN (even with a very small size) and your DATACLASS does not have the two attributes set, you get an error message 00D70008 telling you this detail.

  • If you ALTER an existing tablespace to be a PBR RPN and your DATACLASS does not have the two attributes set, then the ALTER works fine. But remember, this is a pending alter and you *must* do a REORG at the TS level with inline TP level copies. This REORG then fails – also with 00D70008.

So, in other words, SAX cannot hurt you here!


What about PBGs?

After all this, we also considered PBGs. They have a limit as well – MAXPARTITIONS in their case. So we added the ability to ALTER MAXPARTITIONS as well. Here you must be more careful though: these ALTERs are still immediate, *but* they invalidate any referring packages!
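
The ALTER itself is a one-liner (names hypothetical), which makes the package invalidation all the more annoying:

-- Immediate ALTER, *but* it invalidates all referring packages!
ALTER TABLESPACE MYDB.MYPBG MAXPARTITIONS 36 ;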

ABIND YES or NO?

If you work with ABIND set to YES, all is good: the ALTER comes in, Db2 invalidates your packages, and the auto rebind happens so fast that you do not even notice it. However, if you work with ABIND NO, then any packages – even the package actually running and doing the inserts – will fail! If you are just using dynamic SQL then it is 100% OK; otherwise – buyer beware!

Aha!

I have opened an Analytics Idea (DB24ZOS-I-1057) to try and get this loophole closed, as I cannot see what access path change could be affected by going from 32 to, say, 36 MAXPARTITIONS.

Db2 keeps adding the parts dynamically and nothing happens then… Go figure… Anyway, if you would like it – Please vote for it!


There are already some nice comments attached to it:

  • DP commented:

    this is just a limit in the catalog. So how could access path be affected? The actual growth of a partition doesn’t invalidate the package, so how does just changing the limit do so?

  • BW commented:

    I opened a Case on this asking why packages are being invalidated in this situation and it is still open waiting for a reply.

  • BD commented:

    With only impact to catalog, not sure how Access Path would be impacted. Seems wasteful and counterproductive to invalidate packages.

Remember – You never stop learning!

As always I would be pleased to hear from you!

TTFN,
Roy Boxwell
Senior Architect

2019-08 FTB (Fast Traversal Block): Just another TLA?

So we get a new TLA (Three-Letter Abbreviation) to try and wrap our heads around in the busy, busy world of mainframe databases. This time it comes with a twist, as FTB sadly does not stand for “Feed The Beast” but for “Fast Traversal Block”.

In reality, it is referred to everywhere as:

FIT (Fast Index Traversal)

All clear on this and still with me? Good, then I may begin!

The problem with modern indexes

The core problem these days is time… the time it takes to get down through the index to the actual data page is just too long. The vast amount of data we have forces a large number of levels upon the index which, in turn, forces a lot of non-leaf page accesses.

Index structure

We all know how indexes look in Db2, but remember that they all have a root page (which is always, technically speaking, a non-leaf page). This normally points to non-leaf pages which, in turn, point either to more non-leaf pages – giving you extra levels in the index – or, finally, to a leaf page, which actually contains pairs of keys and RIDs so that Db2 can get the data from the table page.

Which level are you?

One of the quirks of Db2 is that the leaf pages are called Level 0, and the non-leaf pages directly above them are Level 1. The root page above these is then Level 2, and so you have a “two-level” index.

So how many fetches for a random data access?

The first thing Db2 does is read the root page; looking at the root page, it sees which non-leaf page must be fetched, and it repeats this until it lands on the leaf page, and then it fetches the data. So, in my two-level index, it takes four fetches to get to the data. Now imagine you have a seven-level index: yep, you end up fetching a ton of non-leaf pages!

The solution? FTB!

In Db2 12, IBM introduced the Fast Traversal Block to keep a copy of the non-leaf data in a new area, separate from the buffer pools.

The major reasons for the expected performance boost are the facts that the structure is L2 cache-aware, each page is equal in size to one cache line (256 bytes), and I/O is massively reduced for the “other” non-leaf pages.

One size fits all?

The size of the FTB is determined by the new ZPARM INDEX_MEMORY_CONTROL, with valid values AUTO, DISABLE or 10 – 200,000 MB. By default, AUTO is on. With this value, the size of the FTB is either 20% of the size of all available buffer pools or 10 MB, whichever is higher.

The devil is in the detail

To actually start working with FTBs, you have to wait until the FTB Daemon has found an eligible index. From all of your indexes there will be quite a few candidate indexes (note that they must be unique – the SQL at the end of this newsletter filters on exactly that). These must follow the following rules:


  1. It cannot be longer than 64 bytes

  2. It cannot be versioned (So the OLDEST_VERSION and CURRENT_VERSION must be the same!)

  3. It cannot contain a TIMESTAMP column with TIMEZONE

  4. It cannot have more than 2,000,000 leaf pages


Once through that selection list, the Daemon starts watching what is happening to this index. Is it a random select? That is a good thing! Is it an index-only access? That is even better! A split? Oh dear, that is very bad… From all this, every two minutes, the Daemon decides whether or not to use FTB for this index.

Control is in your hands

This is all well and good, but there are a lot of people who like to see *exactly* what is going on and where! So IBM introduced a new command:

-DIS STATS(IMU)

From this you get to see which indexes are being processed and how much space they are using. The Daemon also spits out console messages, so you can see what happened and why. Further, there is a new catalog table, SYSIBM.SYSINDEXCONTROL, where you can micro-manage exactly which indexes to attempt to force into FTB and which ones to exclude. I would try to avoid this option and just let Db2 do the business!

Gotcha’s?

Always! FTB is limited to 10,000 FTBs per member, which is probably OK! When a MASS DELETE (or TRUNCATE TABLE) happens, the FTBs for the related indexes are removed. Even more important is to keep up with current Db2 maintenance! As always with new things, it takes a while to get the birthing pains out of the way!

INCLUDE can kill you!

Finally, remember that the length cannot be greater than 64 bytes. So, if you decide to add that one-byte character column to the index to get index-only access, you might then push the index over the edge and up to 65 bytes…

Before ALTERing indexes always check if they are in the FTB (or were!) and check if your alteration will disallow the FTB!

A little SQL to show you what you have

WITH INPUT (NLEVELS, LENGTH, INDEX_NAME) AS                            
 (SELECT  COALESCE(E.NLEVELS , A.NLEVELS )  -- FIRST RTS THEN INDEXES   
       , SUM(CASE D.COLTYPE                                             
             WHEN 'DECIMAL ' THEN                                       
 -- IF , SEPARATOR           SMALLINT( CEILING(( D.LENGTH + 1,0 ) / 2 ))
                             SMALLINT( CEILING(( D.LENGTH + 1.0 ) / 2 ))
             WHEN 'GRAPHIC'  THEN D.LENGTH * 2                          
             WHEN 'VARG'     THEN D.LENGTH * 2                          
             WHEN 'LONGVARG' THEN D.LENGTH * 2                          
             ELSE D.LENGTH                                              
             END)                                                       
       + SUM(CASE A.PADDED                                              
             WHEN 'Y' THEN 0                                            
             ELSE                                                       
               CASE D.COLTYPE                                           
               WHEN 'VARG'     THEN 2                                   
               WHEN 'LONGVARG' THEN 2                                   
               WHEN 'VARCHAR'  THEN 2                                   
               WHEN 'LONGVAR'  THEN 2                                   
               WHEN 'VARBIN'   THEN 2                                   
               WHEN 'DECFLOAT' THEN 2                                   
               ELSE 0                                                   
               END                                                      
             END)                                                       
       + SUM(CASE D.NULLS                                               
             WHEN 'Y' THEN 1                                            
             ELSE 0    
                END) AS LENGTH  
       , STRIP(A.NAME) CONCAT '.' CONCAT STRIP(A.CREATOR) AS INDEX 
 FROM SYSIBM.SYSINDEXES        A                                       
     ,SYSIBM.SYSKEYS            C                                       
     ,SYSIBM.SYSCOLUMNS         D                                       
     ,SYSIBM.SYSINDEXSPACESTATS E                                       
 WHERE A.UNIQUERULE     <> 'D'              -- NOT DUPLICATE            
   AND D.COLTYPE        <> 'TIMESTZ'        -- NOT TIMEZONE             
   AND A.OLDEST_VERSION = A.CURRENT_VERSION -- NOT VERSIONED            
   AND A.TBNAME         = D.TBNAME                                      
   AND A.TBCREATOR      = D.TBCREATOR                                   
   AND A.NAME           = C.IXNAME                                      
   AND A.CREATOR        = C.IXCREATOR                                   
   AND A.NAME           = E.NAME                                        
   AND A.CREATOR        = E.CREATOR                                     
   AND C.COLNAME        = D.NAME                                        
 GROUP BY A.NAME, A.CREATOR , E.NLEVELS, A.NLEVELS)                     
 SELECT NLEVELS, LENGTH , INDEX_NAME                                    
 FROM INPUT                                                             
 WHERE LENGTH <= 64   
 ORDER BY NLEVELS DESC
 FOR FETCH ONLY       
 WITH UR              
 ;                                                    

I have limited it to show you just the eligible unique, non-timezone, non-versioned indexes with a length of 64 bytes or less, of course!

Blog time


Here’s an excellent write-up from John Campbell with even more technical details:

https://www.ibm.com/developerworks/community/blogs/897a7c98-57af-4523-9cfa-07ebc3f996b4/entry/Db2_12_greatest_hits_with_John_Campbell_fast_index_traversal?lang=en


and from Akiko Hoshikawa: a very good IDUG Blog:

https://www.idug.org/p/bl/et/blogaid=646


As always I would be pleased to hear from you!

TTFN,
Roy Boxwell
Senior Architect

2019-06 A little quote from Terry Purcell: It just takes one

This is a little quote from Terry Purcell which I think should be understood much better than it possibly is today!

Really? Just one?

The quote is based on the idea that:

One single SQL statement can bring your entire system to its knees.

Many people think this is a huge exaggeration – However it is not!

It happened to me

This is not a friend-of-a-friend story, but a story that really happened. Names have, of course, been changed, but the story itself is 100% accurate.


The story


It was a normal Db2 PTF Maintenance night…

At this shop, they always put maintenance in at midnight on Saturdays – all done automatically – and then roll the update around the data-sharing group so that, relatively quickly, all sub-systems have the same PTF level.

So far so normal

Sundays are not that busy at the site, and no-one noticed any change.

Monday was different

Monday morning, the machine started acting up. Customers could not log in and the help desk was 100% busy. The lead DBA, George, had received the first escalation to management by around 08:00, and so the number of calls went up.

 

The bad guy

Using our WorkloadExpert (WLX) software together with our Bind ImpactExpert (BIX) the other DBAs, Fred and Ringo, quickly identified one single SQL statement that was taking 30,000 times more resources than in the prior week!

This was an SQL with table functions and LEFT, INNER JOINs etc.

What was it?

The comparison in BIX showed that all that had changed was one index access: it was now MATCHCOLS one instead of MATCHCOLS two! That was it! This one tiny change in one little SQL killed this machine…

The fix?

They decided to roll back the PTFs, quickly did so, and everything returned to normal…


The lesson?

Always test any PTF before you go live – as just one SQL can kill ya!

Terry had indeed warned us all!


How can we help?

Well, thank you for asking: our product CDDC contains an SQL replay-and-compare function that would have spotted this SQL in two different, complementary ways. Firstly, the BIX part of CDDC would have spotted it straight away and, secondly, the replay itself would have thrown this out as a major outlier and bad-guy candidate!

Different ways to Rome

They could possibly even have fixed this without backing out the PTF apply. They could have tried using our RUNSTATS Rescue to attempt to use older statistics and see if one of them would have given them a MATCHCOLS two access path. This time, however, with all the managers breathing down their necks, it was decided: undo all the changes!


In your shop

How would you have handled this situation in your shop?

Would/Could you have seen this before it happened?

Feel free to send me your comments and ask questions.

TTFN,
Roy Boxwell
Senior Architect

Machine Learning for Db2 z/OS: Artificial Intelligence – Hype or Reality?

Now that the Db2 Optimizer has gone all Artificial Intelligent (AI from now on) on us, I thought it would be interesting to review where we are in this brave new world!

Keeps you out of a BIND

My firm created our BindImpactExpert (BIX) software many moons ago to compare access paths, typically before a non-SQL program change, or a REBIND after RUNSTATS etc. – just to warn you not to do the REBIND, or whatever you were going to do, as it would change the access plan to something you may well not really want to have. This means not just no-brainers like index access to tablespace scan, but also different index usage or table access order changes.

Apples and Oranges

So, what has that got to do with AI?
Well, we used a marketing slogan of trying to compare oranges and apples to try and make it clear:

how tough it really is to try and match SQL statements across different program releases and to compare the results.

ML and AI – The new buzzwords

Now in Db2 12, the optimizer has got machine learning (ML from now on) to help it study which host variables are used the most and how many rows are usually returned by a given query.

Do you “know” your data?

When the optimizer “knows” this info it can of course make much better “guesses” at good repeatable access plans that help everyone! Now the usage of AI and ML always gets me interested because of all the fun you can have with them.


There was recently a great article in my favourite science magazine “New Scientist” all about how to fool image recognition AI:

https://www.newscientist.com/article/mg24232270-200-machine-mind-hack-the-new-threat-that-could-scupper-the-ai-revolution/
(subscribe to read the whole article)


Now you should see the connection back to our BindImpactExpert slogan. So much for Oranges and Apples – AI cannot even see that an Orange is an Orange…

This is not good, or perhaps even “suboptimal”…

Adversarial Images and the fun you can have with them…


From 2018 this article

https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial

Within this article is a great paper all about graffiti-altered road signs! Here is a direct link to the paper: https://arxiv.org/pdf/1707.08945.pdf


Fooling Deep Learning

Another great paper where Deep Learning AIs jump off the deep end:

http://www.evolvingai.org/fooling


Disappearing Humans

and finally, this joyous article with a link to a YouTube video:

https://www.theverge.com/2019/4/23/18512472/fool-ai-surveillance-adversarial-example-yolov2-person-detection


But do not wear this picture as a T-Shirt down the street anytime soon!!!

Would you sit in a driverless car?

What this all boils down to is that I, for sure, will never sit in a driverless car within the next 40 years!
Why?

  • Firstly, I believe a true AI does not exist yet and will not exist for at least another 40 years.
  • Secondly, when I see an Orange I see an Orange because I “know” what an Orange looks like! As you have seen, AIs have no concept of what an Orange *is* and so can easily see a drill instead, or mistake skiers for dogs…
  • Thirdly, I am a programmer and someone somewhere is writing and testing – I hope! – the code… need I say more? Don’t even get me started with Agile or KANBAN.

Artificial Intelligence
extract from © 2019 – “We need to talk, AI” – Dr. Julia Schneider and Lena Kadriye Ziyal
A Comic Essay on Artificial Intelligence
https://weneedtotalk.ai/

Room full of CTO’s

Have you heard the hopefully apocryphal story of a room full of CTOs at a software conference? When they were asked “Would you fly in a computer-controlled aircraft?”, all their hands went up. Then they were asked “Would you fly if it was your software teams that had written the flight control software?”, and all their hands went down!

Flying is strangely different

While we are talking about flying: my answer here would be different.

Why?
Because in the air you have no-one else apart from other highly automated devices anywhere near you! This is the one hope that I can see for the whole driverless car stuff – driving on the Interstate, Motorway or Autobahn. In the air, the environment is much more rigid. No lights, no crossings, no oncoming traffic, no foot traffic, no children chasing a ball across the road, etc.

They will come…

I am sure that driverless cars will come soon, and I am equally sure that people will die because of them. However, I am also sure that lots of accidents will be avoided as well. After all, the AI will always react faster than a human.

With this comes the ethics of AI and ML, of course, which will also play a major role. If the car has to decide in a crash scenario:

  • Who gets hurt? Should it protect itself and its cargo above all others?
  • What about a group of playing children, or a bus full of nuns?
  • How can it decide who lives and who dies?

Looking on the bright side, at least it will not get drunk, or high, and drive! I would also seriously hope that AI does not get road rage!

But even with these “bonus” points, I still have massive doubts and serious worries about the quality of the software and the true image recognition potential of current AIs.

AI everywhere

We were recently asked by a customer to add AI and ML to our SpaceAssuranceExpert (SAX) software. This has been happily running for years, capturing dataset allocations and extent usage in Db2 and making sure they never ran out of space or extents without getting told about it well beforehand.

I had to do a real double take and think “How on earth could AI help here?”

You are approaching 32 Linear Datasets with your segmented tablespace. You have to ALTER and REORG before you use up all the space in the 32nd. Where can AI help here?
We already have extrapolation from current usage…

Don’t get me wrong

I am not against AI and ML…

but I think we *all* need to get a bit more skeptical about what can actually be delivered now.

On the one hand, host variable contents and row counts – yes indeed, though even here outliers will cause grief. On the other hand, the complete and utter science fiction of self-driving cars in our towns and cities.

The money being spent is also amazing: in 2019, it is estimated by Syncsort that the AI budget will be $35.8 billion – 44% more than in 2018.


What do you think?


I am sure that autonomous cars will be a plague upon us – are you?

TTFN,
Roy Boxwell
Senior Architect

PS: I am looking forward to the first Adversarial Images for Db2 z/OS!!!

2019-04 ZOWE for Continuous Delivery – It’s worse than that – he’s tested Jim!

Terrible quote usage, I know… But now, with the all-new and Agile Db2 12 being picked up and used globally, I think it is time to review a few things that companies and individuals must do, or at least plan, in this “Brave New World” of ours.

To wrap up at the end, I will delve into the idea of using ZOWE to help you test!


This Db2 12 testing checklist helps you navigate the Db2 Continuous Delivery world, well supported by Zowe, the modern IBM z/OS GUI.

Table of contents – Db2 12 testing checklist
  1. Cloning
  2. Workload collection
  3. Dynamic & Static SQL storage
  4. IFCID 376 & co
  5. EXPLAIN & Access Path
  6. Execute the SQLs
  7. Db2 12 Test Review
  8. Zowe – Open Source z GUI – for CD

The Db2 12 Testing Checklist


1. Do not be afraid of the clones!

To even begin to start testing Db2 12, you’ll need to start with a complete production clone. I know a lot of people get suddenly scared about this due to audit requirements or, more commonly, space requirements. I mean, who can possibly support a complete 100% data clone of their productive system?

A real clone or a partial clone?

Of course, you do not actually need all of your data! All you must have is the Db2 Catalog and Directory; all of the user objects can be DEFINE NO style empty shells. After all, do you really need TBs of productive data to do a test run? Nope! So now we have removed two major problems (space and audit) from the list of testing problems.


2. Collect as much workload as you can

You’ll need to gather as much of your currently executed SQL as you possibly can; 13 months is a good target. “Not possible!” I hear you all scream – “Oh yes it is!” say I.


3. Dynamic & Static SQL: Store it cleverly away

You do not need the same SQL statement 12,000,000 times. Just once, but with an execution count. Store the dynamic SQLs and the retrieved Static SQLs away somewhere nice and safe with a COMPRESS YES to save room, and then you can easily get 13 months of data. Another major problem gone from the list of testing problems.


4. Gather your friends close but your enemies closer – IFCID376

Make sure you are running with IFCID 376 always on and everywhere! Test, QA and Prod. The moment it starts spitting out possible “problems”, start trying to track down the causing event (SQL) and verify that it is OK, or get a code change implemented. This only has a little to do with agile Db2 12 but should be on at every shop anyway!

But I don’t get the Dynamic SQL!

You do if you are fast and good enough! If you are permanently snapping the DSC and catching the flushed statements, then you can indeed find out the SQL that caused the IFCID. Another major problem disappears from the list of testing problems.


5. Access Paths a go go

Now it really makes a lot of sense to EXPLAIN all of the SQL that you have, before and after the Db2 12 Functional Level (FL) change, to see if just flipping to, for example, FL 504 caused access paths to head south… This is something you should all be doing now as well, of course. Always validate the access path before being forced to do a REBIND or PREPARE. This way you will not get any nasty surprises… Another major problem vanishes from the list of testing problems.
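
For the collected SQL, this boils down to an EXPLAIN per statement and a PLAN_TABLE compare, along these lines (the QUERYNO and the statement are just illustrative; the PLAN_TABLE is whichever one your process owns):

EXPLAIN PLAN SET QUERYNO = 4711 FOR
SELECT LASTNAME FROM EMP WHERE EMPNO = '000010' ;

-- Repeat after the FL change and compare the two result sets:
SELECT QUERYNO, QBLOCKNO, PLANNO
     , ACCESSTYPE, MATCHCOLS, ACCESSNAME, PREFETCH
FROM   PLAN_TABLE
WHERE  QUERYNO = 4711
ORDER BY QBLOCKNO, PLANNO ;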


6. Execute the SQLs

I have one last thing for you…

Take all of the SQL that you have just processed and transform it so it can actually execute.

  1. Logarithmically reduce the execution count and execute all of the SQL while monitoring all of the Db2 KPIs – On the Clone, of course.
  2. Then reset the world back to your starting Clone, do the FL change and then re-execute all the SQL again.
  3. Once finished, compare all of the KPIs – looking for outliers as there has only been the FL change then.

Any differences you see should be just “background noise” but there could also be some nasties in there. This will light them up so that the root problem(s) can be found and corrected way before you actually do the FL change in production.


Well done, you’ve made it!

That’s it! With all these processes set up and fully automated you can actually do repetitive pipeline tests within hours instead of weeks or months or never!


7 – Db2 12 Test Review:


What does this mean for me?

Well…

  • You need to get a fast, good, automated Cloning system in place.
  • You need to get a fast, good, automated IFCID 376 system in place.
  • You need to get a fast, good, automated SQL workload collection system in place.
  • You need to get a fast, good, automated Access Path comparison system in place.
  • You need to get a fast, good, automated SQL replay system in place.

… this means:

“Continuous testing in a continuous development world”…


You might, in fact, need to actually buy some software that does all this for you…

May I introduce you to our newest member of the SEGUS/SEG Family:

Db2 Continuous Delivery Deployment Check

is the software that does all of this, and more!

Check out our website for details of how this software really helps in:



  • Reducing the time to test

  • Enabling you to actually go forward with a good, clear conscience!

Visit our CDDC


8 – ZOWE – Open Source zUI – to the rescue?

You have probably heard of ZOWE by now: the first open source software on z/OS, from IBM, Rocket Software and CA Technologies (a Broadcom company). It was launched last year and is growing very quickly.

Introduction to Zowe:

My firm is also using it now and all future developments will also be ZOWE enabled.

In a nutshell, it allows users to interact with the Mainframe using a modern GUI.

This is based upon z/OSMF and uses workflows to actually do stuff. You define a “micro service” to do one thing, and then you can string as many of these together as you like to get something done – e.g., provisioning a Db2 system or cloning a Db2 subsystem.


The IBM web-based UI for Db2 z/OS


Zowe at SEGUS and SOFTWARE ENGINEERING?



We will be using ZOWE for two things:

1 – For the Installation and Maintenance of our software at customer sites

ZOWE enables people to do things that normally require “green screen” and we are keenly aware that green screen people are disappearing. If we want the z/OS platform to survive, it must be dragged, kicking and screaming, into the modern world.

2 – For using ZOWE as the front end for the users of our products

The CDDC product that I described in this newsletter will be built around ZOWE, thus expanding its usability in the market of tomorrow.


We are convinced that ZOWE is the way to go – are you?


TTFN,
Roy Boxwell
Senior Architect