2017-05 Db2 version 12: A little bit too agile?

 

How to handle APPLCOMPAT when it comes to Dynamic SQL?

I have been involved in testing in Db2 version 12 for a while now, and I think we need a discussion about a few features that come in with this “agile” release.

APPLCOMPAT to the rescue?

First up is the use of APPLCOMPAT. It is now available throughout the SQL, but it comes with a few problems. If you have static SQL then all is fine and dandy, as the APPLCOMPAT used is stored in the Db2 Catalog, where you can easily check which package is using which APPLCOMPAT.
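For the static case, a quick catalog query is all you need. A minimal sketch, assuming a Db2 12 catalog (the collection name is illustrative):

-- Which APPLCOMPAT is each package bound with?
SELECT COLLID, NAME, VERSION, APPLCOMPAT, LASTUSED
FROM SYSIBM.SYSPACKAGE
WHERE COLLID = 'MYCOLLECTION'    -- illustrative collection
ORDER BY APPLCOMPAT, COLLID, NAME;

The real trouble starts with Dynamic SQL.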

Dynamic SQL intro

A quick discourse about Dynamic SQL is now required…

All Dynamic SQL that runs must run “under the control” of a Package.

This means that we all have loads of empty packages that are just used to run Dynamic SQL. You all have the SYSLHxxx, SYSLNxxx style Packages and probably loads more. These Packages are used for authentication, tracing, and validation of requests from remote users.

Package is the Boss

This is all good, apart from when a new Db2 12 Function Level (from now on FL) is activated. Why? Because the “package is boss” – If the package was bound at FL500 and you activate FL501, then any SQL that executes in that package that attempts to use an FL501 statement fails. If the SQL attempts to SET CURRENT APPLCOMPAT = ‘V12R1M501’ it will, of course, also fail.

REBIND the world?

To “fix” this, you simply have to REBIND the package to the new FL level.

Sounds simple, huh?

Well, what that means is that *all* SQL that uses that package will get the ability to go to FL501 straightaway, unless it is coded with SET CURRENT APPLCOMPAT = ‘V12R1M500’, or whichever level you prefer…

We all have that coded in our Java programs, don’t we?
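For the record, the rebind and the application-side override look something like this minimal sketch (the collection and package names are illustrative; note that APPLCOMPAT is the BIND option, while the special register’s full name is CURRENT APPLICATION COMPATIBILITY):

-- Rebind one of the “empty” dynamic SQL packages up to the new FL:
REBIND PACKAGE(NULLID.SYSLH200) APPLCOMPAT(V12R1M501)

-- And in the application, to pin the old behaviour:
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M500';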

Dynamic Packages always allocated

So the problem now is: you have hundreds of Packages that you must rebind, but you dare not rebind them! Even worse, you probably *cannot* rebind them anyway, as they are permanently allocated and in use!

  • Imagine how many Dynamic SQLs are running in your shop.
  • Can you flush the DSC and rebind all of your “empty” Packages?
  • When can you plan such an outage?

COLLECTIONs can help

One work-around is to have a new COLLECTION, to which the empty packages are bound again. This works great, apart from one tiny little problem… The COLLID must then be set, or changed, in all of the CLI.INI or API places where it is currently set, or just defaulted, today! At one of my customer sites that would be over 8,000 files to update!

So now, my questions to the readers out there:


How do you plan to manage this?

How do you plan to roll-out FL levels?

If using a new collection, how many CLI.INIs etc. must be changed?


 

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

Roy Boxwell

2017-01 Db2 12 technical overview: Roy’s first features review

This Db2 12 technical overview presents, in an “easy to read” list, a review of new Db2 12 features.

Have you encountered any other Db2 12 changes you’d like to discuss?

 

Now that Db2 12 has gone GA I can finally talk about it. So here’s another new-features “first look” at what I think is cool, great, or odd!

This is my personal list for a Db2 12 technical overview – in no particular order:

  •  Db2 12 SQL: Optimizer, triggers, arrays, MERGE, UNICODE columns, temporal, SQL pagination, SQL stability, log, partitions,…
  •  Data Sharing
  •  Utilities DSN1COPY, Alternate Copy Tools, Audit, REORG, PBG reorgs, COMPRESSRATIO, RELOAD, RO tablespaces, LOAD, BACKUP and Recovery, PiT, RUNSTATS…

 

AGILE This release of Db2 will be “the last” release, as Db2 Development has gone all agile on us and will be doing Continuous Delivery (CD) from now on. CD promises easier, cheaper, faster, and simpler Db2 maintenance and the quick realization of new functionality.

 


Db2 12 – SQL


Optimizer


MQT or table expression columns are “trimmed” if they are not used in the outer query.

In a LEFT OUTER JOIN, columns that are not used can be pruned.

UNION ALL gets major work on pushing down join predicates, as well as pushing down ORDER BY and FETCH FIRST.

Outer table joins can get reordered to avoid unnecessary materializations.

User-defined functions get two improvements: merge, and the introduction of indexes on the join or correlation predicates that are passed in as parameters.

Adaptive Index is designed for multi-index and single-index list prefetch, to determine at execute time the filtering of each index. This ensures the optimal execution sequence of indexes or, perhaps, a quicker fallback to a tablespace scan if no filtering index exists.


TRIGGERS

The new “advanced” triggers enable SQL PL and the use of global variables.


ARRAYS

Get a couple of nice new features, specifically the use of a global variable with an array type, and the ability to use ARRAY_AGG without forcing an ORDER BY.
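A hedged sketch of both features, with purely illustrative names:

-- A global variable with an array type:
CREATE TYPE NAMELIST AS VARCHAR(30) ARRAY[100];
CREATE VARIABLE ROY.MYNAMES NAMELIST;

-- ARRAY_AGG no longer forces an ORDER BY:
SET ROY.MYNAMES = (SELECT ARRAY_AGG(LASTNAME) FROM ROY.EMP);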


GLOBAL VARIABLES

Get LOB support, and they can be the target of a SET.


PureXML

XMLMODIFY can do multiple document updates in a single invocation. Various XML performance boosts are also included, e.g. XMLTABLE, and XSLTRANSFORM now allows transformations to different formats.


JSON

When using the JSON_VAL function, the first argument no longer always has to be a BLOB. It can be a view, CASE expression, table expression, trigger transition variable, or SQL PL variable or parameter.


MERGE

Is now a full MERGE, with the ability to use table references and multiple MATCHED clauses, including DELETE operations.


SQL PAGINATION

The ability of Db2 to “understand” typical paging has been greatly boosted. It was typically coded like this:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME = 'BOXWELL' AND FORENAME > 'ROY')
        OR (SURNAME > 'BOXWELL')

This is pretty horrible for the Db2 optimizer, but we *all* know what we really mean! Now, in Db2 12, so does the optimizer! Sadly, you must rewrite your queries a little, so this example becomes:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME, FORENAME) > ('BOXWELL', 'ROY')

Also with this comes a nice little feature called OFFSET ROWS. Typically, this is for when the connection to the server is a bit shaky; after some paging, when the cursor is reopened, the code “knows” it can skip the first 60 rows, so the cursor changes to:

SELECT * FROM mytable OFFSET 60 ROWS

Nice feature, but beware of polluting the DSC! It is much better to use a parameter marker for these Offsets!
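A hedged sketch of the DSC-friendly variant (the table name is illustrative):

SELECT *
FROM mytable
OFFSET ? ROWS            -- offset supplied at execute time
FETCH FIRST 20 ROWS ONLY;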


UNICODE Columns

In DB2 11 we got a “fix” for UNICODE columns that was really a “crutch”. This has now been properly fixed with real UNICODE columns in Db2 12. You must migrate your existing data, though!


Piece-wise DELETE

This is a feature I have wanted for decades! Simply add FETCH FIRST nnnn ROWS ONLY to a DELETE and then programmatically loop around until you are done. Much easier than today’s method of DECLAREing a CURSOR with an UPDATE of a dummy column, issuing a DELETE WHERE CURRENT OF, and COMMITting after every 5,000 rows or so.
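A minimal sketch of the loop body, with an illustrative table and predicate; the application repeats this, committing between chunks, until no more rows qualify (SQLCODE +100):

DELETE FROM mytable
WHERE expiry_date < CURRENT DATE
FETCH FIRST 5000 ROWS ONLY;
COMMIT;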


TEMPORAL RI

You can now add RI as normal and not be forced to use a trigger or stored procedure.


TEMPORAL TABLES

Get the ability to be not just inclusive-exclusive, but also inclusive-inclusive.


TEMPORAL Logical Transactions

Another new feature of temporal tables is the ability to support logical units of work for SYSTEM_TIME. These logical units of work are not determined by COMMIT or ROLLBACK, but by a built-in global variable.
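A hedged sketch, assuming the Db2 12 built-in global variables for this feature (check your function level before relying on them):

-- Switch logical units of work on, then set the logical SYSTEM_TIME:
SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS = 1;
SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP;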


PERCENTILE functions

Two new BIFs: PERCENTILE_CONT and PERCENTILE_DISC.


DRDA Fast Load

Is the ability to load data into z/OS DB2 from files sitting on distributed clients.


ODBC

Gets a new INI keyword KEEPDYNAMIC and the connection attribute of SQL_ATTR_KEEP_DYNAMIC.


Obfuscated Code

Mainly of interest to Vendors is the ability to hide your stored procedure, TRIGGER or UDF coding from prying eyes.


RLF for Static SQL

This is a big one! The Resource Limit Facility has always only been available for dynamic SQL. Now you can also use it to cap Static SQL.


TRANSFER OWNERSHIP

This is a very handy way of clearing out all the old owners from a DB2 system.


SQL Stability

Dynamic Plan Stability is nearly the same as BIND QUERY, but the hope is that it will be easier and better to use! But beware of saving all of your dynamic SQL away!

Static Plan Stability gets a good enhancement that allows a FREE of the original or previous copy. What is really good is that the current version can be in use, so there is no application outage anymore.


Insert

The new insert algorithm can be used for faster unclustered insert processing in some cases. It is only for UTS MEMBER CLUSTER spaces (where it is actually the default).


CONCENTRATE LITERALS

Now supported at the Package Level.


FTB

Fast Index Traversal – especially good for randomly accessed indexes. If the index is unique and its key is 64 bytes or less, it is eligible. Usage is controlled with the new catalog table SYSIBM.SYSINDEXCONTROL and monitored with the -DISPLAY STATS(INDEXMEMORYUSAGE) command.


Log

The active log size can now go from 4GB up to 768GB! Be careful here!


In-Memory bufferpools

Enabled by using the PAGESTEAL(NONE) keyword.


PARTITIONS

Finally, we get the chance to give each partition its own DSSIZE, as well as the partitioning indexes! This is great, but sadly it is only available to an existing space once you have REORGed the whole tablespace… However, once you are there, you can then have data and index parts up to 1TB in size. Plus, when you do an ALTER of the DSSIZE, it does not cause an outage (as long as you make it bigger!). A side effect of this is that the RID is now seven bytes (see the REORG mapping table for other changes). You can now also add partitions in the middle of an existing PBR table.


 

 


Data Sharing


Recovery

of retained locks from a failed member can be handled automatically


LPL and GRECP recovery

LPL and GRECP recovery is automatically retried three times, after a three-minute wait.


 


Db2 12 Utilities


DSN1COPY

In DB2 11 this utility got a few sanity checks, and now the REPAIR CATALOG utility can fix some of the problems it finds. REPAIR CATALOG TEST also looks for some problems caused by misuse/abuse of DSN1COPY.


ALTERNATE COPY POOLS

The usage of BACKUP SYSTEM is growing, and so is the amount of storage required! The idea here is to define a set of copy pools, but only one for many Db2 subsystems. The alternate copy pool uses as many volumes as it needs and leaves the other volumes free for a different subsystem backup. This reduces the amount of space that must be allocated.


Audit

A new authorization arrived: UNLOADAUTH, to “replace” the “Does the user have SELECT auth on the table?” check that has run up to now. UNLOAD is special and should be controlled by this authority, no longer just by SELECT.


REORG

PBG tablespaces get the best news here!

PBG REORGs can now spill over into a new partition if the rows do not fit back into the original partition. The classic case here is compressed data that no longer fits back; this used to force people to use a TS-level REORG, or to not use compression.

If the PBG contains LOB data and it extended to a new partition in the log apply phase, the LOB space was left in COPY Pending… pretty horrible, and that no longer happens in Db2 12.

Another PBG bonus is the deletion of “emptied” partitions after a REORG has completed.

Improved FlashCopy support – You can now decide to stop the REORG if the flash copy fails.

A new catalog column, COMPRESSRATIO, for use by utilities, records the compression savings at the record level instead of at the page level.

RELOAD phase can now be offloaded to zIIP.

RO tablespaces can now be REORGed at any SHRLEVEL.

The mapping table gets changed again, due to the relative page numbering in the new PARTITION support (seven-byte RID).


LOAD


PART REPLACE with dummy input against an empty (PBR) partition could be quicker.

LOAD SHRLEVEL CHANGE gets PARALLEL support for PBGs.

Additional zIIP offload in the RELOAD phase, as in REORG, including the data conversion and the loading of the record into the page set.

LOAD RESUME BACKOUT YES avoids RECP on failure. This new option on LOAD RESUME SHRLEVEL NONE allows LOAD to back out the rows already loaded upon encountering an error (such as conversion, LOB/XML, duplicate key, or referential integrity violations) without leaving the page set in RECP.

PREFORMAT support for auxiliary tables. Support is extended to LOB table spaces and auxiliary indexes.

Maintain MAXASSIGNEDVAL for identity columns. LOAD now maintains the MAXASSIGNEDVAL for user-provided input and resets the value if a LOAD REPLACE is run on the table space.

LOAD REPLACE support for the COMPRESSRATIO column, for use by utilities, which records the compression savings at the record level instead of at the page level.


BACKUP and RECOVERY

Point-in-Time support for PBGs, and the FlashCopy FLASHCOPY_PPRCP keyword. As mentioned, the default is changed to not recover unchanged objects. MODIFY RECOVERY gets two new options: DELETEDS to delete the datasets, and NOCOPYPEND to not set COPY Pending after doing the MODIFY.


PiT

Has been improved with the ability to skip unnecessary recoveries. SCOPE UPDATE only processes objects that have been updated up to the TOLOGPOINT or TORBA.


RUNSTATS

New CLUSTERRATIO formula which should better reflect dynamic prefetch. Terry Purcell has stated that it is not a huge change and does not require a RUNSTATS of all tablespaces!


FREQVAL COUNT nn

The COUNT nn is now optional; if it is not used, RUNSTATS will work out the best number for you. This is really, really nice and I would recommend it in an instant! It has also been retrofitted to DB2 11.
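A minimal utility sketch (the database, tablespace, and index names are illustrative); with no COUNT given, RUNSTATS chooses it for you:

RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(ALL)
  INDEX(ROY.MYIX FREQVAL NUMCOLS 1)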


Autonomic Statistics with PROFILEs

I am no fan of this, as I believe it makes for a pretty nasty feedback loop where anyone’s “dumb” QMF/SAS/DSNTEP2/SPUFI query will get inserted as a PROFILE COLGROUP, and then these PROFILEs will get bigger and bigger until no one knows which are really useful and which are just fluff! I would recommend setting the ZPARM STATFDBK_SCOPE to ALL (the default) and STATFDBK_PROFILE to NO (the default is YES). When YES is used, Db2 12 will create and/or maintain a PROFILE for you. Finally, validate that the SYSTABLES column STATS_FEEDBACK is set to “N” (the default is “Y”) for any and all tables where you do *not* want SYSSTATSFEEDBACK data, e.g. all the DSNDB01 tables, where a RUNSTATS is not even allowed!


DSC

DSC invalidation got switched off by default. In the past, *any* RUNSTATS flushed the cache. Now you must add the keyword INVALIDATECACHE YES to make this occur. (The exception is the REPORT NO UPDATE NONE syntax, which still just flushes the DSC.)
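So a RUNSTATS that should still flush the DSC now looks like this sketch (names illustrative):

RUNSTATS TABLESPACE MYDB.MYTS TABLE(ALL) INVALIDATECACHE YES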


Inline Stats

Inline statistics got a huge boost: PROFILE support, MOST/BOTH/LEAST, and inline stats for LOAD PARALLEL.


 

Have you encountered any other Db2 12 changes you’d like to discuss?

As usual, feel free to email me with questions or comments.

TTFN

Roy Boxwell

 


Rotten Results from RUNSTATS Require Rescue

Do you know the basic rules to ensure access path stability when using RUNSTATS?

Time for another of my “I noticed something strange at a customer site recently” newsletters. Enjoy!

 

RUNSTATS are good aren’t they?

At this particular site, the RUNSTATS methodology was, shall we say, “sub-optimal.” They use an ancient system to decide when to RUNSTATS, and they *never* do tablespaces and indexes at the same time. Just to complicate matters even more, they never use inline statistics because “if the REORG abends, the statistics in the DB2 catalog are dead”. Now you are all probably well aware of the scale of the disaster at this site?

 

Daily fire fighting

Nearly every day, some access path somewhere goes horribly wrong… the under-manned and over-worked DBA group is tasked to find and fix it ASAP. Cures range from a quick index create or change to a really needed RUNSTATS or REORG.

 

Why do the Access Paths go “wrong”?

The real goal is to stop firefighting and to investigate the root cause. Why do so many access paths go wrong on such a regular basis? The answer is the systemic horribleness of the RUNSTATS collection. Dynamic SQL is, obviously, very, very sensitive to RUNSTATS. For one thing, the statements are kicked out of the cache! The very next time they come back, the DB2 Optimizer redrives the cost calculations and, “Hey Presto!”, you have a bad access path. Terry Purcell and Pat Bossmann have often said that about 90% of DB2 performance problems stem from bad RUNSTATS. The old adage “garbage in – garbage out” is still true!

 

Timing is everything

The timing of the RUNSTATS is critical for stable access paths.

Basic rules are:

1. Only do a RUNSTATS if you really, really need to!
   a. RUNSTATS are not cheap!
   b. The Dynamic Statement Cache gets wiped
   c. Locks on the Catalog can occur
2. Avoid doing RUNSTATS even if RTS says to run one!
   a. Lots of people use the incorrect counters to trigger a RUNSTATS. Use the correct ones for the correct object type (see the sketch after this list)
   b. Never RUNSTATS LOB spaces – completely pointless work!
   c. Even if a MASSDELETE has occurred, do you really want to “reset” the DB2 catalog statistics?
   d. VOLATILE tables must be handled with *extreme* care!
3. Choose your RUNSTATS parameters wisely!
   a. Doing a blind “RUNSTATS the world” is just as bad as running an empty RUNSTATS!
   b. HISTOGRAM should be used with caution
   c. More than a hundred COLGROUPs should start alarm bells ringing

Quite a list here, and it really only shows some “rules of thumb”. I’ll bet you all have your own?
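As an example of rule 2a, here is a hedged sketch of a Real-Time Statistics check (the 20% change threshold is purely illustrative) listing tablespaces whose statistics have never been collected, or that have changed considerably since the last RUNSTATS:

SELECT DBNAME, NAME, PARTITION, TOTALROWS, STATSLASTTIME
FROM SYSIBM.SYSTABLESPACESTATS
WHERE STATSLASTTIME IS NULL
   OR (STATSINSERTS + STATSDELETES + STATSUPDATES)
      > TOTALROWS * 0.20
ORDER BY DBNAME, NAME, PARTITION;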

Is there a way back from the abyss?

But what happens if you have thousands of partitions with terabytes of data, and the RUNSTATS was, shall we say, ill-advised or badly timed? Can you go back in time? Hands up, those who want to do a PiT recovery on the production catalog! No takers???

Yes! There is a way back from the abyss

I’ll bet you are all well ahead of me here, but the way to do this is pretty straightforward. You simply acquire our latest tool, RUNSTATS Rescue, to handle it all for you. Or you could try to reset the data in the DB2 catalog from off-line backups that you happened to have taken before the RUNSTATS that is now killing you. …You did do that, right?

Why a tool?

Apart from the fact that this tool is from us, my firm, just trying to “roll your own” can be a real nightmare. Why?

  • Because you must first find out all of the objects that were touched by the badly performing SQL.
  • Then you must get all of the DB2 Optimizer relevant data back from a point in time before the RUNSTATS executed, and/or the last REBIND(s),
  • and then you must flush the dynamic statement cache and REBIND any static SQL.

Sounds like a lot of work.

What else must you do?

You also have to be transparent, and so log what you do. You must allow for the ability to back out your changes, as perhaps you make another access path even worse. And it would be really cool if you could do “on the fly” EXPLAINs to check that the RUNSTATS really *is* the root of all that evil. Remember that ZPARMs and bufferpools also have a major influence on access paths. Even the speed of your machine! It is also a must to be able to go even further back in time – perhaps as much as a year?

Hang on – What about PLAN STABILITY?

Doesn’t plan stability save you? I hear you all cry. Well, “No” is the short answer! If your package is invalidated by a schema change (the classics are an index drop and recreate, or a VIEW change), then plan stability does not work anymore. Further, in Db2 12, Dynamic Plan Stability has been announced. Sadly, it *also* fails right here, as there is no SWITCH PREVIOUS/ORIGINAL support!

It all works together

So, for the static SQL case where the package is not invalidated, Plan Stability is good. If not: RUNSTATS Rescue to the rescue. For dynamic SQL, RUNSTATS Rescue is the answer.

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

 

Can you guess how interesting the DSN_PREDICAT_TABLE really is?

DB2 z/OS literal replacement:

Do you know the queries to list the Dynamic SQL which have literals or Parameter markers?

Here is a very handy method to (fairly) quickly see if you have a literal problem that is killing the DSC or causing access path problems.

 

I was asked by a customer the other week to help them out with a classic DB2 SQL performance problem involving dynamic SQL: should literals or parameter markers be used in dynamic SQL? Now, of course, the classic answer is: “It depends.” But this customer wanted to go a stage further and get a list of all the dynamic SQL that had literals in the text.

 

EXPLAIN to the rescue!

Naturally an EXPLAIN must be the starting point, because manually parsing the SQL text is incredibly difficult. If you have paid for the DB2 Optimizer—and the ability of EXPLAIN to output to seventeen tables—why not use it and see what you get?

Manual time

After trawling through various manuals, it quickly becomes apparent that the table of choice is the DSN_PREDICAT_TABLE, as it contains at least one row for each predicate and, after all, it is predicates that are truly interesting for the optimizer. (Naturally, DSC usage also hangs on the use, or not, of literals in the SELECT—but that is another story!)

What is in the table?

The contents are quite simple really:

SELECT  A.QUERYNO                 Identification
      , A.QBLOCKNO                Identification
      , A.APPLNAME                Identification
      , A.PROGNAME                Identification
      , A.PREDNO                  Identification
      , A.TYPE                    Type of op AND, OR, EQUAL etc.
      , A.LEFT_HAND_SIDE          Column/Value/colexp etc.
      , A.LEFT_HAND_PNO           Child predicate number
      , A.LHS_TABNO               Table no.
      , A.LHS_QBNO                Query Block no.
      , A.RIGHT_HAND_SIDE         As left hand
      , A.RIGHT_HAND_PNO          As left hand
      , A.RHS_TABNO               As left hand
      , A.RHS_QBNO                As left hand
      , A.FILTER_FACTOR           Estimated FF
      , A.BOOLEAN_TERM            Whole WHERE is Boolean?
      , A.SEARCHARG               DM or RDS?
      , A.JOIN                    Simple join or not?
      , A.AFTER_JOIN              Predicate after/during join?
      , A.ADDED_PRED              T transitive closure, B bubble,
                                  C correlation, J join, K LIKE,
                                  L local, P push down,
                                  R page range, S simplification
      , A.REDUNDANT_PRED          Is the predicate redundant?
      , A.DIRECT_ACCESS           ROWID possible?
      , A.KEYFIELD                Is the predicate in indexes?
      , A.EXPLAIN_TIME            Identification
      , A.CATEGORY                IBM internal use
      , A.CATEGORY_B              IBM internal use
      , A.TEXT                    First 2000 bytes of text
      , A.PRED_ENCODE             IBM internal use
      , A.PRED_CCSID              IBM internal use
      , A.PRED_MCCSID             IBM internal use
      , A.MARKER                  Host vars, parameter markers,
                                  special registers
      , A.PARENT_PNO              If a root predicate then zero
      , A.NEGATION                Is NOT used?
      , A.LITERALS                Literals separated by colons
      , A.CLAUSE                  HAVING, ON, WHERE or SELECT
      , A.GROUP_MEMBER            Identification
      , A.ORIGIN                  Origin of predicate: blank, C, R or U
      , A.UNCERTAINTY             Level of uncertainty
      , A.SECTNOI                 Identification
      , A.COLLID                  Identification
      , A.VERSION                 Identification
--V11 , A.EXPANSION_REASON        Archive or temporal table indicator
FROM BOXWELL.DSN_PREDICAT_TABLE A

 

The first four columns are used for joining to your existing EXPLAIN tables (PLAN_TABLE etc.).
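Here is a hedged sketch of that join, using the BOXWELL creator from the listing above (DSN_PREDICAT_TABLE.EXPLAIN_TIME is normally matched against PLAN_TABLE.BIND_TIME):

SELECT P.QUERYNO, P.QBLOCKNO, P.PREDNO, P.TYPE, P.LITERALS
FROM BOXWELL.DSN_PREDICAT_TABLE P
   , BOXWELL.PLAN_TABLE         T
WHERE P.QUERYNO      = T.QUERYNO
  AND P.QBLOCKNO     = T.QBLOCKNO
  AND P.APPLNAME     = T.APPLNAME
  AND P.PROGNAME     = T.PROGNAME
  AND P.EXPLAIN_TIME = T.BIND_TIME
ORDER BY P.QUERYNO, P.QBLOCKNO, P.PREDNO;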

Columns of interest

Now, what interested me straightaway were the columns MARKER and LITERALS. Looking in the Managing Performance documentation, you see these columns:

MARKER CHAR(1) NOT NULL WITH DEFAULT

Whether this predicate includes host variables, parameter markers, or special Registers.

LITERALS VARCHAR(128) NOT NULL

This column indicates the literal value or literal values separated by colon symbols.

 

So now it looks quite easy: just do a SELECT where LITERALS is non-blank. Of course, that fails miserably…

First attempt

Here’s an SQL that shows what comes out (see Query 1 and Output 1 in the cut-and-paste section at the end):

This looks like lots of literals, but it is actually just one WHERE predicate and one ON predicate being broken down. So I thought “Aha! The PARENT_PNO must be the problem.” Sure enough, when you add AND PARENT_PNO = 0 to the SQL, it returns just one row (Output 2 below):

Now all of the ON data has vanished, so you must add an OR into the query:

Second attempt

Giving us now the output shown in Output 3 below:

The COLEXP row SQL text looks like:

SUBSTR(DIGITS("A"."CORR_TYPE"),1,10)

So we can also exclude these rows from the SELECT, which gives the result shown in Output 4 below.

Still duplicates?

This *still* looks like duplicates, but these are now predicates pushed down to the ON clause in this context. The final fix is to make sure that LHS_TABNO and RHS_TABNO are both equal to zero. Now we have the “literal finder” SQL (Query 2 in the cut-and-paste section below):

Host with no colon?

Which gives (excluding the first columns for clarity!) the rows shown in Output 5 at the end:

See the HV2 and then the list of HVs? Now this is not documented at all! A comma-separated list of host variables… super…

All we can now do, at least in standard SQL, is split the output into two distinct blocks, one where MARKER is ‘N’ so no host variables or special registers are involved, and one where they are! It should be easy to remove the HVs and then see if the LITERALS column is empty—or not—but that is a bit more than a simple newsletter can possibly do!

And Finally

So now the final two queries:

SELECT QUERYNO , QBLOCKNO , PREDNO                 
      ,LITERALS
      ,TEXT                                        
FROM BOXWELL.DSN_PREDICAT_TABLE                    
WHERE NOT LITERALS = ''                            
  AND ((CLAUSE = 'WHERE'                           
    AND PARENT_PNO = 0)                            
    OR CLAUSE = 'ON')                              
  AND NOT RIGHT_HAND_SIDE = 'COLEXP'               
  AND LHS_TABNO = 0                                
  AND RHS_TABNO = 0                                
  AND MARKER    = 'N'                              
ORDER BY QUERYNO , QBLOCKNO , PREDNO               
;                                                  
SELECT QUERYNO , QBLOCKNO , PREDNO                 
      ,LITERALS
      ,TEXT                                        
FROM BOXWELL.DSN_PREDICAT_TABLE                    
WHERE NOT LITERALS = ''                            
  AND ((CLAUSE = 'WHERE'                           
    AND PARENT_PNO = 0)                            
    OR CLAUSE = 'ON')                              
  AND NOT RIGHT_HAND_SIDE = 'COLEXP'               
  AND LHS_TABNO = 0                                
  AND RHS_TABNO = 0                                
  AND MARKER    = 'Y'                              
ORDER BY QUERYNO , QBLOCKNO , PREDNO               
;

This gives you a very handy method to (fairly) quickly see if you have a literal problem that is killing the DSC or causing access path problems at your shop.

I hope you liked this month’s topic.

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

 

Once again, for your cut and paste:

Query 1

SELECT CLAUSE, PARENT_PNO, SUBSTR(RIGHT_HAND_SIDE , 1 , 8)
      ,LHS_TABNO, RHS_TABNO                               
      ,MARKER, LITERALS                             
FROM BOXWELL.DSN_PREDICAT_TABLE                           
WHERE NOT LITERALS = ''    

Output 1
CLAUSE  PARENT_PNO  RIGHT_HAND_SIDE  LHS_TABNO  RHS_TABNO  MARKER  LITERALS
------  ----------  ---------------  ---------  ---------  ------  --------
WHERE            0                           0          0  N       0,1,'W'
WHERE            1  VALUE                    2          0  N       0,1
WHERE            1  VALUE                    3          0  N       'W'
ON               1  COLEXP                   2          0  N       1,10
ON               1  VALUE                    2          0  N       'F'
ON               1  VALUE                    2          0  N       'en_US'

Output 2
CLAUSE  PARENT_PNO  RIGHT_HAND_SIDE  LHS_TABNO  RHS_TABNO  MARKER  LITERALS
------  ----------  ---------------  ---------  ---------  ------  --------
WHERE            0                           0          0  N       0,1,'W'

Output 3
CLAUSE  PARENT_PNO  RIGHT_HAND_SIDE  LHS_TABNO  RHS_TABNO  MARKER  LITERALS
------  ----------  ---------------  ---------  ---------  ------  --------
WHERE            0                           0          0  N       'X',' ',220,219,1,10,'F'
ON               1  COLEXP                   2          0  N       1,10
ON               1  VALUE                    2          0  N       'F'
ON               1  VALUE                    2          0  N       'en_US'
WHERE            0  VALUE                    3          0  N       'X'

Output 4
CLAUSE  PARENT_PNO  RIGHT_HAND_SIDE  LHS_TABNO  RHS_TABNO  MARKER  LITERALS
------  ----------  ---------------  ---------  ---------  ------  --------
WHERE            0                           0          0  N       'X',' ',220,219,1,10,'F'
ON               1  VALUE                    2          0  N       'F'
ON               1  VALUE                    2          0  N       'en_US'
WHERE            0  VALUE                    3          0  N       'X'


Query 2

SELECT QUERYNO , QBLOCKNO , PREDNO                 
      ,MARKER, SUBSTR(LITERALS, 1 , 32) AS LITERALS
      ,TEXT                                        
FROM BOXWELL.DSN_PREDICAT_TABLE                    
WHERE NOT LITERALS = ''                            
  AND ((CLAUSE = 'WHERE'                           
    AND PARENT_PNO = 0)                            
    OR CLAUSE = 'ON')                              
  AND NOT RIGHT_HAND_SIDE = 'COLEXP'               
  AND LHS_TABNO = 0                                
  AND RHS_TABNO = 0                                
ORDER BY QUERYNO , QBLOCKNO , PREDNO               
;

Output 5
MARKER  LITERALS                     TEXT
------  ---------------------------  ----------------------------------------
N       'X','X','T'                  (((((((((((("B"."WLX_TIMESTAMP"=(SELECT
N       'X','X','T'                  (((((((((((("B"."WLX_TIMESTAMP"=(SELECT
Y       HV2,'S'                      ("IQA0610"."IQATW001"."WLX_TIMESTAMP"=(E
N       'X'                          ("Z"."WLX_TIMESTAMP"="A"."WLX_TIMESTAMP"
N       'X'                          ("Z"."WLX_TIMESTAMP"="A"."WLX_TIMESTAMP"
N       'X','X','I','  '             ((((((((((("B"."WLX_TIMESTAMP"=(SELECT M
N       'X','X','I','  '             ((((((((((("B"."WLX_TIMESTAMP"=(SELECT M
N       'X'                          ("Z"."WLX_TIMESTAMP"="A"."WLX_TIMESTAMP"
N       'X'                          ("Z"."WLX_TIMESTAMP"="A"."WLX_TIMESTAMP"
Y       HV1,HV2,HV3,HV4,HV5,'F',HV6  ((((("SYSIBM"."SYSCOPY"."DBNAME"=(EXPR)

BIF HealthCheck – Licensed Freeware for DB2 10

Detect static and dynamic SQL and match to the relevant collection, packages,…

BIF HealthCheck overview (Built-in Function Checker for DB2 z/OS)

BIF HealthCheck reports the following BIF incompatibilities in DB2 10

  • Execution of the DB2 9 for z/OS version of SYSIBM.CHAR(DECIMAL-EXPR)
  • Execution of the DB2 9 for z/OS version of SYSIBM.VARCHAR(DECIMAL-EXPR), CAST (DECIMAL AS VARCHAR), OR CAST (DECIMAL AS CHAR)

  • Use of an unsupported character string representation of a TIMESTAMP
  • Use of a USER-DEFINED FUNCTION (UDF) that has the unqualified name ARRAY_EXISTS
  • Use of a USER-DEFINED FUNCTION (UDF) that has the unqualified name CUBE
  • Use of a USER-DEFINED FUNCTION (UDF) that has the unqualified name ROLLUP

  • Execution of a non-Java client that called a Stored Procedure (SP) that is on the DB2 for z/OS Data Server, while subsystem parameter DDF_COMPATIBILITY was set to SP_PARMS_NJV (the Data Server returned output argument values whose data types matched the data types of the call statement arguments).

  • Execution of a SQL statement by a non-Java client application that included an unsupported conversion from a string type to a numeric type, while the DB2 z/OS Data Server environment was one of the following (the Data Server issues SQLCODE -301):
    • In version 10 Conversion Mode (CM)
    • In version 10 New-Function Mode (NFM) where implicit casting was disabled because subsystem parameter DDF_COMPATIBILITY was set to SP_PARMS_NJV, or DISABLE_IMPCAST_NJV

 

BIF incompatibilities in DB2 11

 


 

 

BIF HealthCheck – Licensed Freeware for DB2 11

Detect static and dynamic SQL and match to the relevant collection, packages,…

BIF HealthCheck overview (Built-in Function Checker for DB2 z/OS)

BIF HealthCheck reports the following BIF incompatibilities in DB2 11

  • Execution of the DB2 9 for z/OS version of SYSIBM.CHAR(DECIMAL-EXPR)
  • Execution of the DB2 9 for z/OS version of SYSIBM.VARCHAR(DECIMAL-EXPR), CAST (DECIMAL AS VARCHAR), OR CAST (DECIMAL AS CHAR)

  • Use of an unsupported character string representation of a TIMESTAMP
  • Use of the DB2 10 for z/OS default SQL path instead of the V11 path, which has more implicit Schemas

  • Execution of a non-Java client that called a Stored Procedure (SP) that is on the DB2 for z/OS Data Server, while subsystem parameter DDF_COMPATIBILITY was set to SP_PARMS_NJV (the Data Server returned output argument values whose data types matched the data types of the call statement arguments).
  • Execution of an insert statement that inserts into an XML column without the XMLDOCUMENT function, which generates SQLCODE -20345 on a DB2 release prior to V11, but does not generate an error starting in V11

  • V10 XPATH evaluation behavior was in effect, which resulted in an error (e.g. a data type conversion error occurred for a predicate that would otherwise be evaluated to false.). Starting in V11, such errors might be suppressed

  • Execution of a SQL statement by a non-Java or Java client application that included an unsupported conversion from a string type to a numeric type, or from a numeric type to a string type, while the DB2 z/OS Data Server environment was one of the following (the Data Server issues SQLCODE -301):

o The Data Server was in version 11 New-Function Mode (NFM)
o APPLICATION COMPATIBILITY was set to V10R1
o Implicit casting was disabled because subsystem parameter DDF_COMPATIBILITY was set to SP_PARMS_NJV, or DISABLE_IMPCAST_NJV

BIF incompatibilities in DB2 10

 


GIVE and TAKE Program

Give and Take 2020

Information on Give and Take 4, 5, 6, 7

Previous Give & Take

We have “GIVEn” various free-of-charge Use Cases from our SQL WorkloadExpert for Db2 z/OS, like:

1 Index Maintenance Costs

2 EXPLAIN Suppression

3 BIF Usage

Limited free-of-charge Db2 Application

This Program started in Europe during our 30th anniversary and was such a success that it is now being extended for the benefit of North American Db2 z/OS sites.

SQL WorkloadExpert for Db2 z/OS (WLX) contains several “Use Cases”. We provided three of them, free of charge, for one month to different sites. In return, we received their results, which we would like to share with you now.

We TAKE the anonymized results for research and will communicate with the local User Groups for discussions.

Inspiring experiences

Customer Statements

Read the Customer Comments across the Industry:

  • Health Care
  • Insurance
  • Banking
  • Car Manufacturing
First results from DB2 z/OS sites

1 Index Maintenance Costs – Nearly all the data we got back showed a positive result for created Indexes…

2 EXPLAIN Suppression – About 10% of SQLs are actually “left over”…

3 BIF Usage – When migrating to a new DB2 version, the BIFs are not always compatible and an extreme amount of data is produced.

 

The difficulty of near-time analysis to track down BIFs within dynamic SQL has been solved with the BIF Usage Use Case…


Program 3 – BIF Usage – has now started


2014-01: Complementing IBM’s ACCESS PLAN Stability, Part 1: Dynamic SQL

 

In collaboration with Expertise4IT

With SEG’s global RUNSTATS save package: why does this good idea sometimes fail?

 

• The story
• What really happened?
• How can the problem be solved?
• Rescue Steps
• An example (including JCL panels)
• Next month

This month I want to tell you a story. The story is true, but the names have been changed to protect the innocent.

 

 

The story

A large company regularly schedules the phasing of its production processes on Thursday night. One night last year everything was proceeding normally, but… on Friday morning…

…the phones began to ring because users were complaining about long, or no, response times (timeouts) in a rather business-critical application.

The problem quickly escalated from the usual ‘turn it off and back on’ and ‘did you change anything?’ to senior managers asking why things no longer worked.

At this point the DBA group had not yet been involved, since the first thought was that a “bad” package had accidentally gone into production overnight.

  • The production control group therefore backed out the packages involved, but this had no effect….
  • The delivery delays got worse. Almost all of the WebSphere servers were then ‘stopped’, at least to let some work get through the overloaded system. Panic level having been reached, the DBAs and the JAVA team were called in.
  • They quickly found the culprit: a very complex dynamic SQL statement that had ‘behaved’ correctly and efficiently until a certain point on Thursday night, and was now performing disastrously.
  • The DBAs reorganized the largest of the tables involved, in the hope that things would improve, but unfortunately there was no appreciable result….

    Finally, the DBA proposed creating an index, with the related RUNSTATS, in production.

The access path changed substantially and performance became good again.
The WebSphere servers were restarted and things gradually returned to normal.
This whole process took 2 days!
The company had to re-phase its logistics chain and its Just-In-Time deliveries, so the outage obviously had some serious repercussions…. And that is the whole story.

 

What really happened?

The DBA, investigating the possible causes of the outage, discovered that what had really happened was that a RUNSTATS of just one small table had run on Thursday night at an inopportune moment, sending the access path “pear-shaped” (aka belly up) for the whole of Friday and half of Saturday…

How can the problem be solved quickly and easily?

The DBA thought about how the problem, which could well recur in the future, might be solved easily and quickly. And here my part of the story begins…

This company uses its own software and has a licence for the Enterprise Statistics Distribution (ESD) component of Bind ImpactExpert, which extracts, and optionally converts, all the catalog data the optimizer needs to do its job. Normally, customers use this tool to copy production statistics ‘sandbox’-style, to see whether an APAR or a Db2 migration might cause unexpected problems. For this, they use the Early-PreCheck component of our Bind ImpactExpert tool for static and dynamic SQL. There is also another scenario, called DSC (Dynamic Statement Cache) Protection, which would do what this customer wants, but it also does much more and, obviously, costs more!

And so the idea of a ‘pocket tool’ was born, which we have called RUNSTATS Rescue. I hear you asking “Why is it called a pocket tool?”; the answer is: “Because it costs pocket money!” (aka pin money in the USA, or an “allowance” if you prefer). These tools are really very cheap!

Now, before you stop reading this newsletter and start complaining that this is pure marketing, please keep in mind that what I describe here could also have been written by you; you just need to give me enough credit for sharing the idea…

 

RUNSTATS Rescue

The idea is to use EXPLAIN in any way, shape or form – whether in SPUFI or directly from within any monitor – simply to EXPLAIN the critical SQL, and to keep in mind which PLAN_TABLE owner you are using, as well as the QUERYNO you have just used.
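From SPUFI, for example, this can be as simple as the following minimal sketch (QUERYNO 4711 is purely illustrative, and any statement can stand in for the critical SQL):

EXPLAIN ALL SET QUERYNO = 4711 FOR
SELECT * FROM SYSIBM.SYSDUMMY1;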

Using these two inputs, RUNSTATS Rescue analyzes the EXPLAIN output to build a list of extract and update control cards for our ESD tool, covering all the tables used and their related indexes – even those not used, of course! Finally, a stream of RUNSTATS DSC statements is generated for all the tablespaces involved in the query, to be certain that the next time the critical statement runs, it uses the correct statistics.

Now, obviously, the question arises: “How do I know which statistics to use as the rescue statistics?”

The answer is: “The ones that were there before a REORG with inline statistics, or a RUNSTATS, was executed”. This is the key point to keep in mind: you simply have to run the ESD extract before any maintenance is performed on the database.
Many sites have days, or weekends, when the maintenance procedures run, and it is no problem to extract the data and save it, for example in a GENGROUP, making it easy to find a date and time when the statistics were ‘good’ and then, with the RUNSTATS Rescue job, restore the required statistics very quickly. This gives the DBA group more time to understand what really happened and to make the right corrections in relative comfort.

10 Rescue Steps

  • 1. Select the new RUNSTATS Rescue scenario
  • 2. Generate the JCL
  • 3. Optionally copy it to a GENGROUP dataset
  • 4. Insert the EXPLAIN TABLE-CREATOR and QUERYNO
  • 5. Our catalog browser is launched automatically
  • 6. Drill down to the index level
  • 7. Ask for a “new” file name for the extracted “rescue” statistics
  • 8. Perform the RUNSTATS Rescue extraction
  • 9. Reset the statistics and execute the RUNSTATS
  • 10. The “Rescued” Statistics

 

Below is a guided example of what actually happens:

 

At the bottom of the screen you can see the new RUNSTATS Rescue scenario. Selecting it opens a small window with three steps. The first step only has to be run once and simply needs to be inserted into an existing production job. It is recommended to make it the first job of the normal Db2 database maintenance chain.

1 – Select the new RUNSTATS Rescue scenario

 

2 – Generate some JCL

The first option generates some JCL, as shown below:

 

3 – Optionally copy to a GENGROUP dataset

Example of the final copy step to a GENGROUP:

 

4 – Insert the EXPLAIN TABLE-CREATOR and QUERYNO

Select the second step to prepare the RUNSTATS Rescue.

5 – The catalog scan is launched automatically

Pressing Enter launches the catalog browser, so you can view the objects used by the SQL…

 

6 – Drill down to the Index level

…with the related indexes, of course!

 

7 – Ask for a “new” file name for the extracted “rescue” statistics

On exiting with PF3, the tool asks for the dsname of the original production statistics, as extracted by the job in the first step, and for a “new” file name for the extracted rescue statistics:

 

 

8 – Perform the RUNSTATS Rescue extraction

The JCL that performs the RUNSTATS Rescue extraction, including the optional GDG save step, looks like this:

 

9 – Reset the statistics and execute the RUNSTATS

Finally, the third step is selected; it resets the statistics and executes the RUNSTATS that flushes the DSC.

 

10 – The “Rescued” Statistics

Now, the next time the statement appears, it will use the “rescued” statistics and reproduce the old access path.

 

Next month

Next month I want to expand on this topic, with the possibility of extending the solution to static SQL.

The month after that, I will go into the details of the ‘DSC Protection’ scenario that I mentioned earlier. It is not a pocket tool, obviously, but it is very interesting!

As always, questions or comments are most welcome,

TTFN,

Roy Boxwell
Senior Software Architect