Machine Learning for Db2 z/OS: Artificial Intelligence – Hype or Reality?

Now that the Db2 Optimizer has gone all Artificially Intelligent (AI from now on) on us, I thought it would be interesting to review where we are in this brave new world!

Keeps you out of a BIND

My firm created our BindImpactExpert (BIX) software many moons ago to compare access paths, typically before a non-SQL program change or before a REBIND after RUNSTATS etc., simply to warn you not to do the REBIND, or whatever you were about to do, if it would change the access plan to something you may well not really want. That means not just no-brainers like index access turning into a tablespace scan, but also different index usage or changes in table access order.

Apples and Oranges

So, what has that got to do with AI?
Well, we used a marketing slogan about comparing oranges and apples to make it clear how tough it really is to match SQL statements across different program releases and to compare the results.

ML and AI – The new buzzwords

Now, in Db2 12, the optimizer has gained machine learning (ML from now on) to help it study which host variable values are used the most and how many rows are usually returned by a given query.
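A trivial, made-up example shows why this matters (table, column and host variable names are invented):

SELECT ORDER_NO, CUSTOMER_NO
FROM ORDERS
WHERE STATUS = :HV-STATUS

If 99% of the rows have STATUS = 'CLOSED', then the best access path for 'OPEN' is very different from the best one for 'CLOSED' – knowing which value is usually supplied, and how many rows usually come back, is exactly the sort of thing the optimizer can now learn.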

Do you “know” your data?

When the optimizer “knows” this info, it can of course make much better “guesses” at good, repeatable access plans that help everyone! Now, the usage of AI and ML always gets me interested because of all the fun you can have with them.


There was recently a great article in my favourite science magazine “New Scientist” all about how to fool image recognition AI:

https://www.newscientist.com/article/mg24232270-200-machine-mind-hack-the-new-threat-that-could-scupper-the-ai-revolution/


Now you should see the connection back to our BindImpactExpert slogan. So much for Oranges and Apples – AI cannot even see that an Orange is an Orange…

This is not good – or, perhaps, merely “suboptimal”…

Adversarial Images and the fun you can have with them…


From 2018, there is this article:

https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial

Within this article is a great paper all about graffiti-altered road signs! Here is a direct link to the paper: https://arxiv.org/pdf/1707.08945.pdf


Fooling Deep Learning

Another great paper where Deep Learning AIs jump off the deep end:

http://www.evolvingai.org/fooling


Disappearing Humans

And finally, this joyous article with a link to a YouTube video:

https://www.theverge.com/2019/4/23/18512472/fool-ai-surveillance-adversarial-example-yolov2-person-detection


But do not wear this picture on a T-shirt down the street anytime soon!!!

Would you sit in a driverless car?

What this all boils down to is that I, for sure, will never sit in a driverless car within the next 40 years!
Why?

  • Firstly, I believe a true AI does not exist yet and will not exist for at least another 40 years.
  • Secondly, when I see an Orange I see an Orange because I “know” what an Orange looks like! As you have seen, AIs have no concept of what an Orange *is* and so can easily see a drill instead, or mistake skiers for dogs…
  • Thirdly, I am a programmer, and someone somewhere is writing and testing – I hope! – the code… Need I say more? Don’t even get me started with Agile or KANBAN.

Artificial Intelligence - Machine Learning for Db2 z/OS
Extract from © 2019 – “We need to talk, AI” – Dr. Julia Schneider and Lena Kadriye Ziyal
A Comic Essay on Artificial Intelligence

https://weneedtotalk.ai


Room full of CTOs

Have you heard the hopefully apocryphal story of a room full of CTOs at a software conference? They were asked, “Would you fly in a computer-controlled aircraft?” and all their hands went up. Then they were asked, “Would you fly if it was your software teams that had written the flight control software?” and all their hands went down!

Flying is strangely different

While we are talking about flying: my answer here would be different.

Why?
Because in the air you have no-one else apart from other highly automated devices anywhere near you! This is the one hope that I can see for the whole driverless car idea – driving on the Interstate, Motorway or Autobahn. In the air, the environment is much more rigid: no lights, no crossings, no oncoming traffic, no foot traffic, no children chasing a ball across the road, etc.

They will come…

I am sure that driverless cars will come soon, and I am equally sure that people will die because of them – however, I am also sure that lots of accidents will be avoided as well. After all, the AI will always react faster than a human.

With this comes the ethics of AI and ML, of course, which will also play a major role. If the car has to decide in a crash scenario:

  • Who gets hurt – should it protect itself and its cargo above all others?
  • What about a group of playing children or a bus full of nuns?
  • How can it decide who lives and who dies?

Looking on the bright side, at least it will not get drunk, or high, and drive! I would also seriously hope that an AI does not get road rage!

But even with these “bonus” points, I still have massive doubts and serious worries about the quality of the software and the true image recognition potential of current AIs.

AI everywhere

We were recently asked by a customer to add AI and ML to our SpaceAssuranceExpert (SAX) software. This has been happily running for years, capturing dataset allocations and extent usage in Db2 and making sure our customers never ran out of space or extents without being told about it well beforehand.

I had to do a real double take and think “How on earth could AI help here?”

You are approaching 32 Linear Datasets with your segmented tablespace. You have to ALTER and REORG before you use up all the space in the 32nd. Where can AI help here?
We already have extrapolation from current usage…
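Just to illustrate what I mean by extrapolation from current usage: the raw numbers are already sitting in the Real Time Statistics, so a query along these lines (a sketch – adapt the database name to your shop) shows you who is creeping towards a dataset or extent limit:

SELECT DBNAME, NAME, PARTITION, SPACE, EXTENTS
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME = 'MYDB'
ORDER BY EXTENTS DESC
;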

Don’t get me wrong

I am not against AI and ML…

but I think we *all* need to get a bit more skeptical about what can actually be delivered now.

On the one hand, host variable contents and row counts – yes indeed, although even here outliers will cause grief. On the other hand, the complete and utter science fiction of self-driving cars in our towns and cities.

The money being spent is also amazing: for 2019 it is estimated, by Syncsort, that the AI budget will be $35.8 billion – 44% more than in 2018.


What do you think?


I am sure that autonomous cars will be a plague upon us – are you?

TTFN,
Roy Boxwell
Senior Architect

PS: I am looking forward to the first Adversarial Images for Db2 z/OS!!!

2017-01 Db2 12 technical overview: Roy’s first features review

This Db2 12 technical overview presents, in an “easy to read” list, a review of new Db2 12 features.

Have you encountered any other Db2 12 changes you’d like to discuss?

 

Now that Db2 12 has gone GA, I can finally talk about it. So here’s another new-features “first look” at what I think is cool, great, or odd!

This is my personal list for a Db2 12 technical overview – in no particular order:

  •  Db2 12 SQL: Optimizer, Triggers, Arrays, Merge, UNICODE Columns, Temporal, SQL pagination, SQL Stability, Log, Partitions,…
  •  Data Sharing
  •  Utilities: DSN1COPY, Alternate Copy Pools, Audit, REORG, PBG reorgs, COMPRESSRATIO, RELOAD, RO tablespaces, LOAD, BACKUP and Recovery, PiT, RUNSTATS…

 

AGILE – This release of Db2 will be ”the last” release, as Db2 Development has gone all agile on us and will be doing Continuous Delivery (CD) from now on. CD promises easier, cheaper, faster and simpler Db2 maintenance and the quick realization of new functionality.

 


Db2 12 – SQL


Optimizer


MQT or Table expression columns are “trimmed” if they are not used in the outer query.

In a LEFT OUTER JOIN, if columns are not used, they can be pruned.

UNION ALL gets major work for pushing down join predicates, as well as pushing down ORDER BY and FETCH FIRST.

Outer table joins can get reordered to avoid unnecessary materializations.

User-defined functions get two improvements: merge, and the introduction of indexes on the join or correlation predicates that are passed in as parameters.

Adaptive Index is designed for multi-index and single-index list prefetch, to determine at execute time the filtering of each index. This ensures the optimal execution sequence of indexes or, perhaps, a quicker fallback to tablespace scan if no filtering index exists.


TRIGGERS

The new “advanced” triggers enable SQL PL and the use of global variables.


ARRAYS

Get a couple of nice new features, specifically the use of a global variable as an array type and the ability to use the ARRAY_AGG without forcing an ORDER BY.


GLOBAL VARIABLES

Get LOB support, and they can now be the target of a SET statement.


PureXML

XMLMODIFY can now do multiple document updates in a single invocation. Various XML performance boosts are also included, e.g. for XMLTABLE, and XSLTRANSFORM allows transformations to different formats.


JSON

When using the JSON_VAL function, the first argument no longer always has to be a BLOB. It can also come from a view, a CASE expression, a table expression, a trigger transition variable, or an SQL PL variable or parameter.


MERGE

Is now a full MERGE with the ability to use table references with multiple MATCHED clauses, including DELETE operations.
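A sketch of what is now possible (table and column names invented):

MERGE INTO ACCOUNTS A
USING (SELECT ACCOUNT_NO, BALANCE FROM NEW_ACCOUNTS) N
   ON A.ACCOUNT_NO = N.ACCOUNT_NO
WHEN MATCHED AND N.BALANCE = 0 THEN DELETE
WHEN MATCHED THEN UPDATE SET A.BALANCE = N.BALANCE
WHEN NOT MATCHED THEN INSERT (ACCOUNT_NO, BALANCE)
                      VALUES (N.ACCOUNT_NO, N.BALANCE)
;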


SQL PAGINATION

The ability of Db2 to “understand” typical paging has been greatly boosted. Typically it was always coded like:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME = 'BOXWELL' AND FORENAME > 'ROY')
        OR (SURNAME > 'BOXWELL')

This is pretty horrible for the Db2 optimizer but we *all* know what we really mean! Now in Db2 12, so does the optimizer! Sadly you must rewrite your queries a little so this example becomes:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME, FORENAME) > ('BOXWELL', 'ROY')

Also with this comes a nice little feature called OFFSET ROWS. Typically, this is for when the connection to the server is a bit shaky, and so after some paging, when the cursor is reopened, the code “knows” it can skip the first 60 rows, so the cursor changes to be:

SELECT * FROM mytable OFFSET 60 ROWS

Nice feature, but beware of polluting the Dynamic Statement Cache (DSC)! It is much better to use a parameter marker for these offsets!
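For dynamic SQL, something along these lines is much kinder to the DSC (a sketch only):

SELECT * FROM mytable
OFFSET ? ROWS
FETCH FIRST 20 ROWS ONLY

The actual offset is then supplied at execution time, and the statement text stays constant in the cache.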


UNICODE Columns

In DB2 11, we got a “fix” for UNICODE columns that was really a “crutch”. This has now been fixed with real UNICODE columns in DB2 12. You must migrate your existing data though!


Piece-wise DELETE

This is a feature I have wanted for decades! Simply add FETCH FIRST nnnn ROWS ONLY within a DELETE and then programmatically loop around until you are done. Much easier than today’s method of DECLAREing a CURSOR with an UPDATE of a dummy column, doing a DELETE WHERE CURRENT OF, and issuing a COMMIT after every 5,000 rows or so.
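A minimal sketch (table and predicate invented – the loop and the COMMIT logic naturally live in the application program):

DELETE FROM MYSCHEMA.OLD_ORDERS
WHERE ORDER_DATE < CURRENT DATE - 5 YEARS
FETCH FIRST 5000 ROWS ONLY
;

Execute it repeatedly, committing after each run, until SQLCODE +100 tells you there is nothing left to delete.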


TEMPORAL RI

You can now add RI as normal and not be forced to use a trigger or stored procedure.


TEMPORAL TABLES

Get the ability to not just be inclusive-exclusive but also inclusive-inclusive.


TEMPORAL Logical Transactions

Another new feature with temporal tables is the ability to support logical units of work for SYSTEM_TIME. These logical units of work are not determined by COMMIT or ROLLBACK, but by using built-in global variables.
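A sketch of how this looks in practice (I believe these are the built-in global variable names, but do check the documentation at your level before copying):

SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTIONS = 1 ;
SET SYSIBM.TEMPORAL_LOGICAL_TRANSACTION_TIME = CURRENT TIMESTAMP ;

The idea is that the SYSTEM_TIME values are then driven by these variables rather than by COMMIT/ROLLBACK boundaries.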


PERCENTILE functions

PERCENTILE_CONT and PERCENTILE_DISC are two new BIFs.
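For example, something like this sketch (using the Db2 12 sample employee table, if you have it installed) should return the median salary per department:

SELECT WORKDEPT
      ,PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY SALARY) AS MEDIAN_SALARY
FROM DSN8C10.EMP
GROUP BY WORKDEPT
;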


DRDA Fast Load

Is the ability to load data into Db2 for z/OS from files sitting on distributed clients.


ODBC

Gets a new INI keyword KEEPDYNAMIC and the connection attribute of SQL_ATTR_KEEP_DYNAMIC.


Obfuscated Code

Mainly of interest to vendors is the ability to hide your stored procedure, trigger, or UDF coding from prying eyes.


RLF for Static SQL

This is a big one! The Resource Limit Facility has always only been available for dynamic SQL. Now you can also use it to cap Static SQL.


TRANSFER OWNERSHIP

This is a very handy way of clearing out all the old owners from a DB2 system.
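The syntax is pleasantly simple – a sketch with invented names:

TRANSFER OWNERSHIP OF TABLE OLDOWNER.ORDERS
  TO USER NEWOWNER
  REVOKE PRIVILEGES
;

The REVOKE PRIVILEGES clause makes sure the old owner really does lose their implicitly granted privileges.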


SQL Stability

Dynamic Plan Stability is nearly the same as BIND QUERY, but the hope is that it will be easier and better to use! But beware of saving all of your dynamic SQL away!

Static Plan Stability gets a good enhancement that allows FREE on the original or previous copy. What is really good is that the current version can be in use, so there is no application outage anymore.


Insert

The new insert algorithm can be used for faster unclustered insert processing in some cases. It is only for UTS MEMBER CLUSTER spaces (and is actually the default for them).


CONCENTRATE LITERALS

Now supported at the Package Level.


FTB

Fast Index Traversal – especially good for randomly accessed indexes. If the index is unique and its key is 64 bytes or less, it is eligible. FTB usage is controlled with the new catalog table SYSIBM.SYSINDEXCONTROL and monitored with the -DISPLAY STATS(INDEXMEMORYUSAGE) command.
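Purely as an illustration – the column names and values here are from memory, so please check the SYSINDEXCONTROL description at your level before copying – an entry to disable FTB for one index could look roughly like:

INSERT INTO SYSIBM.SYSINDEXCONTROL
       (SSID, PARTITION, IXCREATOR, IXNAME, TYPE, ACTION)
VALUES ('DB2A', NULL, 'MYSCHEMA', 'MYINDEX', 'F', 'D')
;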


Log

Active log size can now go from 4GB up to 768GB! Be careful here!


In-Memory bufferpools

Are now possible by using the PGSTEAL(NONE) keyword.
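A sketch of the command (buffer pool name and size invented):

-ALTER BUFFERPOOL(BP20) VPSIZE(50000) PGSTEAL(NONE)

The idea being that objects assigned to such a pool are read in when they are opened and then never stolen.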


PARTITIONS

Finally, we get the chance to give each partition its own DSSIZE, as well as the partitioning indexes! This is great, but sadly it is only available to an existing space once you have REORGed the whole tablespace… However, once you are there, you can then have data and index parts up to 1TB in size, plus, when you do an ALTER of the DSSIZE, it does not cause an outage (as long as you make it bigger!). A side effect of this is that the RID is now seven bytes (see the REORG mapping table for other changes). You can now also add partitions in the middle of an existing PBR table.


 

 


Data Sharing


Recovery

of retained locks from a failed member can be handled automatically


LPL and GRECP recovery

LPL and GRECP recovery now auto-retries three times, after waiting three minutes.


 


Db2 12 Utilities


DSN1COPY

In DB2 11 this utility got a few sanity checks and now the REPAIR CATALOG utility can fix some of these. The REPAIR CATALOG TEST also looks for some problems caused by misuse/abuse of DSN1COPY.


ALTERNATE COPY POOLS

The usage of BACKUP SYSTEM is growing. So is the amount of storage required! The idea here is to define a set of copy pools, but only one for many DB2 subsystems. The alternate copy pool uses as many volumes as it needs and leaves the other volumes free for a different subsystem backup. This reduces the amount of space that must be allocated.


Audit

A new authorization has arrived: UNLOADAUTH, to “replace” the “Does the user have SELECT auth on the table?” check that has been performed up to now. UNLOAD is special and should be controlled through this authorization, and no longer just through SELECT.


REORG

PBG tablespaces get the best news here!

PBG REORGs can now spill over into a new partition if the row(s) do not fit back into the original partition. The classic case here is compressed data that no longer fits back, which used to force people to use a tablespace-level REORG or not use compression.

If the PBG contains LOB data and it extended to a new partition in the log apply phase, then the LOB space was left in COPY Pending… pretty horrible and that no longer happens in DB2 12.

Another PBG bonus is the deletion of “emptied” partitions after a REORG has completed.

Improved FlashCopy support – You can now decide to stop the REORG if the flash copy fails.

A new catalog column, COMPRESSRATIO, for use by utilities, records the compression savings at the record level instead of at the page level.

RELOAD phase can now be offloaded to zIIP.

RO tablespaces can now be REORGed at any SHRLEVEL.

The mapping table gets changed again due to the relative page numbering in the new PARTITION support (seven byte RID).


LOAD


PART REPLACE with dummy input against an empty (PBR) partition could be quicker.

LOAD PARALLEL support for PBG tablespaces with SHRLEVEL CHANGE.

Additional zIIP offload, like in REORG, in the RELOAD phase, including the data conversion and loading of the record into the page set.

LOAD RESUME BACKOUT YES to avoid RECP on failure. Adds a new option on LOAD RESUME SHRLEVEL NONE to allow LOAD to back out the rows already loaded upon encountering an error (such as conversion, LOB/XML, duplicate key, referential integrity violation) without leaving the page set in RECP.

PREFORMAT support for auxiliary tables. Support is extended to LOB table spaces and auxiliary indexes.

Maintain MAXASSIGNEDVAL for identity columns. LOAD now maintains the MAXASSIGNEDVAL for user-provided input and resets the value if a LOAD REPLACE is run on the table space.

LOAD REPLACE support for the COMPRESSRATIO column, which records the compression savings at the record level instead of at the page level.


BACKUP and RECOVERY

Point-in-Time support for PBGs, and the FlashCopy FLASHCOPY_PPRCP keyword. As mentioned, the default is changed to not recover unchanged objects. MODIFY RECOVERY gets two new options: DELETEDS to delete the datasets, and NOCOPYPEND to not set COPY Pending after doing the MODIFY.


PiT

Has been improved with the ability to skip unnecessary recoveries. SCOPE UPDATE only processes objects that have been updated up to the TOLOGPOINT or TORBA.


RUNSTATS

A new CLUSTERRATIO formula should better reflect dynamic prefetch. Terry Purcell has stated that it is not a huge change and does not require a RUNSTATS of all tablespaces!


FREQVAL COUNT nn

The COUNT nn is now optional and, if it is not used, RUNSTATS will work out the best number for you. This is really, really nice and I would recommend it in an instant! It has also been retrofitted to DB2 11.
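So, instead of guessing a COUNT, you can now simply code something like this sketch (database, tablespace, table and column names invented) and let RUNSTATS pick the number of frequency values:

RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(MYSCHEMA.MYTABLE)
  COLGROUP(STATUS) FREQVAL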


Autonomic Statistics with PROFILEs

I am no fan of this, as I believe it makes for a pretty nasty feedback loop where anyone’s “dumb” QMF/SAS/DSNTEP2/SPUFI query will get inserted as a PROFILE COLGROUP, and then these PROFILEs will get bigger and bigger until no-one knows which are really useful and which are just fluff! I would recommend setting the ZPARM STATFDBK_SCOPE to ALL (the default) and STATFDBK_PROFILE to NO (the default is YES). When YES is used, DB2 12 will create and/or maintain a PROFILE for you. Finally, validate that the SYSTABLES column STATS_FEEDBACK is set to “N” (the default is “Y”) for any and all tables where you do *not* want SYSSTATSFEEDBACK data, e.g. all the DSNDB01 tables, where a RUNSTATS is not even allowed! A sketch of such a check follows.
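Purely as an illustration (schema, table and database names invented – and check that catalog updates like this are acceptable at your site):

SELECT CREATOR, NAME, STATS_FEEDBACK
  FROM SYSIBM.SYSTABLES
 WHERE STATS_FEEDBACK = 'Y'
   AND DBNAME = 'MYDB'
;
UPDATE SYSIBM.SYSTABLES
   SET STATS_FEEDBACK = 'N'
 WHERE CREATOR = 'MYSCHEMA'
   AND NAME    = 'MYTABLE'
;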


DSC

DSC invalidation got switched off by default. In the past, *any* RUNSTATS flushed the cache; now you must add the keyword INVALIDATECACHE YES to get this to occur. (The exception is the REPORT NO UPDATE NONE syntax, which still just flushes the DSC.)
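So, if you really do want the old flush-the-cache behaviour, you now have to ask for it explicitly – a sketch with invented names:

RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(ALL)
  INVALIDATECACHE YES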


Inline Stats

Inline stats got a huge boost: PROFILE support, MOST/BOTH/LEAST, and LOAD PARALLEL all got inline stats.


 

Have you encountered any other Db2 12 changes you’d like to discuss?

As usual, feel free to email me with questions or comments.

TTFN

Roy Boxwell

 


2015-07 Bad Data Day

A good time to check your DB2 Catalog Statistics!

One of my favorite topics is STATISTICS and RUNSTATS.

This month I have a short newsletter involving both of them!

 

Something jumped right out…

Some time ago we were helping one of our customers to perform an Early Precheck (going from DB2 9 to 10). To do so, we requested a copy of their entire DB2 production statistics so the optimizer could work here at our labs in Dusseldorf – just like at the customer site.

 

…as we loaded up a customer’s DB2 production statistics

We loaded up the data and I noticed something that just jumped right out…

SELECT TYPE 
      ,FREQUENCYF 
      ,NUMCOLUMNS 
      ,SUBSTR(COLVALUE , 1 , 11) AS COLVALUE 
      ,COLGROUPCOLNO 
       FROM SYSIBM.SYSCOLDIST 
       WHERE TBOWNER = 'aaaaaaa' 
         AND TBNAME  = 'bbbbbb' 
         AND NAME    = 'cccccc' 
ORDER BY 1 , 3 , 2 
; 
---------+---------+---------+---------+---------+---------+---------
TYPE             FREQUENCYF   NUMCOLUMNS   COLVALUE      COLGROUPCOLNO
---------+---------+---------+---------+---------+---------+---------
F    +0.6066539624818615E-02            1 . xxxxxxxxxx .. 
F    +0.6066539624818615E-02            1 . xxxxxxxxxx 
F    +0.6287988717751475E-02            1 . xxxxxxxxxx .. 
F    +0.6287988717751475E-02            1 . xxxxxxxxxx 
F    +0.8554928116458912E-02            1 . xxxxxxxxxx .. 
F    +0.8554928116458912E-02            1 . xxxxxxxxxx 
F    +0.8578238547293950E-02            1 . xxxxxxxxxx .. 
F    +0.8578238547293950E-02            1 . xxxxxxxxxx 
F    +0.8852136109605646E-02            1 . xxxxxxxxxx 
F    +0.8852136109605646E-02            1 . xxxxxxxxxx .. 
F    +0.1229042465777374E-01            1 . xxxxxxxxxx 
F    +0.1229042465777374E-01            1 . xxxxxxxxxx .. 
F    +0.1331608361451540E-01            1 . xxxxxxxxxx .. 
F    +0.1331608361451540E-01            1 . xxxxxxxxxx 
F    +0.1342098055327308E-01            1 . xxxxxxxxxx .. 
F    +0.1342098055327308E-01            1 . xxxxxxxxxx 
F    +0.1633478440765281E-01            1 . xxxxxxxxxx 
F    +0.1633478440765281E-01            1 . xxxxxxxxxx .. 
F    +0.8439891140288000E+00            1 . xxxxxxxxxx .. 
F    +0.8439891140288000E+00            1 . xxxxxxxxxx 
DSNE610I NUMBER OF ROWS DISPLAYED IS 20 
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100 

 

There were “duplicates”!

This is just one of many examples. Above, you can see the problem – notice the data in COLGROUPCOLNO? Sometimes there is a hex value and sometimes not!

Now, believe it or not, all of this goes back to a bug in RUNSTATS in DB2 V8, which got this APAR:

PK33517:

COLGROUPCOLNO ASSOCIATED WITH SINGLE COLUMN CONTAINS A NUMERIC VALUE INSTEAD OF ZERO LENGTH FIELD ACCORDING TO SQL REF

What’s interesting here is that this APAR is marked as FIN, so the “bug” disappeared in DB2 9!

The bug was fixed in DB2 9, but the “bad” data was not automatically cleaned up

As you can see, the bug caused an erroneous value to be set in COLGROUPCOLNO for single-column frequency rows.

The bug was then fixed in DB2 9, but the “bad” data that had been inserted was not automatically cleaned up and, as I hope you all know, SYSCOLDIST data is never automatically deleted – it is only ever updated. So when the bug was fixed, the low-value or hexadecimal column number in that field was no longer EQUAL to a zero-length field, and so an insert was done. Since then, these rows have just stayed there…

 

Query to count these bad rows in SYSCOLDIST

I wrote a little query just to count how many of these bad rows existed in SYSCOLDIST:

SELECT COUNT(*) AS BAD_GUYS 
FROM SYSIBM.SYSCOLDIST 
WHERE NUMCOLUMNS                = 1 
  AND TYPE                      = 'F' 
  AND NOT LENGTH(COLGROUPCOLNO) = 0 
; 
---------+---------+---------+---------+------
    BAD_GUYS 
---------+---------+---------+---------+------
       8680 
DSNE610I NUMBER OF ROWS DISPLAYED IS 1


All of these entries should be deleted from SYSCOLDIST to help the optimizer pick the right access paths!
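The clean-up is then just the same predicate turned into a DELETE (obviously test it first, and keep an unload or image copy of SYSCOLDIST as a fallback):

DELETE
FROM SYSIBM.SYSCOLDIST
WHERE NUMCOLUMNS                = 1
  AND TYPE                      = 'F'
  AND NOT LENGTH(COLGROUPCOLNO) = 0
;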

Check your stats !

I first noticed this problem during a test of our Statistics HealthCheck product, which flagged over 1,200 critical problems in the DB2 catalog. Thinking I had broken something, I went through all of the checks and found the bad data shown above. Now is as good a time as any to check your stats *and* download our Statistics HealthCheck Freeware!!

 

As usual, any questions or comments are welcome.

TTFN Roy Boxwell

Senior Software Architect