2023-07 Directory Details

This month, I am going to tell you a true story from our labs in Düsseldorf, where I learnt a few things about Db2 and how the Db2 Directory works…

What is it?

The Db2 Directory is the “shadow” catalog, if you like. It is basically the machine-readable stuff that echoes what is in some of the Db2 Catalog tables that we all know and love and use nearly every day!

Whatya got?

Well, the Db2 Directory is the DSNDB01 database and, up until Db2 10, it was completely hidden from view when looking at it with SQL. The VSAM datasets were there, but you could not select from them – pretty useless! My company, Software Engineering GmbH, actually wrote an assembler program to read the SYSLGRNX table and output the interesting data therein so that it could be used for image copy decisions etc. But then IBM finally decided to open up the Db2 Directory to our prying eyes! (Sad footnote: they still output the LGRDBID and LGRPSID as CHAR(2) fields!!! Completely useless for joining, of course – see my older blogs all about SYSLGRNX and converting these to a proper SMALLINT.)
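My older blogs show a proper CHAR(2) to SMALLINT conversion; as a quick-and-dirty alternative (a sketch only, assuming the documented SYSLGRNX and SYSTABLESPACE column names on your Db2 level) you can side-step the conversion completely by comparing HEX strings, since HEX() of a SMALLINT and HEX() of the matching CHAR(2) bit field both give four hex digits:

```sql
-- Sketch: join SYSLGRNX to SYSTABLESPACE without a CHAR(2) -> SMALLINT
-- conversion by comparing the HEX representations instead.
SELECT TS.DBNAME
      ,TS.NAME
      ,LG.LGRUCDT                         -- date of the log range entry
      ,LG.LGRUCTM                         -- time of the log range entry
FROM SYSIBM.SYSLGRNX      LG
    ,SYSIBM.SYSTABLESPACE TS
WHERE HEX(LG.LGRDBID) = HEX(TS.DBID)     -- four hex digits on each side
  AND HEX(LG.LGRPSID) = HEX(TS.PSID)
FETCH FIRST 100 ROWS ONLY
;
```

It is not as clean as a real SMALLINT join, but it needs no helper program at all.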

Tables, Tables, Tables

You actually do not have that much data available for use here!

In the overview table: “U” means a unique index, “Y” or “–” means duplicates are allowed, and “AUX” is the standard LOB auxiliary index. Bold field names are in DESC order.

This table gives you an overview of what you get, and it also shows the two tablespaces that were, for me at least, of special interest!

Where’s the Beef?

On my test system, the tablespaces SYSSPUXA and SYSSPUXB were both getting larger and larger. Now, the task is to understand why. For that, you need to know which of the above tables is “linked” to which other ones, and then which link to the Db2 Catalog tables. Time for another table!
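If you want to check the growth on your own system first, the real-time statistics are the quickest way (a sketch; it assumes your Db2 level externalizes RTS for the Directory objects, and that SPACE is reported in KB):

```sql
-- Check the allocated space of the SPTR and its two LOB table spaces
-- using the real-time statistics (SPACE is in KB).
SELECT DBNAME
      ,NAME
      ,SPACE
      ,EXTENTS
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME = 'DSNDB01'
  AND NAME IN ('SPT01', 'SYSSPUXA', 'SYSSPUXB')
;
```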

So?

What you can see from this is that the DSNDB01.SPT01 (which we know is the SYSIBM.SPTR) is linked to a whole bunch of package-related tables, and this is all documented – so far, so good! What got me interested were the LOB tablespaces SYSSPUXA and SYSSPUXB. In my system they were taking up 13,929 and 6,357 tracks respectively. That might not sound like much to a real shop out there, but for me, with only 118,000 rows in the SPTR, it piqued my interest!

What is in it?

The SYSSPUXA (table SYSSPTSEC_DATA) contains the machine-readable access paths generated by BIND/REBIND, with versioning etc., so that being quite big was, sort of, OK. The SYSSPUXB (table SYSSPTSEC_EXPL) contains *only* the EXPLAIN-related information for the access path. This was added a few Db2 releases ago so that you could extract the current access path without doing a REBIND EXPLAIN(YES), as that would show the access path “right now” as opposed to what it was, and still is, from, say, five years ago. These two access paths might well be completely different!

How many?

The SPTR had 6,630 tracks.

The SYSSPTSEC_DATA had 13,929 tracks.

The SYSSPTSEC_EXPL had 6,357 tracks.

This is a total of 1,795 Cylinders for 118,553 rows of data – for me, that’s a huge amount.
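The cylinder figure is just the three track counts added together and divided by the 15 tracks per cylinder of a 3390 disk, rounded up. You can check the arithmetic on SYSDUMMY1:

```sql
-- 6,630 + 13,929 + 6,357 = 26,916 tracks; a 3390 has 15 tracks per
-- cylinder, so rounding up: (26916 + 14) / 15 = 1,795 cylinders.
SELECT (6630 + 13929 + 6357 + 14) / 15 AS CYLINDERS
FROM SYSIBM.SYSDUMMY1
;
```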

What is “in” there?

I quickly saw that there were *lots* of versions of packages and some very odd “ghosts” lurking in the data. Here’s a little query to give you a glimpse:

SELECT SUBSTR(SP.SPTCOLID, 1, 18)  AS COLLID    
     , SUBSTR(SP.SPTNAME, 1, 8)    AS NAME      
     , SUBSTR(SP.SPTVER, 1 , 26)   AS VERSION   
     , HEX(SP.SPTRESV)             AS RESERVED  
FROM SYSIBM.SPTR SP                             
WHERE 1 = 1                                     
--  AND NOT SP.SPTRESV  = X'0000'               
  AND NOT SP.SPTCOLID LIKE 'DSN%'               
  AND NOT SP.SPTCOLID LIKE 'SYS%'               
LIMIT 100                                       
;                                                

Now, the weird thing is that the SPTRESV (“RESERVED”) column actually contains the Plan Management number. So you “normally” have up to three entries: zero for Original, one for Previous and two for Current. What I saw was a large number of fours!
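To see how many entries you have per Plan Management number, a simple GROUP BY on the SPTRESV column does the trick (a sketch built from the query above):

```sql
-- Count package entries per Plan Management copy number:
-- 0 = Original, 1 = Previous, 2 = Current, 4 = the mysterious "fours".
SELECT HEX(SP.SPTRESV) AS COPYID
      ,COUNT(*)        AS ENTRIES
FROM SYSIBM.SPTR SP
GROUP BY SP.SPTRESV
ORDER BY 1
;
```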

Set to Stun!

Where did they all come from? A quick bit of looking around revealed that it was Package Phase-In! Db2 has to keep the old and the new executables somewhere… So then I started trying to work out how to get rid of any old rubbish I had lying around.

FREE This!

First up was a simple FREE generator for old versions of programs, deliberately excluding a few of our own packages that require versions for cross-system communication.

WITH NEWEST_PACKAGES (COLLID                      
                     ,NAME                        
                     ,CONTOKEN ) AS               
 (SELECT SP.SPTCOLID                              
        ,SP.SPTNAME                               
        ,MAX(SP.SPTCONID)                         
  FROM SYSIBM.SPTR       SP                       
  WHERE NOT SP.SPTCOLID LIKE 'DSN%'               
    AND NOT SP.SPTCOLID LIKE 'SYS%'               
    AND NOT SP.SPTNAME IN ('IQADBACP' , 'IQAXPLN')
  GROUP BY SP.SPTCOLID                            
          ,SP.SPTNAME                             
 )                                                
SELECT DISTINCT 'FREE PACKAGE(' CONCAT SQ.SPTCOLID
                CONCAT '.'      CONCAT SQ.SPTNAME 
                CONCAT '.('     CONCAT SQ.SPTVER  
                CONCAT '))'                       
FROM NEWEST_PACKAGES   NP                         
    ,SYSIBM.SPTR       SQ                         
    ,SYSIBM.SYSPACKAGE PK                         
WHERE NP.COLLID   = SQ.SPTCOLID                   
  AND NP.NAME     = SQ.SPTNAME                    
  AND NP.CONTOKEN > SQ.SPTCONID                   
  AND SQ.SPTCOLID = PK.COLLID                     
  AND SQ.SPTNAME  = PK.NAME                       
  AND PK.CONTOKEN > SQ.SPTCONID                   
  AND PK.LASTUSED < CURRENT DATE - 180 DAYS       
--LIMIT 100                                       
;                                                

Note that this excludes all IBM packages and my two “SEGUS suspects”, and pulls out all duplicates that have also not been executed for 180 days. Running it and then executing the generated FREEs got rid of a fair few, but those “four” entries were all still there!

FREE What?

Then I found a nice new, well for me anyway, use of the FREE PACKAGE command. You have to be brave, you have to trust the documentation – and you can trust me, because I have run it multiple times now! The syntax is:

FREE PACKAGE(*.*.(*)) PLANMGMTSCOPE(PHASEOUT)

Do *not* forget that last part!!! Or make sure your resume is up to date!

This then gets rid of all the junk lying around! Was I finished? Of course not… Once it had all been deleted, I then had to run a REORG of all these table spaces, and so now we come to part two of the blog…

REORGing the Directory

Firstly, if you are on Db2 13, you must REORG the SPT01 and SYSLGRNX anyway to get the new 256 GB DSSIZE activated. Secondly, Db2 is clever, so for certain table spaces it will actually check the LOG to make sure you have taken a COPY:

“Before you run REORG on a catalog or directory table space, you must take an image copy. For the DSNDB06.SYSTSCPY catalog table space and the DSNDB01.DBD01 and DSNDB01.SYSDBDXA directory table spaces, REORG scans logs to verify that an image copy is available. If the scan of the logs does not find an image copy, Db2 requests archive logs.”

Db2 for z/OS Utility Guide and Reference “Before running REORG TABLESPACE”

Pretty clear there!

We are good to go, as we only have the SPT01 and its LOBs. Here is an example of the utility syntax for doing the deed:

REORG      TABLESPACE DSNDB01.SPT01       
           SHRLEVEL REFERENCE AUX YES     
           SORTDEVT SYSALLDA SORTNUM 3    
           COPYDDN (SYSC1001)                   

Pretty simple, as the AUX YES takes care of the LOBs. Remember to COPY all the objects afterwards as well!

COPY       TABLESPACE DSNDB01.SPT01 
           COPYDDN (SYSC1001)       
           FULL YES                 
           SHRLEVEL REFERENCE       

COPY       TABLESPACE DSNDB01.SYSSPUXA
           COPYDDN (SYSC1001)         
           FULL YES                   
           SHRLEVEL REFERENCE         

COPY       TABLESPACE DSNDB01.SYSSPUXB
           COPYDDN (SYSC1001)         
           FULL YES                   
           SHRLEVEL REFERENCE         

How many after?

Once these were all done, I looked back at the track usage:

The SPTR had 4,485 tracks (was 6,630)

The SYSSPTSEC_DATA had 7,575 tracks (was 13,929)

The SYSSPTSEC_EXPL had 4,635 tracks (was 6,357)

This is a total of 1,113 Cylinders (was 1,795) for 90,858 (was 118,553) rows of data.

This is a very nice space saving of around 38%, which was well worth it for me!

Directory Tips & Tricks

Finally, a mix-n-match of all things Directory and Catalog.

Remember to always REORG the Directory and the Catalog table spaces in tandem.

Remember to always do a COPY before you do any reorgs!

FASTSWITCH YES is ignored for both Catalog and Directory reorgs.

Any more Limits?

Yep, you cannot REORG the DSNDB01.SYSUTILX at all. The only hope here is an IDCAMS DELETE and DEFINE – dangerous!

LOG YES is required if SHRLEVEL NONE is specified for the catalog LOB table spaces.

If SHRLEVEL REFERENCE is specified, LOG NO must be specified.

The SORTDEVT and SORTNUM options are ignored for certain catalog and directory table spaces (the Utility Guide lists them).

The COPYDDN and RECOVERYDDN options are valid for those catalog and directory tables if SHRLEVEL REFERENCE is also specified.

Inline statistics with REORG TABLESPACE are not allowed on certain table spaces (again, see the Utility Guide for the list).

IBM now packs a complete Catalog and Directory REORG job with the product to make it nice and easy to schedule and run! Look at member <your.db2.hlq>.SDSNSAMP(DSNTIJCV) for details.

To REORG or not to REORG?

This is the eternal question! For Db2 13 you must do at least the two table space REORGs mentioned previously, but the hard-and-fast rule for the complete Db2 Catalog and Directory is: about once per year is normally sufficient. If you notice BIND/PREPARE times starting to go horribly wrong, then a REORG is probably worth it, and it may also be time to check the amount of COLGROUP statistics you have!

The recommendation from IBM is, “before a Catalog Migration or once every couple of years, and do more REORG INDEX than REORG TS.”

I am now keeping an eagle eye on my Db2 Directory LOBs!

If you have any Directory/Catalog Hints & Tips I would love to hear from you.

TTFN

Roy Boxwell

2023-06: IDUG 2023 North America – Roy review

Hi all! Now safely back from Philly and, as ever, I learned a lot there! I am already looking forward to Prague and the EMEA IDUG this October.

As always, I have listed out all of the Db2 for z/OS presentations I could find and added a few review sentences to them. Any links herein require that you have a userid and password at IDUG with the required rights to access the data. This means you must have been either an attendee or a virtual attendee with full access – just being a member of IDUG is *not* enough!

SP01 | Db2 for z/OS Update: The Latest From The Lab

Speakers: Akiko Hoshikawa, Haakon Roberts

A01 | Data Fabric in 60 mins for Db2 for z/OS DBAs!

Speakers: Cuneyt Goksu

Data Fabric is one of the trending topics in the IT industry as part of Digital Transformation. This session summarizes the architectural, use case and product level discussions in the context of the IBM Z and Db2 for z/OS ecosystem.

This contains some great info about secure ports, certificates and the use of system profiles for remote access.

A02 | Who does that? Using SQL Data Insights to spot unusual behavior

Speakers: Mike Behne

This session explores SQL Data Insights (SQLDI) capabilities, reporting on early efforts to apply SQLDI to learn more from available data.

Basically, saying what I found as well: Training is expensive and you gotta “Know your data” before you really start!

A03 | Db2 for z/OS Utilities – The Very Latest News

Speakers: Haakon Roberts

The Db2 Utilities team continues to deliver significant new function, availability and performance improvements after the GA of Db2 13. This session will cover the very latest developments and also look forward to some of what you can expect to see later in 2023.

As usual, a great presentation with all the newest stuff you just need to know! This included a free 30-minute power outage, which didn’t fluster Haakon at all! However, it caused Chris Muncan to motor through his next presentation!!

A04 | Recovering to another subsystem with DSN1COPY

Speakers: Chris Muncan

Ever have a major production problem that you didn’t know about, find out a month later, and need to restore the data – but not on top of production? We did, and here’s how we did it!

A world-record-speed speech about DSN1COPY – quicker than SSDs!!! Due to a small mistake in the timing app, Chris thought he had even less time than he already had… He told me the session would be re-recorded at a saner pace! Even so, there were glorious nuggets, like the requirement to issue the ALTER xxx-xxx ADDVOLUMES( * , * ) which catches people out all the time these days!

A05 | Data Modernization: Embrace the Mesh!

Speakers: Greg DeBo

Data Mesh is the new hot term for Data, I’m gonna talk about how to integrate mainframe data into your Data Mesh.

This was all about moving or not moving data around. Especially interesting was all the IMS stuff! Still nailed to its perch!

A06 | Db2 for z/OS Performance Latest Updates

Speakers: Akiko Hoshikawa

The latest performance and capacity planning updates from Db2 for z/OS (both Db2 12 and Db2 13) as well as IBM zSystems updates that you could take advantages of. The session will explain the key items with the reference of instrumentation updates to evaluate the features.

Akiko with the normal list of great and good things. Heads up for the CFLEVEL 25 level size change: you had *all* better check your CF sizes!!!

A07 | Use profiles to monitor and control Db2 application context

Speakers: Maryela Weihrauch

With the increased popularity of distributed applications, related Db2 system and application definitions were defined in distributed application servers. Sub-optimal definitions could impact overall Db2 health. Db2 for z/OS provides capabilities to create profiles in Db2 profile tables to monitor and control various aspects of specific Db2 system and application contexts. Db2 13 extends the profile tables with new attributes for local and remote applications. Maryela will review existing profile capabilities and introduce the Db2 13 extensions, as well as discuss use case examples.

Reviewed all about system profiles and some use cases to stop DoS attacks – Good stuff!

A08 | Best Practices for Applying Db2 for z/OS Software Maintenance

Speakers: Robert Tilkes

Discuss best practices for Db2 for z/OS maintenance strategy, SMP/e environment configuration, patch management and deployment.

All about applying fixes, RSUs and HIPERs to Db2 for z/OS, and the sad fact that we are *all* behind here! Included some nice sample JCL at the end to check your own SMP/E system.

A10 | Db2 13 for z/OS install and migration using z/OSMF workflows

Speakers: Sueli Almeida

In this session we will demonstrate how you can exploit IBM z/OSMF workflows to install and/or migrate a Db2 subsystem or members of a data sharing group. We will illustrate how the workflow artifacts are created. Next, we will show how these artifacts are assembled into a workflow. Finally, we will show how the execution of the workflow can be tracked or monitored.

A very brave idea, in my opinion, of automating Db2 upgrades and deployment using z/OSMF.

A11 | Migrating to Db2 13

Speakers: John Lyle

Presenting the Db2 13 migration process. Note: This was a top 10 presentation at IDUG EMEA. I’ve improved the content and added some new detail.

All about getting there, including the new -DIS GROUP DETAIL output to help you see “where you are”.

A12 | Copy up – Buttercup

Speakers: Chad Reiber

A presentation about Db2 z/OS image copies: how and why image copies are important, and what should be taken, and when.

Absolutely everything you were afraid to know, but asked anyway, about COPY!

A13 | Db2 Attachment Facilities – Advanced Topics

Speakers: Emil Kotrc

Db2 Attachment Facilities are the interfaces between the application programs and Db2 for z/OS. In this presentation we will go through the basics of the most common attachment facilities, we will show how and when to use them, and we will also cover some advanced topics such as security implications, thread reuse, connection switching.

A deep, deep dive into Db2 attachment details! It even contains assembler examples…

A14 | Db2 Logging Basics & Exploitation Beyond Recovery

Speakers: Steen Rasmussen

This is a “beginner” session describing why the LOG is one of the crucial components of Db2 and cover some log basics of what the log contains as well as get an understanding of how the log can be exploited for other tasks.

Steen introduces the Db2 log and how you can use it (or better yet buy a log analysis tool and let it do the work!)

A15 | Database Administration Enhancements of Db2 13 for z/OS

Speakers: Robert Catterall

This session will focus on the Db2 13 enhancements that pertain to database administration: online conversion of PBG table spaces to PBR, online removal of active log data sets, profile table support for local-to-Db2 applications, instrumentation improvements, and more.

Robert ran through all of the goodies in Db2 13 that could impact DBAs, including the new insert logic for PBGs that had always annoyed me in the past!

A16 | Who’s afraid of DDF

Speakers: Toine Michielse

For those who are being confronted with DDF workload, this presentation will discuss pitfalls, new resources to monitor and tune and application changes to be considered.

Highlights of using, and getting better performance from, DDF to connect.

B01 | Db2 Analytics Accelerator – Newest Enhancements

Speakers: Eberhard Hechler

This presentation is discussing the newest functions and features of the Db2 Analytics Accelerator, such as enhancements of the IBM Integrated Synchronization (InSync) engine (e.g., ALTER TABLE ROTATE PARTITION support), query acceleration improvements (e.g., ability to add Db2 unique constraints on the Accelerator), collecting actual explain information, Db2 Analytics Accelerator on IBM zSystems enhancements (e.g., improved I/O performance and reduced CPU consumption), reducing overall trace collection time, new and enhanced stored procedures, and administration enhancements. The presentation ends with an outlook to future enhancements.

This was all about Accelerators, specifically 7.5.8, 7.5.9 & 7.5.10 in this case!

B02 | Back to basics – Real life battle experience

Speakers: Joe Huang

I am currently training three junior z/OS DBAs in our company on various Db2 topics. I would like to combine six of the training materials into one 60-minute presentation for beginner or semi-seasoned DBAs.

Excellent real world experiences including work files, and all the ZPARMs involved as well as unique rowid pitfalls.

B03 | DBA’s Epic Journey

Speakers: Leila Hosseini

There are lots of performance tuning hints that we are aware of. Maybe we have read the IBM manuals or the IDUG resources that we can leverage as possible solutions to our issues. As DBAs, we all know lots of performance and tuning tips and tricks, but when unexplainable performance degradation happens, what is a DBA’s reaction? It is the art of the DBA to put together all of his/her knowledge and observations, analyze the situation, and resolve the issue.

A DBA needs to find the answers to the following questions:

1. What is the main cause of the issue?

2. How could it be resolved?

Real world SQL tuning experiences and problem solving. With four example super bad SQLs and how to fix them up!

B05 | Db2 13 Early Customer Experiences

Speakers: Anthony Ciabattoni

The presentation will concentrate on Db2 13 early customer experiences discussing what went well, what they liked and also the things they wish they knew and could have avoided.

A quick run though the pre-reqs of Db2 13 and then some of the highlights including correct amounts of copying to avoid ransomware style attacks.

B06 | Db2 Utilities in Practice

Speakers: Hendrik Mynhardt

This session will cover not just what is new, but also how to apply best practices for all your Db2 utilities in a real environment.

Lots of data about the new SYSUTILITIES table including reminding us to clean it up! Followed by a run-through of the big utilities and how they have been enhanced recently.

B07 | Db2 12+/13 for z/OS Database Design and Application Performance: Features and Usage

Speakers: Susan Lawson

With every new release and function level, (12+/13) of Db2 we look to see what features will allow us to improve the capabilities and performance of our existing applications as well as the availability of our data. We also have to plan to utilize new features in our development efforts.

Great to see Susan again as she is one of my favourite presenters. A ton of info in the presentation. Well worth a read afterwards!

B08 | Back-To-Basics: Table Space and Index Fundamentals

Speakers: Louise Comeaux

A review of the various types of tablespaces and the indexes that are defined to them.

A very nice run through all current DB, Tablespace, table, and Index types including PBR RPN, DPSIs etc. Note that LOB and XML were not covered as this was, after all, a back-to-basics presentation.

B10 | When Microseconds Matter

Speakers: Thomas Baumann

Imagine a well-tuned Db2 z/OS SQL workload where the most frequently executed queries use 50 microseconds CPU per query or even less. Is it worth further tuning? And what are the tuning techniques to be applied? This presentation starts at exactly that point and – without touching the SQL query text – demonstrates how another 10% of CPU resources were squeezed out of that workload. And we will also discuss at what point we can truly decide that a query runs at maximum speed and no further tuning is possible.

All about squeezing more juice out of that lemon! Great example methodology and example SQLs make it easy to start doing this all!

B11 | Db2 for z/OS 101: Buffer Pools and Group Buffer Pools

Speakers: Mark Rader

Are you new to Db2 for z/OS? Or new to Db2 for z/OS data sharing? Want a refresher on buffer pools? Buffer pools, and group buffer pools for data sharing, are key to supporting Db2 workloads. Come learn the basics for these important resources.

Great four-way presentation with really good visualization. Played to a packed house and was very well received. More presentations like this please!

B12 | Db2 for z/OS System Profiles, the New ZPARMs and More!

Speakers: Paul Bartak

A practical look at implementing Db2 system profile rules to customize your experience with Db2 for z/OS. I started presenting on Db2 System Profiles with Db2 10. This presentation will cover updates through Db2 13.

Another presentation talking all about system profiling. It is one of the most under-used fantastic features of Db2 so I fully understand this!

B13 | Large tables – Obstacles and Strategies to win!

Speakers: Scott Walker

Data is ever growing and challenges with large tables make our jobs more difficult. Success with these monsters is imperative. Everything matters with table design/maintenance. Perhaps you are stuck with an old design that is not prepared for influx of data or you’re building a new table and want to build something scalable and low maintenance. I will give you a few items to consider as well as pain points I’ve lived through. Additionally, this session will be interactive – audience participation will benefit the conversation.

An intro and exposé of all things HUGE in Db2! Tips and tricks included!

B14 | Monitoring your distributed workload for Db2 for z/OS

Speakers: Jørn Thyssen

Distributed workload is becoming more prevalent, and for many customers it is a significant part of the overall Db2 for z/OS workload. In this session we will explore the various options available with Db2 for z/OS to understand and monitor your distributed workload. Our focus is on the system side to help you protect your Db2 system.

This introduced all the new and varied ways of seeing where remote SQL is coming from, and a nice set of ways to check whether your drivers are up to date.

B15 | Things About Db2 for zOS I Wish I’d Remember When….

Speakers: Michael Cotignola

This presentation will cover some of the more overlooked or forgotten options, commands, syntax that could prove to be invaluable in managing your Db2 environment. Intended target audience is for people new to Db2 for zOS, application developers who may benefit from knowing more about the internals of Db2, or people like myself who just can’t remember a command, syntax or option and need a refresher.

A nice stroll down the, sometimes unfamiliar, road of things we forget or use very rarely.

B16 | Db2 13 for z/OS Application Management Enhancements

Speakers: Tammie Dang

Application development and management are always important topics with Db2 for z/OS due to the complex process and volume of applications exposed on the platform. Certain legacy applications are difficult to change, and all changes to applications typically require following a strategic process from development to test, before deploying to the production environment. In a cloud environment that hosts multi-tenancy, applications typically differ in their characteristics. These applications can each access different database objects and have their own concurrency requirements and tolerances.

You can now use Db2 13 to set application-granularity lock controls such as timeout interval and deadlock resolution priority to match the individual application’s need. And you can do this without the cost of changing the application’s source code. Db2 13 also introduces a mechanism to optimize for the success of DDL break-in without needing to duplicate versions of the application packages and without impacting non-dependent applications.

This presentation went through the new and changed options including system profiles again… You get the idea that people are trying to tell you something here??? Plus DDL break-in explained!

C13 | Db2 for z/OS availability, efficiency and application stability enhancements

Speakers: Frances Villafuerte

Db2 13 provides many new features to accommodate application workload growth and simplify processes for DBAs. This session gives you an overview of key features.

This started with a nice list of removed ZPARMs and which values they now have in perpetuity, then a list of changed ZPARM values so you can easily verify that you are not “living in the past”. My personal favourite is EDM_SKELETON_POOL, up from 51200 to 81920. I know of sites which still have 10240! Check out my blog “Small ZPARM – Big effect!” https://www.segus.com/2017-04-db2-zparm-edmpool-size/ for details of what this ZPARM actually controls and enables; it is not really that clear from the documentation at all!

This was then followed up with all the problems with PBG spaces, including the horrible inserted empty partition problem, and how some of these problems are solved in Db2 13. John Campbell stated before he retired: “MAXPARTITIONS 1 DSSIZE 64GB no other setting is good!”. Then it continued into the PBG -> PBR migration scenarios, as IBM, at least long term, wants us off PBGs completely!

E01 | Optimization 101. What makes Db2 Choose Certain Access Paths

Speakers: Tony Andrews

This is a great ‘Back to Basics’ presentation (and then some) for understanding the logic of the optimizer. The Db2 optimizer is a cost based optimizer estimating the cost of many possible access paths for an SQL query, ultimately choosing what it thinks to be the least expensive access path. But what determines the choices, and what makes one cheaper than another? Come learn the basics of the Db2 optimizer, and what you can do to help and influence its logic to the most efficient paths. Come learn the basics of performance tuning queries, programs, and applications.

Optimizer 101 introducing you to everything the optimizer uses to make its decision on access paths.

E02 | Db2 13 Application Development Topics

Speakers: Emil Kotrc

What is new in Db2 13 for application developers? Let’s explore these topics in this session. We will cover SQL related enhancements as well as performance improvements that application developers can benefit from.

This was a real potpourri of Db2 13 stuff! CD, Current Lock timeout, Profile tables (again!), SQL DI and APARs!

E03 | Taming Dynamic SQL with Db2 V12

Speakers: Steve Loesch

This presentation will contain an overview of the techniques to capture and observe performance of Dynamic SQL in Db2 V12 implemented at Navy Federal Credit Union. Navy Federal Credit Union has many mission critical applications using Dynamic SQL. Examples of Db2 features such as Dynamic SQL statement stabilization, creation of Dynamic SQL history, a cross reference of SYSDYNQRY tables and the DSN_STATEMENT_CACHE table, and a SQL statement that will show that a table is used in packages and/or SYSDYNQRY statements.

Was all about getting, explaining and tuning your dynamic SQL by looking at the use of stabilized queries.

E04 | Db2 Java High Performance Best Practices Volume X

Speakers: Dave Beulke

This presentation details Db2 and Java performance best practices and discusses how to optimize your processing to run 100x faster. Db2 design, partitioning strategies, coding best practices, java class frameworks and debugging/tracing practices will be presented that can immediately eliminate your bottlenecks and enhance performance. After this discussion you will be able to dramatically improve your Db2 access, Java runtimes, minimize CPU and quickly access/process billions of rows with the best performance possible.

I just loved the 13th and 14th slides… I am also not a fan of those “things”… If you want to tune Java on the mainframe, *this* is your best starting point!

E05 | Do your Db2z application developers like Python? Sure let’s build z applications using it!

Speakers: Sowmya Kameswaran

Python is one of the top programming languages in the world. It’s easy to learn (for DBAs, similar to REXX) and has a robust set of libraries that enable delivering business value, specifically surrounding data, rapidly and easily. This presentation will use Python on z/OS with the python-ibm_db library, along with a few visualization libraries, to provide some fun demonstrations that also show the power and ease of use of Python.

The first 10 slides were just setting the ground for working with Python on z! Not really difficult, but different from what we “hostees” are used to, methinks! However, it contained some really cool stuff, as well as calling Visual Explain!

E06 | Is it worth it to migrate CICS cobol app to Windows .net?

Speakers: Mateusz Ksiazek

The presentation will show the real production approach for migration from CICS Cobol application to Windows .net.

After seeing all the graphics, you really have to wonder: was it all worth it? Naturally, it all looks more modern, but you still have to pay the ferryman at the end of the day!

E08 | Db2 for z/OS and LUW Big Buttons for Application Performance

Speakers: Daniel L Luksetich

This is not a deep dive into application performance. It is the simple but huge right things to do to have a dramatic positive impact on application performance!

Dan gives his great hints and tips about general SQL performance here: not doing the call at all, coding a JOIN, the use of ARRAY types, etc. etc.

E10 | Db2 Developer’s Top Tens

Speakers: Tony Andrews

There are many areas of Db2 application development that developers, testers, business analysts, etc, should know about. This presentation lays out my top 10 for the areas that are so important in performance and developing a good application. Areas of importance being SQL tuning tips, programming tips, and of course performance and the Db2 optimizer.

This was another run through of everything optimizer and then lists of things to check and do or not do for SQL Tuning. Quite excellent!

E12 | Code Your Db2 Applications for Performance from the Start!

Speakers: Craig Mullins

Most developers do not seriously consider Db2 performance implications until it is too late. But there are best practices that can be used to build performance into your programs right from the very beginning.

Craig gives his great explanations as to how to code for performance from the start.

E13 | Db2 SQL Performance for Application Developers

Speakers: David Morris

Developers can learn Db2 SQL performance tricks and best practices. Working with DBAs, developers will write better-performing SQL, know what an EXPLAIN plan is, and optimize SQL queries.

A nice run through the do’s and don’ts of SQL for Application developers.

F01 | What Db2 can do; I can do too – first steps towards machine learning

Speaker: Toine Michielse

In this presentation I give an overview of software and ideas that can be used to get yourself started in exploiting both data and software in your day to day life.

This got all “snaky” when Anaconda and Python re-appeared. But it was really all about first steps in Machine Learning!

F02 | Database Trends 2023 – Things Are Changing and You Better Keep Up!

Speakers: Craig Mullins

What are the predominant trends in 2023 that impact data professionals and their usage of DBMSes?

With over 350 different DBMSs out there this was a great review of the state-of-the-art! Slide 15 is my fave!

F03 | Encrypting Db2 Connections with TLS – what a Db2 DBA should know

Speakers: Christoph Theisen

The presentation shows what is needed from a Db2 z/OS and Db2 LUW perspective to set up TLS encryption successfully. The main focus is on the Db2 Client side but we also cover the most important server-side topics.

This was all about TLS, certificates, keyrings and key stores! Fascinating stuff! Slide 53, point one, was my fave because it’s so true… Please note that Christoph’s page numbering is a little bit confusing: Slide 53 is really the PDF slide number, *not* the “slide number” you see in his presentation…

F04 | Db2 SQL – go beyond the usual – My current TOP 40 SQL tips, tricks, and opinions

Speakers: Brian Laube

Modern SQL is a powerful tool on its own for the DBA and application developer. Keeping on top of modern SQL techniques and functionality lets us move beyond the usual comfortable SQL. The presentation will go over my top SQL tricks and tips for producing useful output and answer your questions about your data and environment. In addition, I will provide a list of definitions and opinions that are important to agree upon when discussing Db2 and SQL. Some are obvious and some are not. But it is good to agree on terms.

A great show of all the “modern” SQL you can use these days but mostly don’t… At the end was Brian’s wish list for enhancements – check ’em out and go vote!!!

F06 | Declared Global Temporary Tables (DGTT) user stories

Speakers: Kurt Struyf

Declared Global Temporary Tables (DGTTs) have been around for some time; this presentation will focus on the best use cases from customers. We will address the different kinds of temporary tables in Db2, together with their advantages and disadvantages. This presentation will show some performance use cases where DGTTs brought a big performance benefit to customers.

Great interaction here with IBM Development, where various members of the audience shouted out a wish list… Gotta see if IBM Development took enough notes!!! Also interesting was the idea of setting WFDBSEP to YES to help manage these beasts. I am not a fan of workfile separation, but I can see that there is a use case here. Another good take-away was the easy ability to rewrite an IN list with 4,000 items (I have also seen CRAZY in-lists coming out of generated code…) as a DGTT and even get index access! Naturally, EXPLAIN is really hard as DGTTs do not really “exist”, but the presentation explains how you can do it!
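A minimal sketch of that IN-list-to-DGTT rewrite, in case you want to try it yourself. The table and column names here are mine, purely for illustration, not from the session:

```sql
-- Hypothetical example: replace a giant IN list with a DGTT.
DECLARE GLOBAL TEMPORARY TABLE SESSION.IN_LIST
       (KEY_VALUE CHAR(8) NOT NULL)
       ON COMMIT PRESERVE ROWS;

-- An index on the DGTT is what can get you index access:
CREATE INDEX SESSION.IN_LIST_IX
       ON SESSION.IN_LIST (KEY_VALUE);

-- Insert the (formerly in-lined) 4,000 values, ideally with
-- multi-row INSERT from the application:
INSERT INTO SESSION.IN_LIST VALUES ('VALUE001'), ('VALUE002');

-- Then join instead of "WHERE ORDER_KEY IN (...4,000 literals...)":
SELECT A.*
FROM   MY.ORDERS A
     , SESSION.IN_LIST L
WHERE  A.ORDER_KEY = L.KEY_VALUE;
```

As the session pointed out, EXPLAINing this is tricky because SESSION.IN_LIST only exists at run time, so test it from within the application context.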

F07 | IBM Champions: Building technical eminence through advocacy

Speakers: Libby Ingrassia

Learn why and how to build technical eminence through advocacy – and how that can lead you to the IBM Champions program.

As a fellow champion I can only fully agree here! You get more than you put in but you must put something in!!!

F10 | Eliminate Risk when Migrating from IMS to Db2

Speakers: Bill Bostridge

Businesses looking to modernize their IBM System Z platform by moving from IMS to Db2 need a rapid and efficient migration solution. They need to eliminate the traditional risks and costs associated with rewriting applications to support Db2.

Showed a very nice way of migrating your data to Db2 but *not* changing your current IMS applications. Nice indeed!

F11 | Db2 for z/OS – Keeping your remote access secure

Speakers: Gayathiri Chandran

This session will discuss establishing secure remote connections to Db2 for z/OS.

A great overview of AT-TLS and MFA when accessing from remote. Security is always worth reading up on! And also, you get yet another review of system Profiling.

F12 | COBOL abound

Speakers: Erik Weyler

If we put a little effort into creating easy to use tools, our developers can be so much more productive. But how are the tools created and where do they run? In this inspirational talk, examples of tools, techniques and environments will be discussed. We will take a journey from ISPF, to PC and VS Code, to zCX. We will learn a little about GnuCOBOL, Zowe, and how data in a relational database, regarding the use of a hierarchical database, can be visualized in a graph database. In a container. On the mainframe.

Crazy what you can do with COBOL these days! I learned a lot from this session and intend to use it the moment I get the chance! SonarQube is the starting point…

F15 | Playing (with) FETCH

Speakers: Chris Crone

This session will delve into the many ways to get data out of Db2. These vary from SELECT INTO, to SELECT FROM FINAL TABLE, to FETCH FOR :N ROWS, to RESULT SETS. Db2 has evolved over the years and there are many ways to get data from Db2 – this session will be both a primer and a review.

Chris telling us way more than I ever wanted to learn about FETCH in SQL!!!

PSP11 | It’s AI Jim, but Not as We Know It!

Speakers: Roy Boxwell

Ahhh! My good self waffling on about how much AI is not really actually Intelligent!

TTFN,

Roy Boxwell

2023-05 ZPARMs never stop changing part II

This month, I want to go through some of the absolutely most important ZPARMs that control how your Db2 systems behave in a very significant manner. All of the following ZPARMs have a performance impact of some sort. We are always trying to squeeze the last drop of performance out of our Db2 sub-systems, aren’t we?

Db2 13 and Some Db2 12 Updates Ahead!

Since this Newsletter topic first came out, in March 2022, out of the ten ZPARMs listed *five* have got new defaults! I have highlighted all these changed defaults. I have also added three new “Usual Suspects” to the list of ZPARMs that must be checked…

Starting with the Easy Stuff…

CACHEDYN. YES/NO, default YES. Should always be set to YES – unless you do not care about saving dynamic SQL performance. Back a few decades ago, the recommendation was to have this set to NO as default! Hard to believe that these days, where most shops have 80% – 90% dynamic SQL during the day!

Now we Get to the Numerics!

OUTBUFF. 400 – 400,000, default 102,400. This is *extremely* important and you really should set it to the highest possible value you can afford in real memory! As a minimum, it should be 102,400 KB (100MB). This is the buffer that Db2 uses to write log records before they are “really” written to disk. The larger the buffer, the greater the chance that, in case of a ROLLBACK, the data required is in the buffer and not on disk.

Skeletons in the Closet?

EDM_SKELETON_POOL. 5,120 – 4,194,304, default 81,920. This is one of my personal favorites (I wrote a newsletter solely about this a few years ago). I personally recommend at least 150,000 KB, and even more if you can back it with real memory. Just like OUTBUFF, pour your memory in here, but keep an eye on paging! If Db2 starts to page, you are in serious trouble! Raising this can really help with keeping your DSC under control.

DBDs are Getting Bigger…

EDMDBDC. 5,000 – 4,194,304, default 40,960. The DBD Cache is getting more and more important as, due to UTS usage, the size of DBDs is increasing all the time.

DSC is Always Too Small!

EDMSTMTC. 5,000 – 4,194,304, default 113,386. The EDM Statement Cache (really the Dynamic Statement Cache) is where Db2 keeps a copy of the prepared statements that have been executed. So, when the exact same SQL statement with the exact same set of flags and qualifiers is executed, Db2 can avoid the full prepare and just re-execute the statement. This is basically a no-brainer and should be set to at least 122,880 KB. Even going right up towards the 4 GB maximum is perfectly OK. Remember: a read from here is *much* faster than a full prepare, so you get a very quick ROI and great value for the memory invested! Keep raising the value until your DSC flushing rates drop down to just hundreds per hour, if you can! Remember to cross-check with the EDM_SKELETON_POOL ZPARM as well. It always takes two to tango…
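If you want to see what is actually living in your DSC before and after raising EDMSTMTC, EXPLAIN STMTCACHE snapshots the cache into the DSN_STATEMENT_CACHE_TABLE. A sketch, assuming you have created that explain table under your SQLID (shown here as MYID):

```sql
-- Snapshot the entire Dynamic Statement Cache:
EXPLAIN STMTCACHE ALL;

-- Then, for example, look at the heavy hitters:
SELECT STMT_ID
     , STAT_EXEC                        -- number of executions
     , STAT_CPU                         -- accumulated CPU time
     , SUBSTR(STMT_TEXT, 1, 60) AS STMT_TEXT
FROM   MYID.DSN_STATEMENT_CACHE_TABLE
ORDER BY STAT_EXEC DESC
FETCH FIRST 20 ROWS ONLY;
```

Statements that keep disappearing between snapshots are your flushing victims, which is exactly the symptom that more EDMSTMTC (and EDM_SKELETON_POOL) can cure.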

How Many SQLs?

MAXKEEPD. 0 – 204,800, default 5,000. The Max Kept Dyn Stmts parameter is how many prepared SQLs to keep past commit or rollback. It should be set to a minimum of 8,000 or so. Raising this might well cause a large memory demand in the ssidDBM1 address space so care must be taken.

RIDs Keep Getting Longer…

MAXRBLK. 0, 128 – 2,000,000, default 1,000,000. RID POOL SIZE is the maximum amount of memory to be available for RID Block entries. It should be at least 1,000,000 and, if you can, push it to the maximum of 2,000,000. Unless you want to switch off all RID Block access plans, in which case you set it to zero – Obviously not really recommended!

Sorts Always Need More Space

MAXSORT_IN_MEMORY. 1000 to SRTPOOL. Default 2000. The maximum in-memory sort size is the largest available space to complete ORDER BY, GROUP BY or both SQL Clauses. Remember that this is per thread, so you must have enough memory for lots of these in parallel. The number should be between 1,000 and 2,000, but whatever value you choose, it must be less than or equal to the SRTPOOL size.

Sparse or Pair-wise Access?

MXDTCACH. 0 – 512, default 20. Max data caching is the maximum size of the sparse index or pair-wise join data cache in megabytes. If you do not use sparse index, pair-wise join, or you are not a data warehouse shop, then you can leave this at its default. Otherwise, set it to be 41 MB or higher. If it is a data warehouse subsystem, then you could set this as high as 512 MB. (This ZPARM replaced the short-lived SJMXPOOL, by the way.)

Sort Node Expansion

SRTPOOL. 240 – 128,000, default 20,000. SORT POOL SIZE is the available memory that is needed for the sort pool. IFCID 96 can really help you size this parameter. Remember that the number of sort nodes leapt up from 32,000 in Db2 11 to 512,000 nodes for non-parallelism sorts and 128,000 nodes for a sort within a parallel child task in Db2 12. This means raising this ZPARM can have an even greater positive effect than before.

The Three New Guys on the Block!

To the MAX!

DSMAX used to be around 20,000 and can now be between 1 – 400,000. Remember that you will never actually reach this maximum limit as it is 31-bit memory-constrained.

Thrashing Around…

NPGTHRSH. Valid values are 0 or 1 – 2147483647. The default up to Db2 11 was 0; from Db2 12 the default is now 1. SAP systems use a default of 10. The big change came in Db2 12: for tables where statistics have never been run (a value of -1), the optimizer is now forced to use its default of 501 instead of the real value of -1. This is also why the default is now 1, so that this ZPARM has a normal use! Setting it to 0 means that the access path chosen will always be purely cost-based.

Lock ’em Up and Throw Away the Key!

NUMLKUS. 0 – 104857600, with a default of 20,000. Just be careful raising this value too high, as each lock will take 540 bytes of storage in the IRLM!
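The IRLM storage impact is easy to estimate from that 540 bytes per lock. A quick back-of-the-envelope check you can run in SPUFI against the one-row dummy table (the 100,000 here is just an example value for NUMLKUS):

```sql
-- Rough IRLM storage for 100,000 locks at ~540 bytes per lock:
SELECT (100000 * 540) / (1024 * 1024) AS APPROX_IRLM_MB
FROM   SYSIBM.SYSDUMMY1;
-- Roughly 51 MB that a single runaway unit of work could demand!
```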

Your “Top Ten List” + Three

These thirteen ZPARMs really influence how your Db2 system works, and so they must always be checked and changed with great care and attention to detail. Always do a before-and-after appraisal to see whether changing them helped or hindered your system!
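By the way, you do not have to dig through the xxxxLOAD module to check your current settings: the IBM-supplied stored procedure SYSPROC.ADMIN_INFO_SYSPARM returns them all as a result set. A sketch (check the exact parameter list for your Db2 level; the member name is NULL for non-data-sharing):

```sql
-- Returns one row per subsystem parameter: macro (e.g. DSN6SPRM),
-- parameter name, current value, and whether it is online-changeable.
-- Typically called from a program or tool that supports parameter
-- markers; the last two are output parameters (return code, message).
CALL SYSPROC.ADMIN_INFO_SYSPARM(NULL, ?, ?);
```

Snapshot the result set before and after your change and you have your before-and-after appraisal half done.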

If you have any comments, or other ZPARMs you think are also important for performance, feel free to drop me a line!

IDUG 2023 NA

IDUG is nearly upon again. I will be there in Philadelphia at the SEGUS booth and doing a fair bit of moderating as well. Drop on by, have a chat and pick up some of our swag and join me at the “Roy reviews AI with our WorkloadExpert” PSP on Thursday for a chance to win some cool stuff.

Hope to see some of you there!

TTFN,

Roy Boxwell

2023-04 AI Performance

Hi! Continuing on with my AI blog (the last one, I promise!), I wish to delve into the innards of the USS part of the SQL Data Insights experience and show you what it all costs!

A Quick Review Perhaps?

Please check my older newsletters for everything about the installation etc. of SQL DI, and one important thing: the latest Vector Prefetch APARs (see my last newsletter for details). Now I will be doing “before and after” performance reviews with this feature on and off.

Bad News First!

What I have found, is that when I take a 500,000 row table into SQL DI and choose 17 columns, it takes the *entire* machine as well as all local page datasets and I was forced to cancel it after five hours…

Looking in the Logs…

If you go trawling around your Unix Directories, you will trip over these paths:

/u/work/sqldi

Home is Where the Spark is!

This is “home” where all of the SQL DI stuff is “installed”, naturally your name might be different!

Under here is the next layer of interest to me for the Spark processing.

It is Magic!

/u/work/sqldi/spark – Now this is where Spark does “the magic” and actually computes all your vector table data. It runs in stages and the first is the Base10 (I guess numeric analysis) part. For my test data it looks like this:

Scroll down to the bottom:

So, this ran really quickly!

Internals…

Then it does a ton of internal stuff and it starts actually doing the learning, which is “progressed” in a file like this:

/u/work/sqldi/spark/worker/driver-20230315063441-0002/ibm_data2vec-15-03-2023_07:15:01.log

Just Sitting There, Typing REF and Pressing ENTER…

Of course your name will be different, but just sitting there in OMVS and using the REF command you will see this file grow in size every now and again. When it does, quickly Browse on in and you will see stuff like this:

ibm-data2Vec (1.1.0 for zOS) starting execution using file /var/sqldi/temp/training/DSNAIDB_AIDB_DAI
ibm-data2vec found the required library: libzaio.so. Proceeding with the training..                 
ibm-data2vec will use following mode: CBLAS                                                         
User has not provided training chunk size. Using 1 GB chunk size for reading training file.         
ibm-data2Vec is preallocating space for the model using user-provided value 1230314                 
ibm-data2Vec starting execution using file /var/sqldi/temp/training/DSNAIDB_AIDB_DAIN0610_IQATW001_1
83951683 ! 2023-03-15 07:17:27 ! Time elapsed learning vocab from train file = 145.91525s           
Processed 13103200 words in the training file. There are 1213852 unique words in the vocabulary: Pri
Model training code will generate vectors for row-identifier (pk_id) or user-specified primary keys 
83951683 ! 2023-03-15 07:17:27 ! Stage 1 completed. Time elapsed during file reading = 145.91643s   
Training the database embedding (db2Vec) model using 12 CPU thread(s)  

Whole Machine Gone – Oh Oh!

Now, in my case, it just sat there for a while taking all paging, all frames, all zIIP and CP CPU, and then it wrote out:

Epoch 0 learning rate Alpha=0.024704 Training Progress=5.00%                                        
Epoch 0 learning rate Alpha=0.024404 Training Progress=10.00%                                       
Epoch 0 learning rate Alpha=0.024099 Training Progress=15.00%                                       
Epoch 0 learning rate Alpha=0.023791 Training Progress=20.00%                                       
Epoch 0 learning rate Alpha=0.023486 Training Progress=25.00%                                       
Epoch 0 learning rate Alpha=0.023182 Training Progress=30.00%                                       
Epoch 0 learning rate Alpha=0.022885 Training Progress=35.00%                                       
Epoch 0 learning rate Alpha=0.022582 Training Progress=40.00%                                       
Epoch 0 learning rate Alpha=0.022286 Training Progress=45.00%                                       
Epoch 0 learning rate Alpha=0.021980 Training Progress=50.00%                                       
Epoch 0 learning rate Alpha=0.021673 Training Progress=55.00%                                       

That last line was written out at 12:42 and after starting at 07:17 you can see that I still had nearly a five hour wait ahead of me. Time to cancel and rethink this!

Restart!

Thankfully, on the GUI interface (where you cannot see this progress info, sadly!) the “Stop training” button worked after a while. If it does not respond then you can just issue the

S SQLDAPPS,OPTION='SQLDSTOP' 

command to stop it. Then, once all stopped, and the cpus have cooled down a bit, you can select a smaller data set and retry learning!

Smaller is Sometimes Better!

And with 40,000 rows it is much faster:

50397300 ! 2023-03-15 12:17:16 ! Stage 1 completed. Time elapsed during file reading = 26.992490s 
Training the database embedding (db2Vec) model using 12 CPU thread(s)                             
Epoch 0 learning rate Alpha=0.024765 Training Progress=5.00%                                      
Epoch 0 learning rate Alpha=0.024539 Training Progress=10.00%                                     
Epoch 0 learning rate Alpha=0.024308 Training Progress=15.00%                                     
Epoch 0 learning rate Alpha=0.024073 Training Progress=20.00%                                     
Epoch 0 learning rate Alpha=0.023826 Training Progress=25.00%                                     
Epoch 0 learning rate Alpha=0.023591 Training Progress=30.00%                                     
Epoch 0 learning rate Alpha=0.023354 Training Progress=35.00%                                     
Epoch 0 learning rate Alpha=0.023115 Training Progress=40.00%                                     
Epoch 0 learning rate Alpha=0.022878 Training Progress=45.00%                                     
Epoch 0 learning rate Alpha=0.022637 Training Progress=50.00%                                     
Epoch 0 learning rate Alpha=0.022406 Training Progress=55.00%                                     

Naturally, this is heavily dependent on the machine you have, the memory you have and the size of your local paging dataset.

EXPLAIN Yourself!

So now to do some EXPLAIN runs and then a quick comparison of the “double” AI Whammy that I have, quickly followed by the “New” PTF that, hopefully, sorts it all out.

Double Trouble?

You might have noticed that in my test SQLs I have to use the BiF AI twice. Once for the SELECT and once for the WHERE. This is because the use of the AI_VALUE column is not supported in the WHERE predicate.

Naturally, you can re-write the query to look like this:

SELECT * FROM                                      
(SELECT AI_SEMANTIC_CLUSTER( PROGRAM,              
                           'DSNTIAUL',             
                           'DSN§EP2L',             
                           'DSN§EP4L') AS AI_VALUE 
      ,A.WLX_TIMESTAMP                             
      ,A.STMT_ID                                   
      ,A.STMT_TIMESTAMP                            
      ,SUBSTR(A.PRIM_AUTHOR , 1 , 8) AS PRIM_AUTHOR
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM        
      ,SUBSTR(A.REF_TABLE , 1 , 18) AS REF_TABLE   
      ,A.EXECUTIONS                                
      ,A.GETP_OPERATIONS                           
      ,A.ELAPSE_TIME                               
      ,A.CPU_TIME                                  
      ,A.STMT_TEXT                                 
FROM DAIN0610.IQATW001 A                           
WHERE 1 = 1                                        
  AND A.PROGRAM NOT IN    ('DSNTIAUL',             
                           'DSN§EP2L',             
                           'DSN§EP4L')             
  AND A.STMT_ORIGIN = 'D'                          
  )                                                
WHERE AI_VALUE IS NOT NULL                         
ORDER BY 1 DESC -- SHOW BEST FIRST                 
--ORDER BY 1 -- SHOW WORST FIRST                   
FETCH FIRST 10 ROWS ONLY ;                         

Does My Work File Look Big to You?

The problem is that now you have a HUGE work file… In my tests it was always much quicker to code the AI BiF twice. After all, it is always “Your Mileage May Vary”, “The Cheque is in the post” or “It depends”, isn’t it?

AI Does Use the Optimizer!

EXPLAIN output… The AI BiF does indeed get output by EXPLAIN (I was surprised about this, to be honest!) for the following query:

SELECT AI_SEMANTIC_CLUSTER( PROGRAM,               
                           'DSNTIAUL',             
                           'DSN§EP2L',             
                           'DSN§EP4L') AS AI_VALUE 
      ,A.WLX_TIMESTAMP                             
      ,A.STMT_ID                                   
      ,A.STMT_TIMESTAMP                            
      ,SUBSTR(A.PRIM_AUTHOR , 1 , 8) AS PRIM_AUTHOR
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM        
      ,SUBSTR(A.REF_TABLE , 1 , 18) AS REF_TABLE   
      ,A.EXECUTIONS                                
      ,A.GETP_OPERATIONS                           
      ,A.ELAPSE_TIME                               
      ,A.CPU_TIME                                  
      ,A.STMT_TEXT                                 
FROM DAIN0610.IQATW001 A                           
WHERE 1 = 1                                        
  AND A.PROGRAM NOT IN    ('DSNTIAUL',             
                           'DSN§EP2L',             
                           'DSN§EP4L')             
  AND AI_SEMANTIC_CLUSTER( PROGRAM,                
                           'DSNTIAUL',             
                           'DSN§EP2L',             
                           'DSN§EP4L')             
      IS NOT NULL                                  
  AND A.STMT_ORIGIN = 'D'                          
ORDER BY 1 DESC -- SHOW BEST FIRST                 
--ORDER BY 1 -- SHOW WORST FIRST                   
FETCH FIRST 10 ROWS ONLY ;                         

The EXPLAIN output looks like:

Then it gets an interesting STAGE2 RANGE predicate!

which resolves into:

So here we see what the BiF is doing from the perspective of the Optimizer! If you run the nested table version of the query then this line does *not* appear at all!

Notice here that the RANGE is now a STAGE1!

Optimize This!

So IBM Db2 has incorporated it into the Optimizer, which is a good thing. But please remember: your SQL can have local predicates that cut down the size of the work file and so even out the access times… Basically, you must code both and test to see which of the solutions is better for typical usage (as always, really…).

Time, Measure, Repeat

Ok, now just doing one execute of the double query requires 2.58 seconds of CPU and 15.35 seconds elapsed. The statement is *in* the DSC so prepare time can be ignored. Here you can see it has been executed twice so we have average values but I am using the CPU from the batch job as it is more precise.

Changing the query to now fetch back all rows instead of first ten requires 7.06 seconds of CPU and 48.78 seconds elapsed. But it returned over 200K rows!

While the query was running you can see the SQLD SQL DI in SDSF taking quite large chunks of zIIP time…

Now I will enable Vector Prefetch with a value of 10GB to see if it makes an impact for these queries. To do this you must update the ZPARM MXAIDTCACH and then enable the changed ZPARM.
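For the record, enabling Vector Prefetch like this is the usual online ZPARM two-step: update and reassemble your ZPARM module with the new MXAIDTCACH value, then activate it with the SET SYSPARM command. A sketch, assuming a standard module name from your DSNTIJUZ job:

```sql
-- Db2 commands, not SQL: activate the newly assembled ZPARM module...
-SET SYSPARM LOAD(DSNZPARM)

-- ...or simply reload whichever module was last loaded:
-SET SYSPARM RELOAD
```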

That is Not What I was Expecting!

First query is now 2.56 CPU and 15.26 Elapsed. More like background noise than an improvement. And now with the FETCH FIRST removed 7.07 and 49.36 seconds. I guess my queries are not improved with Vector Prefetch!

Could be Me…

From the IBM Vector Prefetch docu:

With vector prefetch enabled, CPU performance for AI queries with AI function invocation on qualified rows improves particularly when the ratio of the cardinality of qualified rows to the total number of numeric vectors for the column is high.

https://community.ibm.com/community/user/datamanagement/blogs/neena-cherian/2023/03/07/accelerating-db2-ai-queries-with-the-new-vector-pr

Time to Join the Real World!

Now let’s try and see if I can discover something new in real data! Anything sensitive has been obfuscated!

SELECT AI_SIMILARITY( PROGRAM,                     
                     'IQADBACP') AS AI_VALUE       
      ,SUBSTR(A.PRIM_AUTHOR , 1 , 8) AS PRIM_AUTHOR
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM        
      ,SUBSTR(A.REF_TABLE , 1 , 18) AS REF_TABLE   
      ,A.WLX_TIMESTAMP                             
      ,A.STMT_ID                                   
      ,A.STMT_TIMESTAMP            
      ,A.EXECUTIONS                                
      ,A.GETP_OPERATIONS                           
      ,A.ELAPSE_TIME                               
      ,A.CPU_TIME                                  
      ,A.STMT_TEXT                                 
FROM DAIN0610.IQATW001 A                           
WHERE 1 = 1                                        
  AND NOT A.PROGRAM     = 'IQADBACP'               
  AND     AI_SIMILARITY  ( PROGRAM,                
                          'IQADBACP')              
       IS NOT NULL                                 
  AND     A.STMT_ORIGIN = 'D'                      
ORDER BY 1 DESC -- SHOW BEST FIRST                 
--ORDER BY 1 -- SHOW WORST FIRST                   
FETCH FIRST 10 ROWS ONLY;                          

This is similar to my test from last month but now on real data. Note that I have added a predicate A.STMT_ORIGIN = ‘D’ as I only want Dynamic SQL programs:

Dynamic Hits?

Here you can see that it has found a variety of programs that also do dynamic SQL, but then I also “helped” it by only asking for dynamic SQL. So now once again, but this time without the predicate A.STMT_ORIGIN = ‘D’:

Success!

It has found nearly all of the programs from the first list but also different ones; crucially, it has *not* found any Static SQL!

So, that’s enough of AI for the next few months for me. However, if you have any questions or ideas that I could try out feel free to email!

TTFN,

Roy Boxwell

2023-03 AI in the real world

OK, I kept you all waiting long enough… Here are my AI results with Db2 13 FL501!

Start at the Start

We begin at the beginning, as last time:

Let’s get Connected!

Here you can see that I have already defined my little test Db2 13 system to SQL DI:

Join the Dots …

Now just click on the vertical dots:

Here you can Disconnect, Edit (Which shows you the same window as “add connection”), List AI objects or Delete.

What do we have?

Choosing List AI objects you see what has been created:

Clicking on the down arrow on the left-hand side to expand looks a lot better than last month:

Clickedy-click-click

Now, clicking on the vertical dots on the right hand side, you can choose to Disable AI query or Enable AI query. (I have actually added a new column for consideration, so first I clicked on Disable and then clicked again on Enable)

Just the Facts, Ma’am – Again

Here you must make your “Usual Suspects” decision: which columns to actually use in building the AI model. I am using our WorkloadExpert performance table in this newsletter and have selected 17 columns that I think will work together nicely. Only one can be a “Key” column – I chose STMT_ID in this case. Once you are done selecting columns, click on the big blue “Next” button, where you may then add additional filters to remove any rows you know are to be ignored:

Playing Chicken?

When done, click on the big blue “Enable” button and you get your last chance to chicken out:

SIO and CPU Records!

Click here and then go get a cup of coffee… or go to SDSF and marvel at how much CPU and IO Spark actually uses, as the light bulbs dim in your part of the world…

You Keep me Spinning

Oddly, at least when I do this, the Initializing spinning wheels:

Right Round and Around

… never stop. The WLM Stored procedure for utilities was finally kicked off about 40 minutes later:

                    J E S 2  J O B  L O G  --  S Y S T E M  
                                                             
10.43.19 STC09611 ---- WEDNESDAY, 01 MAR 2023 ----           
10.43.19 STC09611  $HASP373 DD10WLMU STARTED                 
10.43.19 STC09611  IEF403I DD10WLMU - STARTED - TIME=10.43.19
10.43.19 STC09611  ICH70001I SQLDIID  LAST ACCESS AT 09:37:37

A Loaded Question?

And loaded all the required data:

ICE134I 0 NUMBER OF BYTES SORTED: 99083595                  
ICE253I 0 RECORDS SORTED - PROCESSED: 49173, EXPECTED: 49173

A quick exit and re-logon to the web interface…and Tra la!

Not only AI but Dr Who!

It is also strange that it seems to be in a time machine, one hour in advance of my local…Anyways, my new data is there and so onward! (I have since heard that our time zone setting is actually to blame and that just going back one level, and then forward again, stops the spinning wheel problem. However, just wait until Spark finishes and the stored procedure has loaded your data!)

Never Trust a Statistic You haven’t Faked Yourself!

Clicking on Data statistics shows:

Influencer of the Day?

Then you can look at the Column influence:

Super Model?

Back at the top you can then review the Model details:

Or just a Cluster….

Here are the Cluster center details:

Going back to the List AI Objects window, there are two blue buttons: Add object and Run query. I did not discuss Run Query last month but it gives you a SPUFI-like ability on the PC, tailored to the AI BiFs:

Lets RUN Away!

Clicking on Query type gives a drop-down list of the basic AI BiFs where it then gives you an example SQL (based on the documentation, *not* on any AI Tables you might have done!). Once you type in any query the “run” box turns blue:

It Works!

Click run and see the results:

Data Review

Once the model is trained, you can then review on the host what it has done. In SPUFI you can find details of what you have done in the pseudo Db2 catalog tables that support Data Insights, (I have removed a ton of rows to make this readable – sort of!):

SELECT * FROM 
SYSAIDB.SYSAIOBJECTS ; 
---------+---------+---------+---------+---------+---------+---------+---------+-------+-
              OBJECT_ID  OBJECT_NAME                       OBJECT_TYPE  SCHEMA  NAME     
---------+---------+---------+---------+---------+---------+---------+---------+-------+-
                     26  --------------------------------  T            IQA061QB IQATW001

-------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
STATUS                   CONFIGURATION_ID                 MODEL_ID  CREATED_BY                        CREATED_DATE              
-------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
Enabled                                36                       36  SQLDIID                           2023-02-24-07.57.42.086932

-------+---------+---------+---------+---------+---------+---------+---------+
LAST_UPDATED_BY                   LAST_UPDATED_DATE           DESCRIPTION     
-------+---------+---------+---------+---------+---------+---------+---------+
SQLDIID                           2023-03-01-10.43.38.407460  ----------------



SELECT * FROM  
SYSAIDB.SYSAICONFIGURATIONS ; 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-----
       CONFIGURATION_ID  NAME                                            OBJECT_ID  RETRAIN_INTERVAL  KEEP_ROWIDENTIFIER_KEY 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-----
                     36  --------------------------------                       26  ----------------  Y                      
---+---------+-------
NEGLECT_VALUES       
---+---------+-------
                     
---+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--------
CREATED_BY                        CREATED_DATE                LAST_UPDATED_BY                   LAST_UPDATED_DATE         
---+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--------
SQLDIID                           2023-03-01-09.51.00.994421  SQLDIID                           2023-03-01-09.51.00.994461



SELECT * FROM 
SYSAIDB.SYSAICOLUMNCONFIG
ORDER BY 1 , 3 , 2 ;   

--+---------+---------+---------+---------+---------+---------+---------+---------+---------+
CONFIGURATION_ID  COLUMN_NAME              COLUMN_AISQL_TYPE  COLUMN_PRIORITY  NEGLECT_VALUES
--+---------+---------+---------+---------+---------+---------+---------+---------+---------+
              36  END_USERID               C                  H                              
              36  PRIM_AUTHOR              C                  H                              
              36  PROGRAM                  C                  H                              
              36  REF_TABLE                C                  H                              
              36  REF_TAB_QUAL             C                  H                              
              36  STMT_ORIGIN              C                  H                              
              36  STMT_TEXT                C                  H                              
              36  TRANSACTION              C                  H                              
              36  WORKSTATION              C                  H                              
              36  COPIES_NO                I                  H 
.
.
.                             
              36  WLX_TYPE                 I                  H                              
              36  WORKSTATION_OLD          I                  H                              
              36  STMT_ID                  K                  H                              
              36  CPU_TIME                 N                  H                              
              36  ELAPSE_TIME              N                  H                              
              36  EXECUTIONS               N                  H                              
              36  GETP_OPERATIONS          N                  H                              
              36  ROWS_EXAMINED            N                  H                              
              36  ROWS_PROCESSED           N                  H                              
              36  STMT_LENGTH              N                  H                              

When the column COLUMN_AISQL_TYPE has a value of “I”, the column is ignored by AI processing. Also note that this table, SYSAICOLUMNCONFIG, gets two extra columns (COLUMN_VECTOR_CARDINALITY and MAX_DATA_VALUE_LEN) once you apply the vector prefetch upgrade APARs:

  1. For IBM Z AI Optimization (zAIO) library and IBM Z AI Embedded (zADE) library in the IBM Z Deep Neural Network (zDNN) stack on z/OS:
    • Apply OA63950 and OA63952 for z/OS 2.5 (HZAI250).
    • Apply OA63949 and OA63951 for z/OS 2.4 (HBB77C0).
  2. For OpenBLAS on z/OS:
    • Apply PH49807 and PH50872 for both z/OS 2.5 and z/OS 2.4 (HTV77C0).
    • Apply PH50881 for z/OS 2.5 (HLE77D0).
    • Apply PH50880 for z/OS 2.4 (HLE77C0).
  3. For Db2 13 for z/OS, apply PH51892. Follow the instructions for DDL migration outlined in the ++HOLD text. By default, the new Db2 subsystem parameter MXAIDTCACH is set to 0, meaning that vector prefetch is disabled. To enable vector prefetch, set MXAIDTCACH to a value between 1 and 512. This parameter is online changeable. See the IBM Db2 13 for z/OS documentation on MXAIDTCACH.
  4. For SQL Data Insights 1.1.0 UI and model training (HDBDD18), apply PH51052.
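Going back to the COLUMN_AISQL_TYPE codes for a second: here is a little query to see at a glance how the columns of one configuration are spread over the type codes. This is only a hedged sketch against the tables shown above; swap in your own CONFIGURATION_ID:

SELECT COLUMN_AISQL_TYPE                  
      ,COUNT(*) AS NUM_COLS               
FROM SYSAIDB.SYSAICOLUMNCONFIG            
WHERE CONFIGURATION_ID = 36  -- yours here
GROUP BY COLUMN_AISQL_TYPE                
ORDER BY 1 ;                              

Anything that comes back with type “I” played no part in the training.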

Further, with the above APARs the table SYSAIMODELS gets a new column, MODEL_CODE_LEVEL, and an increase in size of the METRICS column to 500K.

SELECT * FROM  
SYSAIDB.SYSAIMODELS ; 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
               MODEL_ID  NAME                                            OBJECT_ID         CONFIGURATION_ID  VECTOR_TABLE_CREATOR
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------
                     36  --------------------------------                       26                       36  DSNAIDB             

+---------+---------+--
VECTOR_TABLE_NAME      
+---------+---------+--
AIDB_IQA061QB_IQATW001

+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
VECTOR_TABLE_STATUS  VECTOR_TABLE_DBID  VECTOR_TABLE_OBID  VECTOR_TABLE_IXDBID  VECTOR_TABLE_IXOBID  VECTOR_TABLE_VERSION
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
A                                  329                  3                  329                    4                     1

-------+---------+---------+---------+---------+---------+---------+---------+-------
METRICS                                                                              
-------+---------+---------+---------+---------+---------+---------+---------+-------
[{"discriminator":8.59443984950101,"influence":0.9367419701380996,"name":"TRANSACTION",

-------+---------+---------+
INTERPRETABILITY_STRUCT     
-------+---------+---------+

-------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
CREATED_BY                        CREATED_DATE                LAST_UPDATED_BY                   LAST_UPDATED_DATE         
-------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
SQLDIID                           2023-03-01-09.51.03.777504  SQLDIID                           2023-03-01-10.43.37.796847

---+---------+---------+---------+---------+--
MODEL_ROWID                                   
---+---------+---------+---------+---------+--
2495E518C773B081E09C018000000100000000002213  



SELECT * FROM                            
SYSAIDB.SYSAICOLUMNCENTERS 
ORDER BY 1 , 2 , 3 ;
----+---------+---------+---------+---------+---------+---------+
MODEL_ID  COLUMN_NAME                          CLUSTER_MIN  LABEL
----+---------+---------+---------+---------+---------+---------+
      36  CPU_TIME                 -0.7200000000000000E+76  EMPTY
      36  CPU_TIME                 +0.0               E+00  c0   
      36  CPU_TIME                 +0.2000000000000000E+01  c1   
      36  CPU_TIME                 +0.1617671400000000E+08  c9   
      36  ELAPSE_TIME              -0.7200000000000000E+76  EMPTY
      36  ELAPSE_TIME              +0.0               E+00  c0   
      36  ELAPSE_TIME              +0.2000000000000000E+01  c1   
      36  ELAPSE_TIME              +0.1008466600000000E+08  c9   
      36  ELAPSE_TIME              +0.1074954980000000E+09  c10  

SELECT * FROM  
SYSAIDB.SYSAITRAININGJOBS ; 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--------+-------
        TRAINING_JOB_ID                OBJECT_ID         CONFIGURATION_ID                 MODEL_ID  STATUS  PROGRESS RESOURCE 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--------+-------
                     33                       26                       33                       33  F              0 
                     34                       26                       34                       34  F              0 
                     35                       26                       35                       35  C            100 
                     36                       26                       36                       36  C            100 


-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+------
MESSAGES                                                                                                                        
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+------
{"messages":"failed to train model: Something went wrong with the zLoad, please check the SQL DI log for more details.","resumeI
{"messages":"failed to train model: Something went wrong with the zLoad, please check the SQL DI log for more details.","resumeI
{"messages":"model training is completed","sparkSubmitId":"driver-20230224105851-0002"}                                         
{"messages":"model training is completed","sparkSubmitId":"driver-20230301085133-0003"}                                         

-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
START_TIME                  END_TIME                    CREATED_BY                        CREATED_DATE              
-+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
2023-02-24-08.01.20.737455  2023-02-24-08.51.56.386011  SQLDIID                           2023-02-24-08.01.20.737455
2023-02-24-10.52.27.687965  2023-02-24-11.43.22.095144  SQLDIID                           2023-02-24-10.52.27.687965
2023-02-24-11.58.20.109571  2023-02-24-12.49.20.660143  SQLDIID                           2023-02-24-11.58.20.109571
2023-03-01-09.51.03.777662  2023-03-01-10.43.38.407414  SQLDIID                           2023-03-01-09.51.03.777662

---+---------+---------+---------+---------+---------+------
LAST_UPDATED_BY                   LAST_UPDATED_DATE         
---+---------+---------+---------+---------+---------+------
SQLDIID                           2023-02-24-08.51.56.386030
SQLDIID                           2023-02-24-11.43.22.095164
SQLDIID                           2023-02-24-12.49.20.660160
SQLDIID                           2023-03-01-10.43.38.407425

KPIs from my Data

Here are a few KPIs from these first test runs:

SELECT COUNT(*) FROM IQA061QB.IQATW001 ;
      64390                                                 

SELECT COUNT(*) FROM DSNAIDB.AIDB_IQA061QB_IQATW001 ;
      49173                                                 

SELECT  SUBSTR(A.COLUMN_NAME, 1, 12) AS COLUMN_NAME 
      , SUBSTR(A.VALUE      , 1, 12) AS VALUE 
      , A.VECTOR 
FROM DSNAIDB.AIDB_IQA061QB_IQATW001  A 
ORDER BY 1 , 2 ; 

---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--
COLUMN_NAME   VALUE         VECTOR                                                                                                  
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--
CPU_TIME      c0            3E594822BC9D2C7A3CD4F61DBD37E5033D34B314BD4CF8E3BD4B4D47BCB6CE293D1DBA1A3D858FDF3DC4DF08BD9E77753CCED43F
CPU_TIME      c1            3D9214383CFE4C90BDB3DFE4BBE407563BBA69553DB48FEFBCF39451BC6BABF0BDA31BDFBDB52F883C30B992BC8D71AF3D9E54FF
ELAPSE_TIME   c0            3E55B744BCCC5CED3D129B14BC9E553C3C9B121EBD8949C0BD4F838DBD1582A33D36D6363DA1F72F3DBCB033BDAFB88F3D4DE348
ELAPSE_TIME   c1            3DE390AC3D2DCC98BD2DF437BC5B7F713D766D103BD1AC10BB48E2C43B9FA9E6BD80D5D7BDC40AFE3CE586C9BCACADE93DFE2745
END_USERID    BOXWEL2       3D505075BD80E40F3D3AAB60BBA463F6BBCC51C43D92B118BD044D20BD8C6B3B3CC315133BBB087A3DC1D5923DC4EB763D039C8B
END_USERID    BOXWEL3       3D2FB919BC5013E3BD6652DDBD4654DA3DA4AC83BA70024FBD7FAFD0BCF16670BB2CCB4B3DBE32E93DFE13383CB052283C82FD46

As I mentioned last month, the vector tables are very “special”!
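If you are wondering where those 49,173 vector rows come from: the vector table holds one row per distinct trained value, not per source row. Here is a hedged sketch of a query to see how the vectors are spread over the model columns:

SELECT SUBSTR(COLUMN_NAME, 1, 18) AS COLUMN_NAME 
      ,COUNT(*)                   AS NUM_VECTORS 
FROM DSNAIDB.AIDB_IQA061QB_IQATW001              
GROUP BY COLUMN_NAME                             
ORDER BY 2 DESC ;                                

Naturally, the high-cardinality columns dominate the count.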

What now?

So now we have analyzed a bunch of SQL WorkLoadExpert data from our own labs. What can we do?

First up, I wish to see what user KKKKKKK does with dynamic SQL that is “similar” to what I do with table IQATW001, but I am only interested in those SQLs where the AI scores the analogy higher than 0.5 (so very analogous):

SELECT AI_ANALOGY('BOXWEL3'  USING MODEL COLUMN PRIM_AUTHOR,  
                  'IQATW001' USING MODEL COLUMN REF_TABLE  ,  
                  'KKKKKKK'  USING MODEL COLUMN PRIM_AUTHOR,  
                  REF_TABLE ) AS AI_VALUE                     
      ,A.WLX_TIMESTAMP                                        
      ,A.STMT_ID                                              
      ,A.STMT_TIMESTAMP                                       
      ,SUBSTR(A.PRIM_AUTHOR , 1 , 8 ) AS PRIM_AUTHOR          
      ,SUBSTR(A.PROGRAM     , 1 , 8 ) AS PROGRAM              
      ,SUBSTR(A.REF_TABLE   , 1 , 18) AS REF_TABLE            
      ,A.EXECUTIONS                                           
      ,A.GETP_OPERATIONS                                      
      ,A.ELAPSE_TIME                                          
      ,A.CPU_TIME                                             
      ,A.STMT_TEXT                                            
FROM IQA061QB.IQATW001 A                                      
WHERE A.PRIM_AUTHOR = 'KKKKKKK'                               
  AND  AI_ANALOGY('BOXWEL3'    USING MODEL COLUMN PRIM_AUTHOR,
                  'IQATW001'   USING MODEL COLUMN REF_TABLE  ,
                  'KKKKKKK'    USING MODEL COLUMN PRIM_AUTHOR,
                  REF_TABLE )                                 
       > 0.5                                                  
ORDER BY 1 DESC -- SHOW BEST FIRST                            
--ORDER BY 1 -- SHOW WORST FIRST                              
FETCH FIRST 2000 ROWS ONLY ;                                  

And the results:

---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-------
               AI_VALUE  WLX_TIMESTAMP                               STMT_ID  STMT_TIMESTAMP              PRIM_AUTHOR
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-------
+0.7875136583708362E+00  2022-12-16-11.43.46.365129                    54801  2023-01-05-14.23.34.706136  KKKKKKK    
+0.7875136583708362E+00  2022-12-16-11.43.46.365129                    54800  2023-01-05-14.23.34.704234  KKKKKKK    
+0.7875136583708362E+00  2022-12-16-11.43.46.365129                    43654  2023-01-05-08.09.18.888198  KKKKKKK    
+0.7875136583708362E+00  2022-12-16-11.43.46.365129                    43653  2023-01-05-08.09.18.832308  KKKKKKK    
+0.7754887840772942E+00  2022-12-05-11.43.10.984618                     2616  2022-12-05-10.15.20.884139  KKKKKKK    
+0.7754887840772942E+00  2022-12-05-11.43.10.984618                     2609  2022-12-05-10.15.20.487031  KKKKKKK    
+0.7754887840772942E+00  2022-12-05-11.43.10.984618                     2617  2022-12-05-10.15.20.905752  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    37239  2023-01-04-11.07.49.907438  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    37230  2023-01-04-11.07.49.457189  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    37237  2023-01-04-11.07.49.879457  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    37238  2023-01-04-11.07.49.903769  KKKKKKK    
+0.7754887840772942E+00  2022-12-05-11.43.10.984618                     2618  2022-12-05-10.15.20.909552  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    45396  2023-01-05-08.17.11.633089  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    45389  2023-01-05-08.17.11.311055  KKKKKKK    
+0.7754887840772942E+00  2022-12-16-11.43.46.365129                    45397  2023-01-05-08.17.11.655514  KKKKKKK    

+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--
PROGRAM   REF_TABLE                        EXECUTIONS          GETP_OPERATIONS              ELAPSE_TIME
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+--
SEDBTIAA  R510T002                                  1                        0                        8
SEDBTIAA  R510T002                                  1                        0                        9
SEDBTIAA  R510T002                                  1                        0                       11
SEDBTIAA  R510T002                                  1                        3                       61
SEDBTIAA  IQATA001                                  0                        0                        0
SEDBTIAA  IQATA001                                  0                        0                        0
SEDBTIAA  IQATA001                                  0                        0                        0
SEDBTIAA  IQATA001                                  1                        4                       32
SEDBTIAA  IQATA001                                  1                        2                      111
SEDBTIAA  IQATA001                                  1                       12                    20749
SEDBTIAA  IQATA001                                  1                        4                       36
SEDBTIAA  IQATA001                                  0                        0                        0
SEDBTIAA  IQATA001                                  1                       12                    18571

All interesting stuff! I use dynamic SQL to INSERT into the table a lot, and the AI has determined that the use of dynamic SQL with tables R510T002 and IQATA001 is analogous. In fact, it is! The SQLs were all INSERT, DELETE and UPDATE… Clever ol’ AI!

Dynamic Duo?

Now I wish to see which programs process dynamic SQL like the IBM DSNTIAD and DSNTIAP programs:

SELECT AI_SEMANTIC_CLUSTER( PROGRAM,             
                           'DSNTIAD',            
                           'DSNTIAP') AS AI_VALUE
      ,A.WLX_TIMESTAMP                           
      ,A.STMT_ID                                 
      ,A.STMT_TIMESTAMP                          
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM      
      ,A.EXECUTIONS                              
      ,A.GETP_OPERATIONS                         
      ,A.ELAPSE_TIME                             
      ,A.CPU_TIME                                
      ,A.STMT_TEXT                               
FROM IQA061QB.IQATW001 A                         
WHERE A.PROGRAM NOT IN ('DSNTIAD', 'DSNTIAP')    
  AND A.STMT_ORIGIN = 'D'                        
ORDER BY 1 DESC -- SHOW BEST FIRST                      
--ORDER BY 1 -- SHOW WORST FIRST                            
FETCH FIRST 10 ROWS ONLY ;                       

And the results:

---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
               AI_VALUE  WLX_TIMESTAMP                               STMT_ID  STMT_TIMESTAMP              PROGRAM 
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+----
+0.7104441523551941E+00  2023-01-06-05.27.28.779825                      824  2023-01-17-09.13.48.940697  DSN§EP2L
+0.5050856471061707E+00  2023-01-06-05.27.28.779825                      559  2023-01-12-13.04.44.032345  O2DB81  
+0.5032740235328674E+00  2023-01-06-05.27.28.779825                      561  2023-01-12-13.04.44.075465  O2DB84  
+0.5007491707801819E+00  2023-01-06-05.27.28.779825                      560  2023-01-12-13.04.44.062180  O2DB82  
+0.4917877912521362E+00  2023-01-06-05.27.28.779825                      558  2023-01-12-13.04.43.986695  O2DB80  
+0.4652681946754456E+00  2023-01-06-05.27.28.779825                      562  2023-01-12-13.04.44.105563  O2DB85  
+0.4551711678504944E+00  2023-02-02-10.45.26.535375                        8  2023-01-17-16.51.06.825629  O2DB8X  
+0.4551711678504944E+00  2023-01-06-05.27.28.779825                        8  2023-01-17-16.51.06.825629  O2DB8X  
+0.4551711678504944E+00  2023-01-06-05.27.28.779825                      557  2023-01-12-13.04.40.846826  O2DB8X  
+0.4452087283134460E+00  2023-01-06-05.27.28.779825                        7  2023-01-17-16.50.54.118774  O2DB6X  

+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-
EXECUTIONS          GETP_OPERATIONS              ELAPSE_TIME                 CPU_TIME  STMT_TEXT                                    
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-
         1                        0                        0                        0  INSERT INTO KKKKKKK.SAXDBT (SELECT A.NAME, 'B
         1                        0                       29                       29  SELECT COUNT(*),COALESCE(SUM(CASE EXCEPTION_C
         1                        0                       28                       24  SELECT COUNT(*),COALESCE(SUM(CASE EXCEPTION_C
         1                        0                       22                       22  SELECT COUNT(*),COALESCE(SUM(CASE EXCEPTION_C
         1                        0                       76                       43  SELECT COUNT(*),COALESCE(SUM(CASE EXCEPTION_C
         1                        0                       44                       24  SELECT COUNT(*),COALESCE(SUM(CASE EXCEPTION_C
         0                        0                        0                        0  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C
         1                      215                   132887                    13462  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C
         1                       18                     1035                      791  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C
         2                        7                    27753                     1236  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C

Again, very nice – it spotted all of the RealTime DBAExpert Dynamic SQL access programs in use…

Undynamic Duo?

Ok, now the opposite of that query: show me the SQLs that are like them, but are not them!

SELECT AI_SEMANTIC_CLUSTER( PROGRAM,                      
                           'DSNTIAD',                     
                           'IQADBACP',                    
                           'SEDBTIAA') AS AI_VALUE        
      ,A.WLX_TIMESTAMP                                    
      ,A.STMT_ID                                          
      ,A.STMT_TIMESTAMP                                   
      ,SUBSTR(A.PRIM_AUTHOR , 1 , 8) AS PRIM_AUTHOR       
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM               
      ,SUBSTR(A.REF_TABLE , 1 , 18) AS REF_TABLE          
      ,A.EXECUTIONS                                       
      ,A.GETP_OPERATIONS                                  
      ,A.ELAPSE_TIME                                      
      ,A.CPU_TIME                                         
      ,A.STMT_TEXT                                        
FROM IQA061QB.IQATW001 A                                  
WHERE A.PROGRAM NOT IN ('DSNTIAD', 'IQADBACP' ,'SEDBTIAA')
--AND A.STMT_ORIGIN = 'D'                                 
--ORDER BY 1 DESC -- SHOW BEST FIRST                      
ORDER BY 1 -- SHOW WORST FIRST                            
FETCH FIRST 10 ROWS ONLY ;                                

and the output:

+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-------
STMT_ID  STMT_TIMESTAMP              PRIM_AUTHOR  PROGRAM   REF_TABLE                        EXECUTIONS          GETP_OPERATIONS
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-------
  39974  2023-01-11-09.15.29.336208               DSMSUMA   SYSVOLUMES                                3                        9
  39973  2023-01-11-09.15.29.336151               DSMSUMA   SYSDATABASE                               3                        6
  39976  2023-01-11-09.15.29.363187               DSMSUMA   SYSTABLEPART                              3                      408
  39975  2023-01-11-09.15.29.343490               DSMSUMA   SYSTABLES                                 3                      870
  39981  2023-01-11-09.15.29.459879               DSMSUMA   SYSPLAN                                   3                        9
  39980  2023-01-11-09.15.29.449311               DSMSUMA   SYSTABLESPACE                             3                      399
  39979  2023-01-11-09.15.29.381912               DSMSUMA   SYSINDEXES                                3                     1970
  39978  2023-01-11-09.15.29.370762               DSMSUMA   SYSINDEXES                                3                      786
  39977  2023-01-11-09.15.29.369892               DSMSUMA   SYSINDEXPART                              3                      381
  39972  2023-01-11-09.15.29.336020               DSMSUMA   SYSSTOGROUP                               3                        6

Aha! It found a little assembler program that fires off SQL like the top three!

The Apple doesn’t Fall far from the Tree

Finally, I want to see which programs behave like IQADBACP (our main dynamic SQL driver program):

SELECT AI_SIMILARITY( PROGRAM,              
                     'IQADBACP') AS AI_VALUE
      ,A.WLX_TIMESTAMP                      
      ,A.STMT_ID                            
      ,A.STMT_TIMESTAMP                     
      ,SUBSTR(A.PROGRAM , 1 , 8) AS PROGRAM 
      ,A.EXECUTIONS                         
      ,A.GETP_OPERATIONS                    
      ,A.ELAPSE_TIME                        
      ,A.CPU_TIME                           
      ,A.STMT_TEXT                          
FROM IQA061QB.IQATW001 A                    
WHERE NOT A.PROGRAM = 'IQADBACP'            
  AND A.STMT_ORIGIN = 'D'                   
ORDER BY 1 DESC -- SHOW BEST FIRST                      
--ORDER BY 1 -- SHOW WORST FIRST                            
FETCH FIRST 10 ROWS ONLY;                   

And the output:

---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---
               AI_VALUE  WLX_TIMESTAMP                               STMT_ID  STMT_TIMESTAMP              PROGRAM
---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---
+0.4575602412223816E+00  2023-02-02-10.45.26.535375                        7  2023-01-17-16.50.54.118774  O2DB6X 
+0.4575602412223816E+00  2023-01-06-05.27.28.779825                        7  2023-01-17-16.50.54.118774  O2DB6X 
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      220  2023-01-20-10.11.14.618038  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      222  2023-01-20-10.11.38.136712  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      221  2023-01-20-10.11.21.993833  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      252  2023-01-20-10.55.07.078652  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      251  2023-01-20-10.54.37.901247  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      233  2023-01-20-10.47.23.961076  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      232  2023-01-20-10.46.59.756430  DSMDSLC
+0.4400676488876343E+00  2023-01-06-05.27.28.779825                      224  2023-01-20-10.33.42.609175  DSMDSLC

+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-
EXECUTIONS          GETP_OPERATIONS              ELAPSE_TIME                 CPU_TIME  STMT_TEXT                                    
+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+-
         0                        0                        0                        0  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C
         2                        7                    27753                     1236  SELECT COALESCE(COALESCE(A.DBNAME,B.DBNAME),C
         2                     1387                    57974                    14900  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         6                     4170                    68943                    53330  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         6                     4596                   286233                    99773  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         1                      851                    55367                    42542  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         1                      298                   122961                    24848  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         2                     1260                    68272                    48952  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         1                      192                     3395                     2508  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000
         3                      810                    43520                    23771  SELECT CASE WHEN B.VCATNAME < ' ' THEN '00000

Again, it found all of the correct programs.

Quibble Time!

I did find some small problems…

I use ALIASes a lot and they appear in the drop-down selection box when in “Add object”, but if you choose one as an AI Object:

This then leads on to the second quibble… The red windowed error messages stay there until you click them away. This can lead you to believe that a problem exists when, in reality, everything is groovy!

I also found out that the spinning wheel completes if you wait for Spark and LOAD and then go back and forward on the panel.

Finally, the way you move around the product is a bit odd… Sometimes you use the browser back function, sometimes you click a “Back” button, sometimes you click a breadcrumb, and sometimes there are multiple options hidden under triple vertical dots which change depending on where you are in the process.

I am sure these little UI bugs will all get ironed out very quickly!

End of Quibbles.

First Baby Steps Taken!

This little trip into the AI world is really just the tip of the iceberg. I will be doing many more AI queries over the coming months, and I hope to show all my results, either here or in another one of my newsletters, and/or at the German GUIDE in April 2023 and, hopefully, at IDUG 2023 as well.

If you have any questions about AI, do not be afraid to ask; and if not me, then ChatGPT!

TTFN

Roy Boxwell

2023-02 It’s AI Jim, but not as we know it!

Ok, ok, I am a little bit of a geek… But in my defense, at least I have started kicking around with Artificial Intelligence and not just ChatGPT! This month, I wish to dip my toes into the icy, cold waters of AI and show you what you can do in Db2 13 right out-of-the-box!

What’s in a BiF?

Db2 13 FL500 brings three new scalar BiFs: the SQL Data Insights functions. They come supplied with Db2 13, but you do have to install a bunch of stuff to actually get them working (so not really out-of-the-box, but close!)

Five Easy Steps?

First, you need to make sure you have all the prereqs in place. These are basically a couple of APARs for the IBM Z Deep Neural Network Library (zDNN), the z/OS Supervisor, IBM OpenBLAS, z/OS OpenSSH and IBM 64-bit SDK for z/OS Java. zDNN and OpenBLAS come with z/OS 2.4/2.5, but without the required APARs the libraries may be empty.

SQL Data Insights (SQL DI) is a kind of no-charge add-on to Db2 13, so you need to order and install it separately (FMID HDBDD18).

All the prereqs are listed in the Db2 docu: Preparing SQL Data Insights installation

Now the Install

Then you need to install and customize SQL DI, starting with the definition of a (technical) user along with its appropriate authorization (Configuring setup user ID for SQL Data Insights). They’re asking for 100 GB of storage for the zFS home directory, but I think you’ll only need that when you start to run AI model training on vast amounts of data. For my first tiny steps into the world of Db2 AI, a tenth of that worked without any problems. That may well change with my upcoming tests! The requirements listed for CPU and system memory are just as hefty, and I’m experiencing very measurable CPU consumption whenever the model training on an object starts.

RACF for Advanced Users!

The next step (Configuring user authentication for SQL Data Insights) is very likely a task for your RACF colleague, unless you have RACF SPECIAL authority (Who on earth *ever* has SPECIAL these days … any auditor would throw a right royal wobbly there!) or sufficient authority as described in RACDCERT command. RACDCERT manages RACF digital certificates and SQL DI needs that to allow secure (https) connections to the user interface, coming as an easy-to-use web application.

While You are Waiting, Sir…

While your RACF colleague is getting the (technical) user and the certificate in place, you can sneak into Db2’s SDSNSAMP lib to customize and execute DSNTIJAI. This guy creates the required database and pseudo-catalog tables as described in Configuring Db2 for SQL Data Insights. There are also GRANTs in the sample job, but I had to add another one for procedure DSNWLM_UTILS, since SQL DI uses that for LOAD.

And We are Finished!

Finally, you must do the SMP/E installation of SQL DI, followed by executing the installation script in the USS environment as described in Installing and configuring SQL Data Insights. USS scripts seem to be (along with certificates) the fun part of installing products these days. Carefully plan, and stick with, the values that you enter during the interactive installation of the sqldi.sh script. If you re-run the installation, for example, and decide on another SQLDI_HOME, your .profile will have a # Generated by SQL Data Insights installation script section that will not be updated.

The script also starts SQL DI and SPARK (needed and installed by SQL DI). However, there seem to be some very low internal timeout values set for verifying the successful start – at least in my environment. The script complained that the start had failed, but it was all up and running fine.

After you have verified the successful installation as described at Verifying the installation and configuration of SQL Data Insights, you can start having fun with SQL DI. I, however, decided to additionally go for the optional step Creating a started task for the SQL Data Insights application. If you intend to use SQL DI more than once, and maybe have your colleagues also work with it, I think this piece is a must. Be aware that there are a couple of adjustments needed to make it work:

  • The sample STDENV, as well as the samples in the SQLDAPPS STC sample job, don’t have all the definitions of the .profile sample, so I added those manually.
  • The AI model training failed, complaining that zade wasn’t found in java.library.path, and I was only able to fix that by manually setting additional environment variables as described in IBM Z Artificial Intelligence Data Embedding Library environment.
  • The _CEE_RUNOPTS="…" sample didn’t work for me and I had to remove the quotation marks to make it look like _CEE_RUNOPTS=FILETAG(AUTOCVT,AUTOTAG) POSIX(ON)
  • Starting/stopping SQLDAPPS using the SQLDAPPS sample job triggers sqldi.sh with the start/stop option. This is accomplished by two members in the PDS specified by the STDPARM DD card. The STOP is issued by command /s SQLDAPPS,OPTION='SQLDSTOP' (for whatever reason not by STOP SQLDAPPS) and correctly refers to member SQLDSTOP. The START is issued by command /s SQLDAPPS,OPTION='STRT'. However, the member is actually called SQLDSTRT, so you must either change the STC sample job’s default option to SQLDSTRT or rename the member to STRT.

There is also an optional step to create an STC for the SPARK cluster in the docu (Creating started tasks for the Spark cluster). Short story: Skip it, because SQL DI will start the SPARK cluster (master and worker) automatically anyway.

Here is a link containing a really good overview:

https://developer.ibm.com/articles/use-ibm-db2-sql-data-insights-to-uncover-hidden-relationships-in-your-data/

Up and Running?

Once installed and ok, you can then kick off the web interface. The first page is a login panel of course:

Time to get Connected

Here you can see that I have already defined my little test Db2 13 to the system:

If you click on “Add connection” you get a pop-up to define how to get to another Db2 using a certificate or a userid and password, very similar to setting up a Data Studio connection. Click on the three dots and you get a drop-down box:

Here you can disconnect, Edit (which shows you the same window as “Add connection”), List AI objects or Delete.

Starting to Feel Intelligent!

Choosing List AI objects you see what I have created:

I’m Sorry, Dave, I’m Afraid I can’t do That.

Sadly, it failed – clicking on the downward arrow expands the view:

When I first heard “zade” I thought of Sade – “Smooth Operator” for some reason… The explanation is actually a hint that something is wrong with the JAVA definitions. It could be a classpath problem or a version problem. This is being worked on as I write this!

Can you hear me, HAL?

I will cover Enable AI query later; first, Analyze data takes you to the next window:

Just the Facts, Ma’am

Click on Data statistics for a different view:

Column influence:

Has no function yet as the training failed…

An Overview is Good

Selecting View model shows you the history of this model:

Naturally, Cluster center shows this:

Is There an Export Limit?

Export Columns creates a JSON file:

Which then looks like:

Pretty horrible if you ask me!

Skipping back to HAL

Going back to the List AI Objects window, there are two blue buttons: Add object and Run query.

Add object is how I added my first table. It is basically a catalog browser where you can pick the table of choice and, once selected, the greyed-out boxes at the bottom turn blue and you may click on Add object or Enable AI query. Having added a table, you then do indeed click on Enable AI query to decide which columns have which DI data type. This is the most critical moment in the whole process!

When you select a column, you can decide on a data type:

Categorical, Numeric or Key. Once you have decided this for all of the columns click on Next:

Here you get the Filter chance:

Last Chance to Turn Back…

Now you have the choice to filter again or just click on Enable:

This now kicks off an Apache Spark process in the background that sucks all of the CPU and memory from your system for a while and builds a new vector table in the background. The vector table actually contains the model data to be used by the new BiFs.

In SPUFI you can find your vector table:

It is “AIDB_” concatenated with your table creator, an underscore, and then the table name.

There are only three columns and, naturally, due to the failure of the model training, my vector table is empty… The COLUMN_NAME here is the column name that is referred to from now on in this blog.
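If you just want to hunt down all the vector tables on a system, the AIDB_ naming convention makes a simple catalog query possible – a sketch, assuming no other tables on your system happen to start with AIDB_:

```sql
-- List all SQL DI vector tables by their AIDB_ naming convention
SELECT ST.CREATOR, ST.NAME, ST.DBNAME, ST.TSNAME
FROM SYSIBM.SYSTABLES ST
WHERE ST.NAME LIKE 'AIDB!_%' ESCAPE '!'
  AND ST.TYPE = 'T'
ORDER BY ST.CREATOR, ST.NAME
WITH UR;
```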

Round up the Usual Suspects!

The vector data tables are very, very special… you cannot simply copy them between systems like normal data and you must make sure they stay in step with their partner “real data” tables – and here I specifically mean image copies!

Once a model is trained, you can use the new BiFs. The documentation is a bit sparse and it does not explicitly say you can use numeric data types, but it also does not explicitly say you cannot! In fact, the list of “cannot use” types is always the same:

BINARY, VARBINARY, CLOB, BLOB, DBCLOB, XML or ROWID.

The maximum length of the column used is 1868 bytes.
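Since the excluded types and the length limit are fixed, you can pre-check a candidate table with a catalog query. This is just a sketch: the creator and table name are placeholders for your own table, and the COLTYPE values are the Db2 catalog encodings (e.g. VARBIN for VARBINARY):

```sql
-- Columns of a candidate table that should be usable for SQL DI
SELECT CO.NAME, CO.COLTYPE, CO.LENGTH
FROM SYSIBM.SYSCOLUMNS CO
WHERE CO.TBCREATOR = 'MYUSER'   -- placeholder creator
  AND CO.TBNAME    = 'CHURN'    -- placeholder table
  AND CO.COLTYPE NOT IN ('BINARY', 'VARBIN', 'CLOB', 'BLOB',
                         'DBCLOB', 'XML', 'ROWID')
  AND CO.LENGTH <= 1868
ORDER BY CO.NAME
WITH UR;
```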

Where’s the Beef?

Here are the currently available AI BiFs with examples taken from the docu.

AI_ANALOGY is the human language equivalent of Source-1 is to Target-1 as Source-2 is to Target-2.

The column name is the identifier which points to the model and column name to be used for this AI function (COLUMN_NAME in the vector table). If it is not given, then the expression determines the column name, or it is just the actual table column name. The model specified must, obviously, be the same for both Sources and the same for both Targets.

Here’s an example showing the syntax:

The customer with ID ‘1066_JKSGK’ has churned. Given the relationship of that customer to ‘YES’ in the churn column, find customers with the same relationship to ‘NO’ in the churn column, in other words, customers unlikely to churn.

SELECT AI_ANALOGY('YES'        USING MODEL COLUMN CHURN,
                  '1066_JKSGK' USING MODEL COLUMN CUSTOMERID,
                  'NO'         USING MODEL COLUMN CHURN,
                  CUSTOMERID),
       CHURN.*
FROM CHURN
ORDER BY 1 DESC
FETCH FIRST 5 ROWS ONLY ;

The result is a double-precision floating point number. The larger the positive value, the better the analogy; a value of -1 indicates a poor analogy. Caution must be used, as the result can also be NULL if any of the arguments are NULL.
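Because of that NULL behaviour, it can be worth wrapping the BiF in a nested table expression so that NULL scores are filtered out before the ORDER BY. A sketch based on the example above:

```sql
-- Same analogy query, but NULL scores removed before ranking
SELECT T.*
FROM (SELECT AI_ANALOGY('YES'        USING MODEL COLUMN CHURN,
                        '1066_JKSGK' USING MODEL COLUMN CUSTOMERID,
                        'NO'         USING MODEL COLUMN CHURN,
                        CUSTOMERID) AS SCORE,
             C.*
      FROM CHURN C) T
WHERE T.SCORE IS NOT NULL
ORDER BY T.SCORE DESC
FETCH FIRST 5 ROWS ONLY;
```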

AI_SEMANTIC_CLUSTER returns a clustering score for the member-expr value among the cluster of values defined in the clustering expression list, which may be repeated up to three times. Like AI_ANALOGY, the model to be used is either the member-expr column name or the explicit column name.

Here’s an example showing the syntax:

Customers with IDs ‘0280_XJGEX’, ‘6467_CHFZW’ and ‘0093_XWZFY’ have all churned. If we form a semantic cluster of those three customers, find the top 5 customers that would belong in that cluster.

SELECT AI_SEMANTIC_CLUSTER(CUSTOMERID,
                           '0280_XJGEX', '6467_CHFZW', '0093_XWZFY'),
       CHURN.*
FROM CHURN
ORDER BY 1 DESC
FETCH FIRST 5 ROWS ONLY ;

The result is a double precision floating point number between -1.0 and +1.0 that is the semantic clustering score. A larger positive number indicates a better clustering among the list of clusters. Caution must be used, as the result can also be NULL if any of the arguments are NULL or were not seen during training.

AI_SIMILARITY returns a similarity score for the two expressions. Like AI_ANALOGY, the model to be used is either the expression column or the explicit column name.

Here are two examples showing the syntax:

Find the top five customers by ID most similar to the customer with ID ‘3668-QPYBK’.

SELECT AI_SIMILARITY(CUSTOMERID,
                     '3668-QPYBK'),
       CHURN.*
FROM CHURN
ORDER BY 1 DESC
FETCH FIRST 5 ROWS ONLY;

Find the top three payment methods most similar to ‘YES’ in the CHURN column.

SELECT DISTINCT AI_SIMILARITY(PAYMENTMETHOD,
                              'YES' USING MODEL COLUMN CHURN),
                PAYMENTMETHOD
FROM CHURN
ORDER BY 1 DESC
FETCH FIRST 3 ROWS ONLY ;

The result is a double-precision floating point number (FLOAT) that is the similarity score between -1.0 and 1.0, where -1.0 means that the values are least similar, and 1.0 means that they are most similar.

With this BiF you can get NULL returned if the columns of interest are numeric types but the value is outside of the range of FLOAT. Further, NULL is returned if any argument is NULL and also if the value was not available during training and the column is not numeric.

Follow the Money!

This is all you get out-of-the-box in Db2 13 – The real question is, “For which business use cases does it make sense to use these BiFs?” That is naturally a very hard question and next month I hope to bring you real life AI examples from my test data (as long as I can get my Models trained!)

TTFN

Roy Boxwell

2023-01 IDUG EMEA 2022 review

Hi All!

This year (well, strictly speaking last year…) IDUG EMEA changed from running Monday to Wednesday to being from Sunday to Tuesday. This caught a few people out with travel plans, etc. but all in all it was ok as a “one-off”. It was also held, after a two-year COVID delay, in the beautiful city of Edinburgh, where it normally rains a day longer than your stay, but I only had rain on one day! Mind you, that did coincide with tornado-style, umbrella-shredding winds that made going anywhere more like a swim, but what can I say? The Haggis was excellent, the whiskies were simply gorgeous and all the people were incredibly friendly. Not just the visitors to the IDUG either!

All a bit late, I know, but I have been terribly busy doing other “real” work… So, with no further ado, off we go through the Db2 for z/OS IDUG presentations!

Please remember: to use the links below you *must* have been at IDUG EMEA 2022 and/or be an IDUG Premium member, and remember your IDUG userid and password!

Starting with Track A: Db2 for z/OS I

A01 Db2 13 for z/OS and More! from Haakon Roberts and Steven Brazil gave a great intro to Db2 13, the history behind it and the AI-driving force within. Finishing off with Db2 13 highlights. Our very own Andre Kuerten rated Haakon as the best overall presenter by the way.

A02 Db2 13 for z/OS Performance Overview, from Akiko Hoshikawa did what it said. A deep dive through all the performance stuff, including updates on Db2 12 and synergy with z Hardware.

A03 Db2 13 for z/OS Migration, Function Levels and Continuous Delivery from “The Dynamic Duo” of Anthony Ciabattoni and John Lyle was a review of all the Db2 12 functions and then an update on how to do the migration to Db2 13 – Which should be faster and less troublesome than to Db2 12.

A04 Now You See It, Unveil New Insights Through SQL Data Insights from Akiko Hoshikawa was the first of many presentations to go into depth about AI, as various bits of AI are now available out-of-the-box with Db2 13. Still a few things to do of course… 50GB of USS data for the SPARK for example … but at least no need for machine learning (ML). At the very end was also a glimpse into the future and the next three BiFs coming our way soon!

A05 Getting Ready for Db2 13 from John Lyle was another review of continuous delivery and how to get to Db2 12 FL510, which is the migration point for Db2 13 FL100.

A06 Db2 for z/OS Utilities – What’s New? from Haakon Roberts was the usual excellent presentation and run down of all the latest new stuff on the utilities front and also any Db2 13 stuff as well. As always well worth a read!

A07 BMC Utilities Update was a complete review of all BMC utilities.

A08 The Latest Updates on Broadcom Db2 Utilities was a complete review of all Broadcom utilities.

A09 Db2 Z Network Encryption: Overview and How to Identify End-Users Using Open Port from Brian Laube was a fascinating presentation all about fully understanding this problem as it is, or will be, a problem in all shops at some point in the future! The presentation also included an example NETSTAT from TSO to help you find the “bad guys” before you switch to 100% SECPORT usage. At the end was a very nice list of AHA requests/ideas/wishes which I would also appreciate if some people voted for!

A10 Do you know? SMF Records, IFCIDs, Trace Classes – What Does it all Mean? from Sally Mir was all about traces, SMF, IFCIDs, etc.

A11 The Db2 12 and beyond with the latest real customer experiences … from Ute Kleyensteuber. Ute discussed the experiences with Db2 12 and some of the challenges she had to face, and then some Db2 13 early experiences.

A12 Db2 13 for z/OS Migrations – Major Process Changes Detailed from John Lyle. He explained what the Catalog and Directory are and how they have changed over the years, also how to pre-migrate check and then migrate your sub-system.

A13 Let Your Cloud Applications Get the Most From Your z Data – Introducing IBM Data Fabric from Sowmya Kameswaran was to introduce IBM Data Fabric with Cloud Pak for Data.

A14 Express Yourself from Marcus Davage was an excellent romp through the horrors of REGEX and ended up with a quick way to solve WORDLE…all a bit dubious if you ask me! What with the recursive SQL from Michael Tiefenbacher and Henrik Loeser solving Sudoku and now Marcus solving Wordle what is left for our brains to do??? The link to github for Sudoku is:

https://github.com/data-henrik/sql-recursion

Thanks to Michael and especially to Henrik for the link.

A15 A Row’s Life from Marcus Davage. This is what a row goes through in its daily life – fascinating! A deep technical dive into the data definition of pages etc etc.

A16 Db2 13 for z/OS Install and Migration Using z/OSMF Workflows from Sueli Almeida. This presentation showed how you can indeed use z/OSMF for provisioning now!

Now we switch to Track B Db2 for z/OS II where we start with B02 as there was no B01:

B02 Continuous Delivery – Navigating from Db2 12 to Db2 13 from Chris Crone. This was another review of CD in Db2 12 and 13 but included a review of ZPARMS that have been deleted/deprecated/updated in Db2 13.

B03 Db2 13 for z/OS Application Management Enhancements from Tammie Dang. The highlights for applications were reviewed, highlighting timeouts and deadlocks and the way that SYSTEM PROFILES can easily help you get what you want without massive REBINDs.

B05 Ready Player One for Db2 13! from Emil Kotrc. This was another recap of Db2 13 migration, CD, Db2 12 FL501 to FL510 but also with deprecated functions and incompatible changes.

B06 Getting RID of RID Pool RIDdles from Adrian Collett and Bart Steegmans was an entertaining sprint through What is an RID? and When do I have a real RID problem?

B07 Mastering the Setup for Integrated Synchronization in Data Sharing from Christian Michel was all about network setup for encryption and certificates, as well as high availability for multiple accelerators.

B08 Things About Db2 for z/OS I Wish I’d Remember When…. from Michael Cotignola. A quick run through interesting stuff that has changed over time especially RTS usage.

B09 Speed up your Image Copies consistently and non-disruptive! from Eva Bartulej and Chris Duellmann was a description of one way to Clone data in production down to test.

B10 Db2 for z/OS Data Sharing: Configurations and Common Issues from Mark Rader. Was a very interesting presentation about exactly how data sharing hangs together and the various ways you can break it or make it better. A Must Read if you have, or are planning on, going to a data sharing system! His anecdote about a forklift truck crashing through the back wall of the data center brought home that “disasters” are just waiting to happen…

B11 Get Cozy with Traces in Db2 for z/OS from Denis Tronin was a great intro into the usage of Traces for Db2 for z/OS. Included was also an update of which traces have been changed or introduced for the new Db2 12 and 13 features.

B12 Partitioning Advances: PBR and PBR RPN from Frank Rhodes. This gave an excellent review of the history of tablespaces in Db2. Then the new variant PBR RPN was introduced, and how to get there.

B13 Managing APPLCOMPAT for DRDA Application from Gareth Copplestone-Jones. Another review of CD and APPLCOMPAT but this time looking at NULLID packages and the special problems they give you!

B14 Afraid of Recovery? Redirect Your Fears! from Alex Lehmann. This showed the new redirected recovery feature with some background info and a summary about why it is so useful!

B15 Db2 SWAT Team Perspective on Db2 13 Features that Maximize Continuous Availability from Anthony Ciabattoni. A review of all the features that allow you to change things while the system is running with no outage.

B16 DB2 z/OS Recovery and Restart from Thomas Baumann. This is a *one* day seminar… The IDUG live talk was just the first hour (first 34 slides!) and if you ever wish to consider doing RECOVER – Read it all!

B17 Security and Compliance with Db2 13 for z/OS from Gayathiri Chandran. Was all about the IBM Compliance Center and a list of all Audit relevant info in Db2 (Encryption, Audit policies etc.)

Now off to Track E AppDev & Analytics I:

E01 When Microseconds Matter from Thomas Baumann. This was all about tuning a highly tuned system. Where, even if you have 50 microsecond CPU SQL times, you can squeeze even more out of the Lemon! Included are complete SQLs for examining your DSC for tuning candidates that “stayed below the Radar” before.

E02 Customer Experiences and best practices from Accelerator on Z Deployments from Cüneyt Göksu. This was all about Accelerator usage of course and included a bunch of “Lessons learned” when migrating through to V7.5.

E03 Our Migration journey towards UTS and DB2 12 FL510: Approach and Status from Davy Goethals. This was all about the effort to get all TS’s to UTS and to get to Db2 12 FL510 ready for Db2 13. Especially interesting for the QMF data going to implicit TS’s now.

E04 Db2 for z/OS Locking for Application Developers from Gareth Copplestone-Jones. All about locking from the developer’s POV. Contains Lock Size recommendations and descriptions of Locking, and Cursors. A very good read!

E05 Simplify database management with package rebind in an 24×7 application environment from Tammie Dang. This was all about the REBIND Phase-in which really, really helps keep your system 24×7! Also worth mentioning is the Db2 12 PTF UI73874 (APAR PH28693 Closed 2021-02-09) which *must* be installed!

E06 Mainframe Modernization – What about the data? from Greg DeBo. This was all about data and who has security, who owns it, needs it and what can you do with it when you no longer need it?

E07 Beginners guide to Ansible on z/OS from Sreenivas Javvaji started with the pre-reqs like Ubuntu and then adding Ansible and installing the IBM z/OS Core Collection finishing off with yaml.

E08 Sharing Early Experience with Db2 AI for z/OS from Fatih Aytemiz. This was the customer view of early usage of AI at their bank.

E09 Access Paths Meet Coding from Andy Green. Contains a DBA view of how application developers let access paths “slide” over time until incidents start to happen, and how to correct this by putting back “optimization” into the development process. Extremely useful presentation for Application Developers and SQL coders!

E10 SQL Injection and Db2 – Pathology and Prevention from Petr Plavjaník. A cross platform presentation all about SQL injection and how it can bite you… A very important take away is that it is not “just” an LUW or distributed problem. Dynamic SQL in COBOL can just as easily be injected…

E11 All you ever wanted to know about Accelerator on Z Monitoring from Cüneyt Göksu. Did what the label said!

E12 What Db2 Can Do; I Can Do Too – First Steps Towards Machine Learning from Toine Michielse. An introduction to AI and ML along with explaining all of the current Models and training mechanisms.

E13 How Can Python Help You with Db2? From Markéta Mužíková and Petr Plavjaník. Everything you ever wondered about Python but were afraid to ask! Included an installation list explaining some of the more weird environmental variables of the pyenv.sh

E14 Use Profiles to Monitor and Control Db2 Application Context from Maryela Weihrauch. This was all about one of the, in my personal opinion, most underused features of Db2 on z/OS. They have been around for years and they enable so much e.g. Global Variables, driver upgrades, RELEASE(DEALLOCATE) / RELEASE(COMMIT) etc etc

E15 -805 Explained from Emil Kotrc. This explained the whole background of program preparation Precompile, Compile, Link and BIND. Which consistency token goes where and when is it validated?

E16 The Exciting Journey Towards Devops on the Mainframe from Toine Michielse. This was all about DevOps and how the Open Mainframe is reflected in this environment with Zowe, git etc.

E17 Data, I just can’t get enough, Data Archiving in Db2 from Roberto Cason. Was all about data and when to archive it and more importantly – how?

Then Track F for AppDev & Analytics II:

F03 Is it worth to migrate CICS cobol app to Windows .net ? from Mateusz Książek. Naturally I am a little bit biased here, as I would always say “NO!”. However, Mateusz goes on to explain the difficulty of monitoring and comparing the results. It was a bit like apples and oranges after all, and he ended on a Pros and Cons slide where you must decide …

F04 COBOL abound from Eric Weyler. This was all about the remarkable life of COBOL and how it is *still* nailed to its perch! There are “new” forms like gnuCOBOL and new front ends like VS Code and of course Zowe and Web GUIs with z/OSMF.

F05 Getting the Most Value from Db2 for z/OS – No Matter what the Version or Function Level from Bob Bersano. This was a “call to arms” to actually use the existing functionality that you have in Db2 for z/OS versions 12 and 13.

F06 Declared Global Temporary Tables (DGTT) User Stories from Kurt Struyf. Explained the differences between CGTT and DGTT, why you would want to use a DGTT and things to know, especially EXPLAIN usage.

F07 War of the Worlds – Monolith vs Microservices from Bjarne Nelson. This session highlighted the intrinsic difficulties of going to microservices (Rest et al) in comparison to the “normal” DBMS ACID style. Finishing with “When to use microservices and when not to!”

F08 SYSCOPY: You cannot live without it! from Ramon Menendez. Detailed everything about this very important member of the Db2 Catalog. It also covered the new things in Db2 12 and 13 as well as a quick look at how SYSIBM.SYSUTILITIES interacts with it.

F09 Playing (with) FETCH from Chris Crone. This was an informative session all about FETCH where even I learnt something … shock, horror!

F10 A Tale of Two Extensions : Db2 Family Support in Visual Studio Code from Philip Nelson. This was a deep dive down into VS Code, IBM Db2 for z/OS Developer Extension and Db2 Connect (not the old one, the new one!)

F13 Your Statistics are Safe with Me, Statistics Profile Revealed from Nilufer Osken. Everything you wanted to know about Db2 statistics and statistic profile usage.

F15 Making Db2 on the Mainframe Great Again with the Linux Foundation Zowe Tooling from Joe Winchester. This was a great Zowe presentation about how you can modernize your mainframe. My personal favorite is naturally slide 35 with the Green Screen emulator…

F17 Explain explained from Julia Carter. This was an introduction in how to use and understand the EXPLAIN statement and its output to help in correcting badly-running SQL.

Finally Track G Daily Special (Sounds like a restaurant…) :

G01 Db2 for z/OS Security – An Introduction from Gayathiri Chandran. This is everything you need to know about security on z/OS.

G02 Back to Basics: High Performance Application Design and Programming from Tony Andrews. This was all about understanding SQL accesses and Db2 access paths using EXPLAIN. Then all the different type of joins were discussed as well as SORTs.

G03 Do I Really Need to Worry about my Commit Frequency? An Introduction to Db2 Logging from Andrew Badgley. Explained the BSDS, the Active and Archive logs and how they all interact with UOW. A recommendation here during Q&A was to COMMIT about every two seconds, and one war story was of a site that had an eight hour batch run which was then deemed to have gone rogue and was duly cancelled… It started rolling back and took a while… Db2 was then shut down – It didn’t of course… IRLM was then cancelled, Db2 came crashing down. Db2 was restarted… Hours passed as it *still* did its ROLLBACK, they cancelled it again… Then restarted and then waited hours for it to actually (re)start properly…

G04 Db2 Logging Basics & Exploitation Beyond Recovery from Steen Rasmussen. This explained how Db2 actually does logging and why we should be interested in it!

G05 The Trilogy of DB2’s “Originating” Address Spaces–Mainframe, DDF & Stored Procedures Measure/Tune from Thomas Halinski. Explained all the z/OS parts of Db2 and then listed out the different “areas” of tuning that are possible.

G17 Esoteric functions in Db2 for z/OS from Roy Boxwell. Naturally the best presentation of the EMEA *cough, cough* Could be a little bit biased here… If you wanted to know about weird stuff in Db2 for z/OS then this was your starting point. Any questions drop me a line!!!

My very own Oscar

I was also very proud, and happy, to be awarded an IBM Community Award as a “Newbie IBM Champion” for 2022! It was at the same time as John Campbell got his for lifetime achievement and was doubly good as I first met John waaaay back in the 1980’s at Westland Helicopters PLC with DB2 1.3 on MVS/XA – Those were the days!

Please drop me a line if you think I missed anything, or got something horribly wrong. I would love to hear from you!

TTFN,

Roy Boxwell

2022-12 Are you ready for Db2 13?

Hi all! Welcome to the end-of-year goody that we traditionally hand out. This year is a relaunch of the Migration HealthCheck that we first did over two years ago. I’ve also provided some news about Db2 13 UTS PBR RPN spaces that might be of interest to you!

DB2 12 UTS PBR RPN

You might well know that this was my single favorite feature of Db2 12. What I did not really appreciate, until now, was the fact that getting these very big partitions can come with a major price!

WHAT IS THE PROBLEM?

If you have a Data Sharing system (who does not these days?) and you happen to use LOCKSIZE ROW on your UTS PBR RPN spaces, then you should take a good look at your performance monitor data. If you see a high number of P-Locks, and the number of false contentions is greater than your IRLM suspends, then BINGO!

THE FIX IS?

Documented in the Redbook “Db2 13 Performance Topics”, Chapter 5 Data Sharing, 5.1 “Partition-by-range table space relative page numbering enhancements”, is the information above, and a lot more. The crux of the matter is a new hash algorithm, and to get to it you must simply REORG any UTS PBR RPN spaces that were created prior to Db2 13 FL500.

HERE IS SOME SQL

Here is some SQL to list out any and all of your UTS PBR RPN table partitions that were created prior to Db2 13 FL500 and have not yet been REORGed or LOAD REPLACED.

First check that you are actually *at* Db2 13 R1 FL500!

SELECT LU.EFFECTIVE_TIME 
FROM SYSIBM.SYSLEVELUPDATES LU
WHERE 1 = 1
AND LU.FUNCTION_LVL = 'V13R1M500'
AND LU.OPERATION_TYPE = 'F'
WITH UR
FOR FETCH ONLY
;

This simply returns the time when FL500 was “activated” in your Db2 13 system. If it returns no rows then you cannot do anything…

THE USUAL SUSPECTS…

Then we get the Partitions of interest:

SELECT SUBSTR(TP.DBNAME , 1 , 8) AS DBNAME
      ,SUBSTR(TP.TSNAME , 1 , 8) AS TSNAME
      ,TP.PARTITION
      ,TP.CREATEDTS
      ,TP.REORG_LR_TS
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLEPART  TP
WHERE 1 = 1
  AND TS.DBNAME   = TP.DBNAME
  AND TS.NAME     = TP.TSNAME
  AND TS.TYPE     = 'R'  -- UTS PBR ONLY
  AND TS.LOCKRULE = 'R'  -- ROW LEVEL LOCKING ONLY
  AND TP.PAGENUM  = 'R'  -- UTS PBR RPN ONLY
  AND TP.CREATEDTS < (SELECT LU.EFFECTIVE_TIME
                      FROM SYSIBM.SYSLEVELUPDATES LU
                      WHERE 1 = 1
                        AND LU.FUNCTION_LVL   = 'V13R1M500'
                        AND LU.OPERATION_TYPE = 'F'
                     ) -- CREATED BEFORE FL500 ACTIVATED
  AND TP.REORG_LR_TS < (SELECT LU.EFFECTIVE_TIME
                        FROM SYSIBM.SYSLEVELUPDATES LU
                        WHERE 1 = 1
                          AND LU.FUNCTION_LVL   = 'V13R1M500'
                          AND LU.OPERATION_TYPE = 'F'
                       ) -- LAST REORG/LOAD BEFORE FL500 ACTIVATED
ORDER BY 1 , 2 , 3
WITH UR
FOR FETCH ONLY
;

This query uses the effective timestamp, the created timestamp and the last REORG/LOAD REPLACE timestamp to filter out all the partitions that do not need to be REORGed or LOAD REPLACEd.

Please also remember: you only need to do all this when you have row-level locking in data sharing with high P-Lock CPU overhead. The Redbook shows some very impressive CPU savings!
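Once the query has given you your hit list, the REORG itself is plain vanilla. Here is a minimal utility control statement sketch – the database, tablespace and partition number are purely hypothetical placeholders for the names the query returned:

REORG TABLESPACE MYDB.MYTS PART 1 -- hypothetical space and partition
      SHRLEVEL CHANGE             -- online REORG; at Db2 13 FL500+
                                  -- this picks up the new hash algorithm

Any REORG or LOAD REPLACE of the partition after FL500 activation does the trick; SHRLEVEL CHANGE is just the usual choice for availability.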

BACK TO THE HOLIDAYS!

Over the last two years we have added to, and enhanced, our Migration HealthCheck a lot. Improvements include testing what happens with DEFINE NO spaces that were created years (and releases) ago but would be externalized now. All of this means the output has changed, of course.

TELL ME MORE!

Here’s a list of all the deprecated (and semi-deprecated) items that should be checked and verified at your site:

  1. Use of SYNONYMS
  2. Use of HASH objects
  3. Use of segmented spaces
  4. Use of classic partitioned objects (not using table based partitioning)
  5. Use of simple spaces
  6. Use of six-byte RBA
  7. Use of BRF
  8. Use of LARGE objects (This is semi-deprecated)
  9. SQL EXTERNAL Procedures
  10. UNICODE (VARBIN Columns)
  11. Old RLF table defs
  12. Old PLAN_TABLE defs
  13. Old bound packages in use in the last 548 days
  14. Direct bound DBRMs (Yes, they can still exist!)

ANYTHING ELSE?

Well yes! You could also check how many empty implicit databases and how many empty tablespaces you have. While you are scanning your subsystem, it could also be cool to list out all the Db2 subsystem KPIs. What about seeing how many tables you actually have in multi-table tablespaces that, at some point, must also be migrated off into a UTS PBG or UTS PBR tablespace?
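If you want to count those multi-table tablespaces yourself, a simple catalog query does the trick. This is just a sketch of the idea – you may well wish to also exclude the Db2 Catalog and Directory databases:

SELECT TB.DBNAME
      ,TB.TSNAME
      ,COUNT(*) AS TABLE_COUNT
FROM SYSIBM.SYSTABLES TB
WHERE TB.TYPE = 'T'
GROUP BY TB.DBNAME , TB.TSNAME
HAVING COUNT(*) > 1
ORDER BY 3 DESC , 1 , 2
WITH UR
FOR FETCH ONLY
;

Every row returned is a tablespace that must, at some point, be migrated off into a UTS PBG or UTS PBR.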

WE DO IT ALL!

Our little program does all of this for you. It runs through your Db2 Catalog in the blink of an eye and reports all of the data mentioned above.

WHAT DOES IT COST?

Nothing – It is our freeware for 2022/2023 and you only have to be registered on our website to request it along with a password to run it.

HOW DOES IT LOOK?

Here is an example output from one of my test systems here in Düsseldorf:

Db2 Migration HealthCheck V2.3 for SC1 V12R1M510 started at  
2022-12-14-10.56.00
Lines with *** are deprecated features

Number of DATABASES : 594
# of empty DATABASES : 237
# of implicit DATABASES : 385
# of empty implicit DATABASES: 207

Number of TABLESPACES : 4861
of which HASH organized : 0
of which PARTITIONED CLASSIC : 2 ***
# Partitions : 32 ***
of which SEGMENTED : 294 ***
of which SIMPLE : 0
of which LOB : 67
of which UTS PBG : 4467
# Partitions : 4477
of which UTS PBR (Absolute) : 5
# Partitions : 801
of which UTS PBR (Relative) : 6
# Partitions : 756
of which XML : 20

Number of tablespaces as LARGE : 8 ***
Number of empty tablespaces : 28
Number of multi-table TSs : 55
# of tables within these : 239
Number of incomplete TS : 1 XXX
Number of INSERT ALG 0 TS : 4861
Number of INSERT ALG 1 TS : 0
Number of INSERT ALG 2 TS : 0

Number of tables : 10293
of which ACCELERATOR ONLY : 0
of which ALIASes : 5307
of which ARCHIVEs : 1
of which AUXs : 60
of which CLONEs : 0
of which GTTs : 136
of which HISTORYs : 1
of which MQTs : 1
of which TABLEs : 4765
of which VIEWs : 2
of which XMLs : 20
Number of tables with Audit : 101
Number of tables with Data Cap : 0
Number of tables incomplete : 1 XXX
Number of tables with control : 0

Number of RLF DSNRLMT__ tables : 0
of which columns deprecated : 0
Number of RLF DSNRLST__ tables : 1
of which columns deprecated : 0

Number of PLAN_TABLES : 68
of which deprecated : 3 ***

Number of SYNONYMs : 1 ***

Number of UNICODE V11 Columns : 0

Number of PROCEDURES : 110
of which SQL EXTERNAL : 0
of which EXTERNAL : 108
of which NATIVE SQL : 2

Number of FUNCTIONS : 87
of which EXTERNAL TABLE : 38
of which EXTERNAL SCALAR : 42
of which SOURCED AGGREGATE : 0
of which SOURCED SCALAR : 0
of which SQL TABLE : 1
of which SQL SCALAR : 6
of which SYSTEM-GENERATED : 0

Number of Indexes : 23243
of which HASH : 0
of which type 2 : 23210
# of partitioned IXs : 6
# Partitions : 160
of which DPSI : 18
# Partitions : 164
of which PI : 15
# Partitions : 1138
Number of indexes COPY YES : 38
Number of indexes COMPRESS YES : 0

Number of table partitions : 6606
of which DEFINE NO : 2848
of which six byte RBA <11 NFM: 0
of which six byte RBA Basic : 0
of which ten byte RBA : 3759
Number of TP in BRF : 0
Number of TP with COMPRESS Y : 498
Number of TP with COMPRESS F : 0
Number of TP with COMPRESS H : 0
Number of TP with TRACKMOD YES : 2968

Number of index partitions : 24666
of which DEFINE NO : 20140
of which six byte RBA <11 NFM: 0
of which six byte RBA Basic : 0
of which ten byte RBA : 4527

Number of STOGROUPS : 10
Number of non-SMS VOLUMES : 0

Number of PLANs : 54
 of which DBRMs direct : 0
# of SQL statements : 0
Number of PACKAGES (total) : 5788
of which VALID = A : 42
of which VALID = H : 0
of which VALID = N : 44
of which VALID = Y : 5702
of which VALID = S : 0
of which OPERATIVE = N : 0
of which OPERATIVE = Y : 5788

Old RELBOUND executed packages : 0

Number of PACKAGES (distinct) : 480

Number of Original PACKAGES : 0
Number of Previous PACKAGES : 0
Number of Phased-out PACKAGES : 0
Total number of PACKCOPY : 0
of which VALID = A : 0
of which VALID = H : 0
of which VALID = N : 0
of which VALID = Y : 0
of which VALID = S : 0
of which OPERATIVE = N : 0
of which OPERATIVE = Y : 0
Number of SQL statements : 441833

Db2 Migration HealthCheck V2.3 for SC10 V12R1M510 ended at
2022-12-14-10.56.03

Db2 Migration HealthCheck ended with RC: 0

Any line with *** at the end means that you have something to do at some point in the future. The names of all the found objects are written to DD card DEPRECAT so you can then start building a “to do” list. I would start now to slowly “fix” all of these before it is 03:00 in the morning, someone is migrating to Db2 14 FL 608 and it all goes horribly wrong…

Any line with XXX means that you have an incomplete definition for a tablespace and/or a table. These should be fixed as well, either by completing the definition or dropping the unfinished object(s).

WHAT’S WRONG WITH LARGE?

This is not actually deprecated, but any tablespaces marked as LARGE also tend not to have a valid DSSIZE in them. This is fine if you have built a CASE construct to derive the value from the tablespace definition, but what you should do is an ALTER and a REORG to “move” the LARGE to a “proper” tablespace. IBM and 3rd-party software vendors hate having to remember that ancient tablespaces are still out there!
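Finding the LARGE candidates yourself is easy, as in SYSIBM.SYSTABLESPACE a TYPE value of 'L' marks a space defined as LARGE. A little sketch:

SELECT TS.DBNAME
      ,TS.NAME
      ,TS.DSSIZE
FROM SYSIBM.SYSTABLESPACE TS
WHERE TS.TYPE = 'L'
ORDER BY 1 , 2
WITH UR
FOR FETCH ONLY
;

The DSSIZE column is in the output precisely to show the problem: for these old spaces it typically does not contain a usable value.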

ALL ON MY OWN?

Naturally not! For example, after all the ALTERs have been done, a lot of the spaces are simply in Advisory REORG pending status and you could use our RealtimeDBAExpert (RTDX) software to automatically generate the required REORGs to action the changes.

SYNONYMS??

Well, you can do them all yourself by reading one of my older newsletters – just remember to watch out for the GRANTs afterwards!
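Listing the candidates is easy enough; a little sketch to get you started on your SYNONYM hunt:

SELECT SY.CREATOR
      ,SY.NAME
      ,SY.TBCREATOR
      ,SY.TBNAME
FROM SYSIBM.SYSSYNONYMS SY
ORDER BY 1 , 2
WITH UR
FOR FETCH ONLY
;

TBCREATOR and TBNAME give you the underlying table, which is what the replacement ALIAS (and the re-issued GRANTs) will need.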

HOW MANY DEPRECATED OBJECTS DO YOU HAVE?

I would love to get screenshots of the output at your sites which I would then all sum up and publish as an addendum to this newsletter. Just so that people can see how many Parrots we all have pining for the fjords!

TTFN, Happy Holidays!

Roy Boxwell

2022-11 First timer report

This month, I am turning over my blog to my colleague Andre Kuerten from Software Engineering’s German Labs based in Düsseldorf, Germany. I challenged him to write a blog all about his experiences as a “first time IDUG EMEA attendee” and all that that entailed as I thought this would make an interesting read for us all!

I’ll be Back!

Have no fear, dear readers, as I will be back next month with our annual Christmas give-away. The first blog of 2023 will be my comprehensive review of the 2022 IDUG EMEA in Edinburgh and the surprises found therein.

It begins…

IDUG EMEA 2022 – First Timer Report

Pre-Preparation-Phase:

My firm gave me the chance to go to the IDUG EMEA, additionally taking a Saturday workshop to educate myself in all things SQL. Surprisingly, my wife gave her approval! Therefore, “the guy from techsupp” that I am planned the trip to Edinburgh with all the hotel and flight bookings etc.

Saturday:

Unexpectedly, everything went smoothly and so I was standing in front of the Edinburgh International Conference Center (EICC) on Saturday morning, ready for my first IDUG in person after working for more than 15 years in the Db2 business, professionally developing software.

The general plan was for me to learn, or refresh my knowledge, about query optimization and finding the causes of poorly performing SQL, to be ready to work a little more intensively in this area. The first decision made was that I would take part in the “Query Optimization and Tuning Workshop”, which would take the complete Saturday, covering themes like statistics, cardinalities, optimizer internals, query EXPLAIN and strategies for performance tuning.

I entered the EICC and was registered by the very friendly and helpful IDUG employees, got my badge and a bag filled with little goodies – I think the practical value of the umbrella was unbeatable.

Looking around, I have to admit that the EICC is a nice venue. Right behind the entrance, a big hall on the ground level opens up in front of you. They had arranged some high tables to put down your water/coffee; just a few more chairs would have been a good idea. I was impressed by the sheer size of it all.

Sched is Your Friend!

The rooms all had Gaelic names; you just had to locate them on the map, so everything could be found easily. The Sched app was a perfect addition to the printed schedules: it made it very easy to get the day organized and to know where to go next. I really liked it, and the connection to the website.

Workshop – z/OS???

I took the escalators to “Carrick” and got a seat in the workshop, looked around and discovered the usual mix of technical geeks, where the average age seemed to be a little low… Additionally, there was no workstation or material lying around that we would have to work with during the day (I had been told by experienced colleagues that I would really have to work at the workshop), so I started to wonder… Chatting with some of the people in the room was fine, and I was glad I was not the only first timer (I must admit that I didn’t apply for a first-timer badge, I do have my limits). Then it started. Instead of the expected z/OS hands-on workshop, it slowly turned into an LUW daily presentation… At least I was not alone, as another z/OS guy was also not expecting LUW. It had not been made clear at early registration time that this was going to be an LUW-only workshop. Anyway, since the topic was “SQL”, there was still valuable information here. So I listened, concentrated and got new ideas about how to start with SQL optimization and where to look first, starting from query optimization basics, discussing some database design alternatives and ending with cardinality estimation.

After a small lunch, taken in the big hall, the afternoon session started, now covering the optimization of various operations like scans, sorts, INSERT, UPDATE, DELETE and queries with outer joins, aggregation, DISTINCT, correlated subqueries etc. I just had to take care to stay focused while the knowledge was being distributed. I think this would have been much easier if we could have done some practical exercises.

At the end, my feelings about the workshop were a little mixed, but overall positive.

The Roy arrives

Starting in the late afternoon, I kept getting status messages from Roy (Boxwell, just in case anybody knows him 😊) who was on his way to Edinburgh, so our team size doubled in the evening.

The Booth…

Since my firm was a vendor at the IDUG I also got some experience from building the booth (where I really have to say thanks to the people from the EICC for all their help and kindness) and running it.

Sunday:

On Sunday the first normal IDUG sessions started. I made it to the initial keynote in the “Pentland Main Auditorium”, titled “Why Some Teams Are Successful While Others Struggle”. It was interesting, even though it did not take a directly technical approach: it was about the people that you are working with and trusting your team. The auditorium (more like a cinema) was half full, and they mentioned that we were 350+ onsite visitors, which I found a good number; however, I was told that there used to be many more in the past.

The next session, “Db2 13 for z/OS and More!”, summed up what I already knew from all the announced Db2 13 features. One of the presenters, Haakon Roberts, was remarkable: he really did an excellent job and, looking back, I would say that his accent was the best that I heard during IDUG.

Noteworthy is that you soon get used to listening to speakers with different (strange to me) accents, even if you are not a native speaker. This had been one of my fears beforehand which was, luckily, unfounded.

Back to “Carrick” and into the “Back to Basics: High Performance Application Design and Programming” presentation held by Tony Andrews. I mention him for a reason: as expected, the sessions differed in presentation style, quality etc., and this is what you can tick on the evaluation cards. But the best ones, for me, were those where you could feel that the speaker was deeply involved in the work with Db2 and knew what they were talking about, like in this session. I heard a lot about the little things like row size and clustering order, all from a practical view, which was all very useful.

“Do I Really Need to Worry about my Commit Frequency? An Introduction to Db2 Logging” confirmed my impression that the commit frequency is something to worry about, but the afternoon highlight was “COBOL abound”, demonstrating how you can develop in COBOL these days: of course, not only on the mainframe, but using Zowe (which I also use and explore at my firm) and the containers you can get there. This was really fascinating, although a little bit special…

Monday:

On day 3, Monday, I also went to the keynote “Behind The Birth Of An Accidental Enterprise”, covering some of the history of Db2’s evolution.
Talking about the next one, “Getting RID of RID Pool RIDdles”, I have to mention that this was done by two speakers and one of them, Adrian Collett, is known to me; we have worked together on some trial installations.
This was one of the big points: to meet people face to face for the first time, or again after the pandemic, especially customers who I have “talked to” for many years via email or telephone. I got the impression that everyone very much liked attending in person again.
The presentation itself shed some light on RID pool monitoring and tuning.
Sadly, I have to say that this session was one of those that had to “speed up” towards the end (which was not Adrian’s fault!), so I am really looking forward to getting my hands on the PDFs, as some speakers simply ran out of time and then rushed through their presentations. Also, the notes I took correspond to slides that I need to see again, so this service (providing the presentations) should be continued for attendees; maybe IDUG could be just a little bit quicker making them available.

The next, very interesting, one was “Access Paths Meet Coding”, which gave me practical insights into how to enforce some basic SQL rules which must be respected, and how big their effect is in a productive environment.

Tuesday:

Then it was already Tuesday; off to listen to “SMF Records, IFCIDs, Trace Classes – What Does it all Mean?”, which was obligatory for me, as IFCIDs are utilized broadly in our products.

It should also be mentioned that, in my eyes, the technical equipment provided for the speakers was more than sufficient, and the audio control was done by the always-available technical staff from the congress center, making a very good impression.

“Get Cozy with Traces in Db2 for z/OS” reminded me of one way to get worthwhile information about what is going on (or going wrong) on the system.

I have worked with Db2 for some time “in real life”, so the next sessions did not have much new info for me, but I still took notes during “Partitioning Advances: PBR and PBR RPN”, “How to Keep Bad SQL Out of Production” and “Your Statistics are Safe with Me, Statistics Profile Revealed”, which have to be checked again.

And the Oscar goes to…

For the evening, the IBM Db2 appreciation event was announced, so after the conference day ended we took a walk to the National Museum. The National Museum was, of course, closed when we arrived, so we ended up waiting in the crowd for something to happen. A bagpiper started to play: a perfect introduction for the evening. Worthy of note was the AC/DC part. Finally we made it to the grand hall: perfect location and ambience, food and drinks really well arranged.
Most impressive for me was the lifetime award for Mr. John Campbell (he had already been honored previously at a keynote). Even I had read a lot from, and about, him, and the respect shown towards him by all of the audience was also good to see. At the end even Roy got an award: he did well as a “newbie” champion; I think they simply forgot to announce him for “some” years. Deserved without question, but why must he get a trophy for his desk that we have to look at (and that is mentioned by him) every single day?

Wednesday:

Back to the IDUG content on the next, and last day, Wednesday:

The last sessions for me at the IDUG were in the “Moorfoot”, “Sidlaw” and “Kilsyth” rooms, following my path of getting new input to solve performance issues. “Db2 SWAT Team Perspective on Db2 13 Features that Maximize Continuous Availability” gave me an outlook on the future at customer sites.

“The Exciting Journey Towards DevOps on the Mainframe” also explained some “modern” approaches. I regretted, a little bit, that I didn’t go to Roy’s “Esoteric functions in Db2 for z/OS“; instead I took “Explain explained”, which was a bit of a basic session, bringing the attention back to the little EXPLAIN data details.

At lunchtime a (from my point of view) very interesting conference ended, giving me a lot of valuable information and face-to-face contacts. Some minor negative items are normal, I guess, but generally I really appreciated it.

So there you have it!

Many thanks to Andre for writing all that up! Coming soon will be Roy’s take on the EMEA 2022 where I go into technical details about the sessions etc.

I hope you enjoyed the guest blog this month and, as always, let me know what you think!

TTFN

Roy Boxwell & Andre Kuerten

2022-10 PROFILE Table usage Part Two

In part two of this newsletter, I wish to bring you up to speed on all the changes in the profile arena from Db2 11 right up until Db2 13.

Filters!

Profiles basically need some sort of “limit” to show Db2 which things should get which profile, and thus which keyword and attribute. To do this, we use filters, and for the filtering data there is an order of precedence, described below.

Multiple profiles?

When more than one profile applies to a thread or connection, the evaluation of the different profiles is not simultaneous. Instead, the profiles are evaluated in the following order, according to the criteria that are specified in the profile:

1. IP address or domain name, in the LOCATION column.

2. Product identifier, in the PRDID column.

3. Role and authorization identifier, in both ROLE and AUTHID columns.

4. Role, in the ROLE column only.

5. Authorization identifier, in the AUTHID column only.

6. Server location name, location alias, or database name, in the LOCATION column.

7. The location name of a requester, for monitored threads from a Db2 for z/OS requester. This is only for MONITOR THREADS and MONITOR IDLE THREADS.

8. Collection identifier and package name, in both COLLID and PKGNAME columns.

9. Collection identifier, in the COLLID column only.

10. Package name, in the PKGNAME column only.

11. Client application name, in the CLIENT_APPLNAME column.

12. Client user identifier, in the CLIENT_USERID column.

13. Client workstation name, in the CLIENT_WRKSTNNAME column.

First Come, First Served!

Only the first evaluated applicable profile is applied. Because the evaluation of multiple profiles is not simultaneous, the number of connections, or threads, on the subsystem might change during the evaluation of multiple profiles. Any profile that specifies a specific value in a particular column has precedence over a profile that specifies a single-byte asterisk value (‘*’) in the same column.

Destructive Overlap!

Further, each profile entry cannot have overlapping filter categories. From the precedence list there are eight categories, formed from the numbers 1, 2, 3 – 5, 6 – 7, 8 – 10, 11, 12 and 13; otherwise multiple rows must be inserted.

When you have multiple rows with overlapping filters from different categories, then Db2 applies them all. Exact values are higher in priority than wildcards (*). As an example, for product id, PRDID, 'DSN13011' is before 'DSN*', which is before '*'.
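As a little sketch of that PRDID example (the PROFILEID values are purely illustrative, and the other filter columns are assumed to default to NULL), these two rows would both match a 'DSN13011' client, but only the exact-value profile is applied:

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID , PRDID , PROFILE_ENABLED)
VALUES (1 , 'DSN13011' , 'Y') ; -- exact value: wins the match
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID , PRDID , PROFILE_ENABLED)
VALUES (2 , 'DSN*' , 'Y') ;     -- wildcard: only a fallback

Remember that a profile row does nothing until its attributes are inserted into SYSIBM.DSN_PROFILE_ATTRIBUTES and a -START PROFILE command has been issued.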

The More the Merrier!

Db2 also assumes that any rows with more filter values are higher priority than rows with some defaults or NULL values.

Finally, if everything is the same in the filter categories, Db2 will take the last inserted as it assumes this is “the most current version”.

Clear as Mud!

As you can easily see, it is very easy to tie yourself up in knots with this system! Good planning and good testing are paramount to a good, glitch-free implementation!

What Was New in Db2 11?

Well, Db2 11 brought in SPECIAL_REGISTER handling to the profile tables which is especially good for remote accessing threads. So you can now issue SET CURRENT APPLICATION COMPATIBILITY or SET CURRENT PACKAGE PATH, for example.

SET What You Want!

SPECIAL_REGISTER in the KEYWORDS column, ATTRIBUTE1 is any of the accepted SET statements, up to a maximum length of 1024 bytes. ATTRIBUTE2 and ATTRIBUTE3 are both NULL. The filtering is also not case sensitive.
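A sketch of what such an attribute entry could look like – PROFILEID 10 is purely illustrative and must match an existing row in SYSIBM.DSN_PROFILE_TABLE, started with -START PROFILE:

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID , KEYWORDS , ATTRIBUTE1)
VALUES (10
       ,'SPECIAL_REGISTER'
       ,'SET CURRENT APPLICATION COMPATIBILITY = ''V12R1''') ;

Note the doubled-up apostrophes: the complete SET statement, including its string literal, travels inside ATTRIBUTE1.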

Precedence of the SET special register:

1. Special register explicitly set by the application.

2. Special register set through Profile Support as above.

3. Special register set on the connection property level or data source level.

Buyer Beware!

Db2 11 also introduced warnings about not deleting rows from the _HISTORY tables to make sure you only delete rows that are really gone from the “normal” tables.

Additional Details

The MONITOR CONNECTIONS got a _DIAGLEVEL3 added:

MONITOR CONNECTIONS in the KEYWORDS column, ATTRIBUTE1 is a “two part” column value. The first part is either WARNING or EXCEPTION. A warning causes a console message every five minutes, depending on the diagnosis level. An exception issues the diagnosis level and fails the connection request. The second part is either not there, or it is _DIAGLEVEL1, which issues a DSNT771I console message, or _DIAGLEVEL2, which issues a DSNT772I console message with more details every five minutes at most. _DIAGLEVEL3 issues, for a warning, a DSNT773I console message with more thread details for every thread and, for an exception, a DSNT774I console message. ATTRIBUTE2 is a positive integer to indicate the threshold for the maximum number of remote connections. It must be less than or equal to CONDBAT. ATTRIBUTE3 is NULL. Filtering is only by the LOCATION column.
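Translated into an attribute row, that could look like this sketch, where PROFILEID 20 and the threshold of 500 are invented values and the matching DSN_PROFILE_TABLE row is assumed to filter on LOCATION:

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID , KEYWORDS , ATTRIBUTE1 , ATTRIBUTE2)
VALUES (20
       ,'MONITOR CONNECTIONS'
       ,'WARNING_DIAGLEVEL3' -- warn only, at diagnosis level 3
       ,500) ;               -- connection threshold, <= CONDBAT

Starting with WARNING rather than EXCEPTION lets you watch the DSNT773I messages before any connection request actually gets failed.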

MONITOR IDLE THREADS got three new ATTRIBUTE1 values: EXCEPTION_ROLLBACK aborts any active idle threads and issues DSNT771I, EXCEPTION_ROLLBACK_DIAGLEVEL1 which is the same, and EXCEPTION_ROLLBACK_DIAGLEVEL2 with message DSNT772I.

MONITOR THREADS got _DIAGLEVEL3 added, which for an EXCEPTION issues a DSNT774I console message and, depending on the filtering, can be queued or suspended. WARNING issues the console message DSNT773I for every thread that exceeds the profile threshold.

Updated Info!

The info table about which columns can filter etc. got an overhaul with the note “The value is not case sensitive” for all values for MONITOR CONNECTIONS, MONITOR THREADS, MONITOR IDLE THREADS and SPECIAL_REGISTER.

IP6 Support!

It was also in Db2 11 that IPv6 got supported in the LOCATION field. So, it was now an IP Address: IPv4 dotted-decimal, or an IPv6 colon-hex, or a Domain Name, or a Location name.

What Was New in Db2 12?

In Db2 12, some new KEYWORDS options were introduced:

MONITOR ALL CONNECTIONS in the KEYWORDS column, ATTRIBUTE1 is a “two part” column value. The first part is either WARNING or EXCEPTION. A warning causes a console message every five minutes, depending on the diagnosis level. An exception issues the diagnosis level and fails the connection request. The second part is either not there, or it is _DIAGLEVEL1, which issues a DSNT771I console message, _DIAGLEVEL2, which issues a DSNT772I console message with more details every five minutes at most, or _DIAGLEVEL3, which issues a DSNT773I (for WARNING) or DSNT774I (for EXCEPTION) console message with more details. ATTRIBUTE2 is a positive integer to indicate the threshold for the total cumulative number of remote connections from all application servers. It must be less than or equal to CONDBAT. ATTRIBUTE3 is NULL. Filtering is only by the LOCATION column, which must contain '*', '::0' or '0.0.0.0'.

MONITOR ALL THREADS in the KEYWORDS column, ATTRIBUTE1 is a “two part” column value. The first part is either WARNING or EXCEPTION. A warning causes a console message every five minutes, depending on the diagnosis level. An exception issues the diagnosis level and can cancel the thread, depending on the filtering criteria; otherwise the thread is queued. The second part is either not there, or it is _DIAGLEVEL1, which issues a DSNT771I console message, or _DIAGLEVEL2, which issues a DSNT772I console message with more details. _DIAGLEVEL3 issues a DSNT773I (for WARNING) or DSNT774I (for EXCEPTION) console message. ATTRIBUTE2 is a positive integer to indicate the threshold for the total cumulative number of active server threads. It must be less than or equal to MAXDBAT. ATTRIBUTE3 is NULL. Filtering is only by the LOCATION column, which must contain '*', '::0' or '0.0.0.0'.

Variable Support

GLOBAL_VARIABLE in the KEYWORDS column, ATTRIBUTE1 is a SET statement for a global variable. E.g. SET SYSIBMADM.GET_ARCHIVE = 'Y' or, if you are at Db2 12 FL507 or higher, SET SYSIBMADM.MAX_LOCKS_PER_TABLESPACE = 9000. See the SET documentation in the SQL Reference for more details. ATTRIBUTE2 and ATTRIBUTE3 are both NULL. These are only valid for remote applications.

SHARE_LOCKS in the KEYWORDS column, the ATTRIBUTE1 column contains a property that applies to global transactions in an RRS context, such as CICS through the External CICS interface. The property applies only to remote applications. The value must be in the following format: PROCEDURE_LIST=aaaa,bbbb,… None of the listed procedures can be an external SQL procedure or a three-part name. Maximum length is 1024 bytes. ATTRIBUTE2 and ATTRIBUTE3 are both NULL.

What Is New in Db2 13

FL500 introduced two extra keywords:

RELEASE_PACKAGE in the KEYWORDS column, ATTRIBUTE1 is COMMIT. ATTRIBUTE2 NULL means the profile applies to remote threads only, 1 means it applies to local threads only (at package load), and 2 means it applies to both local and remote threads. ATTRIBUTE3 is NULL.

SPECIAL_REGISTER in the KEYWORDS column, ATTRIBUTE2 NULL means the profile applies to remote threads only, 1 means it applies to local threads only (at package load), and 2 means it applies to both local and remote threads. Note that only SET CURRENT LOCK TIMEOUT (but not the WAIT, MODE or TO syntax) is currently supported for local threads.

FL 501 introduced local global variable support:

GLOBAL_VARIABLE in the KEYWORDS column, ATTRIBUTE2 NULL means the profile applies to remote threads only, 1 means it applies to local threads only (at package load), and 2 means it applies to both local and remote threads. Note that only SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = xxx is currently supported for local threads.

The Future Is Bright!

It must be assumed that more and more things will end up in these profiles and it will get more and more interesting to use them, but the major problem is that they are not really transparent. It is very easy to incorrectly set them up and they are tricky, if not impossible, to test. All that being said, they are a very important tool in the tool-box of the modern, agile, DBA!

Examples

Use Case 1 : Evaluate a parameter change

You want to evaluate the impact of a modification to a system or application parameter. You can create a specific profile, with the values you want to activate, and an action level of type WARNING. After starting the new profile, follow message DSNT773I to monitor the future impact of your modification, without affecting the subsystem behaviour.

Use Case 2: Avoid adapting attributes of your NULLID Collection

For dynamic SQL, it is not recommended to adapt parameters (APPLCOMPAT, CONCENTRATESTMT …) of the NULLID collection, as that would impact every client working with the default collection. So, if you want an application to use specific options, you could duplicate the packages of the NULLID collection into a specific collection with the appropriate bind parameters. By defining a profile that identifies the application, you can redirect it to the new COLLID with the use of the special register PACKAGE PATH. Should your new settings not be optimal, a simple stop of the profile will restore the situation.

Use Case 3: High Performance DBATs, but *not* everywhere!

The problem:
NULLID is being used for all remote access and you wish to use High Performance DBATs for some of those accesses, but not all.
The solution:
Create a new collection, called e.g. HIGHPERDBAT, and bind into it any and all of the packages you want available for High Performance DBATs (so also with RELEASE(DEALLOCATE), naturally!)
Insert a profile with a filter for the criteria you wish for (see earlier in this blog). Then insert an attribute keyword of SPECIAL_REGISTER with ATTRIBUTE1 set to SET CURRENT PACKAGE PATH = 'HIGHPERDBAT'
Once the profile is started, this new collection will be honored at the next connection time. If something goes awry and you start getting DBAT problems, simply disable this profile entry and you are done!
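Putting those steps together as SQL, here is a hedged sketch: PROFILEID 99, the AUTHID filter and its value MYAPP are all invented for illustration, so pick whatever filter criteria actually identify your application:

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID , AUTHID , PROFILE_ENABLED)
VALUES (99 , 'MYAPP' , 'Y') ;

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID , KEYWORDS , ATTRIBUTE1)
VALUES (99
       ,'SPECIAL_REGISTER'
       ,'SET CURRENT PACKAGE PATH = ''HIGHPERDBAT''') ;

-- Then activate it with the command:
-START PROFILE

To back it out, update PROFILE_ENABLED to 'N' for that row (or delete it) and issue -START PROFILE again to reload the profile tables.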

Cool huh?

Remember that you must also *allow* High Performance DBATs by running with the DDF parameter CMTSTAT set to “INACTIVE” and PKGREL set to “BNDOPT” or “BNDPOOL”.

As always, I would love to hear any comments or criticism about this topic!

TTFN

Roy Boxwell