2023-01 IDUG EMEA 2022 review

Hi All!

This year (well, strictly speaking last year…) IDUG EMEA changed from running Monday to Wednesday to running Sunday to Tuesday. This caught a few people out with travel plans etc., but all in all it was OK as a „one-off“. It was also held, after a two-year COVID delay, in the beautiful city of Edinburgh, where it normally rains a day longer than your stay – but I only had rain on one day! Mind you, that did coincide with tornado-style, umbrella-shredding winds that made going anywhere more like a swim, but what can I say? The haggis was excellent, the whiskies were simply gorgeous and all the people were incredibly friendly. Not just the visitors to the IDUG either!

All a bit late, I know, but I have been terribly busy doing other „real“ work… So, with no further ado, off we go through the Db2 for z/OS IDUG presentations!

Please remember: to use the links below you *must* have been at the IDUG 2022 EMEA and/or be an IDUG Premium member – and you must remember your IDUG userid and password!

Starting with Track A: Db2 for z/OS I

A01 Db2 13 for z/OS and More! from Haakon Roberts and Steven Brazil gave a great intro to Db2 13, the history behind it and the AI driving force within, finishing off with Db2 13 highlights. Our very own Andre Kuerten rated Haakon as the best overall presenter, by the way.

A02 Db2 13 for z/OS Performance Overview from Akiko Hoshikawa did what it said: a deep dive through all the performance stuff, including updates on Db2 12 and synergy with Z hardware.

A03 Db2 13 for z/OS Migration, Function Levels and Continuous Delivery from „The Dynamic Duo“ of Anthony Ciabattoni and John Lyle was a review of all the Db2 12 functions and then an update on how to do the migration to Db2 13 – which should be faster and less troublesome than the migration to Db2 12.

A04 Now You See It, Unveil New Insights Through SQL Data Insights from Akiko Hoshikawa was the first of many presentations to go into depth about AI, as various bits of AI are now available out-of-the-box with Db2 13. Still a few things to do, of course… 50GB of USS data for Spark, for example… but at least no need for machine learning (ML). At the very end was also a glimpse into the future and the next three BIFs coming our way soon!

A05 Getting Ready for Db2 13 from John Lyle was another review of continuous delivery and how to get to Db2 12 FL510, which is the migration point for Db2 13 FL100.

A06 Db2 for z/OS Utilities – What’s New? from Haakon Roberts was the usual excellent presentation and run down of all the latest new stuff on the utilities front and also any Db2 13 stuff as well. As always well worth a read!

A07 BMC Utilities Update was a complete review of all BMC utilities.

A08 The Latest Updates on Broadcom Db2 Utilities was a complete review of all Broadcom utilities.

A09 Db2 Z Network Encryption: Overview and How to Identify End-Users Using Open Port from Brian Laube was a fascinating presentation all about fully understanding this problem, as it is, or will be, an issue in all shops at some point in the future! The presentation also included an example NETSTAT from TSO to help you find the „bad guys“ before you switch to 100% SECPORT usage. At the end was a very nice list of AHA requests/ideas/wishes which I would appreciate people voting for!

A10 Do you know? SMF Records, IFCIDs, Trace Classes – What Does it all Mean? from Sally Mir was all about traces, SMF, IFCIDs, etc.

A11 The Db2 12 and beyond with the latest real customer experiences … from Ute Kleyensteuber. Ute discussed her experiences with Db2 12 and some of the challenges she had to face, and then some early Db2 13 experiences.

A12 Db2 13 for z/OS Migrations – Major Process Changes Detailed from John Lyle. He explained what the Catalog and Directory are and how they have changed over the years, also how to pre-migrate check and then migrate your sub-system.

A13 Let Your Cloud Applications Get the Most From Your z Data – Introducing IBM Data Fabric from Sowmya Kameswaran introduced IBM Data Fabric with Cloud Pak for Data.

A14 Express Yourself from Marcus Davage was an excellent romp through the horrors of REGEX and ended up with a quick way to solve WORDLE… all a bit dubious if you ask me! What with the recursive SQL from Michael Tiefenbacher and Henrik Loeser solving Sudoku, and now Marcus solving Wordle, what is left for our brains to do??? The link to GitHub for Sudoku is:

https://github.com/data-henrik/sql-recursion

Thanks to Michael and especially to Henrik for the link.

A15 A Row’s Life from Marcus Davage. This is what a row goes through in its daily life – fascinating! A deep technical dive into the data definition of pages etc etc.

A16 Db2 13 for z/OS Install and Migration Using z/OSMF Workflows from Sueli Almeida. This presentation showed how you can indeed use z/OSMF for provisioning now!

Now we switch to Track B Db2 for z/OS II where we start with B02 as there was no B01:

B02 Continuous Delivery – Navigating from Db2 12 to Db2 13 from Chris Crone. This was another review of CD in Db2 12 and 13 but included a review of ZPARMS that have been deleted/deprecated/updated in Db2 13.

B03 Db2 13 for z/OS Application Management Enhancements from Tammie Dang. The application highlights were reviewed, focusing on timeouts and deadlocks and the way that SYSTEM PROFILES can easily help you get what you want without massive REBINDs.

B05 Ready Player One for Db2 13! from Emil Kotrc. This was another recap of Db2 13 migration, CD, Db2 12 FL501 to FL510 but also with deprecated functions and incompatible changes.

B06 Getting RID of RID Pool RIDdles from Adrian Collett and Bart Steegmans was an entertaining sprint through What is an RID? and When do I have a real RID problem?

B07 Mastering the Setup for Integrated Synchronization in Data Sharing from Christian Michel was all about network setup for encryption and certificates, as well as high availability for multiple accelerators.

B08 Things About Db2 for z/OS I Wish I’d Remember When…. from Michael Cotignola. A quick run through interesting stuff that has changed over time, especially RTS usage.

B09 Speed up your Image Copies consistently and non-disruptive! from Eva Bartulej and Chris Duellmann was a description of one way to clone production data down to test.

B10 Db2 for z/OS Data Sharing: Configurations and Common Issues from Mark Rader was a very interesting presentation about exactly how data sharing hangs together and the various ways you can break it or make it better. A must-read if you have, or are planning to go to, a data sharing system! His anecdote about a forklift truck crashing through the back wall of the data center brought home that „disasters“ are just waiting to happen…

B11 Get Cozy with Traces in Db2 for z/OS from Denis Tronin was a great intro into the usage of Traces for Db2 for z/OS. Included was also an update of which traces have been changed or introduced for the new Db2 12 and 13 features.

B12 Partitioning Advances: PBR and PBR RPN from Frank Rhodes. This gave an excellent review of the history of tablespaces in Db2. Then the new variant PBR RPN was introduced, and how to get there.

B13 Managing APPLCOMPAT for DRDA Application from Gareth Copplestone-Jones. Another review of CD and APPLCOMPAT but this time looking at NULLID packages and the special problems they give you!

B14 Afraid of Recovery? Redirect Your Fears! from Alex Lehmann. This showed the new redirected recovery feature with some background info and a summary about why it is so useful!

B15 Db2 SWAT Team Perspective on Db2 13 Features that Maximize Continuous Availability from Anthony Ciabattoni. A review of all the features that allow you to change things while the system is running with no outage.

B16 DB2 z/OS Recovery and Restart from Thomas Baumann. This is a *one* day seminar… The IDUG live talk was just the first hour (the first 34 slides!) and, if you ever wish to consider doing a RECOVER – read it all!

B17 Security and Compliance with Db2 13 for z/OS from Gayathiri Chandran was all about the IBM Compliance Center and a list of all audit-relevant info in Db2 (encryption, audit policies, etc.).

Now off to Track E AppDev & Analytics I:

E01 When Microseconds Matter from Thomas Baumann. This was all about tuning a highly tuned system, where, even if you have 50-microsecond CPU SQL times, you can squeeze even more out of the lemon! Included are complete SQLs for examining your DSC for tuning candidates that „stayed below the radar“ before.

E02 Customer Experiences and best practices from Accelerator on Z Deployments from Cüneyt Göksu. This was all about Accelerator usage of course and included a bunch of „Lessons learned“ when migrating through to V7.5.

E03 Our Migration journey towards UTS and DB2 12 FL510 : Approach and Status from Davy Goethals. This was all about the effort to get all TS’s to UTS and to get to Db2 12 FL510, ready for Db2 13. Especially interesting was the QMF data going to implicit TS’s now.

E04 Db2 for z/OS Locking for Application Developers from Gareth Copplestone-Jones. All about locking from the developer’s POV. Contains Lock Size recommendations and descriptions of Locking, and Cursors. A very good read!

E05 Simplify database management with package rebind in an 24×7 application environment from Tammie Dang. This was all about REBIND phase-in, which really, really helps keep your system 24×7! Also worth mentioning is the Db2 12 PTF UI73874 (APAR PH28693, closed 2021-02-09), which *must* be installed!

E06 Mainframe Modernization – What about the data? from Greg DeBo. This was all about data: who secures it, who owns it, who needs it, and what can you do with it when you no longer need it?

E07 Beginners guide to Ansible on z/OS from Sreenivas Javvaji started with the pre-reqs like Ubuntu, then adding Ansible and installing the IBM z/OS Core Collection, finishing off with YAML.

E08 Sharing Early Experience with Db2 AI for z/OS from Fatih Aytemiz. This was the customer view of early usage of AI at their bank.

E09 Access Paths Meet Coding from Andy Green. Contains a DBA view of how application developers let access paths „slide“ over time until incidents start to happen, and how to correct this by putting „optimization“ back into the development process. Extremely useful presentation for application developers and SQL coders!

E10 SQL Injection and Db2 – Pathology and Prevention from Petr Plavjaník. A cross-platform presentation all about SQL injection and how it can bite you… A very important takeaway is that it is not „just“ an LUW or distributed problem. Dynamic SQL in COBOL can just as easily be injected…

E11 All you ever wanted to know about Accelerator on Z Monitoring from Cüneyt Göksu. Did what the label said!

E12 What Db2 Can Do; I Can Do Too – First Steps Towards Machine Learning from Toine Michielse. An introduction to AI and ML, along with an explanation of all the current models and training mechanisms.

E13 How Can Python Help You with Db2? from Markéta Mužíková and Petr Plavjaník. Everything you ever wondered about Python but were afraid to ask! Included an installation list explaining some of the weirder environment variables in pyenv.sh.

E14 Use Profiles to Monitor and Control Db2 Application Context from Maryela Weihrauch. This was all about one of the, in my personal opinion, most underused features of Db2 on z/OS. Profiles have been around for years and they enable so much, e.g. global variables, driver upgrades, RELEASE(DEALLOCATE) / RELEASE(COMMIT), etc.

E15 -805 Explained from Emil Kotrc. This explained the whole background of program preparation: precompile, compile, link-edit and BIND. Which consistency token goes where, and when is it validated?

E16 The Exciting Journey Towards Devops on the Mainframe from Toine Michielse. This was all about DevOps and how the Open Mainframe is reflected in this environment with Zowe, git etc.

E17 Data, I just can’t get enough, Data Archiving in Db2 from Roberto Cason was all about data, when to archive it and – more importantly – how!

Then Track F for AppDev & Analytics II:

F03 Is it worth to migrate CICS cobol app to Windows .net ? from Mateusz Książek. Naturally, I am a little bit biased here, as I would always say „NO!“. However, Mateusz went on to explain the difficulty of monitoring and comparing the results. It was a bit like apples and oranges after all, and he ended on a Pros and Cons slide where you must decide…

F04 COBOL abound from Eric Weyler. This was all about the remarkable life of COBOL and how it is *still* nailed to its perch! There are „new“ forms like GnuCOBOL, new front ends like VS Code, and of course Zowe and web GUIs with z/OSMF.

F05 Getting the Most Value from Db2 for z/OS – No Matter what the Version or Function Level from Bob Bersano. This was a „call to arms“ to actually use the existing functionality that you have in Db2 for z/OS versions 12 and 13.

F06 Declared Global Temporary Tables (DGTT) User Stories from Kurt Struyf. Explained the differences between CGTT and DGTT, why you would want to use a DGTT and things to know, especially EXPLAIN usage.

F07 War of the Worlds – Monolith vs Microservices from Bjarne Nelson. This session highlighted the intrinsic difficulties of going to microservices (Rest et al) in comparison to the „normal“ DBMS ACID style. Finishing with „When to use microservices and when not to!“

F08 SYSCOPY: You cannot live without it! from Ramon Menendez. Detailed everything about this very important member of the Db2 Catalog. It also covered the new things in Db2 12 and 13 as well as a quick look at how SYSIBM.SYSUTILITIES interacts with it.

F09 Playing (with) FETCH from Chris Crone. This was an informative session all about FETCH where even I learnt something … shock, horror!

F10 A Tale of Two Extensions : Db2 Family Support in Visual Studio Code from Philip Nelson. This was a deep dive down into VS Code, IBM Db2 for z/OS Developer Extension and Db2 Connect (not the old one, the new one!)

F13 Your Statistics are Safe with Me, Statistics Profile Revealed from Nilufer Osken. Everything you wanted to know about Db2 statistics and statistic profile usage.

F15 Making Db2 on the Mainframe Great Again with the Linux Foundation Zowe Tooling from Joe Winchester. This was a great Zowe presentation about how you can modernize your mainframe. My personal favorite is naturally slide 35 with the Green Screen emulator…

F17 Explain explained from Julia Carter. This was an introduction to how to use and understand the EXPLAIN statement and its output, to help in correcting badly-running SQL.

Finally Track G Daily Special (Sounds like a restaurant…) :

G01 Db2 for z/OS Security – An Introduction from Gayathiri Chandran. This is everything you need to know about security on z/OS.

G02 Back to Basics: High Performance Application Design and Programming from Tony Andrews. This was all about understanding SQL accesses and Db2 access paths using EXPLAIN. Then all the different types of joins were discussed, as well as SORTs.

G03 Do I Really Need to Worry about my Commit Frequency? An Introduction to Db2 Logging from Andrew Badgley. Explained the BSDS, the Active and Archive Logs, and how they all interact with a UOW. A recommendation here during Q&A was to COMMIT about every two seconds. One war story was of a site that had an eight-hour batch run which was deemed to have gone rogue and was duly cancelled… It started rolling back, and took a while… Db2 was then shut down – it didn’t shut down, of course… IRLM was then cancelled, and Db2 came crashing down. Db2 was restarted… Hours passed as it *still* did its ROLLBACK, so they cancelled it again… Then they restarted it and waited hours for it to actually (re)start properly…

G04 Db2 Logging Basics & Exploitation Beyond Recovery from Steen Rasmussen. This explained how Db2 actually does logging and why we should be interested in it!

G05 The Trilogy of DB2’s “Originating” Address Spaces–Mainframe, DDF & Stored Procedures Measure/Tune from Thomas Halinski. Explained all the z/OS parts of Db2 and then listed out the different „areas“ of tuning that are possible.

G17 Esoteric functions in Db2 for z/OS from Roy Boxwell. Naturally the best presentation of the EMEA *cough, cough* Could be a little bit biased here… If you wanted to know about weird stuff in Db2 for z/OS then this was your starting point. Any questions drop me a line!!!

My very own Oscar

I was also very proud, and happy, to be awarded an IBM Community Award as a „Newbie IBM Champion“ for 2022! It was at the same time as John Campbell got his for lifetime achievement, which was doubly good, as I first met John waaaay back in the 1980s at Westland Helicopters PLC with DB2 1.3 on MVS/XA – those were the days!

Please drop me a line if you think I missed anything, or got something horribly wrong. I would love to hear from you!

TTFN,

Roy Boxwell

2021-12 What’s in a Log anyway?

Hi! This month I wish to delve briefly into the inner workings of the Db2 Log. The topic is very broad and complex, so this newsletter really only skims the surface!

What is the Log?

The Db2 Log is the central area where all data changes, plus a whole lot more, get written away by Db2 as it does its normal work. Most shops allocate several large VSAM datasets for the Active Logs which, when full, get written off to Archive Logs.

Rules of the Log

How much Log do you need? The golden rules are all time based:

1) At least 24 hours of coverage on your Active Logs

2) At least 30 days of coverage on your Archive Logs

Any less and you could seriously run into trouble. Breaking the 24-hour rule means that normal ROLLBACKs might suddenly start requesting tape mounts, which is not pretty, and breaking the 30-day rule might put you into a world of pain when datasets get migrated off and scratched in the middle of a recovery scenario.
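By the way, to see what coverage you currently have, the stand-alone print log map utility, DSNJU004, lists all your active and archive log datasets with their RBA/LRSN ranges and timestamps straight from the BSDS. A minimal JCL sketch – the load library and BSDS dataset names here are just placeholders for your own:

//PRTLOG   EXEC PGM=DSNJU004
//*        STEPLIB and SYSUT1 names below are examples only!
//STEPLIB  DD DISP=SHR,DSN=DSN.SDSNLOAD
//SYSUT1   DD DISP=SHR,DSN=DB2DC1.BSDS01
//SYSPRINT DD SYSOUT=*

Compare the timestamps of the oldest and newest entries in the output against the 24-hour and 30-day rules above.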

Take Care of Your Logs!

These Logs must be looked after and cared for as they save your company’s life on a daily and hourly basis. Normally, no-one really cares about the logs as they just „work“ and that’s it! However, the Db2 Log is actually a bit of a bottleneck these days.

Remember LOG NO?

Many years ago, Roger Miller said „LOG NO datasets will be implemented in DB2 over my dead body“, as he was pretty adamant that LOG NO was a bad idea. The driver behind the requirement was the sheer size of the logs being written by Db2, and the belief that writing fewer log records would make applications faster.

How Many?

When you look at the history of Db2, you can see it started with between two and 31 active logs and between 10 and 1,000 archive logs. We all thought „that will be more than enough“. Nowadays we have between two and 93 active logs and between 10 and 10,000 archives! Some shops rotate through their six-byte RBAs in 24 hours and go through thousands of logs, so we have grown up a little bit!

Before it Hits the Log…

Remember that there is another really important ZPARM that affects logging before the log is even hit – OUTBUFF. It started out at a value between 40K and 4000K and is now between 400(K) and 400000(K). Just set it to the highest value you can! Db2 will always look here first, before even looking at the active log, and if the data is found in the buffer it is much faster than VSAM access!

Bottleneck?

Some customers were convinced that one area slowing down Db2 was the log write and the externalization of the data within. Roger was 100% against this, but even he lost this battle, and so the LOG NO tablespace was born. However, if you ever do a ROLLBACK, the space goes into COPY-pending, which is horrible!

Checkpoint Charlie

The number of system checkpoints also has a direct impact on log size and usage. You must decide whether you wish to go time-based (the IBM-recommended way is three minutes) or transaction-based – or even a mix of the two methods. I am in favour of keeping my life simple, so I would always recommend the checkpoint-every-three-minutes rule. You could argue that at „quiet times“ overnight too many checkpoints will be taken, but I counter that argument with: „When was the last time you ever had a quiet time on your machine?“
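If you want to try a different checkpoint frequency on a running system, the SET LOG command can change it dynamically – a sketch (remember that the corresponding ZPARM must also be updated if you want the change to survive a restart):

-SET LOG CHKTIME(3)

This requests a system checkpoint every three minutes; -SET LOG LOGLOAD(nnn) is the transaction/log-record-based alternative.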

Index Split?

Believe it or not, index splits seemed to take up some 55% of the Db2 Log at one shop I visited. We decided to try different index page sizes, including compression, and this all really helped in reducing the log load pressure. This had a direct effect on DASD, elapsed and CPU time, which was an all-round winner!

Crystal Ball?

So, how do you look into the Db2 Log? If you have no tooling, then you only get the IBM stuff, which is – shall we say – basic. It was here that I decided to write a little COBOL program, which I have called Db2 Archive Log Viewer for Db2 z/OS, that reads all of the Archive Logs (no VSAM – keep it simple!) and handles normal, non-spanned log records to give me a glimpse into what on earth Db2 was writing into the darn things!

Log HealthCheck

So what does Db2 Archive Log Viewer for Db2 z/OS then do? Well, it reads as many Archive Logs as you can give it and reports on the contents of the Logs in absolute numbers and in size. Using this data enables you to get a new view into what your Db2 system is actually doing and who is causing all your logs to be written.

Surprise!

I was pretty surprised by what I saw inside my logs, and I hope you find it just as interesting to peer inside yours!

If you have any ideas or desires about log datasets, feel free to email me!

TTFN

Roy Boxwell

2021-10 Creating Clones

This month I’m reviewing CLONE table usage. It is one of several, what I call „esoteric“, Db2 abilities/functions that I will be running through in blogs over the coming months – things that are either badly misunderstood, not used by anyone, or just very odd.

Attack of the Clones

Clones arrived in a blaze of glory way back in DB2 9 (remember that capital B?) and then promptly disappeared. I have had nothing to do with them – ever – and I only received one question about their usage. Until now…

What Changed?

Well, what happened is that I was asked whether our RealTime DBAExpert utility-generating software worked with CLONE tables, and I had to do some quick checking in my head about *what* these things were!

How Do They Work?

So what is a CLONE table? It is basically a duplicate table that lives in the „same“ tablespace but with a different INSTANCE. This is the first place where people make mistakes: you read a lot about renaming the VSAM LDS. That *never* happens with CLONEs. The „trick“ that IBM uses is the INSTANCE, but I am getting ahead of myself here!

In the Beginning…

Create a Database, Tablespace and a Table with a couple of indexes:

CREATE DATABASE "TESTDB"
BUFFERPOOL BP0
INDEXBP BP0
STOGROUP SYSDEFLT
;
COMMIT ;
CREATE TABLESPACE "TESTTS" IN "TESTDB"
USING STOGROUP SYSDEFLT
PRIQTY -1
SECQTY -1
ERASE NO
FREEPAGE 5
PCTFREE 10
GBPCACHE CHANGED
TRACKMOD YES
LOG YES
DEFINE YES
DSSIZE 1 G
MAXPARTITIONS 1
BUFFERPOOL BP0
LOCKSIZE ANY
LOCKMAX SYSTEM
CLOSE YES
COMPRESS NO
MAXROWS 255
SEGSIZE 32
;
COMMIT ;
CREATE TABLE BOXWELL.TEST_BASE
(COL1 CHAR(12) NOT NULL
,COL2 INTEGER NOT NULL
,COL3 INTEGER NOT NULL)
IN TESTDB.TESTTS
;
COMMIT ;
CREATE UNIQUE INDEX BOXWELL.TEST_BASE_IX1 ON BOXWELL.TEST_BASE
(COL1, COL2, COL3)
USING STOGROUP SYSDEFLT
PRIQTY -1
SECQTY -1
ERASE NO
FREEPAGE 5
PCTFREE 10
GBPCACHE CHANGED
BUFFERPOOL BP0
CLOSE YES
COPY NO
;
COMMIT ;
CREATE INDEX BOXWELL.TEST_BASE_IX2 ON BOXWELL.TEST_BASE
(COL2, COL3)
USING STOGROUP SYSDEFLT
PRIQTY -1
SECQTY -1
ERASE NO
FREEPAGE 5
PCTFREE 10
GBPCACHE CHANGED
BUFFERPOOL BP0
CLOSE YES
COPY NO
;
COMMIT ;

Insert some data:

INSERT INTO BOXWELL.TEST_BASE VALUES ('A', 1 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('B', 1 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('C', 1 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('D', 1 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('E', 1 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('F', 2 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('G', 2 , 2);
INSERT INTO BOXWELL.TEST_BASE VALUES ('H', 2 , 3);
INSERT INTO BOXWELL.TEST_BASE VALUES ('I', 2 , 3);
INSERT INTO BOXWELL.TEST_BASE VALUES ('J', 2 , 3);
COMMIT ;

What Says RTS?

First, make sure the real-time statistics (RTS) have all been externalized:

-ACCESS DATABASE(TESTDB) SPACENAM(*) MODE(STATS)

Then run a little SQL:

SELECT *
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME = 'TESTDB'
;
SELECT *
FROM SYSIBM.SYSINDEXSPACESTATS
WHERE DBNAME = 'TESTDB'
;

You should see one row from SYSTABLESPACESTATS with 10 TOTALROWS and 10 REORGINSERTS etc. and two rows from SYSINDEXSPACESTATS with 10 TOTALENTRIES and 10 REORGINSERTS etc. Now we have what I call the „base“ table.
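If you prefer a narrower view than SELECT *, a query like this (a sketch) pulls out just the columns that matter for this exercise – the INSTANCE column becomes interesting a little later on:

SELECT DBNAME, NAME, PARTITION, INSTANCE,
       TOTALROWS, REORGINSERTS
FROM SYSIBM.SYSTABLESPACESTATS
WHERE DBNAME = 'TESTDB'
;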

Use ISPF as well…

In ISPF 3.4 you should see datasets like this:

DB2DC1.DSNDBD.TESTDB.TESTRBAS.I0001.A001
DB2DC1.DSNDBD.TESTDB.TESTTS.I0001.A001
DB2DC1.DSNDBD.TESTDB.TEST1BZC.I0001.A001

The Fun Begins …

Now we create a CLONE. To do this you do *not* CREATE a CLONE table – that would be way too easy! You actually issue an ALTER statement like this:

ALTER TABLE BOXWELL.TEST_BASE
  ADD CLONE RINGO.AARDVARK
;
COMMIT ;

Now do that RTS select and the ISPF 3.4 again … As if by magic you will now see double the rows in the RTS … Check out the INSTANCE column:

------+---------+---------+
PSID  PARTITION  INSTANCE
------+---------+---------+
   2          1         1
   2          1         2

Aha! We now have two sets of RTS Counters – This is a good thing! ISPF also looks different:

DB2DC1.DSNDBD.TESTDB.TESTRBAS.I0001.A001
DB2DC1.DSNDBD.TESTDB.TESTRBAS.I0002.A001
DB2DC1.DSNDBD.TESTDB.TESTTS.I0001.A001
DB2DC1.DSNDBD.TESTDB.TESTTS.I0002.A001
DB2DC1.DSNDBD.TESTDB.TEST1BZC.I0001.A001
DB2DC1.DSNDBD.TESTDB.TEST1BZC.I0002.A001

Notice all the INSTANCE values here?

Finally the Boss Guy – SYSTABLESPACE. Here is where the access is controlled using, yet again, INSTANCE and its good friend CLONE:

SELECT *
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'TESTDB'
;
--+---------+--
INSTANCE CLONE
--+---------+--
       1 Y

This is showing you all the information you need. The current base table is still the original table and this tablespace is in a „clone relationship“ – slightly better than „it’s complicated“ but close!

Test Select

Run this to see what you get back:

SELECT COUNT(*) FROM BOXWELL.TEST_BASE ; 
SELECT COUNT(*) FROM RINGO.AARDVARK    ;

You should get ten from the first count and zero from the second.

So What Is the Point?

Now we, finally, get to the raison d’être of CLONEs. The idea is that, using the table name RINGO.AARDVARK, you can INSERT data, perhaps very slowly over a period of days, into the CLONE table while the application is not aware of it and cannot be affected by it. Once the INSERT processing is complete, you then issue the EXCHANGE DATA statement to instantaneously swap the tables around. OK, under the covers it must actually just do a one-byte update of the INSTANCE column in SYSTABLESPACE, but I digress…

Here’s How it Looks

EXCHANGE DATA BETWEEN TABLE BOXWELL.TEST_BASE
                        AND RINGO.AARDVARK
;
COMMIT ;

Now do those COUNT(*) SQLs again:

SELECT COUNT(*) FROM BOXWELL.TEST_BASE
---------+---------+---------+--------
0
DSNE610I NUMBER OF ROWS DISPLAYED IS 1
SELECT COUNT(*) FROM RINGO.AARDVARK
---------+---------+---------+--------
10
DSNE610I NUMBER OF ROWS DISPLAYED IS 1

Whoopee! You can see that the table name has not changed but all the data has! This is pretty cool!

Downsides …

Never a day without pain, my mother used to say, and CLONEs come with a bunch of pain points!

Pain Point Number One

Reduced utility support: you can only run MODIFY RECOVERY, COPY, REORG (without inline statistics!) and QUIESCE against these objects. Why? Because there is only one set of catalog statistics for them. A RUNSTATS would destroy the statistics for *both* objects, and the current access paths might all go south. Further, you absolutely *must* add the keyword CLONE to the utility control cards. You *cannot* rely on LISTDEF to do this for you, and it is documented:

This utility processes clone data only if the CLONE keyword is specified. The use of CLONED YES on the LISTDEF statement is not sufficient.

but people still miss this and then *think* they are working with their clones but they are not! This can get very embarrassing…
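As an illustration, an image copy of the CLONE objects created earlier would need control cards along these lines – a sketch only; the COPYDDN dataset setup and SHRLEVEL choice are up to you:

COPY TABLESPACE TESTDB.TESTTS
     CLONE
     COPYDDN (SYSCOPY)
     SHRLEVEL REFERENCE

Leave out the CLONE keyword and exactly the same control card quietly copies the base objects instead – which is how the embarrassment above happens.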

Pain Point Number Two

You must remember to RUNSTATS the „new“ base after the EXCHANGE has been done. The RTS is always in step; the Catalog is probably way out of line…
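For the example objects above, that catch-up RUNSTATS might look something like this (a sketch – add whatever statistics options your shop normally uses):

RUNSTATS TABLESPACE TESTDB.TESTTS
         TABLE (ALL)
         INDEX (ALL)

Note that this runs against the base (no CLONE keyword here – RUNSTATS is not one of the clone-supporting utilities), which after the EXCHANGE is exactly the data your applications are now reading.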

When You Are Quite Finished

Once a CLONE table is no longer required you can easily drop it – but naturally not with a DROP, but with yet another ALTER statement:

ALTER TABLE BOXWELL.TEST_BASE
      DROP CLONE
;
COMMIT ;

Pain Point Number Three

The problem here is not that bad but, depending on when you do the DROP CLONE, your „base“ could now be instance two! You, and your vendors, must make sure your Db2 and non-Db2 utilities are happy with this state of affairs!

RealTime DBAExpert?

Yep, we are in the clear! Our software does indeed support these esoteric beasts.

Over To You!

Do you use CLONEs? If so, why? Any war stories or is everything hunky dory?

As usual I would love to hear from you!

TTFN,

Roy Boxwell

Updates

I got quite a few updates about clones:

DDL Disaster

One of the major issues that I missed was another pain point: DDL changes. These are really nasty, as you must throw away your CLONE before you can do the ALTER and then recreate the CLONE relationship.

Commands

I also did not mention that various commands need the CLONE keyword in order to be applied to CLONE spaces. For example: -START DATABASE(xxx) SPACENAM(yyy) CLONE
