2016-08 – Is it Safe? How to recover accidentally dropped tables

Do you have the DDL anywhere?
Was there a “recent” image copy or disk back-up? Who can you call for help?

The newsletter title this month really has nothing to do with the film “Marathon Man”, but sometimes backup and recovery can feel just like having your teeth drilled… Anyway, the title is actually meant to get you to think again about your site’s back-up and recovery definitions: specifically, that age-old chestnut about “accidentally” dropped objects.

RECOVER from DROP?

I saw in LISTSERV a discussion about recovering from dropped tables and tablespaces. It is a pretty horrible situation: you have just confirmed the drop of a test table and then, as your finger descends towards the ENTER key, you suddenly see that it is, in fact, a different name…

DDL and Back-ups Handy?

Normally, at this point, the air is filled with colorful language and interesting local metaphors. Once it has calmed down a bit the real work starts: Do you have the DDL anywhere? Was there a “recent” image copy, or a disk back-up? Who can you call for help?

Mirror Mirror on the wall

Remember that mirroring etc. will not help you, as the DROP has also been successfully mirrored. So within moments the data is also dropped at your disaster recovery site. (Argh!)

Newbies then start looking in SYSIBM.SYSCOPY for the last image copies, while we grey-haired oldies (sorry: more mature experts) start looking in production control copy libraries and BETA92!
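For the record, that catalog check looks something like this (MYDB is a hypothetical database name). Bear in mind, though, that once the tablespace has been dropped its SYSCOPY rows are deleted along with it, which is exactly why the more mature experts head straight for the copy libraries:

--
-- LAST IMAGE COPIES REGISTERED FOR A GIVEN DATABASE
--
SELECT DBNAME, TSNAME, DSNAME, ICTYPE, TIMESTAMP
FROM SYSIBM.SYSCOPY
WHERE DBNAME = 'MYDB'
  AND ICTYPE IN ('F' , 'I')
ORDER BY TIMESTAMP DESC
WITH UR
;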

The heat is on!

At this point a couple of things happen: the telephone starts ringing, and a manager-type person materializes to ask annoying questions, typically: “How long will it take to get the data back?”

Now you have two possibilities:

1. You are in luck! Someone, somewhere, extracted all the DDL for the table(s) together with the DBID, PSID, and OBID(s), so you can generate a DSN1COPY job from the last Image Copy dataset found (a sketch of that kind of catalog extract follows after this list).


2. You are *not* in luck! You have no idea what the table looks like, and you cannot see whether anyone ALTERed it in the last five years or so.
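For scenario 1 to be possible at all, somebody must have saved the internal IDs while the object still existed. A minimal sketch of such a query (MYDB is a hypothetical database name) could look like this:

--
-- DBID, PSID AND OBID FOR ALL TABLES IN A GIVEN DATABASE -
-- THE INPUT NEEDED LATER FOR DSN1COPY WITH OBID TRANSLATION
--
SELECT TS.DBID, TS.PSID, TB.OBID, TB.CREATOR, TB.NAME
FROM SYSIBM.SYSTABLESPACE TS
    ,SYSIBM.SYSTABLES     TB
WHERE TB.DBNAME = TS.DBNAME
  AND TB.TSNAME = TS.NAME
  AND TB.DBNAME = 'MYDB'
  AND TB.TYPE   = 'T'
WITH UR
;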

Crashed and Burned

If you are in position 2, it is now a good time to update your CV and make sure your desk is neat and tidy… You might have access to some nifty third-party vendor tools, but for that you must at least have the dataset name of the last Image Copy and, of course, the third-party tool itself! Or the ability to read the DB2 log and “resurrect” the table definitions from there: not a pretty place to go, I assure you!

Back up that chain of thought for a Moment

So let’s rewind and imagine that you are doing this all differently… What about beginning today with an extract of all the DDL on your system? Then capture all of the IFCID 62s that show any DDL changes. Going further: what about getting all of the IFCID 220s to correlate dataset allocations with Utility IDs, DBIDs, and PSIDs?
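A hedged sketch of switching such a collection on (trace class 32 and the SMF destination are just example choices here; your site standards and monitor set-up may well differ):

-START TRACE(PERFM) CLASS(32) IFCID(62,220) DEST(SMF)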

Imagine what you could then do!

Wow! Cavalry over the hill

Yep, you have a “history” of all the DDL that has happened on your machine right up to the point when your object was DROPed, *and* you have the dataset name(s) of the last image copies as well as all the internal IDs to enable a successful DSN1COPY job complete with OBID translation! Cool, huh? So suddenly you are now a hero instead of a villain!


DIY or Buy in?

So much for the theory – what about in practice? Well, you can write it all yourself, or you can get it as a useful side effect of our WorkloadExpert (WLX) software, which has all of this built in! We already collect all these IFCIDs, and we already have a DDL Generator bundled with the WLX software for the Audit Use Cases, so it really kills two birds with one stone!

Restricted movement?

Now, of course, you could argue: “Wait! I have RESTRICT ON DROP set for all my production tables!” This works really well for accidental drops, but I have seen lots of places where it should have been used and was in fact forgotten. How can you check? Run this little SQL to validate that what you think is true really is true:


SELECT COUNT(*) AS TABLES
      ,COUNT(CASE WHEN CLUSTERTYPE = 'Y' THEN 1 ELSE NULL END)
                 AS DROP_RESTRICT
      ,COUNT(CASE WHEN CLUSTERTYPE = ' ' THEN 1 ELSE NULL END)
                 AS NOT_DROP_RESTRICT
FROM SYSIBM.SYSTABLES
WHERE TYPE IN ('T' , 'M' , 'H' , 'R')
  AND NOT DBNAME IN ('DSNDB01' , 'DSNDB06')
WITH UR
;
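And if the counts reveal unprotected tables, the fix is a one-line ALTER per table (hypothetical table name shown):

ALTER TABLE MYSCHEMA.MY_PROD_TABLE ADD RESTRICT ON DROP
;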


I hope the results don’t have you feeling like Dustin Hoffman in the movie.

As usual any questions or comments are welcome,

TTFN Roy Boxwell

2015-09 – A real CLUSTER Buster


Are you using the “default” clustering INDEX or are you defining the correct INDEX with the CLUSTER Attribute?

This newsletter is dedicated to all DDL designers who do their best but then omit that last tiny bit. No-one really notices, or even cares, for years and years until…

Imagine a huge table up in the billions of rows. Imagine now that you have SQL that accesses this table and it must, as always, run fast. So what do you do? You create an index, RUNSTATS it and Hey Presto! Everything is sweet and dandy! Now imagine this happening again and again over time… What you finally end up with is a huge table with billions of rows now with ten indexes! Not too brilliant for insert and update but that is not the point of this newsletter.


An SQL that had worked perfectly well suddenly went pear-shaped…

So now stop imagining, as this really happened at a customer site. We come to the crux: an SQL that had worked perfectly well suddenly went pear-shaped (belly-up for the non-British-English readers!) and started using a two-column index with one matching column instead of a six-column index with six matching columns! This change in access path caused death by random I/O, and it all went horribly wrong.


Now the question is why? What on earth happened for the DB2 Optimizer to make such a terrible decision?

1- RUNSTATS review

The first idea was, of course, my favourite: incomplete, or not full, RUNSTATS data. In fact there were “bogus stats” from 2003 in SYSCOLDIST, but even after all the bogus stats had been deleted and a complete RUNSTATS with HISTOGRAM and FREQVAL had been performed, the access path remained stuck on the “bad” index.
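If you want to check how stale the column distribution statistics for a suspect table are, a little catalog query like this (hypothetical owner and table names) shows the age of every SYSCOLDIST entry:

SELECT TBOWNER, TBNAME, NAME, TYPE, STATSTIME
FROM SYSIBM.SYSCOLDIST
WHERE TBOWNER = 'MYSCHEMA'
  AND TBNAME  = 'MY_BIG_TABLE'
ORDER BY STATSTIME
WITH UR
;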


2- DDL review

I then reviewed the DDL that created all of the objects and noticed that none of the indexes was defined with the CLUSTER attribute. The table itself was “as old as the hills,” but all of the indexes had been created and/or altered many times over the last ten years or so.


3- Redefine the “bad” Index as CLUSTER

The dummy CLUSTER

Now, as we all know, if no index is defined as CLUSTER, DB2 picks one to act as a dummy CLUSTER when it does a REORG. So you can end up with 100% clustered non-CLUSTER indexes. In this case that was exactly what was happening. The “bad” index was, purely by fluke after many years of index maintenance, the “default” clustering index; however, it was a *terrible* choice for a clustering index. Worse still: because it was non-unique with only two columns, and therefore small (well, “small” in this case was still 40,000 pages!), it looked positively “good” to the DB2 Optimizer, hence the decision to abandon six matching index columns in favour of just one…
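If you are curious which of your own indexes are in this situation, a hedged little query along these lines lists indexes that are not defined as CLUSTER but are nevertheless (almost) perfectly clustered (CLUSTERRATIOF holds a value between 0 and 1):

SELECT CREATOR, NAME, TBCREATOR, TBNAME, CLUSTERRATIOF
FROM SYSIBM.SYSINDEXES
WHERE CLUSTERING    = 'N'
  AND CLUSTERRATIOF >= 0.95
ORDER BY CLUSTERRATIOF DESC
WITH UR
;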

“Cleansing” with ALTER, REORG and the DB2 10 INCLUDE syntax

A quick ALTER of the original “first” index to get the CLUSTER attribute, a REORG scheduled for the weekend to get the data into *proper* CLUSTERing sequence and – Bob’s your uncle! – the access path swapped back to the “good” index.
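The “cleansing” DDL itself is a one-liner; a sketch with a hypothetical index name:

-- HYPOTHETICAL INDEX NAME: GIVE THE INTENDED INDEX THE CLUSTER ATTRIBUTE,
-- THEN REORG THE TABLESPACE TO GET THE DATA INTO THE NEW SEQUENCE
ALTER INDEX MYSCHEMA.MY_SIX_COL_IX CLUSTER
;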

Now there’s still work to be done here, as ten indexes is about seven too many, if you ask me. With DB2 10 it is possible to use the INCLUDE syntax to weed out some of the extra indexes and thus speed up all usage of this mega-table. But, for the right here and now, the job is done!
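Just to illustrate the INCLUDE idea with hypothetical names: a unique index extended with INCLUDE columns can give index-only access for the extra columns, so that a second, now redundant index becomes a candidate for DROP:

CREATE UNIQUE INDEX MYSCHEMA.MY_UNIQUE_IX2
       ON MYSCHEMA.MY_BIG_TABLE
          (KEY_COL)
       INCLUDE (EXTRA_COL1 , EXTRA_COL2)
;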

So now I am at the end of this sad story of how a little design “error”, just forgetting one little attribute on an index create statement, caused major mayhem many years down the line… Remember to check *all* of your tables and see if you have any beauties like this in your shop (surely not!).


Here’s a little SQL that will do the job for you:

--                                                                     
-- QUERY TO LIST OUT ALL TABLES WITH TWO OR MORE INDEXES WHERE NO INDEX
-- IS DEFINED AS CLUSTER                                               
--                                                                     
SELECT A.CREATOR                                                       
      ,A.NAME                                                          
FROM SYSIBM.SYSTABLES  A                                               
WHERE NOT A.CREATOR = 'SYSIBM'                                         
  AND NOT EXISTS                                                       
          (SELECT 1                                                    
           FROM SYSIBM.SYSINDEXES B                                    
           WHERE A.NAME       = B.TBNAME                               
             AND A.CREATOR    = B.TBCREATOR                            
             AND B.CLUSTERING = 'Y')                                   
  AND 1 < (SELECT COALESCE(COUNT(*) , 0)                               
           FROM SYSIBM.SYSINDEXES B                                    
           WHERE A.NAME    = B.TBNAME                                  
             AND A.CREATOR = B.TBCREATOR)                              
ORDER BY 1 , 2                                                         
;

Note that this query excludes the SYSIBM indexes as IBM also forgot to CLUSTER them!


As usual, any comments or questions please mail me!


TTFN

Roy Boxwell