2021-08 ICI – Db2 12 Update

This month, I would like to review the ICIs that we have had for a few releases plus those that have recently appeared, and then the trouble with twelve …

It began …

Db2 development realised that something had gone horribly wrong when a bunch of Db2 users suddenly found that the output from their queries was no longer what it should be … After a bit of digging, the CHAR format rewrite was found to be the root cause and a fix was hastily created: the BIF_COMPATIBILITY ZPARM.

What’s in a name?

Well, then VARCHAR happened, and along came a very unpleasant problem with Java timestamps and, as I have documented in earlier blogs, it all started getting silly with one ZPARM being used for multiple format problems.

Along came the ICI (Incompatible Change Indicator)

So in Db2 10 we got a new IFCID, the 366, which was spat out at *every* prepare (bind) of any SQL that possibly contained an ICI. We started off pretty small with just three ICIs: the first two being the reformatted output of CHAR and VARCHAR, and the third being the TIMESTAMP format problem.

Db2 10 updates for Db2 11

Here they brought out numbers four to nine to handle all the little changes in Db2 11 so that you got the alert in Db2 10 before it bit you in Db2 11 – all well and good.

Db2 11 updates

The big change was the brand new IFCID 376 – the evil twin of the 366. The only difference is that Db2 caches the entries, so you basically get a rolled-up 366 – apart from one tiny little detail: the execution count. For the 366 it is 1:1, but for the 376 it is 1:nnnnn, which could be any positive integer. They then added the 11nn range, going all the way up to 1111, and then they brought in 1112 for empty XML tags. All of these have been discussed in my earlier blogs.

What’s new in the ICI World?

Db2 12 of course! They brought out 1201 very early on, due to the POWER built-in function causing a problem. The output on overflow changed from a negative to a positive SQLCODE, which can of course cause "problems"… Why did this change even happen? IBM rewrote the code from using LE 31-bit assembler math calls to using C, and so the function "knew" if it overflowed and could return a warning saying so, whereas the old assembler code just died a death and you got a negative SQLCODE.
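To picture the change, here is a minimal sketch – the exact -802/+802 pair is my assumption of the codes involved, and SYSIBM.SYSDUMMY1 is just a convenient one-row table:

-- Raising 1E75 to the fifth power overflows the DOUBLE result
SELECT POWER(1E75, 5)
FROM SYSIBM.SYSDUMMY1;
-- Old behaviour: the statement fails with a negative SQLCODE (e.g. -802)
-- New behaviour: a positive SQLCODE warning comes back instead (e.g. +802)

Same statement, same data, different SQLCODE sign – which is exactly the sort of thing application error handling trips over.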

Naming Convention?

Then it went quiet for a while until something weird happened: 1215031 and 1215032 appeared. Now, at first, I liked the idea of putting the Function Level (FL) into the ICI, but then I realised it was actually pretty pointless and just made it more confusing!

1215031 is issued when you could qualify a row with NULL in the DATA CHANGE OPERATION column using the FOR SYSTEM_TIME FROM/BETWEEN predicate on a system period temporal table with AUDITING.

1215032 is issued when you attempt to call stored procedure SYSPROC.SET_MAINT_MODE_RECORD_NO_TEMPORALHISTORY as this is no longer supported for data replication calls.

1204 (note the FL has gone…) is issued when you use CURRENT_SERVER or CURRENT_TIMEZONE as a column or variable name.

In the documentation it says 1202, but you actually get a 1215031 – and there is no mention of 1215032 or 1204 at all!

More new ones

Meanwhile, nothing to do with Db2 12, IBM also brought out two more:

11 for using SELECT INTO syntax with a UNION

[12 was thankfully skipped!]

13 for INSERT/UPDATE/DELETE using an attribute WITH UR

Both of these were more parsing bugs than anything evil, but both require code changes if they appear!
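Hedged sketches of the two flagged patterns (the table, column and host-variable names are invented):

-- ICI 11: SELECT INTO combined with UNION - the parser used to tolerate this
SELECT C1 INTO :HV1 FROM T1 WHERE ID = 1
UNION
SELECT C1 FROM T2 WHERE ID = 1;

-- ICI 13: an isolation attribute on a data-change statement
DELETE FROM T1 WHERE ID = 1 WITH UR;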

Utility Time

IBM has now enhanced the LOAD utility with LOAD FORMAT DELIMITED for correct packed-numeric data support when one or more virtual decimal digits exist. This caused another problem, as the loaded data could be viewed as inconsistent (as it was by one of our customers). IBM then created another APAR to roll back the change and introduce a new ZPARM, LOAD_DEL_IMPLICIT_SCALE, to control how these numbers should be loaded. The default is NO, like it used to be, with an implied decimal point at the far right of the data; with YES, LOAD interprets the scale setting in the LOAD statement. At the same time, IFCID 25 was changed to set a new bit, alerting you that you have done a LOAD into a table where there is packed-decimal data with one or more digits after the virtual decimal point – in other words, that you could have an incompatibility with the LOAD. For details please see APARs PH28104 and PH36908.
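A minimal sketch of the affected case (table name, column names and delimiter are assumptions, and my reading of the APARs is that a delimited packed value of 12345 would load as 12345.00 with LOAD_DEL_IMPLICIT_SCALE=NO, but as 123.45 with YES):

LOAD DATA FORMAT DELIMITED COLDEL ';'
INTO TABLE MYSCHEMA.MYTAB
 (ID     INTEGER EXTERNAL
 ,AMOUNT DECIMAL PACKED -- target column is DECIMAL(7,2): two virtual decimal digits
 )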

Pain Point for you?

The above-mentioned new notification is a bit strange (pardon the pun), as there is already IFCID 376 for incompatibilities. Now you must also start IFCID 25 and go checking bits. So my question to you all is: have you got this problem and, if so, do you think it is worth it to integrate IFCID 25 bit checking into SOFTWARE ENGINEERING's/SEGUS's current ICI/BIF Use Case in our WorkLoad Expert and/or our BIF/ICI Freeware software?

The future is bright

As far as ICIs are concerned they just keep on rolling!

As always if you have any comments, especially with regard to IFCID 25, please feel free to e-mail!

TTFN

Roy Boxwell

2020-04 Four Flavors of Db2 Audit

These days there is a lot of talk about audit, specifically regarding Db2 on z/OS. So, in this newsletter, I wish to run through four different ways that you can “Get Audit Done”.

As well as simply getting it done, I will also run through the four different ways that you can process the gathered data.


Four ways to get a Db2 z/OS Audit done


1- First up

First option is the simplest, cheapest and quickest:

Do nothing.

Whether or not this will help your company is a non-trivial question of course!

Naturally this is an absolute No No.


2- Then we have

Next option is relatively simple and cheap, but requires a bit of work: 

Write it all yourself, but based on existing data that some other process already extracts for you (SMF, for example).

If you happen to have the skills for extracting the required audit data from existing data that is being collected anyway, then this might well be the best method if you are really strapped for resources. 


3- Getting there 

Then we have not so simple, still cheap, but a ton of work: 

Write it all yourself and add all the IFCIDs you actually need to audit your system as well as capturing all the SQL. 

This needs a serious amount of skill to get, and keep, up with the agile world of Db2. You will also need to take care of the amount of data that you will be collecting.

However, the auditor will be happy as you have everything they could ask for.


4- Aha! The only true way 

Last option is simple, not so cheap but very quick: 

Third party software that does it all for you.

This is my preferred solution, especially as we just happen to sell one (WorkLoadExpert Audit).

This is actually the only real way to go. You probably don’t have the time to keep all these things up-to-date and running correctly. 

Data Collected – Now what? 

So, you have chosen one of these ways to gather the data. Now you must evaluate what you got. Here again we have four separate ways to go forward: 

First up 

There it is! 

Do nothing. Just point at the datasets, print outs, database objects and say “It is all in there…” 

This is not really a solution and any auditor worth his, or her, salt would quite rightly be extremely upset! 

Then we have 

A whole bunch of pre-written SQLs. 

SPUFI is ok, but much better would be to see these in a GUI where graphical viewing is built in and saving and sharing results is much easier.  

This is not bad, but still a manual “island” process. Just Db2 and nothing else plus it must be triggered by humans. 
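To give a flavour of such pre-written SQLs, here is a sketch against the real catalog that hunts for over-generous table grants (the choice of check is mine, not an exhaustive audit):

SELECT TCREATOR, TTNAME, GRANTOR, SELECTAUTH, UPDATEAUTH
FROM SYSIBM.SYSTABAUTH
WHERE GRANTEE = 'PUBLIC'
WITH UR
;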

Getting there

A whole bunch of pre-written and custom SQLs.

This time, all run in Batch and the results are emailed to the auditor directly. These emails can “just sit there” until the auditor checks the results. Naturally, if anything is found, then the underlying data must still be there for a detailed analysis.

Better, as it is getting automatic but still not really “round”, as it is still Db2 in isolation…

Aha! The only true way

Use of LEEF or SYSLOGGER-style formats to export all audit data.

The data is then in a data-lake where SPLUNK, QRADAR et al can happily slice and dice their way through the data.

This is the best way!

You also get an extra bonus point for *removing* the data from the mainframe. As auditors *love* a single point of control, this is the only real way forward. It also pushes the Db2 data into the world of other data that auditors use and require.


Db2 Audit with „GIVE&TAKE“ :


Software Engineering GmbH and SEGUS Inc are launching a new free Give&Take which, this time, is the Audit support from WorkLoadExpert.

If you would like to take part, then please just fire off an email to db2support@segus.com telling us who you are and which firm you work for and we will get in touch!

Give and Take 

By the way, it is called “Give&Take” because :

  • we Give you the software, for free, to run for a trial period, and
  • we would like to Take away what you think, feel, and find about the software after the trial period. 

More about Give&Take


TTFN, 

Roy Boxwell 

2018-05 Audit 2.0 – GDPR Audit guide and checklist for Db2 z/OS

Or: How I Learned to Stop Worrying and Love the Auditor

A guideline in 5 big Steps to Audit for Personal Data Protection

First up, this is not another audit review and health check sell! We, and I mean especially in the European Union (EU), are about to get the biggest update to "my data", "my right to know" and "my right to be forgotten (deleted)" that has ever happened, when the General Data Protection Regulation (GDPR) kicks off on May 25th, 2018.

Who cares?

Well, we all should really! Very soon hordes of people could well start demanding to know if you have any data about them and if so where it is and who you give it to. Personal Data will be big business!

Age of Consent

Just assuming all is OK is also no longer valid or lawful – any personal data that you keep must be kept with the *active* consent of the data owner… This is going to get awfully messy awfully quickly! Do not forget that the definition of personal data has also changed quite a bit, e.g. to include IP addresses. For the record, here is the actual text of the GDPR regulation:

Art. 4 GDPR Definitions

For the purposes of this Regulation:

1.      ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

And

Art. 9 GDPR Processing of special categories of personal data

1.      Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

Security by design and Security of processing

Very important are Articles 25 and 32:

Art. 25 GDPR Data protection by design and by default

1.       Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.

2.       The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.

The last part of paragraph 2 is important here.

Art. 32 GDPR Security of processing

1.      Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

1.      the pseudonymisation and encryption of personal data;

2.      the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

3.      the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

4.      a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

2.      In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

Here paragraphs 1.2, 1.4 and 2 are the biggies!

A small or large fine for you today?

The fines are also amazingly high. The "minor" problem of being over 72 hours late in reporting a data leak (a breach) costs 2% of global turnover or €10,000,000 – whichever is *higher* – and, if you are really naughty, like disregarding basic data laws, moving data abroad or ignoring an individual's rights, then you get hit for 4% of global turnover or €20,000,000 – again, whichever is *higher*.

First up?

I, for one, do not want to be the first company that makes the headlines… How about you?

Who are you afraid of?

Now I know that most of the talk about GDPR is “right to data” for the general public but we all know, as IT specialists, that the people to be really afraid of are ex-employees who did not leave on happy terms. They know *exactly* where the knife could best be put… They know exactly where, and on which platforms, all of the data really is.

Protect yourself – Due diligence

What can you do? Well the first thing is to actually understand what GDPR means to you and your firm’s data. Then you must be able to prove to auditors that you tried your very best. We will all probably get hacked at some time, but if you tried your best it is enough. Using all of the data you gather in a post-mortem, or for forensics, is also a very good idea as can be seen in paragraphs 2.3 and 2.4 below:

Art. 83 GDPR General conditions for imposing administrative fines

1. Each supervisory authority shall ensure that the imposition of administrative fines pursuant to this Article in respect of infringements of this Regulation referred to in paragraphs 4, 5 and 6 shall in each individual case be effective, proportionate and dissuasive.

 2. Administrative fines shall, depending on the circumstances of each individual case, be imposed in addition to, or instead of, measures referred to in points (a) to (h) and (j) of Article 58(2). When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following:

1. the nature, gravity and duration of the infringement taking into account the nature scope or purpose of the processing concerned as well as the number of data subjects affected and the level of damage suffered by them;

2. the intentional or negligent character of the infringement;

3. any action taken by the controller or processor to mitigate the damage suffered by data subjects;

4. the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32;

The list goes on after these of course…

Db2 Audit – a new way?

Most sites use SMF and do daily cuts to then offload to a repository system of some sort on some sort of hardware. The problem here is that the amount of SMF data you are generating can swamp you and this method is not nearly quick enough. Waiting a day is 24 hours of the 72 that are available to you…

Faster, Better, Securer?

Can you do it faster?  Can you do it yourself faster?  Can you do it better? Maybe…

 

If you can handle OPx and a bit of High Level Assembler then "Bob's your uncle!" – as long as Bob is your auditor, of course… There are more than enough examples on the web about how to write and do OPx reading. (There are even vendors that are willing to sell you their software 🙂 )


A guideline in 5 big steps to Audit in a new way for Personal Data protection :


1 – To do list – IFI commands, IFCIDS & Audit Class

Create a program that can issue the required IFI commands to start and stop traces, and to issue the READA and READS IFI calls that you wish to have. If you don't know which IFCIDs belong to which Audit Class, here's my handy list:

Audit Class ! IFCIDs           ! What                                 ! Why
------------+------------------+--------------------------------------+--------------------------------------
 1          ! 140              ! Authorization failures               ! Possible brute force password attack
 2          ! 141              ! GRANTs and REVOKEs                   ! Possible temporary raising of privilege
 3          ! 142              ! CREATE/ALTER/DROP of table with      ! Removing the AUDIT attribute is a
            !                  ! AUDIT attribute                      ! big red flag
 4          ! 143              ! 1st change of table with AUDIT       ! Sensitive data updates
            !                  ! attribute                            !
 5          ! 144              ! 1st SELECT of table with AUDIT       ! Sensitive data usage
            !                  ! attribute                            !
 6          ! 145              ! BIND/PREPARE using table with        ! Possible usage of sensitive data
            !                  ! AUDIT attribute                      !
 7          ! 55, 83, 87,      ! SET CURRENT SQLID, end of identify,  ! Elevation of privilege
            ! 169, 319         ! end of signon CICS/IMS               !
 8          ! 23, 24, 25,      ! Db2 utilities                        ! UNLOADing data to where?
            ! 219, 220         !                                      !
 9          ! 146, 392         ! For customer use                     ! n/a
10          ! 269, 270, 271*   ! Establish reuse trusted context,     ! Anything with MASKs or ROW
            !                  ! trusted context CREATE/ALTER and     ! permissions must be checked
            !                  ! column mask/row permission           !
            !                  ! CREATE/DROP/ALTER                    !
11          ! 361              ! Successful access                    ! Only makes sense if working with
            !                  !                                      ! AUDIT POLICY SYSADMIN or DBADMIN

* 271 is not started automatically with Class 10.

So, what is missing from this picture?

Well, what about the COMMANDs, to actually see what people are issuing on the machine? You must also add IFCIDs 90 and 91. Even then you do not see any of the DDL, so you must also start IFCID 62 to get that – you do want to see what people are ALTERing, CREATEing and DROPping, don't you?
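Pulling that together, a hedged sketch of the trace commands involved (the class list, the use of a local MON class for the extra IFCIDs, and the OPX destination are assumptions you must tailor – note that 271 has to be started explicitly):

-START TRACE(AUDIT) CLASS(1,2,3,4,5,6,7,8,10,11) DEST(OPX)
-START TRACE(MON) CLASS(30) IFCID(62,90,91,271) DEST(OPX)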


2 – All done in design?

Once you have a system that gathers all these you must buffer them in memory and then do one of two things:

1) Write them out to a file for post-processing and enrichment later
2) Directly write them to your SIEM system of choice

I am not a fan of number 2 because if the SIEM system cannot accept the data then your data is lost…

With No. 1 you can keep processing even if the SIEM system cannot ingest, for whatever reason, and as long as you trigger yourself every 5 – 10 minutes or when the OPx buffer is full then that is “real time” enough for me!


3 – Care for something else for the weekend?

Plus, you can enrich the data with extra stuff that the naked IFCIDs do not have attached to them, e.g. map the DBID/PSID to a real database and tablespace name. Store all of this data in a bunch of Db2 tables, then kick off a Db2-to-LEEF re-formatter program that reads all the audit data collected since last time and writes it all out.
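The enrichment look-up itself is a one-liner against the catalog – a sketch, assuming the DBID and PSID arrive as host variables from the IFCID record:

SELECT DBNAME, NAME
FROM SYSIBM.SYSTABLESPACE
WHERE DBID = :HVDBID
  AND PSID = :HVPSID
WITH UR
;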

4 – Code Page – don’t forget the Code Page

Then you must do an ICONV for code page conversion, swiftly followed by a gzip to get it down to size and then copy it down to a USS file ready for your SIEM system to ingest its prey.


5 – Done, Finished and Complete

To prove that this system is always available you should also schedule a simple

GRANT SELECT ON SYSIBM.SYSDUMMY1 TO PUBLIC;

every morning at 06:00, which you should then see in the output every day. Remember – due diligence!


 

So now you are done and finished and can let the SIEM system do the completion!

 

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

PS: Please note that the bolding of parts of the GDPR text is only from me!

2017-02 Why SIZE still matters in Db2 12

What has changed for space management in Db2 12?

How to avoid SIZE limits in Db2 12 like in previous Db2 Releases?

How to monitor the maximum possible SIZE of table and index spaces and table and index partitions?

SIZE in Db2 12: Now that Db2 12 has gone GA, I thought it would be nice to do a quick recap of space management and its problems over the releases. My "old" newsletter 2014-05: Why SIZE matters for Db2 still receives a lot of hits on our website, so I know that this is a big topic of interest for many of you. Some nifty things have been introduced in Db2 12 to make space a lot easier to use and manage.

In the beginning…secondary allocation for tablespaces and indexes since Db2 V7, Db2 V8…

Since Db2 V8, the DBAs of this world have all had the ability to forget about PQTY and SQTY in the DDL for Tablespaces and Indexes. At first, nearly no-one trusted the sliding scale algorithm, and SOFTWARE ENGINEERING’s product Space AssuranceExpert (aka SAX) monitored and reacted instantly to secondary allocations.

However, we now have Db2 12, and I thought it would be interesting to review what was done in Db2 V7 (when our SAX was launched), and the difference nowadays in the Db2 12 world.

IFCID issuing for space extents

Every time a secondary allocation is done in Db2, it can be made to spit out an IFCID. SAX runs as a started task, active 24×7, from Db2 start up until just before Db2 shut down. It catches all of these IFCIDs thrown by Db2, and performs an analysis with six basic questions:

1. Can this dataset reach its maximum physical size *before* running out of physical extents? (The actual size is dependent on the "geometry" of the object, of course!)
2. Will this object run out of datasets? (The number of datasets an object can have is, once again, dependent on the "geometry" of the object.)
3. Is this partition nearing its maximum size?
4. Did Db2 ask for one extent but get more back?
5. Are any of my SMS disk storage pools running out of space?
6. Are there any SEQUENCES that are about to hit the wall?

(Numbers five and six are actually triggered by a timer, naturally.)

Can this dataset reach its maximum size before running out of extents?

Remember, back in those old days of Db2 V7? We only had 255 extents and 254 partitions, but datasets could still get pretty big pretty fast.

The problem lots of shops had was that an important dataset would "hit the buffers" of the maximum number of extents *way* before it ever ran out of physical space, causing grief, wailing and gnashing of teeth! SAX stopped all this by giving WTO "heads-up" style messages in two flavors: first a warning message, and then a critical message. This gave DBAs and space managers much-needed time to plan for the outage and the, inevitably, long-running REORG to actually action the required ALTER, or perhaps even any DROP/RECREATE that had to be done.

IBM also noticed this problem and so introduced in Db2 V8 the “sliding scale” of secondary allocations, as long as the ZPARM OPTIMIZE EXTENT SIZING field (MGEXTSZ) was set to YES (this is the default from Db2 9, by the way). Of course, to really use this, you then had to ALTER all of the existing spaces PQTY and SQTY to be -1, and then remember to delete all PRIQTY and SECQTY lines in your DDL and also rely on the TSQTY and IXQTY ZPARMs giving a big enough “first default”. (By the way, defaults for these two ZPARMS are 0, which is actually translated to be 720k or one cylinder for normal spaces and 7200k or 10 cylinders for LOB spaces). This all probably explains why the take up of this great feature has not been that spectacular and, in fact, Listserv *still* gets questions about “How good is this feature?” This also explains why the primary reason for having SAX is still valid at most shops today!
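If you want to hand space management over to the sliding scale, the DDL side is simple – a sketch with invented object names; broadly speaking, the new quantities take effect when the datasets are next allocated (REORG, LOAD REPLACE or recreation):

ALTER TABLESPACE MYDB.MYTS PRIQTY -1 SECQTY -1;
ALTER INDEX MYSCHEMA.MYIX PRIQTY -1 SECQTY -1;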

However, most shops these days tend to ignore the extents problem and only REORG when over 1000 extents have been allocated. This is no problem for SAX, as it knows the SECQTY and the MGEXTSZ ZPARM settings and can decide to “ignore” an IFCID for extent and ALTER SECQTY processing if the SECQTY is -1 and the MGEXTSZ is YES.

Will this object run out of datasets?

Now the problem of running out of datasets is very, very evil indeed… For a non-partitioned space, you can have up to 32 datasets.  Db2 will happily allocate away and you will never know, or even be informed, if, and when, the last possible dataset has just been allocated and, of course, you will not know that the 33rd one cannot be allocated until you get a -904 unavailable resource error! By definition this is “not good”, as you must do a HUGE REORG with a bunch of managers breathing down your neck and *not* make any mistakes with the new allocations. (Again, this is a very good reason to have SAX doing all the monitoring and triggering early warning “heads-up” style messages!)

Is this partition nearing its maximum size?

A partition running out of space is rare, but when it does happen it is, of course, a disaster! The idea in SAX, is to warn when “the end is near” for any of the partitions in use and thus, as before, allow time for the ALTER etc.

Did Db2 ask for one extent but got more back?

Degenerated extents are annoying as well. You have only 255 or 7,257 extents, Db2 requests one but gets up to five back! This is “wasting” your precious supply of extents and so SAX can also warn you if this starts happening. Remedial action can again be planned to correct the problem, (normally a volume defrag in this case). Now in z/OS 1.7 “Extent Constraint Removal” was introduced for the DATACLAS which, if set to “Y”, allows 7,257 extents but still limits you to 123 extents per volume and 59 volumes. So watch out if you are using huge “virtual” disks (E.g. MOD 54 or EAV), as you can end up wasting space because you still cannot exceed 123 extents per volume.

SAX also takes care of duplicate recording. This is where an extent is registered, but SMS consolidates it into the primary/existing extent. This would normally get logged as an extent, but SAX sees this and does not report it as one.

Are any of my SMS disk storage pools running out of space?

When an SMS pool runs out of space, either for sort/work or image copy, it is *not* good! The idea here is to also give a "heads-up" style alert, so that the DBA can trigger the space management people to have a look at the state and size of the SMS storage groups – this time alerted by percentage used or GBs of space free.

Are there any SEQUENCES that are about to hit the wall?

The usage of SEQUENCES has taken off. Nowadays shops can run into the problem of SEQUENCES hitting the maximum/minimum number for a NOCYCLE defined sequence. SAX tests sequences at the same time as the SMS groups to warn about any encroaching problem with WTO/MSG and reporting.


What was new in Db2 V8?

Db2 V8 introduced a big change – Partitions went up to a maximum of 4,096, and the calculation about how many pieces your NPI can have got “a little bit complex” (see also my previous newsletter: “2014-04 Are you going to pieces”).


What was new in Db2 9?  PBG and UTS spaces

In Db2 9 the next major advance came with UTS spaces. The one that caused the most grief was, of course, PBG. Why? Well, the first problem was that some people went mad and used MAXPARTITIONS 4096 right from the get-go. They then found out this could not simply be changed, and it ended up being a huge problem. IBM came out with a bunch of fixes for these people, but the recommendation is still true today: "Use the number you expect to use!"
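In DDL terms, a sketch of the sane approach (object names and sizes invented) – size the PBG for the growth you actually expect:

CREATE TABLESPACE MYTS IN MYDB
  MAXPARTITIONS 10
  DSSIZE 4 G
  SEGSIZE 64
;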

PBGs, however, came with a new set of space management problems:

1. By definition every partition is full, and so a TP REORG is "dangerous" – especially if you have VARCHAR, and even more so if compressed.
2. ALTER at TP level is not supported for PBG.
3. Getting rid of empty partitions was not supported.
4. Adding partitions dynamically (by command) was not supported.
5. What to do if the partition that is "in use" is growing and is:

   a – the last allowed partition?
   b – the only one, because MAXPARTITIONS is set to one?

Now these are "non-trivial", because the Db2 catalog is itself defined this way and you would not want an alert every time someone created a table or index!

The trick here, is to treat these conditions as if it was a normal space and so, instead of warning that you are using the last part, it waits until you are using, e.g. 80% of that part. Then, e.g. at 90% comes the critical threshold warning.

Big changes happened here in Db2 12.


What was new in Db2 10?

With Db2 10 came the ability to ALTER PBGs to add parts which made using DSN1COPY to clone data around a lot better!


What was new in Db2 11?

In Db2 11 the REORG utility can be used to remove any empty parts in PBGs by the use of the ZPARM REORG_DROP_PBG_PARTS being set to ENABLE (DISABLE is the default).


What is new in Db2 12?

Now in Db2 12 there is partition independence for DSSIZE. Before, all partitions had to have the same maximum size (DSSIZE). Now you can have different sizes for different parts. This requires either creating a new tablespace (UTS with Relative Page Numbering), or an ALTER and a TS-level REORG of an existing UTS space. The tablespace goes over to relative page numbering and the RID increases in size to seven bytes, hence the need for a TS-level REORG. The partitioning indexes also get DSSIZE, so they can vary in size as well. Once you are there, all of the partitions can then be ALTERed up in size with no outage! This is really, really good!
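In DDL terms that looks something like this sketch (object names, partition number and sizes are invented):

ALTER TABLESPACE MYDB.MYTS PAGENUM RELATIVE;
-- materialized by a tablespace-level REORG, after which:
ALTER TABLESPACE MYDB.MYTS ALTER PARTITION 3 DSSIZE 256 G;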

REORG of a PBG can now "spill" into a new partition. This is also really good, as it was the major problem with PBG TP-level REORGs. The chance of LOB data going into COPYP during the log apply phase has been stopped – thankfully! Finally, deletion of empty partitions is controlled with a utility DROP PART syntax.
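The empty-partition clean-up then becomes a hedged one-liner (object name invented; the ZPARM REORG_DROP_PBG_PARTS and your Db2 12 level determine whether the keyword is accepted):

REORG TABLESPACE MYDB.MYTS SHRLEVEL CHANGE DROP_PART YES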


The SAX way for Space monitoring

The SAX tool way of processing all this info is neatly summarised in the help panel of the tool itself:

SUPERVISE LPS

Supervise linear pagesets. If specified, a warning is issued
when the high-allocated value reaches this percentage of the
maximum data set size for partitioned objects.

For non-partitioned objects, a warning is issued for every
newly allocated data set as soon as the data set number
reaches this percentage of the maximum number of data sets:

Two different values may be entered for warning and critical
values with different message ids. This may be useful for
automation reasons (see below).

Object type: TABLESPACE      ! Maximum number of data sets
-----------------------------+----------------------------
LOB tablespaces              ! 254
-----------------------------+----------------------------
Non-partitioned tablespaces  ! 32
-----------------------------+----------------------------
Partitioned tablespaces      ! 1 (Percent used check)
-----------------------------+----------------------------
Partitioned By Growth        ! MAXPARTITIONS. LPS check if
tablespaces                  ! more than one. If on last
                             ! partition then percent used.
-----------------------------+----------------------------
Object type: INDEX           ! Maximum number of data sets
-----------------------------+----------------------------
Non-partitioned indexes on   ! MIN ( 4096 , 2 power 32 /
tablespace with LARGE,       !      ( DSSIZE / TS PGSIZE))
DSSIZE, or more than 64      ! Eg: 128 GB DSSIZE with
Partitions                   !       8 KB Tablespace Page
                             ! gives 256 Pieces (datasets)
                             ! Or    4 GB DSSIZE with
                             !       4 KB Tablespace Page
                             ! gives 4096 Pieces (datasets)
-----------------------------+----------------------------
Non-partitioned indexes      ! 32
otherwise                    !
-----------------------------+----------------------------
Partitioned indexes          ! 1 (Percent used check)
-----------------------------+----------------------------
To support automation based on WTO ids two different
thresholds may be specified:
Field (1) specifies a warning threshold using WTO ids
O2RTSU04 - 12W  (non-partitioned spaces)
O2RTSU04 - 14W  (partitioned spaces)
O2RTSU04 - 16W  (partition by growth spaces)
Field (2) specifies a critical threshold using WTO ids
O2RTSU04 - 13W  (non-partitioned spaces)
O2RTSU04 - 15W  (partitioned spaces)
O2RTSU04 - 17W  (partition by growth spaces)

AUDIT DEGENERATED XTS
Audit secondary quantity for de-generated extents. If
specified, a warning is issued in case the last extent
does not reach this percentage of the SECQTY specified
in the Db2 catalog. If this field is left blank, no
auditing is performed.

AUDIT SMS STOGROUPS
Should the Space AssuranceExpert audit SMS stogroups. Y/N
If Y is entered, a pop-up window will allow you to enter up
to 24 SMS storage groups which will be audited.
If WARN IF % ALLOC > or WARN IF GB FREE < is specified and
exceeded, a warning (WTO) will be issued.

CHECK SYSSEQUENCES
Should the Monitor also check for SYSIBM.SYSSEQUENCES that
are running out of room every PING minutes?

N  - do nothing.  This is the default.
I  - check Identity Columns and Doc Ids for XML.
S  - check User Defined Sequences.
B  - do both.

PERCENT USED
If checking of SEQUENCES is desired then a threshold
percentage must be given from 1 to 99. If this percentage of
the available sequences is exceeded then an action is
triggered.

EXCEEDED ACTION
When a percentage is exceeded this specifies what type
and which style of message should be externalized.

N  - do nothing.  This is the default.
W  - to write out a WTO.
M  - to write a message to the job log.
B  - do both.

To support automation based on WTO ids the following
messages are output:

O2RTS000 - 20W  (SEQUENCES MAXVALUE)
O2RTS000 - 21W  (SEQUENCES MINVALUE)
O2RTS000 - 22W  (IDENTITY MAXVALUE)
O2RTS000 - 23W  (IDENTITY MINVALUE)

So now you know why size still matters for Db2 12! The big question now is: “Are your space management and monitoring tools up-to-date, or are they still Db2 V7?”

As usual, if you have any comments or queries please feel free to drop me a line!

TTFN

Roy Boxwell

Atlanta DB2 Users Group March 2017

Atlanta DB2 Users Group, March 2017 in Atlanta, Georgia, USA. ADUG

SEGUS & SOFTWARE ENGINEERING present

Don’t let ICIs put your DB2 application in the ICU! – How to discover incompatible changes

The what and why of ICIs

If you’ve been paying close attention, you will have noticed that over the course of the past couple releases of DB2 for z/OS, IBM has been making changes that can modify the behavior of your DB2 application programs. The number of incompatible changes being introduced by IBM started off slowly in DB2 9 but has grown to a significant number today. So much so that there are ways to trace incompatible changes (using IFCIDs and ICIs, or Incompatible Change Indicators), as well as methods to repress the changes, even if only for a period of time.

Ways to deal with incompatibilities

This session will discuss the incompatible changes, their potential impact on your applications, as well as provide guidance on how to tackle the whole experience. And you’ll also see how SEG’s Workload Expert technology makes it easier to manage and control these incompatible changes. With all of this information at your disposal, you can make sure that your DB2 applications do not wind up in the ICU (intensive care unit)!

Customer experiences 

Now that we all know everything about ICIs, let us have a look at how the separately licensable BIF/ICI Use Case of WorkloadExpert can help you find where the “bad guys” are, and how to continuously check that everything is OK. This presentation will show how you can find out ICI details for static and dynamic SQL, and then we will show real customer results from a “BiF hunt”.

 

Speaker Biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in DB2 recovery issues and database maintenance, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. He’s involved in the development of SE’s maintenance and recovery solutions. And, of course, he is a big supporter of the Atlanta DB2 Users Group.

2016-08 – Is it Safe? How to recover accidentally dropped tables

Do you have the DDL anywhere?
Was there a “recent” image copy or disk back-up? Who can you call for help?

The newsletter title this month really has nothing to do with the film “Marathon Man”, but sometimes backup and recovery can feel just like having your teeth drilled… Anyway, the title is actually meant to get you to think again about your site’s backup and recovery definitions – specifically that age-old chestnut about “accidentally” dropped objects.

RECOVER from DROP?

I saw in LISTSERV a discussion about recovering from dropped tables and tablespaces. It is a pretty horrible situation when you confirm the drop of a test table, only to suddenly see – as your finger is descending towards the ENTER key – that it is, in fact, a different name…


DDL and Back-ups Handy?

Normally, at this point, the air is filled with colorful language and interesting local metaphors. Once it has calmed down a bit the real work starts: Do you have the DDL anywhere? Was there a “recent” image copy, or a disk back-up? Who can you call for help?

Mirror Mirror on the wall

Remember that mirroring etc. will not help you as the DROP has also been successfully mirrored. So within a moment, the data was also dropped at your disaster recovery site. (Argh!)

Newbies then start looking in SYSIBM.SYSCOPY for the last image copies, while us grey-haired oldies – sorry, more mature experts – start looking in production control copy libraries and BETA92!

The heat is on!

At this point a couple of things happen: The telephone starts ringing and a manager-type person materializes to ask annoying questions all the time. Typically: “How long will it take to get the data back?”

Now you have two possibilities:

1. You are in luck! Someone somewhere extracted all the DDL for the table(s), with DBID, PSID, and OBID(s), so you can generate a DSN1COPY job from the last found image copy dataset.


2. You are *not* in luck! You have no idea how the table looks, and you cannot see if anyone ALTERed it in the last five years or so.

Crashed and Burned

If you are in position 2, it is now a good time to update your CV and make sure your desk is neat and tidy… Now, you might have access to some nifty third-party vendor tools, but for that you must at least have the dataset name of the last image copy and – of course – the third-party tool itself! Or the ability to read the DB2 log and “resurrect” the table defs from there – not a pretty place to go, I assure you!

Back up that chain of thought for a moment

So let’s rewind and imagine that you are doing this all differently… What about beginning today with an extract of all the DDL on your system, and then capturing all of the IFCID 62s that list out any DDL changes? Going further, what about getting all of the IFCID 220s to correlate dataset allocations with utility IDs, DBIDs and PSIDs?

Imagine what you could then do?

Wow! Cavalry over the hill

Yep, you have a ”history” of all the DDL that has happened on your machine right up to the point when your object was DROPped, *and* you have the dataset name(s) of the last image copies, as well as all the internal IDs needed for a successful DSN1COPY job complete with OBID translation! Cool, huh? So suddenly you are now a hero instead of a villain!
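For flavour, here is a minimal DSN1COPY sketch – every dataset name and all the old,new ID pairs in SYSXLAT are invented placeholders, so treat it as a shape, not a runnable job:

//DSN1COPY EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
//STEPLIB  DD DISP=SHR,DSN=DSN.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//* SYSUT1 is the saved image copy, SYSUT2 the newly defined VSAM cluster
//SYSUT1   DD DISP=SHR,DSN=PROD.IC.MYDB.MYTS.D160801
//SYSUT2   DD DISP=OLD,DSN=DSNC.DSNDBC.MYDB.MYTS.I0001.A001
//* SYSXLAT pairs: old,new DBID first, then PSID, then the table OBID(s)
//SYSXLAT  DD *
260,312
2,2
3,5
/*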

DIY or Buy in?

So much for the theory – what about in practice? Well, you can write it all yourself, or you could use this as a sort of useful side effect of our WorkloadExpert (WLX) software, which has all this built in! We already get all these IFCIDs, and we already have a DDL Generator bundled with the WLX software for the Audit Use Cases, so it really kills two birds with one stone!

Restricted movement?

Now, of course, you could argue: “Wait! I have RESTRICT ON DROP set for all my productive tables!” This works really well for accidental drops, but I have seen lots of places where it should be used but was in fact forgotten. How can you check? Run this little SQL to validate that what you think is true really is true:


SELECT COUNT(*) AS TABLES
       ,COUNT(CASE WHEN CLUSTERTYPE = 'Y' THEN 1 ELSE NULL END)
                 AS DROP_RESTRICT
       ,COUNT(CASE WHEN CLUSTERTYPE = ' ' THEN 1 ELSE NULL END)
                 AS NOT_DROP_RESTRICT
FROM SYSIBM.SYSTABLES
WHERE     TYPE   IN ('T' , 'M' , 'H' , 'R' )
  AND NOT DBNAME IN ('DSNDB01' , 'DSNDB06' )
WITH UR
;

I hope the results don’t have you feeling like Dustin Hoffman in the movie.

As usual any questions or comments are welcome,

TTFN Roy Boxwell

Tridex September 2016

SEGUS & SOFTWARE ENGINEERING present

TRIDEX – Tuesday, September 27, 2016

Compliance with compliments! Viable DB2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces DB2 10/11 technology exploitation that delivers any DML, DDL, or DCL being executed in a DB2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.

1-Audit needs and musts
Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flag” Rules (FRCA).
2-Solution overview and their Pros/Cons
Get an overview about the existing solutions and understand how they work.
3-The viable way – let DB2 do the magic!
Learn about DB2 enhancements in DB2 10/11 that deliver the DB2 workload being processed and understand why it’s so efficient.
4-Customer results from the banking industry
Receive some experience from a large banking company and how they successfully replaced their DB2 Audit feature based reporting by a modern SQL tracking and analytics process.

 

The Delaware Valley DB2 Users Group

SEGUS & SOFTWARE ENGINEERING present

DVDUG – Wednesday, September 28, 2016

Compliance with compliments! Viable DB2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces DB2 10/11 technology exploitation that delivers any DML, DDL, or DCL being executed in a DB2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.

1-Audit needs and musts
Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flag” Rules (FRCA).
2-Solution overview and their Pros/Cons
Get an overview about the existing solutions and understand how they work.
3-The viable way – let DB2 do the magic!
Learn about DB2 enhancements in DB2 10/11 that deliver the DB2 workload being processed and understand why it’s so efficient.
4-Customer results from the banking industry
Receive some experience from a large banking company and how they successfully replaced their DB2 Audit feature based reporting by a modern SQL tracking and analytics process.

 

Wisconsin DB2 Users Group September 2016

SEGUS & SOFTWARE ENGINEERING present

WDUG – Tuesday, September 14, 2016

Don’t let ICIs put your DB2 application in the ICU!

How to discover incompatible changes in your DB2 System

The what and why of ICIs
If you’ve been paying close attention, you will have noticed that over the course of the past couple releases of DB2 for z/OS, IBM has been making changes that can modify the behavior of your DB2 application programs. The number of incompatible changes being introduced by IBM started off slowly in DB2 9 but has grown to a significant number today. So much so that there are ways to trace incompatible changes (using IFCIDs and ICIs, or Incompatible Change Indicators), as well as methods to repress the changes, even if only for a period of time.
Ways to deal with incompatibilities
This session will discuss the incompatible changes, their potential impact on your applications, as well as provide guidance on how to tackle the whole experience. And you’ll also see how SEG’s Workload Expert technology makes it easier to manage and control these incompatible changes. With all of this information at your disposal, you can make sure that your DB2 applications do not wind up in the ICU (intensive care unit)!
Customer experiences
Now that we all know everything about ICIs, let us have a look at how the separately licensable BIF/ICI Use Case of WorkloadExpert can help you find where the “bad guys” are and how to continuously check that everything is OK. This presentation will show how you can find out ICI details for static and dynamic SQL, and then we will show real customer results from a “BiF hunt”.

Speaker’s biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in DB2 recovery issues and database maintenance, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. He’s involved in the development of SE’s maintenance and recovery solutions.

Heart of America DB2 Users Group September 2016

SEGUS & SOFTWARE ENGINEERING present

HOADUG – Monday, September 12, 2016

Compliance with compliments! Viable DB2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces DB2 10/11 technology exploitation that delivers any DML, DDL, or DCL being executed in a DB2 environment, along with identification details. Learn how you can run audit analytics against a long-term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.

1-Audit needs and musts
Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flag” Rules (FRCA).
2-Solution overview and their Pros/Cons
Get an overview about the existing solutions and understand how they work.
3-The viable way – let DB2 do the magic!
Learn about DB2 enhancements in DB2 10/11 that deliver the DB2 workload being processed and understand why it’s so efficient.
4-Customer results from the banking industry
Receive some experience from a large banking company and how they successfully replaced their DB2 Audit feature based reporting by a modern SQL tracking and analytics process.

Don’t let ICIs put your DB2 application in the ICU!

How to discover incompatible changes in your DB2 System

The what and why of ICIs
If you’ve been paying close attention, you will have noticed that over the course of the past couple releases of DB2 for z/OS, IBM has been making changes that can modify the behavior of your DB2 application programs. The number of incompatible changes being introduced by IBM started off slowly in DB2 9 but has grown to a significant number today. So much so that there are ways to trace incompatible changes (using IFCIDs and ICIs, or Incompatible Change Indicators), as well as methods to repress the changes, even if only for a period of time.
Ways to deal with incompatibilities
This session will discuss the incompatible changes, their potential impact on your applications, as well as provide guidance on how to tackle the whole experience. And you’ll also see how SEG’s Workload Expert technology makes it easier to manage and control these incompatible changes. With all of this information at your disposal, you can make sure that your DB2 applications do not wind up in the ICU (intensive care unit)!
Customer experiences
Now that we all know everything about ICIs, let us have a look at how the separately licensable BIF/ICI Use Case of WorkloadExpert can help you find where the “bad guys” are and how to continuously check that everything is OK. This presentation will show how you can find out ICI details for static and dynamic SQL, and then we will show real customer results from a “BiF hunt”.

Speaker’s biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in DB2 recovery issues and database maintenance, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. He’s involved in the development of SE’s maintenance and recovery solutions.