2020-04 Four Flavors of Db2 Audit

As well as covering the four ways to get a Db2 z/OS audit done, I will also run through the four different ways that you can process the gathered data.


Four ways to get a Db2 z/OS Audit done


1- First up

First option is the simplest, cheapest and quickest:

Do nothing.

Whether or not this will help your company is a non-trivial question of course!

Naturally, this is an absolute no-no!


2- Then we have

The next option is relatively simple and cheap, but requires a bit of work:

Write it all yourself, but based on existing data that some other process already extracts for you (SMF, for example).

If you happen to have the skills for extracting the required audit data from existing data that is being collected anyway, then this might well be the best method if you are really strapped for resources. 


3- Getting there 

Then we have the not-so-simple, still cheap, but a-ton-of-work option:

Write it all yourself and add all the IFCIDs you actually need to audit your system, as well as capturing all the SQL.

This needs a serious amount of skill to get, and keep, up with the agile world of Db2. You will also need to take care of the sheer amount of data that you will be collecting.

However, the auditor will be happy as you have everything they could ask for.


4- Aha! The only true way 

The last option is simple, not so cheap, but very quick:

Third party software that does it all for you.

This is my preferred solution, especially as we just happen to sell one (WorkLoadExpert Audit).

This is actually the only real way to go. You probably don’t have the time to keep all these things up-to-date and running correctly. 

Data Collected – Now what? 

So, you have chosen one of these ways to gather the data. Now you must evaluate what you got. Here again we have four separate ways to go forward: 

First up 

There it is! 

Do nothing. Just point at the datasets, print outs, database objects and say “It is all in there…” 

This is not really a solution and any auditor worth his, or her, salt would quite rightly be extremely upset! 

Then we have 

A whole bunch of pre-written SQLs. 

SPUFI is OK, but it would be much better to see these in a GUI, where graphical viewing is built in and saving and sharing results is much easier.

This is not bad, but it is still a manual “island” process: just Db2 and nothing else, plus it must be triggered by humans.
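To give a flavor of such a pre-written SQL, a query like this sketch could surface possible brute-force attempts (the AUDIT.AUTH_FAILURES table and its columns are hypothetical stand-ins for wherever your collection process lands the authorization-failure data):

SELECT AUTHID, WORKSTATION, COUNT(*) AS FAILURES
FROM AUDIT.AUTH_FAILURES                    -- hypothetical repository table
WHERE EVENT_TS > CURRENT TIMESTAMP - 24 HOURS   -- last day only
GROUP BY AUTHID, WORKSTATION
HAVING COUNT(*) > 5                         -- more than five failures is suspicious
ORDER BY FAILURES DESC;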

Getting there

A whole bunch of pre-written and custom SQLs.

This time, all run in Batch and the results are emailed to the auditor directly. These emails can “just sit there” until the auditor checks the results. Naturally, if anything is found, then the underlying data must still be there for a detailed analysis.

Better, as it is getting automatic but still not really “round”, as it is still Db2 in isolation…

Aha! The only true way

Use of LEEF or SYSLOGGER-style formats to export all audit data.

The data is then in a data lake, where Splunk, QRadar, et al. can happily slice and dice their way through it.

This is the best way!

You also get an extra bonus point for *removing* the data from the mainframe. As auditors *love* a single point of control, this is the only real way forward. It also pushes the Db2 data into the world of other data that auditors use and require.


Db2 Audit with “GIVE&TAKE” :


Software Engineering GmbH and SEGUS Inc are launching a new free Give&Take: this time it is the Audit support from WorkLoadExpert.

If you would like to take part, then please just fire off an email to db2support@segus.com telling us who you are and which firm you work for and we will get in touch!

Give and Take 

By the way, it is called “Give&Take” because :

  • we Give you the software, for free, to run for a trial period, and
  • we would like to Take away what you think, feel, and find about the software after the trial period. 

More about Give&Take


TTFN, 

Roy Boxwell 

Tridex March 2019

TRIDEX  – Tri-State Db2 Technology Exchange – NY, USA – March 2019

SEGUS & SOFTWARE ENGINEERING present


1 – An Audit a day keeps the lawyers at bay!

> Pdf Presentation


2 – ZOWE – The zGUI (r)evolution – First hands-on experience and best practices

> Pdf Presentation


1 – An Audit a day keeps the lawyers at bay!

GDPR, GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act, and the “Red Flags” Rules confront us with serious requirements to protect the data and to fulfil Auditors' requests.

There are different ways and tools that promise they are able to do it, but what can they really do and what are the associated costs?

This presentation introduces Db2 technology exploitation that delivers DML, DDL, DCL activity in a Db2 environment along with identification details.

This presentation helps you understand the way Auditors look at Db2 and what they require in order to do their daily work. Learn how you can satisfy your Auditors' needs by interfacing with a SIEM system, like:

  • QRadar, Splunk, AlienVault, et al,
  • combining the Db2 information with RACF, SMF and Master Log data.

More about Audit for Db2 z/OS

Speaker biography

Roy Boxwell has more than 33 years of experience in MVS, OS/390, and z/OS environments – 31 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.


2 – “ZOWE – The zGUI (r)evolution – First hands-on experience and best practices”

For a couple of years, the importance of a GUI for z/OS has seemed to grow significantly. This may be one of the most important factors if the z platform is to remain strategic over the next decade and beyond – probably less because of the potential benefit of a specific GUI implementation, but simply because recent generations of DBAs, SYSPROGS, and Programmers aren’t that familiar with the beloved green screen of those who’ve been working with ISPF for decades.

DS, RDz, DSM, z/OSMF (to name just IBM's recent ones) always had a common downside, which I believe customers didn't like at all: users were unable to access all mainframe products from, and through, a common interface.


Bottom line:

We need a GUI that not only comes with a monitoring dashboard and job submission capabilities, but also includes a JES explorer, a data set explorer and editor, and scripting for automation and interaction with all major products, like CICS and Db2,

BUT it should be used by ALL vendors!

Of course, this GUI should also seamlessly integrate into all those robust security and resource management components we’re already familiar with. It needs to be the preferred interface for administrative tasks, development, test, and operation, no matter if using old applications, or brand-new ones.


Am I asking for too much?

ZOWE – THE z ecosystem to securely manage, control, script and develop – seems to be the way to go.

It comes with a RESTful API, an extensible command line interface and an HTML 5 web-based UI framework designed to fuse everything we already have with anything we want to do today and tomorrow. It is exactly what many of us have been asking for, for years. We see a clear commitment from some of the most important companies in the mainframe world, and we also see the technical components required to do our daily work and to open up new opportunities for tomorrow's modern apps.

  • All well-integrated into RACF, SAF and USS and
  • interfacing with MVS, Db2 , CICS, and JES
  • as well as products from a variety of other vendors
  • and by the way, it’s EPL-2.0 (Eclipse Public License)

In late 2018, our company made the strategic decision to build UIs based on the ZOWE ecosystem.

Learn how you can quickly define entire workflows and interact and control them using a Db2 z/OS cloning example along with workload capture and replay in a simulation environment – fully automated quality assurance as part of continuous delivery in an agile world.

Speaker biography

Ulf Heinrich is the Director of Solutions Delivery at SOFTWARE ENGINEERING GmbH. He specializes in Db2 operations and performance tuning, focusing on the growing requirement for cost reduction and 24×7 operations. As a consultant at large customer sites, he has implemented database maintenance procedures and recovery strategies, and also experienced the pitfalls of recovery scenarios under real-world recovery pressure. His activities cover EMEA, as well as North America through SE’s U.S. subsidiary, SEGUS Inc. As a member of SE’s Request Board he’s working closely with customers and the development labs.

2018-06 – DST Db2 timestamp problems: I really hate Daylight Saving Time

How do you avoid timestamp problems when going from winter to summer time in a Db2 for z/OS system?

Is using CHAR, or a native TIMESTAMP, the safest procedure?

How do you fix it?

This year, as every year, the moment arrives for most of us when the clocks go forward and then, in autumn, back again. I really hate this, as I still have a bunch of clocks that do not automatically do it for me. My PC, phone, laptop, TV, etc. all do it for me, but the rest… Anyway, what has this got to do with Db2, I hear you all wonder? Well, it really is quite a horrible little story coming up…

Same procedure as every year

At precisely 02:00 on the 25th of March, a SET CLOCK console command was issued to change the UTC offset to +2, thus leaping from 02:00 to 03:00 in the blink of an eye.

How long?

Now, how long is “the blink of an eye”? For Db2 these days: too long!

Day of reckoning

At 2018-03-25-02.00.00.006999 a transaction was logged in the Audit system, in fact *lots* of transactions were in-flight at this time. Normally it is not a problem and, in fact, nothing happened until nearly three months later when someone found that there was possibly some data missing.

Alarm!

Alarm bells are ringing, as these inventory checks cannot have missing data. The code is nowadays all Java, and the developer in charge of the problem found out that the data was indeed missing!

Oh no it isn’t!

The DBA group were then involved, as it could be data corruption, and they looked and found the data – but it was not the same data as the developers had… then the penny dropped!

Clever old JAVA

In fact, the data the developer had was *exactly* one hour later than the data found by the DBA group. I mentioned earlier that the 25th of March was the switch to summer time and, perhaps, the Java driver is “helping” us; a bit too much help, if you ask me!

Date Check

Here is a bit of SQL for you to recreate the problem and gaze in wonder at how cool/horrible (delete what is not applicable) Java really is.

CREATE TABLE BOXWELL.DAY_LIGHT                           
  (COL1 SMALLINT     NOT NULL                            
  ,COL2 TIMESTAMP    NOT NULL)                           
;                                                         
INSERT INTO BOXWELL.DAY_LIGHT                            
VALUES (1 , '2018-03-25-01.59.59.999999');               
INSERT INTO BOXWELL.DAY_LIGHT                            
VALUES (2 , '2018-03-25-02.00.00.006999');               
INSERT INTO BOXWELL.DAY_LIGHT                            
VALUES (3 , '2018-03-25-03.00.00.000099');               
COMMIT ;                                                 

SELECT * FROM BOXWELL.DAY_LIGHT                           
ORDER BY 1                                               
;
---------+---------+---------+---------+---------+--------
  COL1   COL2                                         
---------+---------+---------+---------+---------+--------
     1   2018-03-25-01.59.59.999999                     
     2   2018-03-25-02.00.00.006999                      
     3   2018-03-25-03.00.00.000099                      
DSNE610I NUMBER OF ROWS DISPLAYED IS 3

The output of SPUFI looks great! Timestamps are correct and all is fine.

It is a GUI world

Now do the select using a JAVA driver of your choice, here I am using DataStudio:

[Screenshot: the same SELECT coded in Data Studio]

And then running it gives:

[Screenshot: the Data Studio result – the impossible 02.00.00.006999 timestamp comes back one hour later]

Spot the difference!

Isn’t that great/terrifying (delete what is not applicable)? The Java driver is “looking” at the timestamp data and thinking: “Oh oh! That timestamp is impossible! I know – I will add one hour to correct it!”

This scares me a little…actually quite a lot!

Docu – What Docu?

The only place I could find anything about this was in a chapter about not using 24 as midnight and the problem of using timestamps between October 5th 1582 and October 14th 1582:

https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/java/src/tpc/imjcc_r0053436.html

If you read it you can find this one sentence:

If a string representation of a date, time, or timestamp value does not correspond to a real date or time, Java adjusts the value to a real date or time value.

Which, of course, explains everything!

The quick fix…

There is no quick fix!

1 – The customer must either change all SQL to use the CHAR function – Not good!

Or

2 – Check all of their important timestamp columns for the range 02.00.00.000000 -> 02.59.59.999999 data and then update them with plus one hour – Not good!
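To make those two not-good fixes concrete, here is a minimal sketch against the BOXWELL.DAY_LIGHT table from above (check the timestamp range against your own data before running anything like the UPDATE):

-- Fix 1: return a string, so the driver has nothing to "correct"
SELECT COL1, CHAR(COL2)
FROM BOXWELL.DAY_LIGHT
ORDER BY 1;

-- Fix 2: shift the "impossible" hour forward by one hour
UPDATE BOXWELL.DAY_LIGHT
SET COL2 = COL2 + 1 HOUR
WHERE COL2 >= '2018-03-25-02.00.00.000000'
  AND COL2 <  '2018-03-25-03.00.00.000000';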

Faster and Faster : the best fix ?

This problem will get worse the faster the machines get, and so my idea to solve it next year is simply to issue a

SET LOG SUSPEND

at one second before 02:00, which flushes the log, issues a system checkpoint (non-data-sharing), updates the BSDS, and basically pauses the system. Then do the SET CLOCK command, followed by a

SET LOG RESUME

It all takes about three seconds and so should not cause any timeouts.

 

I really hope that, one day, we simply get rid of daylight saving time…

 

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

2018-05 Audit 2.0 – GDPR Audit guide and checklist for Db2 z/OS

Or: How I Learned to Stop Worrying and Love the Auditor

A guideline in 5 big Steps to Audit for Personal Data Protection

First up, this is not another Audit review and health check sell! We, and I mean especially in the European Union (EU), are about to get the biggest update to “my data”, “my right to know” and “my right to forget (delete)” that has ever happened, when the General Data Protection Regulation (GDPR) kicks off on May 25th, 2018.

Who cares?

Well, we all should really! Very soon hordes of people could well start demanding to know if you have any data about them and if so where it is and who you give it to. Personal Data will be big business!

Age of Consent

Just assuming all is OK is also no longer valid or lawful: any personal data that you keep must be kept with the *active* consent of the data owner… This is going to get awfully messy awfully quickly! Do not forget that the definition of personal data has also changed quite a bit, e.g. IP addresses. For the record, here is the actual text of the GDPR regulations:

Art. 4 GDPR Definitions

For the purposes of this Regulation:

1.      ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

And

Art. 9 GDPR Processing of special categories of personal data

1.      Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

Security by design and Security of processing

Very important are Articles 25 and 32:

Art. 25 GDPR Data protection by design and by default

1.       Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.

2.       The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.

The last part of paragraph two is important here.

Art. 32 GDPR Security of processing

1.      Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:

1.      the pseudonymisation and encryption of personal data;

2.      the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;

3.      the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;

4.      a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

2.      In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.

Here paragraphs 1.2, 1.4 and 2 are the biggies!

A small or large fine for you today?

The fines are also amazingly high. The first, for the “minor” problem of being over 72 hours late in reporting a data leak (a breach), is 2% of global turnover or €10,000,000 – whichever is *higher* – and, if you are really naughty, like disregarding basic data laws, moving data abroad or ignoring an individual's rights, then you get hit for 4% of global turnover or €20,000,000 – again, whichever is *higher*.

First up?

I, for one, do not want to be the first company that makes the headlines… How about you?

Who are you afraid of?

Now I know that most of the talk about GDPR is “right to data” for the general public but we all know, as IT specialists, that the people to be really afraid of are ex-employees who did not leave on happy terms. They know *exactly* where the knife could best be put… They know exactly where, and on which platforms, all of the data really is.

Protect yourself – Due diligence

What can you do? Well, the first thing is to actually understand what GDPR means for you and your firm's data. Then you must be able to prove to auditors that you tried your very best. We will all probably get hacked at some time, but if you tried your best, that counts. Using all of the data you gather in a post-mortem, or for forensics, is also a very good idea, as can be seen in paragraphs 2.3 and 2.4 below:

Art. 83 GDPR General conditions for imposing administrative fines

1. Each supervisory authority shall ensure that the imposition of administrative fines pursuant to this Article in respect of infringements of this Regulation referred to in paragraphs 4, 5 and 6 shall in each individual case be effective, proportionate and dissuasive.

 2. Administrative fines shall, depending on the circumstances of each individual case, be imposed in addition to, or instead of, measures referred to in points (a) to (h) and (j) of Article 58(2). When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following:

1. the nature, gravity and duration of the infringement, taking into account the nature, scope or purpose of the processing concerned, as well as the number of data subjects affected and the level of damage suffered by them;

2. the intentional or negligent character of the infringement;

3. any action taken by the controller or processor to mitigate the damage suffered by data subjects;

4. the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32;

The list goes on after these of course…

Db2 Audit – a new way?

Most sites use SMF, doing daily cuts and then offloading to a repository system of some sort, on some sort of hardware. The problem here is that the amount of SMF data you are generating can swamp you, and this method is not nearly quick enough. Waiting a day burns 24 of the 72 hours that are available to you…

Faster, Better, Securer?

Can you do it faster?  Can you do it yourself faster?  Can you do it better? Maybe…

 

If you can handle OPx and a bit of High Level Assembler, then “Bob's your uncle!”, as long as Bob is your Auditor, of course… There are more than enough examples on the web about how to write and do OPx reading. (There are even vendors that are willing to sell you their software 🙂 )


A guideline in 5 big steps to Audit in a new way for Personal Data protection :


1 – To do list – IFI commands, IFCIDS & Audit Class

Create a program that can issue the required IFI commands to start and stop traces, and that issues the READA and READS IFI calls that you wish to have. If you don't know which Audit Class maps to which IFCIDs, here's my handy list:

| Audit Class | IFCIDs | What | Why |
|---|---|---|---|
| 1 | 140 | Authorization failures | Possible brute force password attack |
| 2 | 141 | GRANTs and REVOKEs | Possible temporary raising of privilege |
| 3 | 142 | CREATE/ALTER/DROP of table with AUDIT attribute | Removing the AUDIT attribute is a big red flag |
| 4 | 143 | 1st change of a table with AUDIT attribute | Sensitive data updates |
| 5 | 144 | 1st SELECT of a table with AUDIT attribute | Sensitive data usage |
| 6 | 145 | BIND/PREPARE using a table with AUDIT attribute | Possible usage of sensitive data |
| 7 | 55, 83, 87, 169, 319 | SET CURRENT SQLID, end of identify, end of signon CICS/IMS | Elevation of privilege |
| 8 | 23, 24, 25, 219, 220 | Db2 Utilities | UNLOADing data to where? |
| 9 | 146, 392 | For customer use | n/a |
| 10 | 269, 270, 271* | Establish/reuse trusted context, trusted context CREATE/ALTER, and Column Mask/Row Permission CREATE/DROP/ALTER | Anything with MASKs or ROW Permissions must be checked |
| 11 | 361 | Successful access | Only makes sense if working with AUDIT POLICY SYSADMIN or DBADMIN |

*271 is not started automatically with Class 10.

So, what is missing from this picture?

Well, what about the COMMANDs, so you can actually see what people are issuing on the machine? You must also add IFCIDs 90 and 91. Even then you do not see any of the DDL, so you must also start IFCID 62 to get that, as you do want to see what people are ALTERing, CREATEing and DROPping, don't you?
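As a sketch, the trace starts could look something like this (the class list, monitor class and OPx destination are illustrative only; check the exact operands against your Db2 level before use):

-START TRACE(AUDIT) CLASS(1,2,3,4,5,6,7,8) DEST(OPX)
-START TRACE(MON) CLASS(30) IFCID(62,90,91) DEST(OPX)

The first command starts the audit classes from the list above; the second picks up the DDL (62) and command (90, 91) records that the audit classes alone do not give you.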


2 – All done in design?

Once you have a system that gathers all of these, you must buffer them in memory and then do one of two things:

1) Write them out to a file for post-processing and enrichment later
2) Directly write them to your SIEM system of choice

I am not a fan of number 2, because if the SIEM system cannot accept the data, then your data is lost…

With number 1 you can keep processing even if the SIEM system cannot ingest, for whatever reason, and as long as you trigger yourself every 5 – 10 minutes, or whenever the OPx buffer is full, that is “real time” enough for me!


3 – Care for something else for the weekend?


Plus, you can enrich the data with extra information that the naked IFCIDs do not have attached to them, e.g. mapping the dbid/psid to a real database and tablespace name. Store all of this data in a bunch of Db2 tables, then kick off a Db2-to-LEEF re-formatter program that reads all the audit data collected since last time and writes it all out.
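For illustration, a LEEF record for a failed authorization might then look roughly like this (one line per event; the product name, event ID and attribute names here are hypothetical, and the attributes are really tab-delimited, shown with spaces):

LEEF:1.0|SEG|WLX Audit|1.0|DB2AUTHFAIL|devTime=2018-03-25 02:00:00   usrName=BOXWELL   src=10.1.2.3   subsys=DBA1   sqlcode=-551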


4 – Code Page – don’t forget the Code Page

Then you must do an ICONV for code page conversion, swiftly followed by a gzip to get it down to size, and then copy it down to a USS file, ready for your SIEM system to ingest its prey.
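Under USS, that last step could be as simple as this sketch (file names and code pages are examples; use whatever your shop and your SIEM expect):

iconv -f IBM-1047 -t UTF-8 audit.leef > audit.leef.utf8   # EBCDIC to UTF-8
gzip audit.leef.utf8               # leaves audit.leef.utf8.gz for the SIEM to collect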


5 – Done, Finished and Complete

To prove that this system is always available you should also schedule a simple

GRANT SELECT ON SYSIBM.SYSDUMMY1 TO PUBLIC;

every morning at 06:00, which you should then see in the output every day. Remember – Due Diligence!


 

So now you are done and finished and can let the SIEM system do the completion!

 

As always, any questions or comments would be most welcome!

TTFN,

Roy Boxwell

PS: Please note that the bolding of parts of the GDPR text is mine, not in the original!

Southwest Db2 Users Group – February 2018

Db2 Forum.  Southwest Db2 Users Group – February 2018 – Grapevine (Dallas), TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present:

1 – Pdf Presentation  –  Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation –  Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation –  Db2 z/OS Lies, Damn Lies, and Statistics…


1 – Db2 z/OS Security Audit: Compliance with compliments! Viable Db2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces Db2 10/11 technology exploitation that delivers any DML, DDL, DCL being executed in a Db2 environment, along with identification details. Learn how you can run Audit analytics against a long‐term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.


More about Db2 Audit

Presentation Outline

  • Audit needs and musts Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flags” Rules (FRCA).
  • Solution overview and their Pros/Cons Get an overview about the existing solutions and understand how they work.
  • The viable way – let Db2 do the magic! Learn about Db2 enhancements in Db2 10/11 that deliver the Db2 workload being processed and understand why it’s so efficient.
  • Customer results from the banking industry Receive some experience from a large banking company and how they successfully replaced their Db2 Audit feature based reporting by a modern SQL tracking and analytics process.

 


2 – Db2 12 Continuous Delivery – New challenges for deployment.

Fundamental changes in the Db2 z world often lead to concerns. Let’s face it – some changes force us to change! While a Db2 version migration usually took months, or even years, there will be no new Db2 version after 12, but continuous code drops.

This will have a tremendous impact on migration strategies, because we have to find a reliable way to test these code deliveries in a fraction of the time. If we make it, Business Divisions will become enthused at how quickly new technology becomes available for new applications. This presentation will describe the difference between Code, Catalog, Function and Application Levels, how you can control them, and how you can fall back in case of anomalies. It also illustrates how we can still be pro-active in testing, without burning weeks and months.
Learn how to choose from four different levels of testing and a new way of automation. CD-Screening allows you to pick and choose from KPI-based test automation. The levels include simple anomaly alerting, access path verification, clone pre-apply and even workload capture/replay, to easily discover different behaviour resulting from a new code level.


More about Db2 Continuous Delivery – CD

Presentation Outline

Joining this presentation, you’ll learn how to align Continuous Delivery to your Continuous Availability.

  • Agile, Continuous Delivery, DevOps – just buzz words, or new methodologies?
  • Db2 Code, Catalog, Function and Application Levels – differences and dependencies.
  • Activation/Deactivation of new code and how to fallback and when you can’t.
  • Different flavors of (pro-active) CD-Screening and how it can be automated:

* Anomaly alerting based on Incompatibility Change Indicators (ICIs)
* Dynamic/Static Access Path Change Detection, e.g. via Plan Management
* Clone based code change pre-apply exploiting Backup System
* Workload-KPI verification using SQL replay and KPI comparison

Audience Experience:   Intermediate to Advanced
Platform:                        Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

 


3 – Db2 z/OS Lies, Damn Lies, and Statistics…

– Benjamin Disraeli, Prime Minister of England (1868, 1874-1880)

The above line may, or may not, have been spoken well over 100 years ago, but the need for statistics and, above all else, accurate statistics is more important than ever in the Db2 world of today.


More about Db2 RUNSTATS

Presentation Outline

  • Db2 RUNSTATS basics & catalog tables and Columns used for access path
  • IBM recommendations through the ages : from Db2 V3 to Db2 12
  • Db2 RUNSTATS advanced
  • SYSCOLDIST explained
  • RUNSTATS real world Q&A :
    use of SAMPLE, COLGROUP, PROFILE, REOPT (ONCE), TABLESAMPLE SYSTEM, HISTOGRAM, …
  • RUNSTATS reversal

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

Heart of Texas Db2 Users Group – February 2018

HOTDUG – Heart of Texas Db2 User Group – February 2018 – Austin, TX, USA

SEGUS & SOFTWARE ENGINEERING proudly sponsor this event & present

1 – Pdf Presentation : Compliance with compliments! Viable Db2 z/OS workload tracking.

2 – Pdf Presentation : Db2 12 Continuous Delivery – New challenges for deployment.

3 – Pdf Presentation : Db2 z/OS Lies, Damn Lies, and Statistics…


1 – Db2 z/OS Security Audit: Compliance with compliments! Viable Db2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces Db2 10/11 technology exploitation that delivers any DML, DDL, DCL being executed in a Db2 environment, along with identification details. Learn how you can run Audit analytics against a long‐term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.


More about Db2 Audit

Presentation Outline

  • Audit needs and musts Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flags” Rules (FRCA).
  • Solution overview and their Pros/Cons Get an overview about the existing solutions and understand how they work.
  • The viable way – let Db2 do the magic! Learn about Db2 enhancements in Db2 10/11 that deliver the Db2 workload being processed and understand why it’s so efficient.
  • Customer results from the banking industry Receive some experience from a large banking company and how they successfully replaced their Db2 Audit feature based reporting by a modern SQL tracking and analytics process.

 


2 – Db2 12 Continuous Delivery – New challenges for deployment.

Fundamental changes in the Db2 z world often lead to concerns. Let’s face it – some changes force us to change! While a Db2 version migration usually took months, or even years, there will be no new Db2 version after 12, but continuous code drops.

This will have a tremendous impact on migration strategies, because we have to find a reliable way to test these code deliveries in a fraction of the time. If we make it, Business Divisions will become enthused at how quickly new technology becomes available for new applications. This presentation will describe the difference between Code, Catalog, Function and Application Levels, how you can control them, and how you can fall back in case of anomalies. It also illustrates how we can still be pro-active in testing, without burning weeks and months.
Learn how to choose from four different levels of testing and a new way of automation. CD-Screening allows you to pick and choose from KPI-based test automation. The levels include simple anomaly alerting, access path verification, clone pre-apply and even workload capture/replay, to easily discover different behaviour resulting from a new code level.


More about Db2 Continuous Delivery – CD

Presentation Outline

Joining this presentation, you’ll learn how to align Continuous Delivery to your Continuous Availability.

  • Agile, Continuous Delivery, DevOps – just buzz words, or new methodologies?
  • Db2 Code, Catalog, Function and Application Levels – differences and dependencies.
  • Activation/Deactivation of new code and how to fallback and when you can’t.
  • Different flavors of (pro-active) CD-Screening and how it can be automated:

* Anomaly alerting based on Incompatibility Change Indicators (ICIs)
* Dynamic/Static Access Path Change Detection, e.g. via Plan Management
* Clone based code change pre-apply exploiting Backup System
* Workload-KPI verification using SQL replay and KPI comparison

Audience Experience:   Intermediate to Advanced
Platform:                        Db2 z/OS
Presentation Length:     60 minutes
Presentation Category:  Database Administration, Performance Management, Db2 Migration

 


3 – Db2 z/OS Lies, Damn Lies, and Statistics…

– Benjamin Disraeli, Prime Minister of England (1868, 1874-1880)

The above line may, or may not, have been spoken well over 100 years ago, but the need for statistics and, above all else, accurate statistics is more important than ever in the Db2 world of today.


More about Db2 RUNSTATS

Presentation Outline

  • Db2 RUNSTATS basics & catalog tables and Columns used for access path
  • IBM recommendations through the ages : from Db2 V3 to Db2 12
  • Db2 RUNSTATS advanced
  • SYSCOLDIST explained
  • RUNSTATS real world Q&A :
    use of SAMPLE, COLGROUP, PROFILE, REOPT (ONCE), TABLESAMPLE SYSTEM, HISTOGRAM, …
  • RUNSTATS reversal

Speaker biography

Roy Boxwell has more than 32 years of experience in MVS, OS/390, and z/OS environments – 30 of those in Db2. He specializes in installation, migration, and performance monitoring and tuning. Roy leads the SEG development team responsible for the real time database maintenance solutions. He is also an active participant, speaker and contributor on the IDUG Db2 Listserv and sends out a monthly Db2 z/OS Newsletter.

2017-01 Db2 12 technical overview: Roy’s first features review

This Db2 12 technical overview presents, in an “easy to read” list, a review of new Db2 12 features.

Have you encountered any other Db2 12 changes you’d like to discuss?

 

Now that Db2 12 has gone GA, I can finally talk about it. So here's another new-features “first look” at what I think is cool, great, or odd!

This is my personal list for a Db2 12 technical overview – in no particular order :

  •  Db2 12 SQL: Optimizer, triggers, Arrays, Merge, UNICODE Columns, Temporal, SQL pagination, SQL Stability, Log, Partitions,…
  •  Data Sharing
  •  Utilities DSN1COPY, Alternate Copy Tools, Audit, REORG, PBG reorgs, COMPRESSRATIO, RELOAD, RO tablespaces, LOAD, BACKUP and Recovery, PiT, RUNSTATS…

 

AGILE This release of Db2 will be “the last” release, as Db2 Development has gone all agile on us and will be doing Continuous Delivery (CD) from now on. CD promises easier, cheaper, faster and simpler Db2 maintenance and the quick realization of new functionality.

 


Db2 12 – SQL


Optimizer


MQT or table expression columns are “trimmed” if they are not used in the outer query.

In a LEFT OUTER JOIN, columns that are not used can be pruned.

UNION ALL gets major work on pushing down join predicates, as well as pushing down ORDER BY and FETCH FIRST.

Outer table joins can get reordered to avoid unnecessary materializations.

User-defined functions get two improvements: merge, and the introduction of indexes on the join or correlation predicates that are passed in as parameters.

Adaptive Index is designed for multi-index and single-index list prefetch, to determine at execute time the filtering of each index. This ensures the optimal execution sequence of indexes or, perhaps, a quicker fallback to tablespace scan if no filtering index exists.


TRIGGERS

The new “advanced” triggers enable SQL PL logic and the use of global variables.


ARRAYS

Get a couple of nice new features, specifically global variables with an array type, and the ability to use ARRAY_AGG without forcing an ORDER BY.


GLOBAL VARIABLES

Get LOB support and can be the target of a SET.


PureXML

The XMLMODIFY function can do multiple document updates in a single invocation. Various XML performance boosts are also included, e.g. in XMLTABLE, and XSLTRANSFORM allows transformations to different formats.


JSON

When using the JSON_VAL function, the first argument no longer always has to be a BLOB. It can be a view, CASE expression, table expression, trigger transition variable, or SQL PL variable or parameter.


MERGE

Is now a full MERGE with the ability to use table references with multiple MATCHED clauses, including DELETE operations.
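A minimal sketch of the new flexibility, using a hypothetical STOCK table; note the multiple MATCHED clauses, one of which deletes:

MERGE INTO STOCK S
USING (VALUES (1, 10), (2, 0)) AS N (ITEM_ID, QTY)
ON S.ITEM_ID = N.ITEM_ID
WHEN MATCHED AND N.QTY = 0 THEN DELETE          -- zero stock: remove the row
WHEN MATCHED THEN UPDATE SET S.QTY = N.QTY      -- otherwise update it
WHEN NOT MATCHED THEN INSERT (ITEM_ID, QTY)
                      VALUES (N.ITEM_ID, N.QTY);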


SQL PAGINATION

The ability of Db2 to “understand” typical paging has been greatly boosted. Typically it was always coded like:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME = 'BOXWELL' AND FORENAME > 'ROY')
        OR (SURNAME > 'BOXWELL')

This is pretty horrible for the Db2 optimizer, but we *all* know what we really mean! Now, in Db2 12, so does the optimizer! Sadly, you must rewrite your queries a little, so this example becomes:

SELECT blah blah blah
FROM mytable
WHERE (SURNAME, FORENAME) > ('BOXWELL', 'ROY')

Also with this comes a nice little feature called OFFSET ROWS. Typically, this is for when the connection to the server is a bit shaky, and so, after some paging, when the cursor is reopened the code “knows” it can skip the first 60 rows, so the cursor changes to be:

SELECT * FROM mytable OFFSET 60 ROWS

Nice feature, but beware of polluting the DSC! It is much better to use a parameter marker for these offsets, as sketched below.
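That is, prepare something like this once and supply the offset at execute time:

SELECT * FROM mytable
OFFSET ? ROWS                 -- offset supplied as a parameter marker
FETCH FIRST 20 ROWS ONLY;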


UNICODE Columns

In Db2 11, we got a “fix” for UNICODE columns that was really a crutch. This has now been properly fixed with real UNICODE columns in Db2 12. You must migrate your existing data, though!


Piece-wise DELETE

This is a feature I have wanted for decades! Simply add FETCH FIRST nnnn ROWS ONLY within a DELETE and then programmatically loop around until you are done. This is much easier than the method we have today: DECLAREing a CURSOR with a FOR UPDATE of a dummy column, DELETEing WHERE CURRENT OF, and issuing a COMMIT after every 5,000 or so.
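In practice, that loop could look like this sketch (table and predicate invented), re-driven from the program until no more rows qualify:

DELETE FROM BOXWELL.BIG_TABLE      -- hypothetical table
WHERE STATUS = 'OLD'
FETCH FIRST 5000 ROWS ONLY;
COMMIT;
-- repeat until the DELETE returns SQLCODE +100 (no more qualifying rows)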


TEMPORAL RI

You can now add RI as normal and not be forced to use a trigger or stored procedure.


TEMPORAL TABLES

Get the ability to not just be inclusive-exclusive but also inclusive-inclusive.


TEMPORAL Logical Transactions

Another new feature with temporal tables, is the ability to support logical units of work for SYSTEM_TIME. These logical units of work are not determined by COMMIT or ROLLBACK but by using a built-in Global Variable.


PERCENTILE functions

PERCENTILE_CONT and PERCENTILE_DISC are two new BIFs.


DRDA Fast Load

Is the ability to load data into z/OS DB2 from files sitting on distributed clients.


ODBC

Gets a new INI keyword KEEPDYNAMIC and the connection attribute of SQL_ATTR_KEEP_DYNAMIC.


Obfuscated Code

Mainly of interest to Vendors is the ability to hide your stored procedure, TRIGGER or UDF coding from prying eyes.


RLF for Static SQL

This is a big one! The Resource Limit Facility has always only been available for dynamic SQL. Now you can also use it to cap Static SQL.


TRANSFER OWNERSHIP

This is a very handy way of clearing out all the old owners from a DB2 system.
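The statement itself is pleasantly short; a sketch (object and new owner invented; as I understand it, the REVOKE PRIVILEGES clause, which strips the previous owner's implicitly held privileges, is required):

TRANSFER OWNERSHIP OF TABLE ROY.OLD_TABLE
  TO USER NEWOWNER
  REVOKE PRIVILEGES;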


SQL Stability

Dynamic Plan Stability is nearly the same as BIND QUERY, but the hope is that it will be easier and better to use! But beware of saving all of your dynamic SQL away!

Static Plan Stability gets a good enhancement that allows FREE on the original or previous. What is really good, is that the current version can be in use so there is no application outage anymore.


Insert

The new insert algorithm can be used for faster unclustered insert processing in some cases. It is only for UTS MEMBER CLUSTER spaces (and is actually the default for these spaces).


CONCENTRATE LITERALS

Now supported at the Package Level.


FTB

Fast Index Traversal – Especially good for randomly accessed indexes. If the index is unique, and 64 bytes or less, it is eligible. Index is controlled with the new Catalog table SYSIBM.SYSINDEXCONTROL and the -DISPLAY STATS(INDEXMEMORYUSAGE) command.


Log

Active log size can now go from 4GB up to 768GB! Be careful here!


In-Memory bufferpools

These arrive by using the PAGESTEAL(NONE) keyword.


PARTITIONS

Finally we get the chance to give each partition its own DSSIZE as well as the Partitioning indexes! This is great, but sadly is only available to an existing space once you have reorged the whole tablespace…However, once you are there, you can then have data and index parts up to 1TB in size, plus, when you do an ALTER of the DSSIZE, it does not cause an outage (as long as you make it bigger!). A side effect of this is that the RID is now seven bytes (see REORG mapping table for other changes). You can now also add partitions in the middle of an existing PBR table.


 

 


Data Sharing


Recovery

of retained locks from a failed member can be handled automatically


LPL and GRECP recovery

LPL and GRECP recovery automatically retries three times, after waiting three minutes.


 


Db2 12 Utilities


DSN1COPY

In Db2 11 this utility got a few sanity checks, and now the REPAIR CATALOG utility can fix some of the problems found. The REPAIR CATALOG TEST also looks for some problems caused by misuse/abuse of DSN1COPY.


ALTERNATE COPY POOLS

The usage of BACKUP SYSTEM is growing, and so is the amount of storage required! The idea here is to define a set of copy pools, but only one for many Db2 subsystems. The alternate copy pool uses as many volumes as it needs and leaves the other volumes free for a different subsystem backup. This reduces the amount of space that must be allocated.


Audit

A new authorization arrived: UNLOADAUTH, to “replace” the “does the user have SELECT auth on the table?” check that has run up to now. UNLOAD is special and should be controlled through this authorization, and no longer just through SELECT.
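If I read the new authorization correctly, giving a batch user the new privilege is a normal GRANT (table and user invented):

GRANT UNLOAD ON TABLE ROY.SENSITIVE_TABLE TO BATCHUNLD;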


REORG

PBG tablespaces get the best news here!

PBG reorgs can now spill over into a new PBG if the row(s) do not fit back into the original partition. Classic case here, is compressed data that no longer fits back. This forced people to use a TS level reorg or not use compression.

If the PBG contained LOB data and it extended to a new partition in the log apply phase, then the LOB space was left in COPY Pending… pretty horrible, and that no longer happens in Db2 12.

Another PBG bonus is the deletion of “emptied” partitions after a REORG has completed.

Improved FlashCopy support – You can now decide to stop the REORG if the flash copy fails.

New Catalog column COMPRESSRATIO, for use by utilities, that records the compression savings at the record level instead of at the page level.

RELOAD phase can now be offloaded to zIIP.

RO tablespaces can now be REORGed at any SHRLEVEL.

The mapping table gets changed again due to the relative page numbering in the new PARTITION support (seven byte RID).


LOAD


PART REPLACE with dummy input against an empty (PBR) partition could be quicker.

LOAD PARALLEL support for PBG with SHRLEVEL CHANGE.

Additional zIIP offload, like in REORG, in the RELOAD phase, including the data conversion and loading of the record into the page set.

LOAD RESUME BACKOUT YES to avoid RECP on failure. Adds a new option on LOAD RESUME SHRLEVEL NONE to allow LOAD to back out the rows already loaded upon encountering an error (such as conversion, LOB/XML, duplicate key, referential integrity violation) without leaving the page set in RECP.

PREFORMAT support for auxiliary tables. Support is extended to LOB table spaces and auxiliary indexes.

Maintain MAXASSIGNEDVAL for identity columns. LOAD now maintains the MAXASSIGNEDVAL for user-provided input and resets the value if a LOAD REPLACE is run on the table space.

LOAD REPLACE support for the COMPRESSRATIO column, for use by utilities that record the compression savings at the record level instead of at the page level.


BACKUP and RECOVERY

Point-in-Time support for PBGs, Flashcopy FLASHCOPY_PPRCP keyword. As mentioned the default is changed to not recover unchanged objects. MODIFY RECOVERY gets two new options: DELETEDS to delete the datasets and NOCOPYPEND to not set COPY pending after doing the MODIFY.


PiT

Has been improved with the ability to skip unnecessary recoveries. SCOPE UPDATE only processes objects that have been updated up to the TOLOGPOINT or TORBA.


RUNSTATS

New CLUSTERRATIO formula which should better reflect dynamic prefetch. Terry Purcell has stated that it is not a huge change and does not require a RUNSTATS of all tablespaces!


FREQVAL COUNT nn

The COUNT nn is now optional and, if not used, RUNSTATS will work out the best number for you. This is really, really nice and I would recommend it in an instant! It has also been retrofitted to Db2 11.


Autonomic Statistics with PROFILEs

I am no fan of this, as I believe it makes for a pretty nasty feedback loop, where anyone's “dumb” QMF/SAS/DSNTEP2/SPUFI will get inserted as a PROFILE COLGROUP, and then these PROFILEs will get bigger and bigger until no-one knows which are really useful and which are just fluff! I would recommend setting the ZPARM STATFDBK_SCOPE to ALL (the default) and STATFDBK_PROFILE to NO (the default is YES). When YES is used, Db2 12 will create and/or maintain a PROFILE for you. Finally, validate that the SYSTABLES column STATS_FEEDBACK is set to “N” (the default is “Y”) for any and all tables where you do *not* want SYSSTATSFEEDBACK data, e.g. all the DSNDB01 tables, where a RUNSTATS is not even allowed!


DSC

DSC invalidation got switched off by default. In the past, *any* RUNSTATS flushed the cache. Now you must add the keyword INVALIDATECACHE YES to make this occur. (Unless you use the REPORT NO UPDATE NONE syntax, which still just flushes the DSC.)
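So a RUNSTATS that really should flush the DSC now has to say so explicitly; a sketch (database and tablespace names invented):

RUNSTATS TABLESPACE MYDB.MYTS TABLE(ALL) INDEX(ALL) INVALIDATECACHE YES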


Inline Stats

Inline stats got a huge boost, with PROFILE support and MOST/BOTH/LEAST, and LOAD PARALLEL got inline stats.


 

Have you encountered any other Db2 12 changes you’d like to discuss?

As usual, feel free to email me with questions or comments.

TTFN

Roy Boxwell

 


Tridex September 2016

SEGUS & SOFTWARE ENGINEERING present

TRIDEX – on Tuesday September 27, 2016

Compliance with compliments! Viable DB2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces DB2 10/11 technology exploitation that delivers any DML, DDL, DCL being executed in a DB2 environment, along with identification details. Learn how you can run Audit analytics against a long‐term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.

1-Audit needs and musts
Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flags” Rules (FRCA).
2-Solution overview and their Pros/Cons
Get an overview about the existing solutions and understand how they work.
3-The viable way – let DB2 do the magic!
Learn about DB2 enhancements in DB2 10/11 that deliver the DB2 workload being processed and understand why it’s so efficient.
4-Customer results from the banking industry
Receive some experience from a large banking company and how they successfully replaced their DB2 Audit feature based reporting by a modern SQL tracking and analytics process.

 

The Delaware Valley DB2 Users Group

SEGUS & SOFTWARE ENGINEERING present

DVDUG – on Wednesday September 28th 2016

Compliance with compliments! Viable DB2 z/OS workload tracking.

Audit and Compliance is a need that many companies want and have to fulfill.

There are different ways and tools that promise to be able to do it, but what can they really do, and what are the associated costs? This presentation introduces DB2 10/11 technology exploitation that delivers any DML, DDL, DCL being executed in a DB2 environment, along with identification details. Learn how you can run Audit analytics against a long‐term repository, pinpointing who executed a query, when, and from where. Analyze your entire workload to understand access patterns and abnormalities.

1-Audit needs and musts
Take a journey through GLB, HIPAA, PCI-DSS, Basel III, Sarbanes-Oxley, CA SB1386, the Federal Information Security Management Act and the “Red Flags” Rules (FRCA).
2-Solution overview and their Pros/Cons
Get an overview about the existing solutions and understand how they work.
3-The viable way – let DB2 do the magic!
Learn about DB2 enhancements in DB2 10/11 that deliver the DB2 workload being processed and understand why it’s so efficient.
4-Customer results from the banking industry
Receive some experience from a large banking company and how they successfully replaced their DB2 Audit feature based reporting by a modern SQL tracking and analytics process.