When to REORG on Db2 for z/OS

Background

The DB2 engine is cost-based, and access path selection depends upon statistics that are stored in the DB2 system catalog. Accurate statistics are a necessity if you want the best level of performance from your SQL statements. There are many different types and levels of statistics that can be gathered, and there are trade-offs to consider. The more statistics you gather, the more information becomes available to the optimizer for calculating access paths. However, not all of these statistics are necessary for every type of query and data, so you should not simply gather all available statistics for your tables and indexes; the quantity of statistics influences the cost of binding or preparing SQL statements. You also need an understanding of any special data or application requirements before gathering anything beyond the basic statistics.

I strongly recommend that when you do collect statistics you do so for the complete set of objects related to a table space. That is, you should not collect index statistics separately from table, column, and table space statistics. Collecting them separately can lead to conflicting statistics, which can distort optimizer calculations and impact access path selection. I almost never gather statistics as part of the execution of other utilities, and I never gather index statistics separately from the other statistics related to the same table space.

The RUNSTATS utility gathers statistics based upon the data stored in your tables and the key values and entries in the indexes referencing those tables, collecting them according to the options set for the table space, tables, columns, and indexes. The following is an example of an extremely common RUNSTATS utility control statement that gathers the minimal baseline statistics for a table space, its associated table, and its associated indexes.

RUNSTATS TABLESPACE DB1.TS1 TABLE(ALL) INDEX(ALL)

These baseline statistics are important for transaction processing systems where the vast majority of the SQL statements are either static embedded statements that use host variables in predicates or dynamic SQL statements that use parameter markers in predicates. In addition, if the majority of statements in your application are very simple single-table accesses, then there is probably not much need for any statistics beyond this.
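
As a quick illustration of the distinction (the table and column names here are hypothetical), both forms hide the actual search value from the optimizer at bind or prepare time, which is why the baseline statistics are usually sufficient:

-- Static embedded SQL with a host variable
SELECT ORDER_STATUS FROM ORDERS WHERE ORDER_ID = :ORD-ID

-- Dynamic SQL with a parameter marker
SELECT ORDER_STATUS FROM ORDERS WHERE ORDER_ID = ?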

I typically rely only on baseline statistics until I have a special situation that requires additional statistics. The following types of statistics may become more important as we expand access to table data via modern cloud-based applications (a sample control statement for collecting them appears after the list).

  • Frequency Distribution Statistics
  • Column Correlation Statistics
  • Histogram Statistics
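
A minimal sketch of a RUNSTATS control statement that adds these kinds of statistics on top of the baseline is shown below. The table and column names are hypothetical and the counts are only illustrative; verify the exact keywords against the utility documentation for your Db2 level.

RUNSTATS TABLESPACE DB1.TS1
  TABLE(MYSCHEMA.MYTABLE)                              -- hypothetical table
    COLGROUP(STATUS_CD) FREQVAL COUNT 10               -- frequency distribution statistics
    COLGROUP(REGION_CD, DISTRICT_CD) FREQVAL COUNT 10  -- multi-column group for column correlation
    COLGROUP(ORDER_DT) HISTOGRAM NUMQUANTILES 20       -- histogram statistics
  INDEX(ALL)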

A new tool for getting information about missing or conflicting statistics is the SYSIBM.SYSSTATFEEDBACK system catalog table, and the <schema>.DSN_STAT_FEEDBACK EXPLAIN table if it has been defined.
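
A simple way to review that feedback is to query the catalog table directly. The sketch below assumes a hypothetical schema name and selects a few columns I find useful; verify the column names against the catalog documentation for your Db2 level.

SELECT TBCREATOR, TBNAME, COLNAME, TYPE, REASON, LASTDATE
  FROM SYSIBM.SYSSTATFEEDBACK
 WHERE TBCREATOR = 'MYSCHEMA'      -- hypothetical schema
 ORDER BY TBNAME, LASTDATE DESC
 WITH UR;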

In the past, the SAMPLE keyword was used for large table spaces in an attempt to reduce the CPU and elapsed time of a RUNSTATS utility. Today we use the following instead:

RUNSTATS TABLESPACE DB1.TS1 TABLE(ALL) TABLESAMPLE SYSTEM AUTO INDEX(ALL)

When to Gather Statistics

There are three basic schools of thought when it comes to gathering statistics.

  • Gather statistics on a regular basis. This is quite a common practice, and it could be a good thing or a bad thing. In my opinion, it is a bit reckless. Since statistics can cause access path changes, regularly refreshing statistics introduces a certain level of unpredictability to access path changes. Some shops institute a regular policy of the “Three R’s”: REORG, RUNSTATS, and REBIND. This is only a good practice for enterprises where DBA support is limited, the DBAs do not have a good relationship with development, and the DBAs know nothing about the applications that are using the data. In this case, you are putting 100% of your faith in the DB2 engine, do not expect to do proactive SQL analysis and tuning, and fully anticipate a tuning strategy that involves simply fighting fires. If this is not your situation and it is your policy, then I strongly recommend that you explore one of the other choices for gathering statistics. This may very well be a better policy for data warehouses.
  • Gather statistics once and forget about it. This technique is most appropriate when the database and application are understood by the DBAs and plenty of proactive SQL performance analysis has taken place. This is also a good policy for transaction processing systems with extremely high data volume and extremely high transaction rates. In these situations, you should have done your homework, and the access paths should be understood and predictable. There is no need to gather statistics beyond what was originally gathered to get the desired access paths. Understand that in this situation the CPU cost of regular RUNSTATS and the unpredictability of production access path selection are being exchanged for the additional cost of DBA time and effort to fully understand the database and application. It does require that the application developers and DBAs have a strong relationship, and that as changes are made to the application, database, or data, the developers and DBAs are communicating, testing, and documenting the impact. RUNSTATS will have to be executed when changes are implemented. This is the technique I use in the vast majority of high-volume, high-transaction-rate OLTP implementations.
  • Gather statistics when something changes. This technique is pretty much a balance between the first two. It requires an initial gathering of representative statistics and then a policy in place whereby, if data, database, or processes change, the DBA is informed and a RUNSTATS is executed. This is a very common recommendation and a smarter thing to do than running regular RUNSTATS, but only when you have the DBA resources available to deal with the coordination of change.

For the large, high-volume databases I support, I employ the second approach above, as most of the transaction-oriented tables I deal with have data that is consistent and predictable, and the SQL statements used to access them are stable. This could change dramatically as we expand the use of these tables into the modern application environment.

Real-Time Statistics and Object Reorganization

Catalog statistics are critical to performance, and they reflect the condition of the data at the moment they are gathered. The REORG utility organizes the data in tables and the entries in an index, and ideally the organization of the data in your tables and the catalog statistics about those tables are in harmony. In shops with a good standard practice of keeping objects properly organized, and an awareness of expected application performance, it is important to gather catalog statistics when objects are properly organized. If no good standard exists, then the “Three R’s” might just be the way to go.

DB2 gathers and reports on object organization and activity in the real-time statistics catalog tables, SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS.
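
Before building a reorganization policy on top of these tables, it is worth taking a quick look at what they already contain. The sketch below uses only columns that also appear in the full queries at the end of this article; the database name is a placeholder.

SELECT DBNAME, NAME, PARTITION, TOTALROWS, EXTENTS, REORGLASTTIME
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE DBNAME = 'DB1'              -- placeholder database name
 ORDER BY NAME, PARTITION
 WITH UR;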

For high-volume transaction processing systems, I typically take the “one and done” approach to collecting catalog statistics, and then employ an intelligent policy of monitoring my production objects for organization, using the real-time statistics tables to tell me when objects need to be reorganized and executing REORGs only on those objects that actually require it. The chances are that if you do this type of analysis you will be surprised at the reduction in REORGs compared to a regular REORG policy. It is quite likely that real-time statistics will expose the need for index REORGs far more often than table space REORGs.
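
When an object does show up as needing attention, the REORG itself can be a straightforward online reorganization. The control statement below is only a sketch; SHRLEVEL CHANGE is shown because it allows concurrent access, but your shop standards (mapping tables, inline image copies, and so on) will dictate the full set of options.

REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE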

I have developed a set of queries against the real-time statistics tables that I execute on a regular basis. These queries tell me which tables and indexes to reorganize. Below are examples of basic rules that run against the statistics gathered since the last REORG. These rules are subject to change and preferences, and samples can be found by following the links at the end of this article.

SYSIBM.SYSTABLESPACESTATS:

  • Indirect references greater than 5%
  • Inserts greater than 25%
  • Deletes greater than 25%
  • Unclustered inserts greater than 10% (be mindful of intentional unclustered inserts)
  • Disorganized LOBs greater than 50%
  • Mass deletes
  • More than 30 extents
  • Table is unclustered or has poor clustering (accomplished by joining to SYSTABLES and SYSINDEXES)

This SYSTABLESPACESTATS query does a good job of indicating when a REORG is needed, and how badly. However, if a particular table space (except system-period history) consistently shows up on the report, that may call for adjustments to free space. Take a closer look at whether it is inserts or updates causing the disorganization: if inserts, then maybe more PCTFREE; if updates, then maybe an increase in PCTFREE FOR UPDATE and possibly a change to PCTFREE. The SYSTABLESPACESTATS query can be found towards the end of this post.
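
As a sketch of that kind of free-space adjustment (the values are purely illustrative, and the new free space only takes effect when the pages are next rewritten by a REORG or LOAD):

-- If inserts are causing the disorganization (hypothetical value):
ALTER TABLESPACE DB1.TS1 PCTFREE 15;
-- If updates are causing the disorganization (hypothetical values):
ALTER TABLESPACE DB1.TS1 PCTFREE 10 FOR UPDATE 10;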

SYSIBM.SYSINDEXSPACESTATS:

  • Far away leaf pages greater than 10%
  • Extents greater than 30
  • Inserts plus deletes greater than 20%
  • Pseudodeletes greater than 10%
  • Appended inserts greater than 10% (be aware of potential intended appended inserts)
  • Any mass deletes
  • Any change in number of levels

This SYSINDEXSPACESTATS query does a good job of indicating when a REORG is needed, and how badly. If an index shows up often, it may indicate a need to adjust PCTFREE; this is especially true for some system-period temporal history indexes. The SYSINDEXSPACESTATS query can be found towards the end of this post.
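
A sketch of the corresponding index adjustment follows; the index name and value are hypothetical, and the new PCTFREE takes effect when the index is next reorganized or rebuilt.

ALTER INDEX MYSCHEMA.MYINDEX PCTFREE 25;      -- leave more room in leaf pages
REORG INDEX MYSCHEMA.MYINDEX SHRLEVEL CHANGE  -- online REORG to apply it and clean up the index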

SMF Accounting Reports

If you take regular accounting reports of the major processes interacting with your databases, then you will become acutely familiar with the expected behavior of those processes. Remember, the main objective of REORGing the data is improved application performance. By understanding application behavior, it becomes quite easy to detect when something is going wrong. Typically, a dramatic increase in CPU consumption, synchronous I/Os, or synchronous log writes, without a corresponding increase in SQL activity, is an indication that something might be disorganized. If you have separate buffer pools for the major table spaces and indexes, then buffer activity may be another indication that something has gone bad. Creating and saving an SMF accounting report after a REORG gives you a baseline of optimal performance you can use for comparison.

While the REORG queries may tell you which objects need to be REORGed, the accounting reports will tell you how serious the application impact really is and thus the urgency of the REORG.

Information on accounting reports and traces can be found here.

Sample REORG queries can be found here (fill in your own database and creator names in the empty quoted placeholders):

SYSTABLESPACESTATS Query

SELECT TBSP.DBNAME,
TBSP.NAME,
TB.NAME,
TBSP.PARTITION,
TBSP.NACTIVE,
TBSP.NPAGES,
TBSP.TOTALROWS,
CASE WHEN (((TBSP.REORGNEARINDREF+TBSP.REORGFARINDREF+TBSP.REORGUNCLUSTINS)*100)/TBSP.TOTALROWS)>5 THEN (((TBSP.REORGNEARINDREF+TBSP.REORGFARINDREF+TBSP.REORGUNCLUSTINS)*100)/TBSP.TOTALROWS)
ELSE 0 END AS UNCREF,
CASE WHEN ((TBSP.REORGINSERTS*100)/TBSP.TOTALROWS)>25 THEN ((TBSP.REORGINSERTS*100)/TBSP.TOTALROWS)
ELSE 0 END AS INSERTS,
CASE WHEN ((TBSP.REORGDELETES*100)/TBSP.TOTALROWS)>25 THEN ((TBSP.REORGDELETES*100)/TBSP.TOTALROWS)
ELSE 0 END AS DELETES,
CASE WHEN ((TBSP.REORGDISORGLOB*100)/TBSP.TOTALROWS)>50 THEN ((TBSP.REORGDISORGLOB*100)/TBSP.TOTALROWS)
ELSE 0 END AS DLOB,
CASE WHEN TBSP.REORGMASSDELETE>0
THEN TBSP.REORGMASSDELETE
ELSE 0 END AS MDEL,
CASE WHEN TBSP.EXTENTS>30
THEN TBSP.EXTENTS
ELSE 0 END AS EXT,
CASE WHEN IX.CLUSTERED = 'N'
THEN IX.CLUSTERED
ELSE ' ' END AS CLUS,
CASE WHEN (IX.CLUSTERRATIOF*100) < 95 THEN INT((IX.CLUSTERRATIOF*100))
ELSE 0 END AS CLUSR,
TBSP.EXTENTS,
TBSP.LOADRLASTTIME,
TBSP.REORGLASTTIME,
TBSP.REORGINSERTS,
TBSP.REORGDELETES,
TBSP.REORGUPDATES,
TBSP.REORGUNCLUSTINS,
TBSP.REORGDISORGLOB,
TBSP.REORGMASSDELETE,
TBSP.REORGNEARINDREF,
TBSP.REORGFARINDREF,
TBSP.INSTANCE,
TBSP.SPACE,
TBSP.DATASIZE
FROM SYSIBM.SYSTABLESPACESTATS TBSP
INNER JOIN
SYSIBM.SYSTABLES TB
ON TBSP.DBNAME = TB.DBNAME
AND TBSP.NAME = TB.TSNAME
INNER JOIN
SYSIBM.SYSTABLESPACE TS
ON TBSP.DBNAME = TS.DBNAME
AND TBSP.NAME = TS.NAME
LEFT OUTER JOIN
SYSIBM.SYSINDEXES IX
ON TB.CREATOR = IX.TBCREATOR
AND TB.NAME = IX.TBNAME
AND IX.CLUSTERING = 'Y'
WHERE TBSP.DBNAME IN
('')
AND TB.CREATOR = ''
AND (TB.TYPE = 'T' OR TB.TYPE = 'H')
AND TBSP.NACTIVE > 100
AND TBSP.TOTALROWS > 0
AND TS.LOG = 'Y'
AND (DATE(TBSP.LOADRLASTTIME) IS DISTINCT FROM CURRENT DATE)
AND (
((((TBSP.REORGNEARINDREF+TBSP.REORGFARINDREF+TBSP.REORGUNCLUSTINS)*100)/TBSP.TOTALROWS)>5) OR (((TBSP.REORGINSERTS*100)/TBSP.TOTALROWS)>25)
OR
(((TBSP.REORGDELETES*100)/TBSP.TOTALROWS)>25) OR (((TBSP.REORGDISORGLOB*100)/TBSP.TOTALROWS)>50)
OR
(TBSP.REORGMASSDELETE>0)
OR
(TBSP.EXTENTS>30)
OR
(IX.CLUSTERED = 'N')
OR
((IX.CLUSTERRATIOF*100) < 95)
)
ORDER BY TBSP.DBNAME,
TBSP.NAME,
TBSP.PARTITION
WITH UR;

SYSINDEXSPACESTATS Query

SELECT SUBSTR(INDX.TBCREATOR,1,10) AS TBCREATOR,
SUBSTR(INDX.TBNAME,1,20) AS TBNAME,
SUBSTR(RTSI.CREATOR,1,10) AS CREATOR,
SUBSTR(RTSI.NAME,1,20) AS NAME,
RTSI.PARTITION,
SUBSTR(RTSI.DBNAME,1,10) AS DBNAME,NACTIVE,TOTALENTRIES,
CASE WHEN ((((1+RTSI.REORGLEAFFAR)*100)-99)/RTSI.NACTIVE > 10) THEN ((((1+RTSI.REORGLEAFFAR)*100)-99)/RTSI.NACTIVE)
ELSE 0 END AS LEAFFAR,
CASE WHEN (RTSI.EXTENTS > 30)
THEN (RTSI.EXTENTS)
ELSE 0 END AS EXTENTS,
CASE WHEN ((((1+RTSI.REORGINSERTS+RTSI.REORGDELETES)*100)-99)/RTSI.TOTALENTRIES > 20) THEN ((((1+RTSI.REORGINSERTS+RTSI.REORGDELETES)*100)-99)/RTSI.TOTALENTRIES)
ELSE 0 END AS INS_DEL,
CASE WHEN ((((1+RTSI.REORGPSEUDODELETES)*100)-99)/RTSI.TOTALENTRIES > 10) THEN ((((1+RTSI.REORGPSEUDODELETES)*100)-99)/RTSI.TOTALENTRIES)
ELSE 0 END AS PDEL,
CASE WHEN ((((1+RTSI.REORGAPPENDINSERT)*100)-99)/RTSI.TOTALENTRIES > 10) THEN ((((1+RTSI.REORGAPPENDINSERT)*100)-99)/RTSI.TOTALENTRIES)
ELSE 0 END AS AINS,
CASE WHEN (RTSI.REORGMASSDELETE > 0)
THEN (RTSI.REORGMASSDELETE)
ELSE 0 END AS MDEL,
CASE WHEN (RTSI.REORGNUMLEVELS <> 0)
THEN (RTSI.REORGNUMLEVELS)
ELSE 0 END AS LEVELS,
RTSI.REBUILDLASTTIME,
RTSI.REORGLASTTIME,
RTSI.REORGINSERTS,
RTSI.REORGDELETES,
RTSI.REORGAPPENDINSERT,
RTSI.REORGPSEUDODELETES,
RTSI.REORGMASSDELETE,
RTSI.REORGLEAFNEAR, RTSI.REORGLEAFFAR,
RTSI.REORGNUMLEVELS
FROM SYSIBM.SYSINDEXSPACESTATS RTSI
INNER JOIN
SYSIBM.SYSINDEXES INDX
ON RTSI.CREATOR = INDX.CREATOR
AND RTSI.NAME = INDX.NAME
WHERE RTSI.CREATOR = ''
AND RTSI.NACTIVE > 10
AND RTSI.TOTALENTRIES > 0
AND (
((((1+RTSI.REORGLEAFFAR)*100)-99)/RTSI.NACTIVE > 10) OR (RTSI.EXTENTS > 30) OR ((((1+RTSI.REORGINSERTS+RTSI.REORGDELETES)*100)-99)/RTSI.TOTALENTRIES > 20)
OR
((((1+RTSI.REORGPSEUDODELETES)*100)-99)/RTSI.TOTALENTRIES > 10) OR ((((1+RTSI.REORGAPPENDINSERT)*100)-99)/RTSI.TOTALENTRIES > 10)
OR
(RTSI.REORGMASSDELETE > 0)
OR
(RTSI.REORGNUMLEVELS <> 0)
)
ORDER BY INDX.TBCREATOR,
INDX.TBNAME,
RTSI.PARTITION
WITH UR;

Comments

  1. Pingback: Db2 for z/OS Index Performance Recommendations – Db2expert.com

  2. I could not get the SPUFI to run!
    SPUFI is on 80 characters…..
    DSNT408I SQLCODE = -104, ERROR: ILLEGAL SYMBOL ".". SOME SYMBOLS THAT MIGHT
    BE LEGAL ARE: + - ( TRIM XMLSERIALIZE CAST XMLCAST CASE CURRENT_DATE
    DSNT418I SQLSTATE = 42601 SQLSTATE RETURN CODE
    DSNT415I SQLERRP = DSNHPARS SQL PROCEDURE DETECTING ERROR
    DSNT416I SQLERRD = 3 0 0 -1 2330 502 SQL DIAGNOSTIC INFORMATION
    DSNT416I SQLERRD = X'00000003' X'00000000' X'00000000' X'FFFFFFFF'
    X'0000091A' X'000001F6' SQL DIAGNOSTIC INFORMATION

  3. Fernando Rodriguez

    Getting this error for the SYSTABLESPACESTATS query:
    ILLEGAL SYMBOL “”. SOME SYMBOLS THAT MIGHT BE LEGAL ARE: + – ( TRIM XMLSERIALIZE CAST XMLCAST CASE CURRENT CLIENT. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.19.56

    Could be related to this section of the query:

    ::::::::::
    ::::::::::

    CASE
    WHEN TBSP.EXTENTS>30
    THEN TBSP.EXTENTS
    ELSE 0 END AS EXT,

    CASE
    WHEN IX.CLUSTERED = ‘N’
    THEN IX.CLUSTERED
    ELSE ” <====== maybe here
    END AS CLUS,

    CASE
    WHEN (IX.CLUSTERRATIOF*100) < 95
    THEN INT((IX.CLUSTERRATIOF*100))
    ELSE 0
    END AS CLUSR,

    • danadmin

      It really depends on how you copied the query and where you are running it. If you are running it in TSO then while in edit do a FIND P'.' command to find any illegal characters. Otherwise, I would cut the query up and run it in pieces until you find whatever part did not copy over correctly.

  4. Fernando Rodriguez

    danadmin,
    it worked, I changed all the curly quotes to straight single quotes and it ran.

    Thanks,
    Fernando.

  5. Fernando Rodriguez

    for the SYSINDEXSPACESTATS query, getting this error, any idea how to fix it?

    EXCEPTION ERROR FIXED POINT OVERFLOW HAS OCCURRED DURING MULTIPLICATION OPERATION ON INTEGER DATA, POSITION. SQLCODE=-802, SQLSTATE=22003, DRIVER=4.19.56
