Tuning log switches

When an MQ z/OS queue manager does a log switch, a lot of activity can take place: the expected internal checkpoint, shunting active units of recovery forward, and flushing messages that have sat in the bufferpools for three checkpoints out to the pagesets, which incurs real I/O.  Log switches during critical periods cannot always be avoided; the combination of message volume and size may mean that the logs simply have to switch.  But there are times when the frequency can be influenced, even if it cannot be completely controlled.

As an example, I was looking at some customer SMF data and JES logs to help with a performance problem that was plaguing them.  This customer has a very predictable daily peak, and they were experiencing significant slowdowns on a regular basis.  From the JES log it was easy to see that they were doing a number of log switches during this peak period, and in their case some of those switches could be avoided.

A quick search for the CSQJ002I message (‘END OF ACTIVE LOG DATA SET’) showed that the active log that filled during one of the time periods we were examining had an end RBA of 3DD6A998EFFF, with the next active log starting at RBA 3DD6A998F000.  Calculating the number of log records a log can hold is very simple: the formula is ((EndRBA + 1) - StartRBA) / X'1000', using the start and end RBAs of the same log, and I typically convert the result to decimal.  What I found for this customer was that each active log would hold only 144,000 4KB records – which is not very much.  I checked the other logs and found the same thing.  The first suggestion I made to help with performance was to increase the log file size to something more reasonable, like 3GB, which holds roughly five times as many records as their current allocation.
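The arithmetic above can be sketched in a few lines of Python.  The RBA values below are made up for illustration (they are not the customer's actual values); the calculation itself is just the formula from the text, dividing the byte span by X'1000' because each log record here is 4KB:

```python
def log_records(start_rba: str, end_rba: str) -> int:
    """Number of 4KB log records between two hex RBAs, inclusive.

    Implements ((EndRBA + 1) - StartRBA) / X'1000' from the text.
    """
    return ((int(end_rba, 16) + 1) - int(start_rba, 16)) // 0x1000

# Hypothetical start/end RBAs for a log spanning 576MB:
print(log_records("3DD68670F000", "3DD6A998EFFF"))  # prints 144000
```

Anything much below a few hundred thousand records per log is a hint that the active logs are undersized for a busy queue manager.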

The second suggestion was to issue an ‘ARCHIVE LOG’ command about 15-30 minutes prior to the peak.  At their current volume and log utilization, the combination of increasing the log file size and issuing this command shortly before the peak time should help them avoid a log switch during the time they need maximum speed and throughput.
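For reference, ARCHIVE LOG is issued as a console command against the queue manager.  The command prefix below (-CSQ1) is a placeholder; substitute your own queue manager's command prefix:

```
-CSQ1 ARCHIVE LOG
```

This truncates the current active log and kicks off the off-load, so the queue manager enters the peak period with a freshly switched, nearly empty active log.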
