While working with a customer who is adding a new queue sharing group to support increased availability requirements, there has been a great deal of discussion about how to size the structures. There is guidance available for the CFSizer tool (see https://www-947.ibm.com/systems/support/z/cfsizer/mqseries/mqapplhelp.html), but that information is dated and may at times lead to incorrect conclusions.
For example, there is a reference to messages larger than 63KB residing in DB2 LOB structures. As of IBM MQ V7.1, the recommended placement for offloaded messages is shared message data sets (SMDS). There is also no mention of the impact of the offload rules that were added in that release.
However, that is not my primary concern with the information provided. I am far more worried about the recommendation to use the accounting class 1 data.
This calculation, based on class 1 accounting data, may not generate accurate information, especially when many of the messages are over 10,000 bytes and/or are quite variable in size. The class 1 data contains counts of messages put in a limited set of size ranges, the last one being 'over 9999.' As messages range upward from there to 100MB, that is, to me, too wide a range for an accurate calculation for many queues. The class 1 data also does not contain queue names, so unless all queues are being converted to shared queues and will reside in the same structure (both of which are unusual and not recommended implementations), this calculation may not provide enough differentiation for these decisions.
Another reason for my cautionary words: the class 1 data has not been updated since version 5.3.1, or maybe 6, so it may not be entirely reliable. The current MP1B tool does not report on this data, so a customer has to have their own post processor, a back-level version of MP1B, or use the MQSMFCSV post processor to work with the class 1 accounting data. I have had customers make some significant decisions about CPU use based on this data from the 'old' MP1B. Though those incidents were not about sizing structures or implementing shared queues, they have left me with a level of suspicion about the data captured in the class 1 records.
Using the recommended calculation is easier than the approach I prefer, provided you have a post processor for it, and gathering the class 1 accounting data is less expensive than gathering the class 3 data. For some implementations a broad brush is fine, if the structures can be 'oversized' because memory is not a constraint.
A more accurate calculation comes from the MQ class 3 accounting data, where more granular information is available. As an example, when investigating a performance problem for a customer, I was looking at the way queues were used in a specific bufferpool that was showing signs of stress. The query I used was the same query I would use when evaluating a queue for conversion to a shared queue and for sizing the CF structure. Note that I have hidden some columns, and in this query I did not capture the maximum message size (because I knew from the programs that the value was less than 6K), which is also important when evaluating a queue.
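To make the shape of that analysis concrete, here is a minimal sketch using SQLite in Python as a stand-in for DB2. The table name, column names, and sample rows are all illustrative assumptions for this sketch, not MQSMFCSV's actual output schema:

```python
# Minimal sketch: per-queue aggregation of class 3 accounting records.
# SQLite stands in for DB2 here; the table, columns, and sample rows are
# illustrative assumptions, not MQSMFCSV's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE wq (
        queue_name TEXT,
        bufferpool INTEGER,
        put_count  INTEGER,
        get_count  INTEGER,
        put_bytes  INTEGER,
        max_depth  INTEGER
    )""")
conn.executemany(
    "INSERT INTO wq VALUES (?, ?, ?, ?, ?, ?)",
    [   # made-up task-level records for two queues in bufferpool 3
        ("ELKINSC.QUEUE6", 3, 60, 60, 342360, 40),
        ("ELKINSC.QUEUE6", 3, 40, 40, 228240, 55),
        ("OTHER.QUEUE",    3, 10, 10,  90000, 12),
    ],
)

# Totals, average put size, and deepest observed depth per queue.
results = list(conn.execute("""
    SELECT queue_name,
           SUM(put_count)                  AS total_puts,
           SUM(get_count)                  AS total_gets,
           SUM(put_bytes) / SUM(put_count) AS avg_msg_bytes,
           MAX(max_depth)                  AS deepest_depth
    FROM wq
    WHERE bufferpool = 3
    GROUP BY queue_name
    ORDER BY queue_name
"""))
for row in results:
    print(row)
```

The per-queue averages and deepest depths from a query like this feed the sizing arithmetic directly, which the bucketed class 1 counts cannot do.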
The result of a query against the class 3 data, processed by MQSMFCSV and loaded into DB2 to extract the totals for gets and puts for the queues in bufferpool 3, looks as follows:
If the queue that needs to be shared is ELKINSC.QUEUE6, then I would size the structure to hold messages that average 5706 bytes, can remain on the queue for almost a full second, and will number at least 300 (a rough guesstimate of 3 times the deepest point, as that was given as the 'worst case' scenario; this should be evaluated over time). So my sizing for this one queue, in whatever structure it lands, would be about 1.7M bytes. Note that this is a very simple calculation, and does not take into account message body offloading or the storage mechanism overhead (entries and elements). But it is still more accurate than a calculation based on 18240 MQPUTs that were as large as 9999 bytes (the average size falls into that range in the class 1 accounting data), with no idea about average longevity on the queue, and so on. Using the total puts, the 9999-byte size, and 3 times the storage gives 547M bytes, and that assumes this is the only queue being used by the transaction (which we cannot tell from the class 1 data).
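The arithmetic behind those two estimates can be sketched as follows, with the values taken from the example above (the 3x peak-depth multiplier is the rough worst-case cushion described in the text):

```python
# Class 3 based estimate: 3 x deepest depth x average message size.
avg_msg_bytes = 5706    # average put size, from the class 3 data
peak_depth = 100        # deepest observed depth; 300 messages with the 3x cushion
class3_bytes = 3 * peak_depth * avg_msg_bytes
print(class3_bytes)     # 1711800 -> roughly 1.7M bytes

# Class 1 based estimate: all we know is 18240 puts in the bucket topping
# out at 9999 bytes, so every message must be assumed to be that large.
class1_bytes = 3 * 18240 * 9999
print(class1_bytes)     # 547145280 -> roughly 547M bytes
```

The two estimates differ by a factor of more than 300, which is the gap a bucketed message-size count can open up when the real average sits well below the bucket ceiling.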
Also note that if the structure is to hold multiple queues, as most do, this same calculation needs to be done for each queue. If the deepest depth is expected to occur at the same time for these queues, that should be accounted for as well; if the peaks come at different times, the structure may not have to be sized for the deepest point of every queue simultaneously.
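A minimal sketch of that aggregation; the queue names and per-queue numbers here are hypothetical, and each per-queue figure is the same 3 x peak depth x average size estimate used earlier:

```python
# Hypothetical per-queue estimates: 3 x deepest depth x average message size.
queue_bytes = {
    "APP.REQUEST": 3 * 100 * 5706,   # 1711800
    "APP.REPLY":   3 * 50 * 2048,    # 307200
    "APP.EVENTS":  3 * 500 * 512,    # 768000
}

# If all the peaks can occur at the same time, the structure must be able
# to hold the sum of the per-queue peaks.
coincident_peak = sum(queue_bytes.values())

# If the peaks never overlap, the largest single peak is a reasonable floor
# (the quieter queues still need their steady-state space on top of this).
staggered_floor = max(queue_bytes.values())

print(coincident_peak, staggered_floor)
```

Neither figure includes the entry/element overhead or offload behavior mentioned earlier, so both should be treated as starting points rather than final structure sizes.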
If you have class 3 accounting data from a few busy periods, using it for sizing should provide additional accuracy. If this is a new workload with new queues, then using your estimates along with class 3 accounting data from the production simulation tests will help refine the sizing.