How often should we issue BACKUP CFSTRUCT commands?

The frequency and location of your BACKUP CFSTRUCT command should be driven by a few things:
a) Log turnover – if the average interval between log switches is long, every half an hour for example, and you manage to keep 2+ hours of logs readily available on DASD for all queue managers in the QSG, then running a backup every half hour is probably fine. If, however, your log switch rate is much higher, especially if you are seeing a lot of long-running UOW and log shunting messages, then I would recommend running BACKUP CFSTRUCT every 10 minutes.
b) How much data is normally written out after a BACKUP command? When looking at this, I use a percentage of the structure size. For example, if the structure is 100G (we wish!) and past BACKUP commands have written out an average of 5% or less, then I would feel comfortable setting the backup interval to half an hour, as long as I was not getting a lot of log turnover from messages going to private queues or other structures. If the BACKUP command wrote out an average of 50G on every execution, then I would set the frequency much lower.
c) Are you using SMDS and Flash memory? I would take both these factors into consideration when deciding on the time between BACKUP commands. If they are in use, then I feel more comfortable with a longer interval.
d) Do you have an ‘administrative queue manager’ where these commands can be run without impacting the logs where application work is being done? If the BACKUP command is going to impact the applications processing messages, then running it less frequently might work better for your environment.
e) How long can you wait for a structure to be restored in the event of something dreadful? If your last backup was an hour ago and there are 50 log files to go through from each queue manager, recovery is going to take noticeably more time than if there are only 3 – the sketch after this list walks through that arithmetic.
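To make that trade-off a little more concrete, here is a tiny back-of-the-envelope sketch in C. Every number in it (log switch rate, replay cost per log, number of queue managers) is an assumption invented for illustration – plug in your own figures. It only shows the shape of the trade-off, not a real recovery-time prediction.

    /* Back-of-the-envelope sketch of the recovery-time trade-off in point e).
     * Every number below is an invented assumption - substitute your own
     * log switch rate, replay cost, and QSG size. */
    #include <stdio.h>

    int main(void)
    {
        double log_switches_per_hour   = 6.0;  /* assumed per queue manager       */
        double minutes_to_replay_a_log = 1.5;  /* assumed replay cost per log     */
        int    queue_managers_in_qsg   = 4;    /* logs from every member are read */
        double backup_interval_hours[] = { 10.0 / 60.0, 0.5, 1.0 };

        for (int i = 0; i < 3; i++) {
            double logs_to_read = log_switches_per_hour * backup_interval_hours[i]
                                * queue_managers_in_qsg;
            printf("Backup every %4.0f minutes -> roughly %2.0f logs to read, "
                   "~%3.0f minutes of replay on top of restoring the backup\n",
                   backup_interval_hours[i] * 60.0,
                   logs_to_read,
                   logs_to_read * minutes_to_replay_a_log);
        }
        return 0;
    }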

There is no ‘one size fits all’ answer; the decision on frequency has to weigh a number of factors, and I am sure there are some I missed. IMHO – back up at least every half an hour, though I think the official Hursley position is every hour. I am old and paranoid, and I am constantly trying to reduce recovery time, because you don’t need these measures on a good day.

MQ Clients and z/OS Queue managers – or why is that CIO yelling at me?

This is a tale of licensing and expectations, a consolidation of the experiences of many customers; a tale of technical decisions colliding with licensing.
To start with, a simple and straightforward statement: connecting MQ client applications to z/OS queue managers works quite well, and there can be many processing advantages. Foremost is the continuous message availability associated with shared queues, but there are other reasons, including simplification of the infrastructure.
Having said that, some customers have experienced significant sticker shock after implementing direct client attachments to z/OS queue managers – even after they have been warned. When MQ clients connect to any queue manager, whether z/OS or distributed, the CPU cost of the MQ API requests is absorbed by the channel initiator (on z/OS) or its equivalent process on distributed platforms. MQ on z/OS is typically an MLC (monthly license charge) product, while the distributed platforms are OTC (one time charge). The monthly license charge is based on use, which in its simplest form means CPU consumption.
If a client application is well behaved, that is, it connects once and processes many requests before disconnecting, the costs are more predictable and controllable. Some years ago my team measured the cost difference between a locally attached application and a very well behaved client-attached application and found the client cost about 17% more CPU – all of it coming from the channel initiator address space. That was on old hardware and an old release of MQ (V7.0.1, I believe).
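To show what ‘well behaved’ means in code, here is a minimal MQI sketch in C. The queue manager name, queue name, and message count are invented for the example, and a real client build would also need MQSERVER or a client channel definition table to reach the channel initiator; the point is simply that the expensive verbs are issued once and the handles are reused.

    /* Sketch of a "well behaved" client: connect once, open once, reuse the
     * handles for many messages, then close and disconnect.
     * All names here are invented for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <cmqc.h>

    int main(void)
    {
        MQHCONN hConn = MQHC_UNUSABLE_HCONN;
        MQHOBJ  hObj  = MQHO_UNUSABLE_HOBJ;
        MQOD    od    = {MQOD_DEFAULT};
        MQMD    md    = {MQMD_DEFAULT};
        MQPMO   pmo   = {MQPMO_DEFAULT};
        MQLONG  compCode, reason;
        char    qmName[MQ_Q_MGR_NAME_LENGTH + 1] = "QSG1";   /* hypothetical */
        char    buffer[128];

        MQCONN(qmName, &hConn, &compCode, &reason);          /* expensive - done once */
        if (compCode == MQCC_FAILED) {
            printf("MQCONN failed, reason %d\n", (int)reason);
            return 1;
        }

        strncpy(od.ObjectName, "APP.REQUEST.QUEUE", MQ_Q_NAME_LENGTH); /* hypothetical */
        MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
               &hObj, &compCode, &reason);                   /* expensive - done once */
        if (compCode == MQCC_FAILED) {
            printf("MQOPEN failed, reason %d\n", (int)reason);
            MQDISC(&hConn, &compCode, &reason);
            return 1;
        }

        pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID | MQPMO_FAIL_IF_QUIESCING;

        for (int i = 0; i < 1000; i++) {                     /* the cheap verb, many times */
            int len = sprintf(buffer, "request %d", i);
            MQPUT(hConn, hObj, &md, &pmo, (MQLONG)len, buffer, &compCode, &reason);
            if (compCode == MQCC_FAILED) break;
        }

        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
        MQDISC(&hConn, &compCode, &reason);
        return 0;
    }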
If a client is not well behaved, then the CPU use and therefore the cost is unpredictable. In one particularly horrible example, a customer saw their MLC charges rise by a significant amount in a single month due to a poorly behaved MQ Client application. Like many, their first client application had been well behaved and just caused a ripple in increased costs. Their second application was not quite so well behaved, but not bad enough to gain attention. By this time the customer had conveniently forgotten the advice to implement a ‘client concentrator queue manager on distributed’ to absorb the expensive MQCONN requests and implemented their third MQ Client application. This application followed that well known and expensive model of MQCONN->MQOPEN->MQPUT->MQCLOSE->MQDISC followed by MQCONN->MQOPEN->MQGET->MQCLOSE->MQDISC. In a single month their MLC bill went up well over 30% and a very angry CIO was calling me. Fortunately I could point to the recommended topology we had created for them three years previously that included distributed client concentrator queue managers and why we made that recommendation. They had chosen to ignore the client concentrator queue manager advice because they did not want to pay for a couple of ‘unnecessary’ distributed licenses.
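For contrast, the pattern behind that bill looks roughly like the fragment below – a sketch only, with the same invented names as the earlier example and the error handling left out: every single message pays for its own MQCONN, MQOPEN, MQCLOSE, and MQDISC.

    /* Anti-pattern sketch: the connect/open/put/close/disconnect cost is paid
     * for every message.  Invented names; error handling omitted for brevity. */
    #include <string.h>
    #include <cmqc.h>

    static void put_one_message_expensively(char *text)
    {
        MQHCONN hConn;
        MQHOBJ  hObj;
        MQOD    od  = {MQOD_DEFAULT};
        MQMD    md  = {MQMD_DEFAULT};
        MQPMO   pmo = {MQPMO_DEFAULT};
        MQLONG  compCode, reason;
        char    qmName[MQ_Q_MGR_NAME_LENGTH + 1] = "QSG1";   /* hypothetical */

        pmo.Options = MQPMO_NO_SYNCPOINT;
        MQCONN(qmName, &hConn, &compCode, &reason);          /* most expensive verb...   */
        strncpy(od.ObjectName, "APP.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_OUTPUT, &hObj, &compCode, &reason);
        MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(text), text, &compCode, &reason);
        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
        MQDISC(&hConn, &compCode, &reason);                  /* ...thrown away each time */
    }

    int main(void)
    {
        char msg[] = "one request";
        for (int i = 0; i < 1000; i++)      /* 1000 connects where one would do */
            put_one_message_expensively(msg);
        return 0;
    }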
The specific API requests vary in CPU consumption as well. The most expensive is usually the MQCONN or MQCONNX, as the CHIN and queue manager do a lot of work to set up the connection. The second most expensive is likely an MQOPEN of a temporary dynamic queue; again, there is a lot of work going on within the queue manager to set up the queue. Other verbs are typically less expensive but can add up – especially when misused, like using MQPUT1 in a loop to put multiple messages to a single queue rather than opening it once and issuing many MQPUTs.
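As a small illustration of that last point (a sketch only, with an invented queue name, and assuming hConn is an existing connection handle): MQPUT1 is ideal for a genuine one-off message because it bundles the open, put, and close into one call, but inside a loop it repeats that open/close cost on every iteration.

    /* Sketch contrasting MQPUT1 in a loop with open-once / put-many.
     * Invented queue name; hConn is assumed to be an existing connection. */
    #include <string.h>
    #include <cmqc.h>

    /* Pays an implicit open and close for every message - avoid for batches. */
    void put_batch_with_mqput1(MQHCONN hConn, char **msgs, int count)
    {
        MQOD   od  = {MQOD_DEFAULT};
        MQMD   md  = {MQMD_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQLONG compCode, reason;

        pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID;
        strncpy(od.ObjectName, "APP.BATCH.QUEUE", MQ_Q_NAME_LENGTH);
        for (int i = 0; i < count; i++)
            MQPUT1(hConn, &od, &md, &pmo, (MQLONG)strlen(msgs[i]),
                   msgs[i], &compCode, &reason);
    }

    /* Pays the open and close once for the whole batch. */
    void put_batch_with_mqput(MQHCONN hConn, char **msgs, int count)
    {
        MQOD   od  = {MQOD_DEFAULT};
        MQMD   md  = {MQMD_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQHOBJ hObj;
        MQLONG compCode, reason;

        pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID;
        strncpy(od.ObjectName, "APP.BATCH.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_OUTPUT, &hObj, &compCode, &reason);
        for (int i = 0; i < count; i++)
            MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(msgs[i]),
                  msgs[i], &compCode, &reason);
        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    }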

So the best advice is to know the applications. Make sure they use connection pooling where it is available and are coded to use the CPU-expensive verbs as sparingly as possible. That is true for any platform, and doubly true for z/OS.
Another word of advice: if you are planning new workload, connecting new applications to your z/OS queue managers, then talk to your IBM sales rep about the options for new workload.