First take on Flanagan – Part I

I had heard a lot about the various drafts of the Flanagan Report that
had been floating around since last summer (and had read Flanagan’s own
preliminary conclusions in a paper he read at a conference at UC
Berkeley in October 2007, which I found pretty appalling). But I hadn’t
seen a draft until the release of what Flanagan apparently considers
his final version this week. Whatever Dreadful Blood Oath the Mellon Foundation required recipients of earlier drafts to sign was pretty effective in preventing wider distribution. What did they threaten offenders with – having to listen to timpani auditions?

The good news is that the report is considerably more nuanced than what I had been led to believe (not least by Flanagan’s October paper). The bad news is that, even with whatever modifications he made after receiving suggestions and criticisms from industry insiders, it’s still a flawed document, albeit not without some content that might be useful to the industry.

On a first read-through, I thought the flaws broke into two basic categories: the first associated with the data he used, and the second with his misconceptions and lack of understanding about orchestras specifically and non-profits in general.

There are several aspects to the data issue. The first is that he picked the wrong years. He uses data from the 1987/88 season through the 2003/04 season. But for a report intended to focus on the extent to which orchestra finances are affected by economic cycles in the larger economy, these would appear to be bad endpoints. Even though he does apply corrections (in particular, local unemployment rates and stock prices) for economic cycles to the orchestras’ data, it would have been better to choose endpoints at similar points in the economic cycle. Starting his analysis with data from a year in which the economy was expanding, and ending with a year when the economy was just coming out of recession, seems likely to produce a tilt in the analysis, even after the corrections he applies. Had he started a few years earlier, when the economy was emerging from the recession of the early 1980s, or ended a year later, the uncorrected data would have provided trend information independent of the corrections he applied, adding considerable redundancy to his analysis – and, as Henry Fogel pointed out, would also have produced a less gloomy long-term outlook.

Another problem is that the dataset is too small to be insulated from fluctuations within individual orchestras. For example, it’s a truism that opening a new hall, or hiring a new music director, is going to boost ticket sales. If even a small number of orchestras had done either in the early years of the dataset, while none had done so in its last two years, it seems to me that could have skewed the results. (So could the opposite, of course.) And no doubt there are other such infrequent events affecting individual orchestras that could have a disproportionate effect on results derived from a dataset that’s too small.

A third problem is that the factors that Flanagan uses to correct for cyclical effects are not quite “on point.” It may well be that unemployment rates are the best number to use for such corrections, but it seems to me that unemployment rates are more likely a proxy for whatever it is about recessions that really affects attendance at orchestra concerts. The same may be true about stock prices and contributed income as well.

Lastly, it’s commonly accepted within the industry that the statistical data that the League has collected for decades, while unquestionably the best such dataset in the non-profit performance world, is not without flaws (which is why the League has started working on a new format and process). Filling out League surveys is not a task high on CFOs’ to-do lists. And the level of detail of the collected data is not ideal either. An example is on page 52, where the report states that “the salary of an orchestra’s regular conductor averaged five percent of the artistic budget.” I haven’t seen a recent copy of the League’s annual statistical report, but I know that this was not a figure that was broken out separately in the reporting at the start of the dataset. So where did he get it? I agree that it’s a relevant figure for his analysis, but I don’t think he got it from the League data. There might well be other such details that, had the League data been more granular, could have led to a better analysis.
