Flanagan’s fatal flaw

In an earlier post I talked about problems that I saw with the data
that Professor Flanagan used in his report and some of the ways he used
them. But it’s very possible that, with the exception of the years he
chose as endpoints for his dataset, correcting those problems would not
have made a substantive difference to his findings.

The real problems with the Flanagan report are in his assumptions and
with his lack of understanding of how non-profits in general, and
orchestras in particular, function in their environments.

An example of this is how he uses “information on local market characteristics, such as population and per capita income” (page v) as determinants of a community’s ability to support an orchestra. Of course such quantifiable characteristics matter. But anyone who’s been around the orchestra industry knows that a community’s traditions matter far more. Columbus, OH is the 15th-largest US city and the 32nd-largest metropolitan area. Cleveland is the 33rd-largest city, although the 23rd-largest metro area. Per-capita and per-household incomes in Columbus are far higher than in Cleveland. But which city supports one of the world’s great orchestras? Nothing in the report indicates that Flanagan gets why this is.

His discussion of competition between performing arts non-profits is a further example. No doubt non-profits do compete for funding and for audiences. But they also cluster. A community that has deep philanthropic traditions is going to be able to support many more, and healthier, non-profits than one where there is no such tradition, regardless of relative size and wealth.

Another thing Flanagan doesn’t get about orchestras is that they aren’t homogeneous, especially in terms of how they fit into their communities. He shows no understanding that the smallest orchestras in his dataset have very different relationships to their communities, on average, than the largest ones do. To be blunt, the smaller orchestras are generally weaker institutions, with weaker boards and less community recognition and support – even in proportion to community size – than larger orchestras. Nowhere in the report could I find any recognition of this.

But most important is Flanagan’s apparent failure to understand how different non-profits are from for-profit businesses. Admittedly he’s not alone in this – there are far too many people on orchestra boards who don’t really get it either. And it’s particularly hard to get with orchestras, which not only look and feel very much like businesses on a day-to-day basis, but also compete directly with for-profit enterprises such as Broadway shows, rock concerts, sports teams, and movies. Unlike orchestras, most non-profits do not earn significant revenue by selling a product into a competitive marketplace dominated by for-profits.

Regardless of how much orchestras can look like small businesses, fundamentally they’re not. Businesses make things, or provide a service, in order to make money. Orchestras make money in order to provide a service. The bottom line for an orchestra is very different from the bottom line for a business. If a business puts out crap, or sells to a tiny number of people, but makes money doing so, it’s a success. An orchestra is only successful at doing what it’s supposed to do if it plays great concerts for lots of people. While it may need to balance its budget pretty consistently in order to keep doing so, balancing its budget is not the point of the enterprise.

This is why Flanagan’s obsession with the “performance income gap” is so revealing. A performance income gap for an airline would matter a great deal; simply put, a gap between what an airline makes in ticket sales and what it costs to get customers from A to B will, sooner or later, put the airline out of business. (It could be quite a bit later, though; by some estimates, the airline industry, considered as a whole and since its inception, has yet to turn a profit.) But many non-profits don’t have a performance income gap because they have very little performance income per se, at least relative to total costs. This is true even of museums, which appear to earn more “performance” income relative to their expenses than many non-profits do. But relative to the real cost of having a museum, most of which is in acquisitions and building construction, earned income is likely pretty trivial.

Simply put, the performance income gap doesn’t matter. Less simply put, it only matters if the non-profit in question chooses to have it matter. There is no reason in theory that an orchestra, instead of selling tickets, couldn’t give away its seats, and in fact most orchestras do so for some concerts every year. (There have been orchestras whose bottom line might even have been improved by doing so.) Libraries don’t charge for their services. For that matter, sidewalks don’t charge for their services. But it’s impossible to have a city without sidewalks or libraries, so they are funded in ways other than by charging direct users.

Bruce Coppock of the St. Paul Chamber Orchestra has been leading the charge in a slightly different direction. He gave a fascinating presentation at last year’s League convention in Nashville arguing that earned income and contributed income are not, in fact, really separate income streams at all, since they come from sources that overlap to a significant extent. The task of orchestra management in that model is to maximize the total by recognizing the interconnection between the two revenue streams.

There are other examples in the report of Flanagan’s failure to grasp what orchestras are about. One I found fascinating was his discovery that, when orchestras add concerts, average attendance per concert goes down. But why is this surprising? To add even one concert is to add a large chunk of seats to inventory. It’s inevitable that some of those attending the new concert will be people who were already patrons, thus lowering average attendance per concert.

And why does it matter? The marginal cost of adding a concert is almost invariably far less than the average cost per concert, because most of the average cost is the relatively fixed cost of paying musicians on staff. And if the real bottom line for orchestras is playing more and better concerts for more and more people, what matters is total attendance, not average attendance.
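To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. All of the numbers (the season size, the cost figures, the new concert’s draw, and the share of its audience that merely shifted over from other concerts) are made up for illustration; nothing here comes from the report.

```python
# Hypothetical illustration: adding one concert to a season.
# Every number below is invented for the sake of the arithmetic.

fixed_costs = 9_000_000             # musician salaries, staff, hall (per season)
variable_cost_per_concert = 25_000  # stagehands, marketing, guest-artist fees, etc.

concerts = 80
total_attendance = 120_000          # season total before the added concert

avg_cost_before = (fixed_costs + variable_cost_per_concert * concerts) / concerts
avg_attendance_before = total_attendance / concerts

# Add one concert. Suppose it draws 1,000 people, 400 of whom would
# otherwise have attended an existing concert (so only 600 are new listeners).
new_concert_attendance = 1_000
cannibalized = 400

concerts_after = concerts + 1
total_attendance_after = total_attendance + (new_concert_attendance - cannibalized)
avg_attendance_after = total_attendance_after / concerts_after

marginal_cost = variable_cost_per_concert   # the orchestra itself is already paid for

print(f"Average cost per concert:          ${avg_cost_before:,.0f}")
print(f"Marginal cost of the added concert: ${marginal_cost:,.0f}")
print(f"Average attendance: {avg_attendance_before:,.0f} -> {avg_attendance_after:,.0f}")
print(f"Total attendance:   {total_attendance:,} -> {total_attendance_after:,}")
```

On those assumed numbers, total attendance rises and the added concert costs a small fraction of the season’s average cost per concert, even though the per-concert attendance average drifts down.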


2 Responses to “Flanagan’s fatal flaw”

  1. Bob Says:

    Dear Mr. Levine,

    Excellent points as always.

    One quick question about your last point – “what matters is total attendance, not average attendance.”

    What about the contention that when average attendance dips below a certain percentage of seats filled, the diminished perception of a “full house” leads to further problems in selling tickets for later events?

  2. Robert Levine Says:

    “What about the contention that when average attendance dips below a certain percentage of seats filled, the diminished perception of a “full house” leads to further problems in selling tickets for later events?”

    That’s a point. But I think that the basic idea that total attendance is the key benchmark still holds. At some point, of course, average attendance will impact total attendance. But I don’t think that was Flanagan’s point, which is why I think he was wrong about this issue.
