The Data Science of Art

Last week we talked about the Art of Data Science. Let’s turn that on its head this week, and think about the Data Science of Art. I turned to the experts I know over at AEA Consulting, a New York-based cultural consulting firm, established in 1991, that works with arts organizations and funders all over the world. AEA’s founder, Adrian Ellis, most recently served as Executive Director of Jazz at Lincoln Center from 2007 to 2011. The AEA team, including AEA principal Elizabeth Ellis, Brent Reidy (not to be confused with our TA, Ben Reddy) and Becky Schutt (my sister!), put together the following on museums and data.

By Elizabeth Ellis, Brent Reidy and Becky Schutt

There are three key issues with regard to data science at museums:

1) There is a lack of historical data; and/or

2) Museum institutions lack resources (time, expertise, staff, money) to gather and analyze new data; and

3) Perhaps most importantly, many of the goals that cultural organizations have set for themselves - or are imposed upon them - do not lend themselves to quantitative analysis based on data. It is notoriously difficult, for example, to measure intangible impacts such as artistic value, aesthetic experience, the extent to which a performing arts center is fostering social cohesion or contributing to a community’s quality of life, or whether, say, a museum is effectively stewarding the world’s most important cultural heritage….

In 2004, Maxwell Anderson wrote a paper, “Metrics of Success in Museums,” in which he argues that the data that museums tend to collect and the metrics against which they tend to be judged — e.g. annual attendance, number of “blockbuster” exhibitions, size of collection, etc. — do not adequately measure success at a museum, because they do not capture educational, artistic, social or other more intangible outcomes. Museums measure that which can be measured easily. Anderson offers a different set of metrics to help museums better measure success, for example, by surveying visitors to find out how many experience “an intangible sense of elation—a feeling that a weight was lifted off their shoulders” when they visit the museum.
Anderson’s paper was well received and remains a guidepost for museums as they think about ways to measure success more fully and fairly, and about how to capture data to support that measurement. (For example, it was mentioned recently in a post on Tyler Green’s art blog: http://blogs.artinfo.com/modernartnotes/2012/07/the-sudden-sexiness-of-museum-success-metrics/ )

[Screenshot of Tyler Green’s post “The sudden sexiness of museo-success metrics”]


However, much has changed in the world of data since the paper was published in 2004. In what ways would you suggest that Anderson’s guidelines be updated — how could a museum take advantage of trends in technology, data science, and evaluation to gather and analyze the sort of data that could help it not only measure its success, but also achieve its mission?

To get at this question, first read Tyler Green’s post, and then skim Anderson’s paper, which is linked on Green’s blog (and also available here: http://blogs.artinfo.com/modernartnotes/files/2012/07/AndersonMetrics.pdf). For a potential jump start on ideas, see the Dashboard Anderson set up as Director of the Indianapolis Museum of Art: http://dashboard.imamuseum.org/

And remember: museums and other arts organizations have tight budgets and rarely have significant resources to dedicate to data gathering and analysis — so keep that sensitivity in mind and suggest strategies that are economical and easy to implement. Crucially, any data strategy must demonstrate to the end user that the data will be useful – in improving the visitor experience, in increasing earned income (tickets/café/shop, etc.) and contributed income (public and private funding), in developing a more nuanced marketing strategy, and so forth — to justify the investment.

In order to help address this question, it may be helpful to look at a few museum mission statements. Here are three examples:

The Museum of Modern Art: The Museum of Modern Art is a place that fuels creativity, ignites minds, and provides inspiration. With extraordinary exhibitions and the world’s finest collection of modern and contemporary art, MoMA is dedicated to the conversation between the past and the present, the established and the experimental. Our mission is helping you understand and enjoy the art of our time.

The Newark Museum: The Newark Museum operates, as it has since its founding, in the public trust as a museum of service, and a leader in connecting objects and ideas to the needs and wishes of its constituencies. We believe that our art and science collections have the power to educate, inspire and transform individuals of all ages, and the local, regional, national and international communities that we serve. In the words of founding Director John Cotton Dana, “A good museum attracts, entertains, arouses curiosity, leads to questioning—and thus promotes learning.”

Dallas Museum of Art: We collect, preserve, present, and interpret works of art of the highest quality from diverse cultures and many centuries, including that of our own time. We ignite the power of art, embracing our responsibility to engage and educate our community, to contribute to cultural knowledge, and to advance creative endeavor.

Comments

  1. Silis J.

    Anderson suggests a total of 101 metrics to measure the success of art museums. Many of these metrics translate to more than one variable, which means hundreds (if not thousands) of variables need to be measured to calculate all those metrics. It is nearly impossible for a museum to implement all of these measures, especially for smaller museums. Producing a score for some of the metrics is very resource-intensive. Also, some of the metrics are very similar to each other. For instance, under “Scope and Quality of Collection”, Anderson suggests that museums should consider both the total amount of insurance carried on the collection and the percentage of estimated collection value covered by insurance. It would seem that simply measuring the percentage of work covered by insurance would satisfy the needs of a reporting committee. Going through and selecting only a subset of the metrics may be enough to measure the “success” of a museum.

    From a data scientist’s standpoint, if you will, I think the first step for improving these metrics is to reduce the number of dimensions to those that are absolutely necessary and non-redundant. This can be done semantically (by removing variables that are too close to each other, or are likely confounding each other’s effects), or statistically (by collecting all these measures in a small sample and applying dimensionality reduction methods). I do not think that measuring everything is necessary; instead, museums should focus on a few key metrics.
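
    A minimal sketch in Python of the statistical pruning idea, assuming a toy table of metric scores (the data, the 0.9 correlation cutoff, and the 90% variance target are all invented for illustration):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    # Toy data: 30 museums scored on 10 candidate metrics.
    rng = np.random.default_rng(0)
    scores = pd.DataFrame(rng.normal(size=(30, 10)),
                          columns=[f"metric_{i}" for i in range(10)])

    # Semantic step, approximated statistically: keep a metric only if it is
    # not nearly redundant (|correlation| > 0.9) with one we already kept.
    corr = scores.corr().abs()
    keep = []
    for col in scores.columns:
        if all(corr.loc[col, k] <= 0.9 for k in keep):
            keep.append(col)

    # Statistical step: how many principal components are needed to explain
    # 90% of the variance in the retained metrics?
    explained = PCA().fit(scores[keep]).explained_variance_ratio_
    n_needed = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
    print(f"kept {len(keep)} of {scores.shape[1]} metrics; "
          f"{n_needed} components explain 90% of their variance")
    ```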

    1. In addition to the quantity of metrics, we must also consider the quality of the metrics being collected. I agree with Anderson’s point that without better metrics, there is no way for museums to measure their impact and their success. However, I believe a more evidence-based approach is needed to define the set of metrics to use.

      Anderson does not present evidence that measuring the proposed indicators will be helpful or will correlate well with the bottom line. Therefore, not only is it not feasible for museums to measure them all, it is also unreasonable to do so. Each of these metrics should be studied separately and in combination at a smaller scale, under well-controlled circumstances, to understand how well it associates with the success of a museum. As Silis said, it is very likely that many of these metrics will turn out to be unhelpful or unnecessary. The number of metrics measured can be limited to those that are actually necessary for predicting the outcome, as long as the outcome is clearly defined.

      Additionally, one must weigh the cost of obtaining a metric against the benefit it provides. For instance, measuring the amount of time a person spends in front of an exhibit or a particular item might be useful for gauging interest in a certain topic, or for understanding which museum items are most popular; however, this type of data collection is time-intensive and may require invasive techniques (e.g., watching or videotaping people while they are in the museum, and subsequently recording this information by hand).

      1. Both of your comments are completely valid and to the point. I want to take it a step further, and focus on a bigger problem with Anderson’s recommendations.

        In his famous book The Goal, Eliyahu M. Goldratt shows over and over again that you cannot fix a broken process unless you know what your ultimate goal is. Museums are complex organizations; they can be viewed from a financial standpoint (after all, they need money to continue to exist), but unlike most businesses, their primary goal is not to make money.

        Anderson builds a strong case that the success of museums should not be measured by their revenue or their number of visitors, but he fails to give a clear-cut alternative. I believe the definition of “success” can differ from museum to museum, but there can be guidelines on how to translate a museum’s goal into measurable metrics. Perhaps this is one major change we need in Anderson’s metrics: to move from a hardcoded list of metrics for general use toward a guideline for picking from those metrics depending on your mission, and for translating your mission into a concrete, measurable goal.

        Anderson’s metrics also fail to represent outside factors that affect the success of a museum. The impact a museum has on people’s culture depends not only on how well it is run or how valuable its collection is (in cultural as well as nominal terms), but also on its domain. The Museum of Jewish Heritage and the Sports Museum of America have different audiences, and their cultural impact is affected by that, as well as by their physical proximity to the American Museum of Natural History, the Museum of Modern Art, and the Metropolitan Museum of Art (all five are located in NYC). Success is not just an internal matter. Anderson’s metrics mostly favor internal measures of success, and may not give a fair understanding of the big picture.

  2. Eurry Kim

    The 11 metrics suggested by Anderson are comprehensive yet applicable to a wide array of museums. And he rightly calls out “quality of experience” as the most difficult to quantify. However, in these days of hashtags on Twitter, pages on Facebook, and up-votes on Reddit, museum experiences can be broadcast to the “fill in the blank”-sphere and later analyzed. For instance, I would be more interested to know how many previous visitors tweeted about their experience of perusing Julia Child’s kitchen in the American History Museum than how many people visited the Museum on a particular day. The added effort (albeit not much) required to tweet about an experience, or to “like” a page on Facebook, provides a useful gauge of the museum experience for a technologically aware audience. If people can post pictures of their breakfast on Facebook or check in to a hipster Brooklyn pizza joint via Foursquare, why not blast a twitpic of The Spirit of St. Louis from the National Air and Space Museum? This type of social network analysis would provide a modern measure of a museum patron’s experience.
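
    A minimal sketch of how such a count might work, assuming the tweets have already been pulled down into a list of strings (the sample tweets and keywords are invented):

    ```python
    # Count sampled tweets that mention a tracked exhibit (all data invented).
    exhibit_keywords = {"julia child", "spirit of st. louis"}

    tweets = [
        "Just saw Julia Child's kitchen at the American History Museum!",
        "Lunch near the Mall",
        "The Spirit of St. Louis looks even smaller in person",
    ]

    def mentions_exhibit(tweet, keywords):
        """True if the tweet text contains any tracked exhibit keyword."""
        text = tweet.lower()
        return any(kw in text for kw in keywords)

    engaged = sum(mentions_exhibit(t, exhibit_keywords) for t in tweets)
    print(f"{engaged} of {len(tweets)} sampled tweets mention a tracked exhibit")
    ```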

    As to the “scope and quality of collection,” curators must ask themselves whether the stock of museum pieces is stored in a database and correctly tagged and categorized. Recently, I heard an NPR story about an Indiana museum that found a rare Picasso in its basement (http://www.npr.org/2012/09/10/160132025/for-museum-long-lost-picasso-is-too-costly-to-keep). The piece was apparently mis-tagged as being by an artist, “Gemmaux,” rather than by its technique, gemmail. The discovery was made by way of an inquiry from a New York auction house about gemmail pieces. The virtues of data science could have stepped in to correct the mis-classification! What was the database that prompted the New York auction house to inquire about the gemmail piece in Indiana? The Indiana museum should obtain access to it and cross-verify the tags of its pieces against its own database. And should it want to quantify the scope of its collection in monetary terms, it would then be able to state a value of 40-50 million dollars rather than the 10 million dollars estimated before the discovery of the Picasso.

  3. James McNiece

    When an institution’s stated goals are to raise cultural awareness and appreciation of the arts in its community, it has set itself an impossible task when it attempts to measure its success. Unlike in business, where conventional metrics like profitability and valuation are highly correlated with success, I doubt that any of the metrics proposed by Anderson are indicative of a museum’s success in pursuing its mission. They are probably no better at predicting success than the standard attendance and membership metrics he derides in his article. Moreover, these metrics are probably no better than the intuition of the museum’s employees and donors regarding how well it is doing.

    Instead, I am going to propose something entirely different. Rather than using data science to crudely measure its achievement, I think museums should run a Kaggle competition to see who can make the best art using their data. Artists could use admissions figures, video footage from security cameras, databases of artwork characteristics, et cetera. All of the resulting artwork would then be placed in a decision tree where a randomly selected group of museum patrons would choose the winning piece of artwork at each node by majority vote until only one piece of artwork remained. This process would be repeated 1,000 times and the mode of the random forest would be chosen as the winning piece of art to be displayed to great fanfare in the museum’s great hall.

    This, I believe, is as likely to promote a museum’s mission as any attempt at developing metrics of success.

    1. I love this idea! I think it’s a clever way for a museum to get its metrics out there, and in doing so improve them at the same time (at least, if all went well, interest in this would lead to something like an increase in attendance). A more simplified version would probably be more appropriate (I doubt repeating the process 1,000 times is necessary), but this would be a really cool way to generate interest in the museum and in the measurements made available for the art creation.

      However, I disagree that there is no place for straight data analysis to help museums meet their goals. Anderson mentions many things that would apply to the goal of raising art appreciation. Something as simple as tracking the school children who attend museum programs IS a measurable component of the mission of most museums: spreading art awareness in the community they serve. Many of the metrics could be updated or consolidated into ones that really account for much of the same things, and more avenues could be explored (like social media, as Eurry said), but I think he does a very decent job covering many of the bases he views as necessary for working towards success. I think it’s naive to say that these measured metrics would do no better than intuition. Without collecting information at the simplest level, museum administrators wouldn’t have any experience with what is working, which is required to shape their intuition.

    2. I also love this idea, though rather than only picking the mode, what if the museum set up a temporary exhibition or web exhibition of all the art that had won? It could be a forest from forests (with the pieces that won most often displayed in the places where they are most likely to be seen, or every piece displayed once for each tree it placed in). I think it has the potential to be an interesting exhibit.

      Also, it would prove that the museum was, in fact, engaging people if it generated a large amount of museum-metric-themed art at no cost to the museum.

      Aside from trying to get people to create art out of admissions figures, I think another interesting metric could come from collecting data on groups that regularly visit the museum (e.g., schools, after-school programs, community outreach programs) and looking at how their numbers change over time. (Repeat business can be a good proxy for measuring the quality of the user experience, and there is probably a fairly painless way to integrate this with the ticketing process if a museum has group admission rates or a way in which groups make reservations.)

  4. As the blog post states, this paper was written 8 years ago, in 2004. The author is now the Director of the Dallas Museum of Art, and I wonder how much of this he has put into practice. On the Dallas Museum website I found their DMA Dashboard (http://dmadashboard.org/), where they make many interesting metrics public. They do have several engagement metrics (number of Flickr images, Twitter followers, Facebook fans), as well as some education metrics (miles on the Go van Gogh van, etc.), but there isn’t much in the way of metrics for visitor experience/impact. Anderson states in his comment that this is the more difficult task, one that he is actively working on. His suggestions for “Quality of Experience” seem more than adequate, so I’m curious how many of these they are already collecting (and just haven’t posted on their dashboard).

  5. From a philosophical perspective, I agree with many of the postings. That being said, there were some obvious flaws in the industry’s practice (e.g., valuing patrons by headcount as opposed to monetizing them by their ticket-value contribution). Many of these poor metrics remained in place for non-scientific reasons. Anderson’s paper is significant in that he proposed some, perhaps too many, variables that could be considered in 2004. Now any modern institution has learned to embrace technology as a cost-effective data source. Many museums offer interactive apps and web-based services to engage the community. Studies have been growing within the sociology and research communities around the idea of collecting qualitative and quantitative data via mobile devices (see link).

    http://australianmuseum.net.au/blogpost/Audience-Research-Blog/iPads-for-evaluation-and-audience-research

    The ability to track the physical and cyber activity of your audience and passively or actively engage them can bring the data science of museums to a level of sophistication that most Fortune 500 companies would die for. The massive amounts of visitor data that can be collected, in conjunction with the variables revolving around an institution’s finance/operations (as outlined by Anderson), mean museums have an advantage they have never seen before.
    With a mobile device in-hand, Museums can track…
    1) The length (start & end time-stamp) of a visit
    2) The movement / length of time a visitor engages with a specific piece of a collection
    3) Click-Through-Rate of various Museum-offered apps
    4) Surveys offered by the institution (some offer free Wi-Fi in exchange for visitors’ opinions)
    With the right tools and apps, collecting data is no longer a losing cost-benefit proposition for institutions. The big question that remains is how to ask questions and collect data that prove an organization is meeting its obligations as outlined in its own mission statement.
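
    A minimal sketch of items (1) and (2), assuming the museum’s app logs timestamped (visitor, gallery) events; the log records are invented:

    ```python
    from collections import defaultdict
    from datetime import datetime

    # Invented app log: (visitor_id, gallery, timestamp) events.
    log = [
        ("v1", "impressionism", datetime(2012, 10, 3, 10, 0)),
        ("v1", "sculpture",     datetime(2012, 10, 3, 10, 40)),
        ("v1", "exit",          datetime(2012, 10, 3, 11, 5)),
    ]

    by_visitor = defaultdict(list)
    for visitor, gallery, ts in log:
        by_visitor[visitor].append((ts, gallery))

    for visitor, events in by_visitor.items():
        events.sort()
        # (2) dwell time: the gap between consecutive events, attributed
        # to the gallery the visitor was in.
        for (t0, gallery), (t1, _) in zip(events, events[1:]):
            print(f"{visitor} spent {t1 - t0} in {gallery}")
        # (1) visit length: first event to last event.
        print(f"{visitor} total visit: {events[-1][0] - events[0][0]}")
    ```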

  6. David Wutchiett

    I agree with the several other writers who commented that the scope of Anderson’s variables seems very expansive. Each of his 11 areas seems relevant, and I can see how together they offer a comprehensive set of measurements of a museum’s success. However, I found that many of the variables don’t seem particularly telling in terms of public engagement. Anderson’s first metric is Quality of Experience, a metric of visitor engagement, but his following points seem much more focused on prestige and reputation – metrics that are important but only indirectly influence a museum’s success in fostering learning and thought among its visitors.

    I would argue that if engagement and learning are the priority, measurement of visitor behavior needs to be improved, and that behavior needs to be accounted for using experimental design. Anderson touches on time spent at the museum as a sign of engagement, but the specificity with which time spent is examined should be greatly expanded. Additionally, the group/interpersonal dynamics of a museum visit should be evaluated more.

    As an example of why these two kinds of metrics matter, picture a child impatiently bouncing around the room while a parent slowly tours an exhibit. They leave at the same time, but the museum likely hasn’t achieved its goal of engaging both visitors and bringing the two together in shared thought and curiosity. The difference between joint engagement with the museum’s content and the previous example could be the difference between positive memories of the museum, true learning, and future re-engagement with the museum.

    As such, group fracturing versus group consistency could be a very important metric. If the museum’s displays are interesting then you would expect visitors to stay with their group in order to discuss their thoughts and insights. Ideally, strangers may even begin talking about what they see. A very interesting or effective display would result in people congregating and staying for greater lengths of time.

    Now, how to accomplish measurement of these metrics: you would measure the number of people viewing each particular display per unit of time. High volatility would suggest fracturing groups and less interested viewers, whereas consistently high numbers of viewers would suggest real engagement – either with the content or with each other. You could then control for the time of day, the content in the display, the strategy by which viewers are engaged, and the location in which the display is placed. These comparisons could be used to test the public’s interest in the content, whether different displays for similar content affect viewing and engagement, and how location in the museum drives behavior. I would expect that these behavioral signs would influence and predict sales in cafés and gift shops, memberships, and return visits.
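
    A minimal sketch of that volatility measure, assuming automated counts of viewers per display per ten-minute interval (the counts are invented):

    ```python
    import statistics

    # Invented counts of viewers per 10-minute interval for two displays.
    viewers_per_interval = {
        "rodin_room":   [12, 11, 13, 12, 12, 11],  # steady: sustained engagement
        "hall_display": [2, 14, 1, 12, 0, 15],     # volatile: fracturing groups
    }

    for display, counts in viewers_per_interval.items():
        mean = statistics.mean(counts)
        # Coefficient of variation: spread relative to the mean, so busy
        # and quiet displays can be compared on the same scale.
        cv = statistics.stdev(counts) / mean if mean else float("inf")
        print(f"{display}: mean={mean:.1f} viewers, volatility (CV)={cv:.2f}")
    ```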

    This type of measurement is facilitated by recent technological advances, as measurement of these behavioral variables would need to be automated and would be extremely data-intensive. Each display would be monitored continuously for the number of individuals viewing it. You would end up with a huge dataset, which you could then evaluate using algorithms like random forests to determine which factors influence the total number of viewers and the volatility of viewers across time. While I imagine that the content would be the biggest driver of viewers over time, display design would be hugely important for all the non-high-profile content. As such, these metrics could be used to assess the museum’s underlying quality irrespective of the shocks caused by individual events.

    Alongside metrics of memberships, exhibit quality, staff, and other somewhat qualitative measures, it seems very possible that advances in data science could provide analyses allowing a much greater understanding of a museum’s visitor engagement.

  7. a. guess

    I have a few thoughts on ways to update museums’ approach to measuring success given recent trends in data science and other methodologies. Besides the usual difficulties with measuring seemingly intangible outcomes, a major problem with this kind of exercise is finding a way to gain causal leverage — i.e., can we show that the institution in question caused a given surge in curiosity, self-reflection, community-wide education, etc.? I’m doubtful that simply applying a series of data-mining algorithms after the fact can provide answers to these kinds of questions, although they must play a part. A successful approach also has to include a ground-up data strategy that matches the institution’s mission. For example:

    1. Building online surveys into the usual member outreach. For example, a museum could encourage prospective visitors to pre-register online with their email addresses and expected date of attendance. They could then be sent an online survey with questions measuring a pre-visit knowledge and “experience” baseline, and those who actually visited would get a follow-up afterwards. This is not perfect, since people who end up going may differ systematically from those who don’t, but it would give an idea of the impact of an actual visit on outcome measures of interest, from knowledge to enthusiasm to curiosity. (A minimal analysis sketch follows this list.)

    2. To improve on the last idea, museums could band together to support field experiments in schools. Classes could be randomly assigned to treatments (where “treatment” is a museum visit and “control” is not visiting), and a similar pre/post battery of surveys could measure the effect of a visit. Even more interesting, potentially, would be the downstream network effects on students who heard about the visits from their friends or acquaintances.

    3. To measure the scholarly impact of museums, a relatively straightforward method would be to take the names of all Ph.D.s on staff, look up their h-index on Google Scholar, and average them together. This measures impact of scholarly work via citations, among other things.

    4. Finally, I think an important part of what we’re interested in measuring is more spontaneous and fluid than what might be captured in a survey or controlled environment. This is one area where data science can provide crucial insight, because extracting what people are saying on social networks, microblogs, comment threads, etc. could potentially provide clues about what is capturing people’s imaginations and sparking conversations in the community. Content and sentiment analysis would be extremely useful and could be used to further refine existing measures or create new ones.
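
    Here is the minimal sketch of the pre/post comparison in idea 1, running a paired t-test on invented survey scores from visitors who completed both waves:

    ```python
    from scipy import stats

    # Invented 1-5 survey scores from eight visitors, before and after a visit.
    pre_scores  = [3.1, 2.8, 3.5, 2.2, 3.0, 2.7, 3.3, 2.9]
    post_scores = [3.9, 3.1, 4.2, 2.9, 3.6, 3.4, 3.8, 3.0]

    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
    gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
    print(f"mean gain = {gain:.2f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
    # Caveat from idea 1: visitors self-select, so this estimates the change
    # among attendees, not the causal effect of visiting (idea 2's randomized
    # school experiment is what buys causal leverage).
    ```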

  8. Yegor Tkachenko (it2206) and Siwei Guo (sg3017)

    We view the main failure of Anderson’s article to be the absence of any discussion of a museum’s purpose. Success is defined as the accomplishment of a purpose, and before answering how to measure a museum’s success we need to answer what a museum’s fundamental purpose is.

    According to the Museums Association, “Museums enable people to explore collections for inspiration, learning and enjoyment. They are institutions that collect, safeguard and make accessible artifacts and specimens, which they hold in trust for society.” http://www.museumsassociation.org/about/frequently-asked-questions

    Focusing on this definition can help us narrow down the number of variables to consider. We are not saying that Anderson’s ideas are bad; rather, some of the variables he suggested are not crucial.

    In our view, if we start from the definition of a museum’s purpose above, the appropriate measures for assessing the success of a museum are those which capture:
    a. How much and how many people enjoy/get inspired by the content of the museum.
    b. How much and how many people learn from the content.
    c. The internal value of the museum’s whole collection, as determined by experts. (This captures how well the museum fulfills its collecting function.)
    d. The internal value of the displayed portion of the collection, as determined by experts.
    e. How well the museum preserves its collection, as determined by experts.

    Thus, if we view a museum purely as an institution serving public cultural needs, profit is NOT a valid metric of its success, because profit is not a museum’s fundamental purpose, as captured in the definition. In the end, it is not the museum’s fault if people become less interested in art and history, which entails a decrease in revenue for the museum. In this case the museum’s role is not to bend the public’s tastes, but rather to keep its doors open for those who have preserved an interest in what it has to offer, and to inform the public of what is to be discovered behind those doors.
    It is only when we view the museum as a traditional business that profit becomes important, since profit is the main metric of success for a business.

    Our point is that we need to decide first what dimension of the museum matters to us: the museum as a cultural institution serving the needs of society, or the museum as an enterprise. I have a feeling that convention tends to stick with the former. Would we really close down the Louvre if it were not profitable? I believe there would be a reason to support its existence even if it were not self-sustaining. This is the situation in Ukraine, for example, where museums are mostly not self-sustaining and are funded either by the government or by donations. Are they a success? As businesses they are a complete failure, but as public institutions they are quite successful, holding collections of great cultural and historical value and attracting a decent amount of attention from the public.

    And if we find that the cultural role of museums is more important than their business side, then we should be firm in stating that business metrics cannot be a decisive factor in judging museums’ success.

  9. Anderson established thorough and quantifiable metrics to measure the success of a museum. Given that the article was published in 2004, one item that can be added to Anderson’s guidelines is the use of social media data. Since social media data are “big” and updated in real time, I believe museums can really exploit them to their advantage. With quantitative data gathered, one possible effort would be to identify the types of audience they get. For example, they could attempt to classify their audience into subgroups based on multiple features such as age, average length of time spent, membership, relative rankings of museums, etc. Once they identify the subgroups, museums can better serve their audience and make the experience more relevant and meaningful. Stretching even further, I believe museums, using quantitative data and machine learning, could personalize each visitor’s experience via preference analysis, recommendation systems, and all the other cool stuff of data science.
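
    A minimal sketch of that segmentation idea, clustering invented visitor records with k-means (the features and cluster count are arbitrary):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Invented visitor records: age, minutes spent in the museum, member (0/1).
    visitors = np.array([
        [24,  45, 0], [67, 150, 1], [31,  60, 0],
        [70, 140, 1], [19,  30, 0], [55, 120, 1],
    ])

    X = StandardScaler().fit_transform(visitors)  # put features on one scale
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for segment in sorted(set(labels)):
        members = np.where(labels == segment)[0]
        print(f"segment {segment}: visitor rows {members.tolist()}")
    ```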

  10. It would be worthwhile for museums to understand not just the magnitude of their impact, but also its very nature. Rather than simply counting the number of visitors, museums should analyze what types of visitors are coming. Are the visitors strictly art enthusiasts and academics, or does the museum appeal to a broader audience? They could find out by asking visitors to fill out a survey.

    Museums which find that most of their visitors are art enthusiasts could measure their success by analyzing art journals and blogs to see how often the museum or pieces from its collection are mentioned. They could extend this analysis to try to quantify the nature of these mentions. For instance, sentiment analysis could determine whether the author was reacting positively or negatively to the art. Furthermore, do the authors simply discuss the artwork itself, or do they have anything to say about their overall impressions of the museum? If they discuss the artwork itself, do they focus on a specific piece?
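
    A minimal sketch of the sentiment step, scoring invented mentions against a tiny hand-made lexicon (real work would use a fuller lexicon or a trained classifier):

    ```python
    # Tiny invented sentiment lexicon and sample mentions.
    POSITIVE = {"stunning", "brilliant", "engaging", "luminous"}
    NEGATIVE = {"dull", "cramped", "disappointing", "tired"}

    mentions = [
        "A stunning, luminous show, though the galleries felt cramped.",
        "A dull rehang of tired favorites.",
    ]

    for text in mentions:
        words = set(text.lower().replace(",", " ").replace(".", " ").split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        print(f"{label:8} ({score:+d}): {text}")
    ```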

    On the other hand, museums which appeal to a broader audience should also incorporate data analysis from more “mainstream” sources, namely Facebook, Twitter, and other social networks. These museums should similarly consider not only how often they are mentioned, but whether the mentions are positive or negative and what specific aspect of the museum inspired the post/tweet. They could also try to quantify the influence of those who discuss the museum, as measured by Klout score or other such metrics.

  11. Shaodian

    Data science might help build better models to understand the mission of museums, especially for rooting out problems and making operations and maintenance easier. However, I doubt whether such automatic approaches really make sense for a museum, which is somewhere people enjoy art and gain something emotional. It will never be possible for automatic approaches to truly measure a museum’s success and help achieve its mission; this is decided by the innate limitations of modern computers and of people’s rationality. Just as computers can never really perceive happiness and sorrow, data and data science cannot measure factors like visitors’ satisfaction. This is a limit of the modern theory of artificial intelligence, not of data scale, data collection, data quality or anything else – you will never have “perfect” enough data to complete that mission.

  12. Nbeul Kim

    I strongly support Anderson’s point of view and suggest “museum digitization” as a way for museums to take advantage of trends in technology, data science, and evaluation to gather and analyze the sort of data that could help not only measure their success, but also help achieve their missions. According to Anderson, three of the leading indicators of success in art museums today — the number of big shows, visitors, and members — provide at best highly problematic metrics and, at worst, deceptive ones. The reasons museums turn to these indicators are that they resemble the denominators of more familiar markets, are easy to document and report, and may be presented in a positive light. I believe it’s high time that professional museum leaders made a persuasive case for new metrics of success that more accurately measure their museum’s long-term health, through the digitization of museums.

    According to Rachel’s writing, there are three key issues with regard to data science at museums: 1) there is a lack of historical data; 2) museum institutions lack resources (time, expertise, staff, money) to gather and analyze new data; and 3) many of the goals that cultural organizations have set for themselves do not lend themselves to quantitative analysis, partly because it’s difficult to measure intangible impacts such as artistic value and aesthetic experience. I propose that museum digitization can have a significant influence on solving these problems, because it allows at least some of the new metrics Anderson listed to be directly connected with the core values and mission of art museums, to be reliable indicators of long-term organizational health and, of course, to be easily verified and reported.

    Digitization converts materials from formats that can be read by people (analog) into formats that can be read by machines (digital). The main advantages of digitizing are enhanced access and improved preservation, which can contribute both to measuring indicators of success and to achieving museums’ missions of education and conservation. By digitizing their collections, cultural heritage institutions can make accessible information that was previously available only to a select group of researchers. Digital projects allow users to search collections rapidly and comprehensively from anywhere at any time. What’s more, digitization can help preserve precious materials: making high-quality digital images available electronically can reduce wear and tear on fragile items. This does not mean, however, that digital copies should be seen as a replacement for the original piece. Even after digitization, original documents and artifacts must still be cared for.

    One example of museum digitization is the Metropolitan Museum of Art’s digitization of its libraries’ collections. Over the past two years the Thomas J. Watson Library at The Metropolitan Museum of Art has established a digitization program with the dual goals of preserving original printed materials and expanding access to their content. The Library has already digitized more than three thousand items, both independently and in collaboration with Metropolitan Museum of Art curatorial departments as well as other art museum libraries and galleries. In keeping with the mission of the Watson Library, this collection will support the scholarly endeavors of the Museum staff and will be accessible to an international community of researchers.

  13. Bianca Rahill-Marier

    When considering the question of how to measure a person’s experience, many think, quite naturally, of social networks. As some of my classmates commented above, social networks might provide museums a way of cheaply measuring user experience and satisfaction. In general, I think it’s a sensible approach, and it’s definitely cost-effective to count the number of likes or comments on a Facebook page. However, I’m not sure it’s the best approach, or the approach most in line with a museum’s purpose. First and foremost, a Facebook page like, or similar social network blast, does not, at least to me, seem to match the goals of engagement and learning that many museums have in their mission statements. Some posts may indicate learning, others may not, and I don’t feel that is easily distinguishable. Certainly a ‘like’ on a Facebook page is no better a metric of success than counting the number of museum-goers. Beyond the pure meaning of the metric, I feel that social networks, or at least the most common forms (i.e. Facebook, Twitter, etc.), do not match the missions of these museums and risk alienating certain populations, such as the elderly, young children, or even just tourists without access to their regular smartphones.

    That being said, I do think that social networks could play a role in measuring the success of a museum and in offering an educational tool to increase that success. However, instead of Twitter or Facebook, both networks that serve to connect the user to his existing network outside the museum, what about social networks that exist solely within the museum? While it would be great to measure how inspired a person is when they leave the museum, there would be no easy way of measuring whether they changed their day-to-day behavior because of it. Social networks inside a museum could take the form of an interactive exhibit, or something along the lines of a tablet device rented to an individual museum attendee, as audio guides are in many museums today. In addition to the audio component, users could participate in quizzes against other museum-goers or simply answer questions about the exhibit as they go along (after all, who wants to feel like they’re taking a test at a museum?). The number and type of participants (basic information such as age, gender, residence, and tourist vs. local could be collected at the beginning) could help a museum understand which exhibits most encourage certain types of users to want to know more, or in simpler words, to engage. For example, if a user walks through a particular exhibit (he could indicate which exhibit he is in on the tablet) but doesn’t seek to read any additional information on it or participate in the interactive activities, the museum could posit that this type of user is less interested in that particular exhibit than in another where he spends an hour and engages in more activity on the tablet. The social aspect of the tablet could be showing users some of the resulting data; for example, which exhibits were most popular among users similar to them. Some museums, like the Met in New York, are huge, and for those without a membership, getting the most out of a single visit is important. The measure of success here would be whether, after some time of implementing these data-driven products, museum attendees are engaging more with the exhibits and tablet learning tools.

    Of course, the idea of tablets may be beyond the financial reach of many museums. A simpler version could be to provide visitors a small sheet of paper (perhaps attached to their ticket) where they could mark off which exhibits they saw, about how much time they spent, and how ‘engaged’ or ‘interested’ they felt. Visitors could then hand this in on their way out. Ideally the survey would be designed so that the results could be easily digitized (like a standardized test – though ideally it wouldn’t look like one).

  14. Anderson’s article is particularly good because he explicitly states his aims. He argues that museums focus too much on special exhibitions and increasing the value of their collections. Anderson would rather see museums measure their success through the experience of their visitors and their education efforts.

    Although he is explicit about his goals, Anderson frames his argument as an attempt to “correctly” measure the value of a museum to its community. Through his critique of attendance, memberships and exhibitions as a measurement of a museum’s success, Anderson hopes to divert the emphasis that directors place on these goals. Instead he sees the value of a museum in the experience of its visitors.

    I agree with Anderson’s argument. It is a good example of what is truly at stake in data science. A data scientist is as much defining the meaning of success for the scope of her study as she is measuring whether the specific entity under examination lives up to that metric of success.

    In addition to the metrics that Anderson proposes, which are still relevant even eight years later, I would suggest adding more qualitative measures. Almost every metric on his list is a numerical measure. Given modern techniques of natural language processing, a museum should also be collecting the words of its visitors for analysis. One possible example: have students write an essay at the beginning and the end of their education with the museum. The first essay could be a response to the prompt “What do you hope to learn here?”; the second, a response to the prompt “What have you learned here?”. Not only would the museum have a direct answer to these questions from its students, but it could also perform an analysis comparing the essays to demonstrate the museum’s contribution to that individual’s education.
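
    A minimal sketch of one such comparison, using TF-IDF cosine similarity as a crude proxy for how much a student’s vocabulary changed between essays (the essays are invented one-liners):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    pre_essay = "I hope to learn about paintings and why people like them."
    post_essay = ("I learned how Impressionist painters used color and light, "
                  "and how curators choose what to hang together.")

    tfidf = TfidfVectorizer(stop_words="english").fit_transform([pre_essay, post_essay])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    # Low similarity suggests new vocabulary (and perhaps new concepts)
    # entered the student's writing over the course of the program.
    print(f"pre/post vocabulary similarity: {similarity:.2f}")
    ```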

  15. Locke and Demosthenes, qua advocatii diaboli

    I have an ethical objection to this entire project, and if Cathy’s lecture has taught us anything, it was to stand up and be counted (and polemical). So:

    There is no point at present in trying to measure museums.

    As other commenters have pointed out, many of these measures are vague, confusing and overlapping. And all are useless without a stated goal for the museum. But I contend that for a data scientist to participate in this kind of project, while not in violation of the Hippocratic Oath, would be unethical.

    The most telling phrase of Maxwell’s entire piece is in the middle of page 12:

    “In a global sense, the number of mentions of the museum on Google is a blunt but statistical measurement of that museum’s reputation.”

    As all good data scientists (pretend to) know, a statistic is some quantity calculated from a sample. Statistics are often used to estimate properties of whole populations, in which case they are known as estimators. What is the number of Google hits estimating? The real problem here is that the concepts (most obviously success) have not been constructed before being measured.

    Other commenters seem to think that this is about financial success. Were it only so. The suggested stats are not about measuring success or whatever; they’re implicitly defining it. Not only that, but “An objective assessment of these eleven features” is impossible, because they’re all entirely subjective. Worse than that, they’re value-laden, but the values are not explicitly stated. I guess it’s because I know nothing about art, but “Sculpture gardens, delightful as they can be, should be measured separately” seems to be missing a “because”.

    It is technically and legally possible to track the movements of all visitors to a museum (without their knowledge), and to record how much time different types of people spend looking at a Rodin vs. drinking lattes. And you could target these people with special offers, or reorganise the layout of the museum so that the absinthe bar is next to the Van Goghs. And these visitors would stay longer, or spend more, or report greater elation on the surveys you send them. But so what? What’s better, what’s worse, which tradeoff would be worth it?

    But even if they do know what their goal is, and they have the right metrics for it, it still doesn’t matter. All museums have to do is to convince their funders that they are performing well (or better than the competition). If the venture philanthropists know what they want to measure, measure that. If they don’t, make something up. If “dedication to the exhibition’s stated purpose is incrementally sacrificed for the pursuit of a large audience”, then stop measuring your audience. Maxwell admits that museums already lie about their attendance stats, so just juke the stats. If you measure without meaning, analyse without understanding, people will play to the measure.

    And here’s the worst point: each museum can choose slightly different things to “measure”, so that they all look great, and are entirely incommensurable. (“Hey, guys! We had the most of museum-published catalogues over 75 pages in length over the last five years!”, “So what, we’ve got the most catalogues published in partnership with universities for the past ten!”)

    There is one sentence in Anderson’s paper with which I agree: “Measuring museums as prescribed above will produce neither a comprehensive nor a scientific result.” (p. 20) Museums will learn exactly what they expect to learn from an ad-hoc collection of semi-measurable constructs like these. And if they don’t find what they want they’ll just ignore them, and so will everyone else.

    What matters is not how many or what percentage or the size of whatever. What matters is convincing people that these things matter. What matters more is convincing yourself. And convincing the people who matter. Museums would be much better served spending their scant money on lobbyists and PR than trying to measure things which can’t be measured.

    There is immense value for museums in thinking about what goals they want to reach. But for the same reason that they shouldn’t just measure footfall, they shouldn’t take these suggestions as anything more than a starting point. Museums will need to use professional judgement and experience to put together whatever numbers they rely on, and they can’t stop using their judgement once the numbers are collected. We data scientists have a professional (and intellectual) interest in things being measured. But as long as that would do more harm than good, let’s keep it vague.

  16. I think this topic can be viewed from the perspective of our last discussion question, the relationship between models and data. When we try to come up with appropriate metrics, it’s quite like the modeling process. Besides trying to find an appropriate model, we can also try to see what the data really tell us. Here are two ways to do that.

    The first way is what Eurry described. We can gather the comments people make after visiting the museum and analyze how they express themselves on Twitter and Facebook. We can make the most of social network analysis to build a modern measure of the museum experience.

    The second way is to track user behavior, as the Google Art Project lets us do. When we log in to the Google Art Project, we search for a museum and then enter it to begin our tour, so we can compare the search numbers for each museum. That can be regarded as one factor in judging whether a museum is a success. For example, if the Louvre is accessible on the Google Art Project, I will definitely search for it and see how the Mona Lisa smiles, in extraordinary detail at extremely high resolution.

    Then we can also try to determine whether, beyond the influence of a masterpiece, the museum as a whole can be regarded as a success. For example, we can track users’ actions after they enjoy the masterpiece: do they finish the whole journey in that museum, or do they just leave to search for another museum? That can reflect whether the museum attracts visitors in many ways, not just through one or two masterpieces.

    In addition, the Google Art Project has added a Google+ icon for each work of art. With that, we can easily analyze the data as we would any social network data.

    So whether you are a student, an aspiring artist or a casual museum-goer, the Google Art Project gives you a new, fun and unusual way to interact with art. It also provides data scientists a new way to approach the analysis of a museum.

  17. Data science, at first sight, appears antagonistic to the world of museums. If museums are about “artistic value, aesthetic experience, (…) fostering social cohesion or contributing to a community’s quality of life” (Ellis et al. 2012), it seems preposterous to judge them using the technical toolkit of data science. However, not employing data science does not mean that museums are unencumbered by data analysis; it just means that their success is measured by a less sophisticated metric: attendance. This measure reflects a decision mostly made before entering the museum, is insensitive to visitors’ experiences, and hardly relates to artistic quality – many reasons to conclude that “the ‘how many bodies crossed the entrance’ question should be of no more interest to a museum director than it is to the owner of a shopping mall” (Anderson 2012).

    So if bringing data analysis to museums is inevitable, the data analysis should measure museums’ success as precisely as possible, given the practical constraints spelled out by Ellis et al. (2012), particularly the lack of historic data and of resources. Earlier comments have already stressed that, under these practical constraints, Anderson’s (2004) extensive set of metrics, which are often difficult and/or expensive to measure, is outright impossible to implement. In addition to these practical problems, Anderson’s (2004) proposal suffers from conceptual weaknesses. In particular, in his attempt to measure the degree to which a museum achieves its mission, he does not differentiate succinctly between means and ends. For example, the “intangible sense of elation — a feeling that a weight was lifted off [visitors’] shoulders” (measure A.1), an end in itself, should not be lumped together with means to reach this end such as the “percentage of contributed income in operating revenues” (measure F.4).

    In order to remedy this conceptual problem and to construct a data strategy that can realistically be implemented under the constraints facing museums, we propose a strategy for measuring a museum’s success that starts from a simple dualism presented in the introduction to Anderson’s (2004) paper: museums should care about (1) the quality of their collection and (2) their impact on the broader public. One might, accordingly, view a museum both as a guardian of cultural heritage and as a service provider. In a stylized argument, one might identify this dual role with a museum catering to two audiences: the aficionados and the uninitiated.

    How, then, can a museum’s collection quality and its impact on the public be measured succinctly yet cheaply?

    First, measuring a museum’s collection quality is difficult but essential. If the application of data science focused unduly on the impact on the public, the metric would incentivize museums to focus all resources on the (usually small) part of the collection that is on display, in the extreme case selling pieces that appear less central. However, with hindsight, these unusual pieces often become particularly important. They allow a museum to react to current trends at times when building a collection in a particular area has already become expensive or even impossible. A great example of this is given by Campbell (2012), who highlights that the Met could meet Americans’ newfound interest in Islamic cultures after 9/11 by bringing Islamic artifacts from the storage rooms together in a new gallery, which has already drawn thousands of visitors. This feat would have been impossible if a success metric unduly focused on short-term success with the audience had incentivized Campbell and his predecessors to sell Islamic artifacts at times when this collection seemed esoteric. How, then, can we incentivize museum managers to build collections of lasting quality through a measure that is easy and cheap to implement? We propose an online survey of 5 to 20 local experts – including Columbia University art historians for the Met and the bosses of sports organizations for the Sports Museum of America. Such a survey is cheap to implement, easy to anonymize, and will most probably draw competent evaluators by giving them the confidence that their voices will be heard, as the survey does not get buried in a verbose annual report but immediately informs the success metric according to which the museum leadership is judged (and possibly paid).

    Second, how can we measure a museum’s impact on its audience? With respect to long-term, deep impact such as raising awareness and changing social dynamics in the communities around a museum, we take Cathy O’Neil’s recommendation from her lecture on October 3 to heart and admit: we don’t know. Given museums’ resource constraints and the difficulties of measuring big-scale social impact, we favor humility over pie-in-the-sky proposals. So instead of focusing on the big picture, we want to develop proposals to measure the – also important – impact on individual visitors. In this respect, online communication media promise deep insights, often for free. For example, using geocoding, we can measure how often visitors tweet or post to Facebook about an exhibition while visiting it, and how often they share photos from a museum (where taking photos is allowed) on Flickr. However, such measures may fall short or even become misleading if the very goal of a museum is to take visitors out of the constant-communication loop, as Campbell (2012) argues. Even then, however, technology may help. Many museums today hand out chip/RFID cards, allowing visitors to access personalized information or follow a certain character or idea through the exhibition. From the logs of these cards, it is easy to construct a detailed picture of a visitor’s engagement with an exhibition.
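
    A minimal sketch of the geocoding idea: count tweets whose coordinates fall inside the museum’s bounding box (the box roughly brackets the Met; the tweets are invented):

    ```python
    # lat_min, lat_max, lon_min, lon_max -- an invented box around the Met.
    MUSEUM_BOX = (40.778, 40.781, -73.965, -73.961)

    tweets = [
        {"text": "Lost in the new Islamic galleries", "lat": 40.7794, "lon": -73.9632},
        {"text": "Coffee downtown",                   "lat": 40.7127, "lon": -74.0059},
    ]

    def inside(box, lat, lon):
        """True if (lat, lon) lies within the bounding box."""
        lat_min, lat_max, lon_min, lon_max = box
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

    on_site = [t for t in tweets if inside(MUSEUM_BOX, t["lat"], t["lon"])]
    print(f"{len(on_site)} of {len(tweets)} sampled tweets were sent from the museum")
    ```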

    These proposals, while certainly not exhaustive, may serve to sketch the amazing possibilities that open up when data science is brought to museums. Conceptually, one may criticize that our proposal rests on an oversimplified opposition between artistic quality and broad-audience impact. Indeed, as one may conclude from Campbell (2012), what makes a museum great is that it tears down the wall between aficionados and the uninitiated: pushing those trained in art history to go beyond the academic understanding of a bacchanal by Titian and experience the orgy depicted, at the same time educating those who come in seeing only the orgy about the artistic category and historical background of the painting. However, the wall between these two audiences can be torn down only if a museum both possesses a great collection and reaches a broad audience. As we have tried to argue, reaching these two goals can be greatly facilitated through the intelligent use of data science.

    Anderson (2004): http://blogs.artinfo.com/modernartnotes/files/2012/07/AndersonMetrics.pdf
    Anderson (2012): http://compleatleader.org/2012/06/18/measuring-success/#comment-152
    Campbell (2012): http://www.ted.com/talks/thomas_p_campbell_weaving_narratives_in_museum_galleries.html
    Ellis et al. (2012): http://columbiadatascience.com/2012/10/03/the-data-science-of-art/

  18. Chaoran Liu & Zaiming Yao (Team)

    This semester I am taking the course Marketing Arts, Culture and Entertainment, in which we discuss the museum industry at length and conducted a case analysis of the Museum of Fine Arts in Boston. It is interesting to see metrics of success in art museums, and I believe such metrics will help the industry better understand itself.

    However, when the problem becomes how we can help a museum achieve its mission, we should consider the consumers at the same time. Anderson’s metrics start from the point of view of an outsider, not a visitor. The questions are too general, answering only the “what” and not the “why,” so they fail to help us understand consumer behavior. I remember that in the marketing arts course the professor emphasized the importance of customer segmentation in approaching consumers. Anderson’s metrics seem more useful for an industry committee that needs to rank the museums in the USA. They are not useful for a specific museum trying to understand its customers, nor can they help analyze problems within the museum.

    I would suggest the survey include more questions on customer demographics and specific questions about the exhibitions currently on view. What’s more, beyond the survey, we can discreetly follow customers during their visit and see where they go and what motivates them. (Such a research method is often used in the hotel industry in Macau, where hotel managers need to know the customers’ touch points.) With the help of data science, we can then build a model of the consumer behavior cycle and bring out insights with compelling visualization techniques, as the sketch below suggests.
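    A hypothetical sketch of the segmentation step: k-means clustering on a few invented per-visitor features (visit frequency, average dwell time, shop spend). The features and numbers are illustrative only; the resulting clusters would still need to be profiled and named by hand:

    ```python
    # A hypothetical segmentation sketch; all visitor features are invented.
    import numpy as np
    from sklearn.cluster import KMeans

    visitors = np.array([
        # visits/year, avg dwell (min), shop spend ($)
        [12,  95, 40],   # looks like a member/regular
        [ 1, 150, 80],   # looks like a tourist on one long visit
        [ 3,  60, 10],
        [10, 110, 55],
        [ 1, 170, 90],
        [ 2,  45,  5],
    ])

    segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(visitors)
    print(segments)  # one cluster label per visitor, to be profiled afterwards
    ```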

  19. Reading this passage immediately brings me back to the debate over whether data science has a place in evaluating something that “should” be judged on its intrinsic, ethical values. In much of Anderson’s reasoning about what determines the success of a museum, his suggested measures are very difficult to implement. The experience of going to a museum and being able to appreciate the different pieces is subjective to the individual. Anderson’s eleven features may accurately describe what defines a “successful” museum, but they fail to measure it effectively, which is not at all his fault; he himself accepts that measuring the quality of a visitor’s experience is very difficult. With surveys, it is hard to escape the power of suggestion, which creates bias rather than high-quality data for judging the performance of the museum.
    It can even be argued that museums may strive toward different types of “success.” After reading Anderson’s metrics, it seems that the notion of success has been spread too thin, covering too many dimensions. Shrinking those dimensions may help narrow the focus of study, making it easier to determine so-called “success.” However, metrics such as appreciation hold much more value here than the number of visitors, and by narrowing the dimensions, that valuable information is lost.

  20. Michael Discenza

    I think that the points everyone brings up about using social network analysis with data from services like Foursquare and Instagram are quite valid. And certainly the list of metrics that Anderson created is quite exhaustive. I think, though, that Anderson’s list and mentality are honestly more like the U.S. News & World Report’s rankings of universities and hospitals. It’s unclear whether museums would pay any attention to their rankings, or whether those rankings would even be published or circulated; but if museums did start paying attention to their rank or a composite index based on this data, we have to ask whether that would actually be a good thing for them in terms of satisfying their more intrinsic goals, or whether it would lead to the manipulation of metrics for higher rank, as happens at some colleges. I also think that the index fails to account for the various missions of different museums. For instance, the Brooklyn Museum, my favorite art museum in the city and one that gives me great joy, has an altogether different character and atmosphere than the MoMA or the Met.

    Moreover, the rankings would probably create a consolidation of donations, and they would make the process by which benefactors figure out how to “best spend” their money overly market-based. Though I am not in a position to endow any collections or donate much more than a suggested entry fee, I imagine it is important that a donor have some kind of personal connection to the museum, some more intimate reason for donating, to make the donation somehow more legitimate.

    Having articulated my discomfort with this idea of an index, I would like to focus now on some of the ways that I think methods in data science and machine learning might be useful for providing specific insights about user experience to help museum management enhance the enjoyment of their guests.

    First, I think it is important that museum management and boards devise internally, rather than adopt externally, a set of metrics and indicators that they find important for their particular priorities. Once they do this, they should maintain some kind of “dashboard” that is updated at a consistent frequency, as has become standard organizational practice, so that going forward they can have a sense of how the various interventions they make in guest experiences affect the metrics they deem important.

    Second, I think that art museums (or maybe a new tech consulting firm specializing in art museums) should begin to leverage technology that retailers have used for a good amount of time now, including motion tracking with webcams, to better gauge guest interest in certain pieces or kinds of pieces based on the amount of time guests (as inferred positions within a map of the museum) spend near them. This would allow museums to make a number of useful interventions, including feeding information and feedback to curators, and better planning operations and queuing strategies to avoid lines and waits [basically what Disney IE/OR people do: http://www.nytimes.com/2010/12/28/business/media/28disney.html?_r=0].
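    A minimal sketch of the downstream computation, assuming (purely for illustration) that the tracking system emits (guest_id, zone, timestamp) events:

    ```python
    # A sketch of turning inferred guest positions into per-zone dwell times.
    # The (guest_id, zone, timestamp_seconds) event format is an assumption.
    from collections import defaultdict

    events = [  # hypothetical output of the motion-tracking system
        ("g1", "Gallery A", 0), ("g1", "Gallery B", 300), ("g1", "exit", 420),
        ("g2", "Gallery A", 60), ("g2", "exit", 540),
    ]

    dwell = defaultdict(int)
    last = {}  # guest_id -> (zone, timestamp) of the previous event
    for guest, zone, ts in sorted(events, key=lambda e: (e[0], e[2])):
        if guest in last:
            prev_zone, prev_ts = last[guest]
            dwell[prev_zone] += ts - prev_ts  # time spent in the previous zone
        last[guest] = (zone, ts)

    print(dict(dwell))  # seconds per gallery, for curators and operations
    ```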

    Finally, museums should experiment with new services like Art.sy and develop mobile apps (or adapt an app template) that scan QR codes associated with pieces of art, letting visitors read descriptions and facts about the works they are viewing on their smartphones. These services would be presented under the auspices of providing enhanced information about the works, but, just as importantly for museum management, they would create a rich click-stream-like dataset that can be cross-referenced with the visual tracking of users and stored from one visit to the next, to better understand how guests’ interactions with a museum’s collection (or multiple museums’ collections) vary over time.
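    A sketch of what one record in that click-stream might look like; the field names are hypothetical, and the point is simply that each scan becomes an event that can later be joined with tracking data and prior visits:

    ```python
    # A hypothetical schema for the QR-scan click-stream described above.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ScanEvent:
        visitor_id: str       # stable pseudonymous ID, persisted across visits
        artwork_id: str       # the piece whose QR code was scanned
        scanned_at: datetime  # when the scan happened
        seconds_on_page: int  # how long the description stayed open

    event = ScanEvent("v-042", "met-rembrandt-1653", datetime.now(), 75)
    print(event)
    ```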

  21. Alexandra Boghosian

    I spent a long time thinking and writing about Anderson’s paper. I ended up with a 7-page response… I’ll spare you the details, but I’d be more than happy to talk about this subject with you! I’ve included one of my more coherent thoughts.

    Museums wrongly measure success in three ways, says Anderson. These misconstrued metrics are success of major shows, attendance, and number of members. Anderson’s critique of these metrics is itself fraught with misunderstanding. It seems as though Anderson doesn’t really consider what a metric is in simple terms. My understanding is that “metric” is a fancy word for “measurement.” So literally anything that can be measured is a metric. What makes something a good measurement depends on what you want to know. Anderson fails to recognize this basic concept, and jumps ahead with many biases and assumptions about museums’ missions, which predetermine the success of his metrics. He is, in other words, more focused on the actions that are measured than the measurement of the actions.

    For fairness’ sake, I adopt Anderson’s biases temporarily: good museums are educational institutions; temporary exhibitions take a backseat to a museum’s permanent collection; and a museum whose primary source of income is contributions is better off than one that relies on ticket sales and merchandise for its funding. We will also adopt Anderson’s standards for a good metric: consistency with the museum’s mission, long-term financial viability, and easy reportability.

    Let’s consider Anderson’s take on major shows as a metric for museum success. In his words, the problem is that they “result in red ink, [distract]…from the core educational and collections-focused missions of art museum…and [depend] on quick fixes rather than long-term planning.” In other words, a major show offends all of Anderson’s biases. The case is already closed; since a major show distracts from the educational mission of museums, it must be a bad metric. Anderson doesn’t even need to discuss the measurement; his mind is already made up.
    For the sake of argument, let’s see what happens when Anderson does delve into a discussion of the metric. Anderson develops his point by noting that the success of major shows is often misreported. In particular, indirect costs are not factored into a show’s budget, so the show appears to do more for the institution financially than it really does. He then concludes that the show does nothing more than cost money and detract from the educational mission of the museum. However, Anderson has not successfully argued that large shows are a bad measurement; rather, he has complained that large shows are measured badly. Perhaps they are costly, but it is unreasonable to discard a metric simply because it is badly reported. To be clear: suppose major shows really were detrimental to the financial health of a museum. The metric itself still hasn’t been shown useless, because it is the measurement of the action that matters, not the outcome of that action.

    If I were the director of a museum, I’d be curious to know exactly how much in debt I would be after a large exhibition. I would want this metric. Similarly, I’d like to know how many people attended. My next natural question would be something like this: now that I know large shows are financial trouble, but many people still attend them, what can I do to turn a profit?
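    As a toy illustration of the misreporting problem (every figure below is invented), compare a show’s net position with and without allocated indirect costs:

    ```python
    # A toy calculation, all figures invented: a show can look profitable on
    # direct costs alone and still lose money once indirects are allocated.
    direct_costs = {"loans": 400_000, "insurance": 150_000, "marketing": 250_000}
    indirect_share = 300_000  # allocated slice of security, HVAC, admin, etc.
    revenue = {"tickets": 700_000, "catalogue": 90_000, "sponsorship": 200_000}

    reported_net = sum(revenue.values()) - sum(direct_costs.values())
    true_net = reported_net - indirect_share
    print(f"Reported net (direct costs only): {reported_net:,}")   # 190,000
    print(f"True net (with indirect costs):   {true_net:,}")       # -110,000
    ```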

  22. Boti Li(bl2472), Yige Wang(yw2511), Dan Xu(dx2133), Arash Yazdiha(ay2285), Luyao Zhao(lz2329)

    Our team found Anderson’s proposed measurement of museum success intriguing: it abandons the traditional three golden rules of measuring success and instead captures a wide variety of aspects of museum performance. However, viewing Anderson’s guidelines through a critical lens, we found several issues that need to be readdressed and updated.

    First of all, Anderson’s heavy emphasis on the educational value created by museums seems biased. Although a large number of museums aim to educate visitors, nowadays the primary objective of a museum is not always education-oriented. The diversity of museum types has increased: more and more museums serve as places for fun, for relaxation, for commemoration, and so on. Undoubtedly people learn while visiting these museums, but education, under these circumstances, is no longer the primary mission. As a result, the measurement of success should be altered according to each museum’s individual objective.

    Secondly, although Anderson provided a detailed, all-encompassing plan for measuring the success of museums, its feasibility is doubtful. Beyond the technical difficulty of collecting and tracking large amounts of data, one realistic barrier is the limited budget a museum is granted. It is uncertain whether museums have enough funds to spend on data collection, research, and surveying across all eleven of the measurement criteria Anderson mentions.

    Lastly, our team thought that the biggest drawback of Anderson’s measurement guidelines resides in the lack of modeling. In other words, even if we successfully collected all the data, what are we supposed to do with it? How can we generate insights from it? Anderson doesn’t provide any suggestions on how to utilize or manipulate the data collected from the metrics. Questions such as which key factors drive the visiting experience, how the eleven factors interact with one another, and whether we should analyze visitors’ behavior through linear regression, correlation, or k-nearest neighbors are all crucial to measuring the success of a museum, yet remain unclear. It is precisely this missing explanation of how to deal with all the data that brings data science into play.

    As Silis mentioned, the first thing our team suggests is to reduce the number of dimensions in the metrics. Instead of covering all variables, it might be wiser to select the focal facets and questions that pertain most to the particular goal of a museum. For instance, a museum whose purpose is entertainment should focus more on the quality of its exhibitions and collection than on its contribution to scholarship. This process of factor selection and elimination will reduce the difficulty of data collection, making the measurement feasible.

    Secondly, it is important to translate individual data points into stories and insights, and this can be achieved through model construction. For instance, if visitors’ experience is considered a primary element of a museum’s success, we could build a linear model to detect the key factors that contribute to a more memorable museum experience. It is the data scientist’s job to determine which factors to take into consideration and which type of model returns the most accurate prediction.
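    A hypothetical sketch of that linear model: regressing an invented visitor-experience score on a few candidate factors. Both the features and the responses are made up for illustration:

    ```python
    # A sketch of the linear model described above; all data are invented.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns: crowding (people/100 sqm), pieces viewed, minutes in museum
    X = np.array([[8, 12, 60], [3, 20, 120], [9, 10, 45], [2, 25, 150], [5, 15, 90]])
    y = np.array([5.0, 8.5, 4.0, 9.0, 7.0])  # survey score of experience, 1-10

    model = LinearRegression().fit(X, y)
    # Coefficient signs and sizes hint at which factors matter most.
    print(dict(zip(["crowding", "pieces_viewed", "minutes"], model.coef_.round(3))))
    ```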

    Speaking of marketing, museums should also take advantage of digital marketing and advertising. Museums could advertise their exhibitions through various digital means: ads on their own websites, on search engines (Google), on mobile phones, or on tablets. This could be more effective if museums (say, an art history museum) targeted their audiences based on demographics (age 18 and above), interests (art; art history), or even location (visitors new to the area).

  23. Shuyu Wang (group member: Yuantao Peng, Jianyu Wang, Alex Lo, Ariel Marcus)

    I would like to speak for Anderson’s idea of using quantitative methods in shaping the museum business. There is so much hidden information that can easily be obtained from visitors if some careful analysis is done on the data. Data can be collected in various ways, not limited to expensive and time-consuming ones such as interviewing or surveying visitors. With modern technology, for example, we can easily sense customers’ preferences by measuring the time they spend in certain collections (technically, this can be achieved by putting an RFID tag on visitors and reading their information at each entrance and exit of a collection, as the sketch below illustrates).
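    A minimal sketch of that dwell-time calculation, assuming a simple (invented) entry/exit log format:

    ```python
    # A minimal sketch; the (tag, collection, event, timestamp) log format
    # is an assumption made for illustration.
    from collections import defaultdict

    log = [
        ("t7", "Impressionism", "in", 0),    ("t7", "Impressionism", "out", 900),
        ("t7", "Egyptian Wing", "in", 960),  ("t7", "Egyptian Wing", "out", 1200),
    ]

    time_spent = defaultdict(int)
    entered = {}  # (tag, collection) -> entry timestamp
    for tag, coll, event, ts in log:
        if event == "in":
            entered[(tag, coll)] = ts
        else:
            time_spent[coll] += ts - entered.pop((tag, coll))

    print(dict(time_spent))  # {'Impressionism': 900, 'Egyptian Wing': 240}
    ```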

    One way I would suggest updating Anderson’s guidelines is to remove the tangible measures, because the data can speak for itself. If a person visits museums on a regular cadence but has shown up at a particular one only twice in the past few years, need you say more? No! The inference is that he doesn’t like that one. Secondly, some metrics are out of date and make no sense as determinants of museum success; examples appear in the “Quality of Exhibitions” category, where Anderson lists the requirements explicitly. We should give careful thought to those numbers. More importantly, though the quality of exhibitions can be partially represented by numbers, it shouldn’t be limited to them.

  24. This question itself is based on several assumptions that are questionable for evaluating museums. The base assumption is that museum quality or performance is somehow quantifiable beyond the elemental binary parameter: can it afford to keep its doors open or not? Beyond that, the post asks for an update to Anderson’s metrics when (1) a complete reimagining of the problem (if there is a problem) may be most appropriate, and (2) Anderson’s ideas do not seem to have taken hold in the museum community in the past eight years, so the exercise itself may not be relevant.

    All that said, we have to consider the fundamentals of a museum:

    • Collection: What is the quality of the art it contains? This is generally the first conceptual association people have with a particular museum.
    • Building or site: What role does the physical institution play in the museum’s status or in people’s perception of it?
    • Environment: Where is the museum located? Is it a “site” within a larger destination (e.g. New York City) or a destination itself (e.g. Guggenheim Bilbao)?
    • Mission: What is its cultural purpose as defined by its operators and patrons?

    All of these require lengthy discussion, but in short, a museum is a vehicle for uniquely interpreting the world and reimagining it for its time, environment and community; it’s a large piece of art. Other commenters have referred to a museum’s “purpose”, but the purpose of a museum cannot be discussed in the same way as, say, an airport’s. So these attributes of uniqueness, quality, emotion, and feel make the museum experience inherently anecdotal. In Catcher in the Rye, Holden loves the Museum of Natural History because it doesn’t change; yet we change, and can experience it differently each time. He extends this to a desire to place his own experiences in “one of those big glass cases and leave them alone.” In essence, I suppose, that’s what that museum goes for (witness a similar view of the museum over 50 years later in Noah Baumbach’s film The Squid and the Whale). I love that museum for the memory of a single visit when I was 6 years old and can’t say I’ve enjoyed it much since.

    So, I’m left to consider my own favorite museum experiences and ask rhetorically what metrics would be useful. How do we measure the inspiration of the immigrant stories at the Tenement Museum? How do we measure the grandeur of the impressionist paintings in the Musée d’Orsay and the majesty of the former train station in which it’s housed? I’m sure we can measure the effect of Guernica on the Reina Sofía’s attendance figures, but how do we measure the quality of its courtyard benches for napping jetlagged travelers?

    The effects and interpretation of art are anecdotal, personal and unpredictable. Culture shifts. Tastes shift. The most relevant things a museum can measure and analyze are outside its walls. And this likely requires perceptive individuals connected to the museum and community rather than quantifiable metrics.

  25. Anonymous

    When I started reading about how data science can be used to help improve art museums, I immediately thought of efficiency. One key concept that was not mentioned is the efficiency of a room’s layout. Obviously there are going to be several signature pieces in a room, but are these pieces overshadowing other would-be highly visited pieces of artwork? One example is Starry Night by Vincent van Gogh at MoMA. It is the signature piece in the room, and at times it is almost impossible to see it, or the other pieces around it. Most people are more willing to wait to see this piece than the lesser-known pieces nearby. It might be beneficial to figure out a way to rotate pictures so that previously overlooked pictures can be discovered by patrons; this in turn would increase the quality of the experience, among other things. A couple of metrics for measuring the popularity of a piece: how often people stop and admire it, whether they take pictures of it, and how it compares to other paintings. A good way to collect this would be with security cameras running recognition software that can track people as they walk through the museum. It wouldn’t have to be too complicated, because people typically move in a linear fashion.
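    One sketch of how such camera-derived popularity counts might feed a rotation decision; the pieces, counts, and the 25%-of-median threshold are all invented for illustration:

    ```python
    # A sketch, all numbers invented: flag pieces whose stop counts fall far
    # below the room median as candidates for rotation to a quieter wall.
    stops = {  # piece -> visitors who stopped in front of it this month
        "Starry Night": 48_000,
        "Neighbor piece A": 900,
        "Neighbor piece B": 1_100,
        "Quiet-room piece C": 6_500,
    }

    room_median = sorted(stops.values())[len(stops) // 2]
    overshadowed = [p for p, n in stops.items() if n < 0.25 * room_median]
    print("Candidates for rotation to a less crowded spot:", overshadowed)
    ```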

  26. Maxwell L. Anderson (@MaxAndersonUSA)

    When the Getty asked me to write the equivalent of a blog post in 2004, before there were blogs, I took on the assignment with relish. The absence of agreed-upon methods of adjudicating institutional achievement was then, as now, a problem for our field.

    I appreciate the thoughtful offerings of those involved in Dr. Schutt’s course, and am certain that much more can be brought to the evaluation of museum performance than my original essay intended to add. The advent of social media in particular, at best a fledgling phenomenon in 2004, has not only radically shifted the means of communicating; it has also upended institutional authority, rendering the top-down culture of art museums in many respects obsolete.

    Without teasing too much, I will pass on that the Dallas Museum of Art will announce, at the end of November, a series of steps to change our operations that will be very germane to the concerns of your course.

    I look forward to rejoining your thread once that announcement has been made. In the meanwhile, I encourage you to be in touch via Twitter or via email: manderson@dallasmuseumofart.org.

    Max Anderson
    The Eugene McDermott Director
    Dallas Museum of Art
    @MaxAndersonUSA

