
Tuesday, February 20, 2018

Are Participant Demographics the Most Useful Single Measure of Community Impact?

Let's say you want your organization to be rooted in your community. To be of value to your community. To reflect and represent your community. To help your community grow stronger.

What indicator would determine the extent to which your organization fulfills these aspirations?

Here's a candidate: participant demographics. If your participants' demographics match those of your community, that means the diverse people in your community derive value from your organization. The people on the outside are the ones coming in.

We use participant demographics as a core measure at the MAH, where our goal is for museum participants to reflect the age, income, and ethnic diversity of Santa Cruz County. We compare visitor demographics to those of the county, using the census as our measuring stick, and we set our strategy based on the extent to which we match, exceed, or fall short of county demographics.
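The comparison is simple enough to sketch in a few lines of Python. All group labels and percentages below are hypothetical, not actual MAH or census figures; the point is only to show the mechanics, where a negative gap flags a group that is under-represented relative to the county.

```python
# Illustrative sketch: compare participant demographics to a census baseline.
# All group labels and shares are hypothetical, not real MAH or census data.

county = {"Latino": 0.34, "White": 0.55, "Asian": 0.05, "Other": 0.06}    # census shares
visitors = {"Latino": 0.12, "White": 0.78, "Asian": 0.04, "Other": 0.06}  # survey shares

def demographic_gaps(participants, baseline):
    """Gap for each group, in share points: negative means under-represented."""
    return {group: round(participants.get(group, 0.0) - share, 2)
            for group, share in baseline.items()}

gaps = demographic_gaps(visitors, county)
priorities = sorted(gaps, key=gaps.get)  # most under-represented groups first
```

With numbers like these, the gap table immediately points at the groups to prioritize, which is exactly how a demographic mismatch can drive strategy.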

Is this overly reductive? Possibly. There are at least four arguments against it:

Serving "everyone" shouldn't be the goal. I understand this argument, but I think it's suspect when it comes to demographics (especially income and race/ethnicity). Organizations can and should target programs to welcome different kinds of people for different kinds of experiences. But should those differences be rooted in participants' race or income level? Would anyone say with a straight face that it's OK to exclude people based on the color of their skin or the balance in their bank account? I don't think this holds up.

People are more than their demographics. I agree with this argument, but in my experience, it doesn't invalidate demographic measurement. For years, we focused at the MAH on non-demographic definitions of community, seeking to engage "makers" or "moms seeking enrichment for their kids" as opposed to "whites" or "Latinos." I believe that there are many useful ways to define community beyond demographics. BUT, when we actually started measuring demographics at the MAH a few years ago, we saw that we were engaging the county's age and income diversity... but not the county's ethnic diversity. How could we credibly argue that this wasn't a serious issue for us to address? Was it reasonable to imagine that Latina moms didn't want enrichment as much as their white counterparts? When we saw our race/ethnicity mismatch with the county, we started taking action to welcome and include Latinos. We changed hiring practices, programming approach, collaborator recruitment, and signage. Taking those actions led to real results, helping us get closer to our participants matching the demographics of our county.

Participants matching your community's demographics is insufficient. This is an argument I'm still grappling with. It's an argument advocating for equity instead of equality. Many cultural resources are disproportionately available to affluent, white, older adults. So, to advance equity, your organization should strive to exceed community demographics for groups that may be marginalized or excluded from other cultural resources. This argument encourages organizations to strive for a demographic blend that over-indexes younger, lower-income, more racially diverse participants. It is also often linked to the related argument that changing participant demographics without addressing the internal demographics of staff and board is inadequate and potentially exploitative. I'm torn on this too. In my experience, you can't effect community impact without internal organizational change. But the internal changes are a means, not an end. I wouldn't use internal indicators to measure whether we succeeded in reaching community goals.

Attendance is not the same as impact. I'm torn about this argument too. On the one hand, showing up is not a particularly powerful indicator of impact. You don't really know why the person showed up or what they got out of the experience. On the other hand, on a basic level, attendance is the clearest demonstration that someone values your organization. They're only going to invest their time, money, and attention if they think they'll get something worthwhile out of the experience. Attendance may not be a signifier of deep impact, but it is the clearest way that people tell you whether they care or not about your offerings.


What do you think? Are participant demographics a worthy bottom-line indicator of success? Or is another measure more apt?



Wednesday, October 14, 2015

Use This: Audience Research in Rotterdam Provides a Template for Smarter Segmenting

Imagine a concise, well-designed report on audiences for cultural activities in a large city. Imagine it peppered with snappy graphics and thought-provoking questions about connections to research and audience development in your community.

Stop imagining and check out the Rotterdam Festival's 2011 report on five years of trends in audience data and related audience development efforts. They didn't do anything shocking or groundbreaking, but what they did, they did very well:

  • They identified the unique characteristics of Rotterdam citizens. 
  • They created psychographic profiles of eight target types of cultural consumer in Rotterdam, based on existing European market segmentation research. 
  • They interviewed and learned more about people representing these eight types. They identified the types' distinct interests and concerns, aspirations, media usage, and barriers to participation.
  • They used clear, evocative language (even in translation!) to convey their ideas. 
While their approach is not one I have used, I learned a lot from it. I recommend checking out the short-form report [pdf] and considering how the work in Rotterdam might inspire or support your own work on audience identification, understanding, and development. Hats off to Johan Moerman and the crew for making and sharing this work.


Wednesday, June 24, 2015

ASKing about Art at the Brooklyn Museum: Interview with Shelley Bernstein and Sara Devine


I’ve always been inspired by the creative ways the Brooklyn Museum uses technology to connect visitors to museum content. Now, the Brooklyn Museum is doing a major overhaul of their visitor experience--from lobby to galleries to mobile apps--in an effort to “create a dynamic and responsive museum that fosters dialogue and sparks conversation between staff and all Museum visitors.” This project is funded by Bloomberg Philanthropies as part of their Bloomberg Connects program.

I’ve been particularly interested in ASK, the mobile app component of the project. The Brooklyn team has been blogging about their progress (honestly! frequently!). To learn more, I interviewed Brooklyn Museum project partners Shelley Bernstein, Vice Director of Digital Engagement & Technology, and Sara Devine, Manager of Audience Engagement & Interpretive Materials.

What is ASK, and why are you creating it?

ASK is a mobile app that allows our visitors to ask questions about the works they see on view and get answers from our staff during their visit.

ASK is part of an overall effort to rethink the museum visitor experience. We began with a series of internal meetings to evaluate our current visitor experience and set a goal for the project. We spent a year pilot-testing directly with visitors to develop the ASK project concept. The pilots showed us visitors were looking for a personal connection with our staff, wanted to talk about the art on view, and wanted that dialogue to be dynamic and speak to their needs directly. We started to look to technology to solve the equation. In pilot testing, we found that enabling visitors to ASK via mobile provided the personal connection they were looking for while responding to their individual interests.

Are there specific outcome goals you have for ASK? What does success look like?

We have three goals.

Goal 1: Personal connection to the institution and works on view. Our visitors were telling us they wanted personal connection and they wanted to talk about art. We need to ensure that the app is just a conduit that helps that connection take place.

Working with our team leads and our ASK team is really critical here: we’ve seen that visitors want dialogue to feel natural. For example, staff responses like “Actually, I’m not really sure, but we do know this about the object” or encouragement like “That’s a great question” have helped make the app feel human.

Goal 2: Looking closer at works of art. We’d like to see visitors getting the information they need while looking more closely at works of art. At the end of the day, we want the experience to encourage visitors to look at art, with screens put to the side. We were heartened when early testers told us they felt like they were looking more closely at works of art in order to figure out what questions to ask. They put down the device often, and they would circle back to a work to look again after getting an answer, all things we verified in watching their behavior, too.

Moving forward, we need to ensure that the team of art historians and educators giving answers is encouraging visitors to look more closely, directing them to nearby objects to make connections, and, generally, taking what starts with a simple question into a deeper dialogue about what a person is seeing and what more they can experience.  

Goal 3: Institutional change driven by visitor data. We have the opportunity to learn what works of art people are asking about, what kinds of questions they are asking, and observations they are making in a more comprehensive way than ever before. This information will allow us to have more informed conversations about how our analog interpretation (gallery labels for example) are working and make changes based on that data.

So, success looks like a lot of things, but it’s not going to be a download rate as a primary measure. We will be looking at how many conversations are taking place, the depth of those conversations, and how much the conversational data informs changes to analog forms of interpretation.

You’ve done other dialogic tech-enabled projects with visitors in the past. Time delay is often a huge problem in the promise of interaction with these projects. Send in your question, and it can be days before the artist or curator responds with an answer. ASK is much more real-time. As you think about ASK relative to other dialogic projects, is timeliness the key difference, or is it something else entirely?

How much “real time” actually matters is a big question for us. Our hunch is it may be more about how responsive we are overall. Responsive means many things: time, quality of interaction, personal attention. It’s that overall picture that’s the most important. That said, we’ve got a lot of testing coming up to take our ASK kiosks (the iPads you can use to ask questions if you don’t have or don’t want to use your iPhone) and adjust them to be more a part of the real-time system. Also, now that the app is on the floor we’re testing expectations that surround response time and how to technically implement solutions to help. There’s a lot to keep testing here, and we are just at the very beginning of figuring this out.

That’s really interesting. If the conversations are about specific works of art, I would assume visitors would practically demand a real-time response. But you think that might not be true?

In testing, visitors were seen making a circle pattern in the galleries. They would ask a question, wander around, get an answer and then circle back to the work of art. Another recent tester mentioned that the conversation about something specific actually ended in a different gallery as he walked, but that he didn’t mind it. In another testing session, a user was not so happy she had crossed the gallery and then was asked to take a picture because the ASK team member couldn’t identify the object by the question; she didn’t want to go back. This may be one of those things people feel differently about, so we’ll need to see how it goes.

If we are asking someone to look closer at a detail (or take a photograph to send us), we’ll want to do that quickly before they move on, so there’s a learning curve in the conversational aspect that we need to keep testing. For instance, we can help shape expectations by encouraging people to wander while we work on an answer and by reminding them that the notifications feature will let them know when we’ve responded.

Many museums have tried arming staff with cheerful “Ask me!” buttons, to little effect. The most common question visitors ask museum staff is often “Where is the bathroom?” How does ASK encourage visitors to ask questions about content?

Actually, so far we’ve had limited directional, housekeeping type questions. People have mostly been asking about content. Encouraging them to do more than ask questions is the bigger challenge.

We spent a LOT of time trying to figure out what to call this mobile app. This is directly tied into the onboarding process for the app, the start screen in particular. We know from user testing that an explanation of the app function on the start screen doesn’t work. People don’t read it; they want to dive right into using the app, skimming over any text to the “get started” button. So how do you convey the functionality of the app more intuitively? Boiling the experience down to a single, straightforward call-to-action in the app’s name seemed like a good bet.

We used “ask” initially because it fit the bill, even though we knew by using it that we were risking an invitation for questions unrelated to content (“ask” about bathrooms, directions, nearby restaurants), particularly when we put the word all over the place: on buttons, hats, signs, writ large in our lobby.

Although “ask” is a specific kind of invitation, we’re finding that the first prompt displayed on screen once users hit “get started” is really doing the heavy lifting in terms of shaping the experience. It’s from this initial exchange that the conversation can grow. Our initial prompt has been: “What work of art are you looking at right now?” This prompt gets people looking at art immediately, which helps keep the focus on content. We’re in the middle of testing this, but we’re finding that a specific call-to-action like this is compelling, gets people using the app quickly and easily, and keeps the focus on art.



Some of the questions visitors have about art are easily answered by a quick Google search. Other questions are much bigger or more complex. What kinds of questions are testers asking with ASK?

It’s so funny you say that because we often talk about the ASK experience specifically in terms of not being a human version of Google. So it’s actually not only about the questions we are asked, but the ways we respond that open dialogue and get people looking more closely at the art. That being said, we get all kinds of questions: details in the works, questions about the artist, why the work is in the Museum, etc. It really runs the gamut. One of the things we’ve noticed lately is people asking about things not in the collection at all, like the chandelier that hangs in our Beaux-Arts Court or the painted ceiling (a design element) in our Egypt Reborn gallery.

Visitors’ questions in ASK are answered by a team of interpretative experts. Do single visitors build a relationship with a given expert over their visit, or are different questions answered by different people? Does it seem to matter to the visitors or to the experience?

The questions come into a general queue that’s displayed on a dashboard that the ASK team uses. Any of the members of the team can answer, pass questions to each other, etc. Early testers told us it didn’t matter to them who was answering the questions, only the quality of the answer. Some could tell that the tone would change from person to person, but it didn’t bother them.

We just implemented a feature that indicates when a team member is responding. It’s like the three dots you see in iMessage when someone on the other end is typing, but our implementation works more like Gchat: the app displays “[team member first name] is typing.” In implementing the feature this way, we want to continually bring home the fact that the visitor is exchanging messages with a real person on the other end (not an automated system). Now that we’ve introduced names, it may change expectations that visitors have about hearing from the same person or, possibly, wanting to know more about who is answering. This will be part of our next set of testing.

The back-of-house changes required to make ASK possible are huge: new staff, new workflows, new ways of relating to visitors. What has most surprised you through this process?

This process has been a learning experience at every point... and not just for us. As you note, we’re asking a lot of our colleagues too. The most aggressive change is more about process than product. We adopted an agile planning approach, which calls for rapid-fire pilot projects. This planning process is a completely new way of doing business and we have really up-ended workflows, pushing things through at a pace that’s unheard of here (and likely many other museums). One of the biggest surprises has been not only how much folks are willing to go-with-the-flow, but how this project has helped shape what is considered possible.

In our initial planning stages, we would go into meetings to explain the nature of agile and how this would unfold, and I think many of our colleagues didn’t believe us. We were talking about planning and executing a pilot project in a six-week time span, which seemed absolutely unreal.

The first one or two were a little tough, not because folks weren’t willing to try, but because we were fighting against existing workflows and timelines that moved at a comparatively glacial pace. The more pilots we ran and the more times we stepped outside the existing system (with the help of colleagues), the easier it became. At some point, I think there was a shift from “oh, Shelley and Sara are at it again” to “gee, this is really possible in this timeframe.”

After two years of running rapid pilots and continuing to push our colleagues (we’re surprised they’re still speaking to us sometimes!), we’ve noticed other staff members questioning why projects take as long as they do and if there’s a better way to plan and execute things. That’s not to say that they weren’t already having these thoughts, but ASK is something that can be pointed to as an example of executing projects, on a large scale and over time, in a more nimble way. That’s an unexpected and awesome legacy.

Thanks so much to Shelley and Sara for sharing their thoughts on ASK. What do you want to ask them? They will be reading and responding to comments here, and if you are excited by this project, please check out their blog for a lot more specifics. If you are reading this by email and would like to post a comment, please join the conversation here.

Wednesday, June 03, 2015

Learn to Love Your Local Data

Last month at the AAM conference, a speaker said, "we should all be using measures of quality of life to measure success at our museums."
I got excited. 
"We should identify a few key community health indicators to focus on."
I got tingly.
"And then we should rigorously measure them ourselves."
Ack. She killed the mood.

Many museums (mine included) are fairly new to collecting visitor data. Especially new to collecting data about broad societal outcomes and experiences. Why the heck would we be foolish enough to do it all ourselves?

The "we have to do it ourselves" mantra is one of the most dangerous in the nonprofit world. It promotes perfectionism. Internally-focused thinking. Inability to collaborate and share. Plus, it's expensive. So when we find we can't afford to do it ourselves, we throw up our hands and don't do it at all.

Here are three reasons to find and connect with community-wide sources of data instead of doing it yourself:

The data already exists.

Want to know the demographic spread of your county? Check the census. Want to know how many kids ate fruits and vegetables, or how many teens graduated high school, or how many people are homeless? The data exists. In some communities, it exists in different silos. In others, someone is already aggregating it. 

When we started more robust data collection at our museum, we wanted a community baseline. We thought about collecting it ourselves (stupid idea). Instead, we found the Community Assessment Project--an amazing aggregation of data from all over our County, managed by a wide range of stakeholders from health and human services. Not only do they aggregate existing data, they do a bi-annual phone survey to tackle questions like "have you been discriminated against in the last year?" and "what most contributes to your quality of life?" We got the data, and we got involved in the project. Now, instead of using our meager research resources to collect redundant data, we can springboard off of a strong data collection project that we access for free. 

You may not have a Community Assessment Project in your community, but you have something. Ask the health department. Ask the United Way. Someone is collecting baseline community data. It doesn't have to be you.

We're stronger together.

Imagine a community with 50 different organizations working to reduce childhood obesity. Would you rather see them each pick a measure of success that is idiosyncratic to their program, or join forces to pick a single shared measure of success?

If your museum is working to tackle a broad societal issue, you're not doing it alone. Your program may exist in its own bubble of the museum, but there are likely many organizations tackling the same big issue from different angles.

Each of you is stronger--in front of funders, in front of advocates, in front of clients--if you can work together towards one shared goal. Even if it doesn't map perfectly to your program, it's worth picking a "good enough" measure that everyone can use as opposed to a perfect measure that only works in your bubble.

For example, one of the outcomes in our theory of change that we care about is civic engagement. We want visitors to be inspired by history experiences at the museum to get more involved as changemakers in our community. Our Community Assessment Project already measures indicators of civic engagement like voting, writing to an elected official, and speaking at a public hearing. Are these the indicators we would choose in a bubble? Probably not. But are they more powerful because we have years of good countywide data about them? Absolutely.

Shared data builds shared purpose.

What happens when those 50 different organizations agree on one indicator for success in reducing childhood obesity? They get to know each other. They understand how their individual work fits into a larger picture. They build new partnerships, reduce redundancies in programming, and fill the gaps. They do a better job, individually and collectively, at tackling the big issue at hand.

That's what we should be using measurement to do. I can't wait to hear a story like this at a conference and fall in love with data all over again.

Are you working across your community to share key indicators of success? Share your story, question, or comment below. If you are reading this via email, you can join the conversation here.

Wednesday, February 18, 2015

Data in the Museum: Experimenting on People or Improving Their Experience?

Every few months, a major news outlet does an "expose" about data collection on museum visitors. These articles tend to portray museums as Big Brother, aggressively tracking visitors' actions and interests across their visit. Even as the reporters acknowledge that museums are trying to better understand and serve their visitors, there's a hint of menace in headlines like "The Art is Watching You."

We're trying to personalize. We're trying to adapt. We're trying to be responsive. But it can still come off as creepy. In a world of iteration, prototyping, and A/B testing, do we need a new ethical litmus test for social experimentation?

I came back to this question as I listened to the most recent RadioLab podcast about Facebook's mass social experiments on users. For years, Facebook has teamed up with social psychologists to perform social experiments through small changes to the Facebook interface. These experiments look a lot like those conducted in social psychology labs, with two big differences:
  • the sample sizes are many tens of thousands of times larger than those in the lab--and a lot more diverse across age, class, and geography. 
  • no one signs a form giving consent to participate. 
I thought this sounded great: better data, useful research. Turns out not everyone thinks this is a good way for us to learn more about humanity. Last year, there was a HUGE media kerfuffle when people were shocked to learn that they had been "lab rats" for Facebook engineers researching how the News Feed content could impact people's moods.

To me, this was surprising. Sure, I get the ick factor when my personal data is used as currency. But I know (mostly) what I'm buying with it. Facebook is a completely socially-engineered environment. Facebook decides what content you see, what ads you see, and your personal ratio of puppies to snow warnings. And now people are outraged to find out that Facebook is publishing research based on their constant tweaking. It's as if we are OK with a company using and manipulating our experience as long as they don't tell us about it.

It seems that the ethical objections were loudest when the intent of the experiment was to impact someone's mood or experience. And then I started thinking: we do that all the time in museums. We change labels based on what visitors report that they learned. We change layouts based on timing and tracking studies of where people go and where they dwell. We juxtapose artifacts to evoke emotional response. We tweak language and seating and lighting--all to impact people's experience. Do we need consent forms to design an experience?

I don't think so. That seems over the top. People come to the museum to enjoy what the invisible hands of the curators have wrought. So it brings me back to my original question: when you are in the business of delivering curated experiences, where is the ethical line? 

Consider the following scenarios. Is it ethical to...
  • track the paths people take through galleries and alter museum maps based on what you learn?
  • give people different materials for visitor comments and see whether the materials change the substance of their feedback?
  • cull visitor comments to emphasize a particular perspective (or suite of perspectives)?
  • offer visitors different incentives for repeat visitation based on behavior?
  • send out two different versions of your annual membership appeal letter to see which one leads to more renewals?
  • classify visitors as types based on behavior and offer different content to them accordingly?
I'd say most of these are just fine--good ideas, probably. I suspect we live in an era where the perceived value of experimentation outweighs the perceived weight of the invisible hand of the experimenter. Then again, I was surprised by the lab rat reaction to the Facebook experiments.
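For what it's worth, the membership-letter scenario in the list above is a standard A/B test, and the arithmetic for judging one is short. Here is a minimal sketch using a two-proportion z-test with made-up renewal counts; 1.96 is the conventional threshold for roughly 95% confidence.

```python
# Sketch: is letter A's renewal rate really better than letter B's?
# All counts are made up for illustration.
import math

def two_proportion_z(renew_a, sent_a, renew_b, sent_b):
    """z statistic for the difference between two renewal rates."""
    p_a, p_b = renew_a / sent_a, renew_b / sent_b
    pooled = (renew_a + renew_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

z = two_proportion_z(120, 1000, 90, 1000)  # A: 12% renewed, B: 9% renewed
significant = abs(z) > 1.96                # roughly 95% confidence
```

The ethical question isn't the math; it's whether the members would mind being sorted into groups A and B without knowing it.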

It's hard sometimes to differentiate what's an experiment on humans and what's an experiment to improve your work for humans. As the Facebook example shows, just claiming your intent is to improve isn't enough. It matters what the humans think, too. 

I guess that's what makes us more than lab rats--we can speak up and debate these issues. What do you think?

If you are reading this via email and would like to share a comment, you can join the conversation here.

Wednesday, November 27, 2013

Visualizing the Tate's Collection: What Open Data Makes Possible

Detail on distribution of artworks in the Tate collection by birthdate of artists, visualized by Florian Krautli.
What does "big data" look like for museums? Collecting institutions have enormous stacks of data about the artifacts and artworks in their stores. Several museums around the world have worked hard to make their data accessible by providing free access to datasets, applying Creative Commons licenses to digital content, or creating APIs (application programming interfaces) that allow programmers to build their own software on the museum's data.

Last month, the Tate joined the party when they opened up their collection database to the world on GitHub, a website where programmers collaborate on projects. The Tate is providing metadata about artworks and artists in its collection--over 70,000 artworks in all. The data is in JSON, a format commonly used for data sharing and processing. Even if you don't speak database, it's worth seeing how the Tate is presenting their collection to programmers on GitHub.
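As a taste of what "pulling down the data" looks like, here is a minimal Python sketch that tallies artworks per artist from JSON records. The inline records and the "artist"/"title" field names are stand-ins I've assumed for illustration; check the actual schema in the Tate's GitHub repository before relying on them.

```python
# Minimal sketch: tally artworks per artist from Tate-style JSON metadata.
# The records and field names ("artist", "title") are assumed for illustration.
import json
from collections import Counter

# Stand-ins for the per-artwork .json files in the repository.
raw_records = [
    '{"artist": "Joseph Mallord William Turner", "title": "Sketch A"}',
    '{"artist": "Joseph Mallord William Turner", "title": "Sketch B"}',
    '{"artist": "John Constable", "title": "Study"}',
]

counts = Counter(json.loads(record)["artist"] for record in raw_records)
most_collected, n = counts.most_common(1)[0]  # the most-represented artist
```

Run against the full 70,000-record dataset, a tally like this is exactly the kind of analysis that surfaced the question about Turner's outsized presence in the collection.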

What can you do with these JSON files? Anyone can pull down the data and use it for their own purposes, subject to some simple goodwill guidelines. GitHub users have already built visualizations with the data, two of which are discussed below.
These visualizations are fun. They are beautiful. They raise interesting questions about the Tate's collection and the imperfections of collections data.

But the discussions they raise are limited. Florian's blog post centers on the question of why there are so many pieces by William Turner in the Tate's collection. A commenter pointed out that there must be an error in the data, as it is highly unlikely that Turner produced more than 40,000 works in his lifetime. Jim's post suggests some fun but somewhat silly conclusions about the height/width ratio of artworks.

Reading these posts and the related conversations, I was struck by two conflicting feelings:
  1. It's awesome that data-sharing is causing people to have a conversation about what artists are represented in a museum collection, what kind of artwork the Tate has, what surprising things can be visualized and learned from the collections data, and how the data can be improved.
  2. The data is sufficiently flawed and idiosyncratic to yield conclusions of questionable value. Knowing the dimensions of the frame a painting is in is much less compelling than many, many other things that could be known and explored about works of art. I'm imagining visualizations focusing on the gender or race of artists in the collection, frequency of loans (and to whom), frequency of display, common words used in label text... the list goes on. 
To me, the fact that #1 is exciting and promising makes addressing #2 worth it. Opening up data is just the first (big) step to make it usable and useful. These experiments prompt questions, identify gaps in the data, and promote new forms of collection, dissemination, and analysis. The data you have is not always the data you want, but you often don't know that until you start monkeying with it. Future iterations of data sharing and use will help institutions and citizen-participants take the next steps to make it meaningful. 


Wednesday, August 08, 2012

Introducing Loyalty Lab

A woman walks into your museum. She's visited a few times before, and you vaguely recognize her as the lady who loved bubble painting, thought the bike sculpture was funny and didn't like the video installation. Last time she had a kid with her, and he got chalk all over his hands from the mosaic activity they did with a volunteer. They wrote a comment about their experience that got turned into a bird by other visitors in the public sculpture hanging in the middle of the museum. You remember seeing them stand in front of the magic mirror in the history gallery, laughing as they made themselves into giants in the glass.

In the admission log today, she is registered as a tick mark under the column marked "General." That's it. No information about who she is, why she's here, what she's looking for, and what she gets out of her connection to the museum. No memory of her relationship with us.

Our museum has a big challenge when it comes to tracking and rewarding participation. As at a lot of small museums, staff and community members at the MAH build relationships on a daily basis. Staff members invite visitors to help write exhibit labels, create art installations, and give opinions on upcoming programs. Visitors become volunteers and take the lead on new projects and activities. Visitors tell staff members and volunteers again and again how their lives are changing because of their involvement with the museum.

This is wonderful and maddening at the same time. It is wonderful to see the uptick in membership and donations and the positive energy from people who come in the door. It is maddening to have no way to track or intentionally encourage these relationships to grow. Like many small museums, the MAH cannot afford expensive ticketing or membership software systems. We have email newsletters and memberships and conversations, but none of those things talk to each other. Our computers are amnesiacs when it comes to participation. We have very high ability to form relationships with visitors, but very low ability to capitalize on those interactions.

With the support of the National Arts Strategies Chief Executive Program and the Institute for Museum and Library Services, we're starting a new project called Loyalty Lab to change that. In the Loyalty Lab, we will develop a series of low-tech, low-cost strategies and systems for small institutions to track, celebrate, and act on personal interactions with visitors. I'm not talking about RFID chips for every visitor or a Nike+ system to track their every move. I'm talking about human-scale, simple, delightful ways to acknowledge people's involvement and encourage them to go deeper. It could be loyalty cards. It could be charm bracelets. It could be free hugs. We want to be as creative as possible in exploring the options.
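The kind of human-scale system described above could start as nothing fancier than an append-only log of remembered moments that staff can query later. This is purely a hypothetical sketch in Python (the field names, file handling, and example entries are invented for illustration, not the MAH's actual system):

```python
# Hypothetical sketch: a flat, append-only CSV log of visitor interactions.
# Nothing here is the MAH's real system; it illustrates how little
# infrastructure "institutional memory" could require.
import csv
import io
from datetime import date

def log_interaction(stream, visitor: str, interaction: str, when: date) -> None:
    """Append one remembered moment to the shared log."""
    csv.writer(stream).writerow([visitor, when.isoformat(), interaction])

def history(stream, visitor: str) -> list:
    """Everything we remember about one visitor, oldest first."""
    stream.seek(0)
    return [row for row in csv.reader(stream) if row and row[0] == visitor]

# In practice the stream would be a file on a shared drive; an in-memory
# buffer stands in here so the sketch is self-contained.
log = io.StringIO()
log_interaction(log, "bubble-painting lady", "loved bubble painting", date(2012, 6, 2))
log_interaction(log, "bubble-painting lady", "made comment-bird with son", date(2012, 7, 14))
```

The point of the sketch is that the hard part isn't technical: a spreadsheet can remember the bubble-painting lady. The hard part is designing the delightful, human moments where that memory gets captured and used.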

Our goals are to:

  • Measure and increase membership acquisition and renewal 
  • Measure and encourage repeat visitation 
  • Increase participant perception of the MAH as a friendly place with high community value

And we want to do it with you, too. We've created a little blog that we will use to track our project openly. It's starting with a workshop tomorrow with Adaptive Path, an experience design firm that focuses on mapping "customer journeys" and developing tools that enable users to more enjoyably and successfully navigate the offerings of the business or organization. In museum terms, that means understanding how visitors hear about us, why they come, what they do when they are here, and what happens after they leave. It means finding the points along the way where we lose people, and the opportunities for us to track and celebrate people's deepening involvement. You can learn more about this process from an Adaptive Path slideshow here.

This is a year-long project for us at the MAH. We'll go from research to prototyping to final design between now and early summer of 2013. We'd love to have you join us as contributors to the Loyalty Lab blog, or just follow along and comment on our progress. We've already heard from one museum--the Boston Children's Museum--which is experimenting with a "V.I.F." (Very Important Family) program to reward repeat family engagement. I know there are other organizations--museums and beyond--playing with innovative approaches to membership, pricing, and tracking to support and encourage deeper relationships. The goal here is for all of us to learn and experiment together.

How do you think about loyalty and relationship-building in your organization?

Tuesday, February 19, 2008

Data Visualization Part 2: What's in a Name?


A couple of weeks ago, I wrote about the power of data visualization as an addition to the exhibit design toolkit. Paul Orselli made a thoughtful and challenging comment, saying:
...many data visualization art pieces, albeit elegant, seem to be inherently "push" technologies. That is to say, they parse selected bits of data for the viewer.

So how does finding patterns in streams of algorithmically-derived data move beyond the enjoyable exercise of discovering "shapes" in the clouds?

I couldn't come up with a satisfying response until a week later, when another colleague/reader (Matt DuPlessie) reminded me about one of the early, massively popular visualizations on the web: Name Voyager. Name Voyager touts itself as a "baby name wizard," allowing you to view the frequency of use of names for American babies per year, using data from the Social Security Administration. But it's not a list of names and numbers. Instead, it's a beautiful, quite intoxicating Java applet that shows you the relative frequency of names dynamically as you type--so that typing MO will show you how Mohammed matches up to Molly and Morgan, but when you get to MOR you just see the difference in frequency between Morgan (for boys) and Morgan (for girls).

Try it. It's another gorgeous time sink with data behind it.
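The dynamic narrowing that makes Name Voyager so intoxicating can be sketched as simple prefix filtering over a frequency table: each keystroke re-filters the whole dataset. The names and counts below are illustrative placeholders, not actual Social Security data:

```python
# Toy sketch of Name Voyager-style filtering: as the user types, the set of
# displayed name histories narrows to names matching the prefix.
# Frequencies are invented placeholder values, not SSA data.
NAME_FREQUENCIES = {
    "Mohammed": {1980: 120, 2000: 310},
    "Molly":    {1980: 850, 2000: 640},
    "Morgan":   {1980: 200, 2000: 980},
}

def matches(prefix: str) -> dict:
    """Return the frequency histories for all names starting with the prefix."""
    p = prefix.lower()
    return {name: freqs for name, freqs in NAME_FREQUENCIES.items()
            if name.lower().startswith(p)}

# Typing "MO" keeps all three names in view; typing "MOR" narrows to Morgan.
```

The interaction is trivial computationally; what made the applet compelling was that the filtering happened live, so exploring felt like play rather than querying.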

And yet. The reason I bring up Name Voyager is because of a new element of their website: a link to a service called Nymbler, which invites you to type in names of interest so it can generate lists of related names you might like. It's sort of like Netflix for baby names: you rate names, it makes recommendations.

Checking it out last night, I was struck by the paradox that Nymbler gave me more useful information than Name Voyager, and yet I like Name Voyager more. I would use Name Voyager longer. Why would I prefer the less useful site?

Because I'm not having a baby. Name Voyager is a site that allows people to explore names through American history in an interesting way, whereas Nymbler provides an outcome-driven service. Perhaps if the Nymbler interface allowed me to see the algorithms behind their selections in a visually interesting way--as a shifting web of related names--I could get more deeply into exploration of what defines the set of names which appeal to me.

The difference between Name Voyager and Nymbler is instructive for exhibit designers. Too often, we go for the Nymbler model, both in terms of how we deal with content and the kind of interactions we provide. Content-wise, we are so interested in connecting the dots that we don't allow the kind of open-ended exploration that Name Voyager provides. Consider, for example, exhibits on global warming. Many such exhibits (and related websites) allow you to calculate your carbon footprint, step-by-step. Few allow you to do it in a way that dynamically reacts to each selection such that you can easily alter your choices to see the corresponding carbon drain. You select your vehicle, your eating habits, your power usage, linearly, and you have to go through the whole process again if you want to change a parameter. The design thinking behind this is that people are output-driven and want to see how their selections form a composite picture. But that precludes people from having a more flexible experience, one that is less focused on "my carbon footprint" and more on "contributors to carbon emissions." The general wisdom is that people will be more invested if it's "about me." But that's about me as an object of the exhibit, not I as the subject, I who am empowered to tinker with the parameters and my responses.
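The difference between a linear wizard and the dynamic calculator argued for above can be made concrete: if the total is a pure function of the current choices, changing any one parameter immediately yields a new total, with no need to re-run the whole sequence. A minimal sketch, in which every emission factor is a made-up placeholder number rather than real data:

```python
# Sketch of a "reactive" footprint calculator. The total is recomputed from
# the current choices on every change, so the visitor can tinker freely.
# All factors below are hypothetical illustration values, not real figures.
EMISSION_FACTORS = {
    "vehicle": {"car": 4.6, "bus": 1.2, "bike": 0.0},
    "diet":    {"meat-heavy": 2.3, "mixed": 1.6, "vegetarian": 1.1},
    "power":   {"grid": 3.0, "solar": 0.4},
}

def footprint(choices: dict) -> float:
    """Total footprint as a pure function of the current set of choices."""
    return round(sum(EMISSION_FACTORS[cat][val] for cat, val in choices.items()), 2)

choices = {"vehicle": "car", "diet": "mixed", "power": "grid"}
before = footprint(choices)   # baseline total
choices["vehicle"] = "bike"   # alter one parameter...
after = footprint(choices)    # ...and the total updates instantly
```

The design point is the function signature, not the arithmetic: because nothing is accumulated step-by-step, the visitor is the subject experimenting with parameters rather than the object of a quiz.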

Bottom line, this comparison has made me realize that the thing that excites me about data visualizations is this empowerment. I am able, erroneously or not, to draw my own conclusions and perform my own simple experiments with the data. Intelligence officers get nervous about giving the president "the raw intelligence" for exactly that reason--it can be misinterpreted by non-professionals. But visitors aren't making national policy; they're learning. And at least in science centers, we profess to want to encourage visitors to think like scientists, like data-interpreters. With data visualizations, the visitors are no longer the object of the exercise. They are the subjects, and that's powerful, intoxicating, and hopefully, educational.

Thursday, February 07, 2008

Data Visualization: Honest, Powerful Interpretative Design

I have seen the future of interpretative design, and that future is data visualization. I'm talking tables of figures. Huge swaths of words. Volumes of dry-as-dirt content.

On the face of it, data visualization is just about the least sexy thing imaginable. Entertain the idea of an exhibit based on Gantt charts and spreadsheets, and your head might just explode. And yet, over the last few years, as the web has unlocked piles of information, a quiet group of math-minded designers has been figuring out how to interpret the vast impersonality of data and make it both beautiful and meaningful.

I met one of these data artists last year while visiting a friend/journalist at the New York Times. His name is Mark Hansen, a UCLA statistician, and he was working on the finishing touches of the installation of Moveable Type in the lobby of the new Times building (shown above).

Moveable Type, like its predecessor, Listening Post (now touring international art and science museums), is an exercise in harnessing and repackaging data as art. And while the installation is digital (560 fluorescent displays backed by individual tiny speakers), the effect, when multiplied across a large space, is intensely physical. Talking to Mark, I was amazed by his and his partner Ben Rubin's dogmatic insistence on capturing the energy and life inside the millions of words cranked out by reporters in the building, echoing the energy and life of the outside world about which they write.

Sure, a lot of artists can express those kinds of intentions. But in Hansen and Rubin's case, it actually gets across. Moveable Type is one of the most accessible pieces of art I've ever experienced, and I think its honesty and power come from the fact that it is a distillation, not an interpretation, of the New York Times. It doesn't launch from a news story and then go gestural. Every element, from the obituaries that blow across the screens like wind through grass to the wedding announcements, which tick by interchangeable as train schedules, tries to get at the core meaning of the data involved. And that leads ultimately to a presentation of content which is both evocative and deeply connected to the core information.

And herein lies the power of data visualization: no matter how artistic it gets, it remains truthful to the core content. It has to, because that content is the basis for the work itself. Whether you are modeling the brain, tracking the incidence of emotional statements on the Web, or conveying a chair as a sound wave, the resultant art is a deep reflection, not just an interpretation, of the data involved.

And thus data visualization tackles one of the core problems with interpretative design. Traditionally, there's a battle between veracity and interpretation--the more you interpret, the more the purists cry foul. There's an ongoing debate in the museum field about whether interpretation enhances or distorts visitors' understanding of content, and what kind of interpretation distorts in what ways.

We have well-developed design skills for interpreting and presenting stories and objects. But when it comes to presenting data, most museum folks believe that heavy interpretation is necessary. It would be deadly dull, they reason, to show the meat of what scientists produce--endless tables of numbers--so we have to find another way to interpret and translate their work. We throw a rug over it and call it a story. But data visualizers, instead of looking for another way beyond or outside the data, pore over the numbers and try to create an interpretation centered on, and endlessly circling back to, the data itself.

This is not to say there aren't bad instances of data visualization--pieces that distort or confound data in ways that may be particularly harmful, since they retain the semblance of being based on hard numbers. And there are plenty of gestural data pieces that go a little too far off the interpretative end to be meaningful (origami representation of web use, anyone?). But when it works, the result is deeply intoxicating, rich with content, and the meaning seems to emerge artistically from the data itself. You feel that you are closer to the true experience of conducting science, the tedious rigor of collection matched with the rush of putting it all together. Data visualization helps us be intelligent interpreters on our own, instead of asking someone else to design an interpreted experience for us.

And that makes you feel like a tiny god, to stand in a lobby and feel that you have the pulse of a newspaper, a corporation, a world, in your grasp.

Monday, July 16, 2007

Is "Museum" a 4-Letter Word (for visitors)?

There have been some fun semantic jousting matches recently on the ASTC listserv about the difference between science museums and science centers. And earlier last month, the Museum of Television and Radio announced a name change to the Paley Center for Media. In the NYTimes article about the switch, Pat Mitchell, the president and CEO of the center, made no apologies about the change:
“‘Museum’ was not a word that tests really well with the under-30 and 40-year-olds,” especially in the context of radio and television, Ms. Mitchell said.
I'm not sure what research Ms. Mitchell based her comment on, but I'm hardly surprised by her findings. Despite the herculean efforts many museums take to offer accessible, cool, inviting experiences to the public, the word museum is still laden with the ghost of "don't touch" past. Add to this the fact that many museums no longer offer the basic collections and research services associated with them historically, and the appeal of the word diminishes. In the example of the Paley Center, the NY Times article continues:
Moreover, the name was somewhat misleading: some patrons would arrive expecting to see, say, Archie Bunker’s chair. In fact, until recently, museumgoers had nothing that they could see, unless they wanted to watch a specific old program. As part of the continuing changes, the West 52nd Street space now offers a rotating display, which now features Middle Eastern media, including a live feed of Al Jazeera’s English television channel.
But does switching to "Center" really clear up the fact that the place is a repository of and distribution center for media content? And more importantly, will it attract more visitors, members, and gifts?

I don't think so. The word Museum is not powerful enough, alone, to attract or repel visitors. Museum means different things in different markets. It's interesting that science museums have started to gravitate towards "center" to convey interactivity, and yet children's museums are rarely called centers and don't seem to suffer under the Museum label (though drunken variations of the Funatarium abound).

As an illustration, consider the following names:
  • Pirate Museum
  • Art Museum
  • Rock Star Museum
  • History Museum
Why do some of these excite you while others make you yawn? The word Museum has nothing to do with it--or, rather, our prejudices and expectations have more to do with the word(s) preceding Museum than the M-word itself. Kids wouldn't care if they were going to the Harry Potter Museum or the Harry Potter Castle of Fun or the Harry Potter Center: their interest in the topic overrides any prejudices about the venue.

In fact, Museum can be quite a useful word, especially if your collection is small, your topic is odd, or you generally seek credibility.
Driving across the country last month, I was amazed at the zillions of road signs for museums--it seemed like we passed more museums than truck stops. Many were local history museums, but there were also Harley-Davidson dealerships, locksmiths, and candy stores with small window signs that say "AND MUSEUM." Labeling your collection--however dinky--a museum puts it into a useful category that signals value, organization, and public presentation of the stuff.

What if you think you are creating something so beyond the standard museum, either in collection, presentation style, or interpretation, that you want a new word? It's hard to create a new genre around a single location. The Exploratorium did it--and spawned off many "wondariums" and "discoveriums" trying to tie into the same spirit of activity and invention that makes the original a success. But the Experience Music Project? Sony Wonderlab? Will those brands define new genres? Are these places helped or hindered by their non-traditional names?

When the International Spy Museum was first being conceptualized, there was a name study commissioned. The designers initially favored a more mysterious name, the House on F Street, which they felt conveyed the intrigue of the future site. But people surveyed overwhelmingly preferred the straightforward "Spy Museum." And going with the Museum label has probably had other legitimacy benefits for SPY, which has been criticized as too Disneyesque. The House on F Street could be a haunted house, a ride, a movie... the Spy Museum is clear, and they've been able to stretch what they offer within that label.

The pirate museum in Key West, Pirate Soul, went the opposite direction and assumed an unclear name. Is "Pirate Soul" a strong enough brand to stand on its own, or do they lose potential visitors who look at it and think, what is that thing? In my mind, they missed a huge opportunity to be the Pirate Museum. When your content is wacky and compelling enough, the word Museum adds a legitimacy that transforms a potential tourist trap into a valuable attraction experience in the eyes of potential guests.

But what about the Paley Center and other museums offering more traditional content? If "art museum" is a deadly phrase, but you are a place that collects and shows art, what are your options?

I have a personal aversion to the word Center. I went to a junior high that was a feeder from many elementary schools, including one called the Center for Early Education. We always talked about those kids who came from "the Center" like it was some Orwellian futuristic kid-pod. But beyond my personal association, I think Center suffers from the fact that there's no public concept of what a center is. A park or library, sure. But a center? What is that thing?

A marketing blogger commented about the Paley Center's lack of context, saying:

If it's not a museum then what is it? Center for Media is open to interpretation varying from a room with a computer in a middle school to a State Department of Censorship, or (hopefully) an intriguing destination that offers rich content. ...

Best Buy is a Center for Media. YouTube is a Center for Media, the Apple Store is a Center for Media, Pearl Art Supply is a Center for Media, and so is the Public Library.
If you're searching outside the word Museum, why not adopt less ambiguous words with strong cultural associations? There are evocative location-based words, like Park, Alley, Lab, Station, and Market. There are action-oriented words, like Project and Exchange. Even words like Club, Gang, Crew--which connote more social than physical organizations--are identifiable expressions of some of the things museums are trying to be.

Of course, at the end of the day, it's what's inside that counts. And it will always be more powerful, marketing-wise, to have visitors walking out saying "That was the best museum I've ever been to!" than saying "That was the best thingamajig I've ever been to!" If you can make your content compelling, exciting, and glorious on the inside, word will spread and you could be calling yourself Aunt Ethel and people would still come.

What words could you imagine on your institutional masthead? How has being a Museum been a help or a hindrance in your world?

Thursday, March 22, 2007

Social Architecture Part 2: Hierarchy, Taxonomy, Ideology (and Comics)

Jeremy Price offered a comment on my last blog post with a link to an excellent article by Lee Shulman on the uses and abuses of taxonomies in educational theory. Shulman argues that taxonomies, created to distinguish the individual elements of a set, are often transformed into hardened ideologies with an implied value on the “higher” or “better” elements in the taxonomy.

As Shulman puts it:

Taxonomies exist to classify and to clarify, but they also serve to guide and to goad. A taxonomy’s rapid progression from analytic description to normative system—literally becoming a pedagogical conscience—warrants caution.
When I created the hierarchy of social participation shown in this diagram, I debated what shape it should take. Was it a spectrum from individual to collective experience? Could it be cyclic? The pyramid I created implies hierarchy, and it also implies that these levels need to be moved through in a sequential manner to get to the “top”—presumably the goal. Was this the right decision?

In the article, Shulman points out the folly of a rigid sequential theory when it comes to learning (and in my case, I’ll expand that to social engagement). For example, in Bloom’s Taxonomy of Educational Objectives, knowledge and comprehension of content come before application. Narratively, this makes sense. Once you learn about something, you are ready to act on it and do it. But how often, and how easily, do we go in the other direction? Do you understand physics before you start skateboarding? Shulman offers the great example of doctors, relating a surgeon’s comment that “Internists make a diagnosis in order to act. Surgeons act in order to make a diagnosis.” As Shulman puts it, “the directionality of the taxonomy is situational.”

Back to my diagram. I created a directional pyramid to make a point about social content in museums; namely, that museums are not offering networked, social experiences--and therefore will have a hard time jumping to initiating meaningful social discourse. I talked about leveling up, and indeed I do believe that museums should consider engaging more in levels 3 and 4--whether their goal is making it to level 5 or not.

But perhaps a hierarchy is not appropriate. After all, you don’t have to have great content to get to a networked, social experience (Twitter certainly proves that fact). And I’m not advocating that the dream museum would be all level 5 experiences, all the time. So here’s a reenvisioning of this hierarchy as a taxonomy.

Looking at the pyramid, each level is typified by an element: the content, the interaction, the network, the social benefit, and the collective action. So let’s create a taxonomy of Social Participation that's a simple list:

  1. CONTENT (What is being discussed/shared/shown/explored?)
  2. INTERACTION (How does the user engage? What do they do?)
  3. NETWORK (How do users link to one another?)
  4. SOCIAL BENEFIT (How much value does one user get from the participation of other users?)
  5. COLLECTIVE ACTION (How much do people work together?)

Now. Rather than thinking of these elements as levels or experiences to move through, let’s consider them to be the building blocks of an experience. Here I’m using as a template one of the most impressive (and esoteric) taxonomies I’ve ever seen: Scott McCloud’s taxonomy of panel-to-panel transitions in comics.

Scott McCloud is the author of hands-down my favorite design book: Understanding Comics. In Chapter 3, McCloud identifies six different methods by which comic artists transition from one panel to another (for example, scene-to-scene or action-to-action). Then, he charts the incidence of each of these six for many different major comic franchises (e.g., X-Men) and artists. His charts look like this:
McCloud uses these charts to demonstrate that most American and European comic artists employ only three of the six transitions (as shown in the example on the left). Some radical artists use a dramatically higher proportion of the other three, but most of these, McCloud argues, are examples of comics well out of the mainstream. And yet Japanese comics—arguably the most popular in the world—consistently draw from at least five of the six transitions (example on the right).

So. Back to the new Taxonomy of Social Participation. If we number the five elements as follows:

  1. Content
  2. Interaction
  3. Network
  4. Social Benefit
  5. Collective Action

then I’d argue that most museums provide something very different from Web 2.0:

Of course, there are distinctions to be made among different kinds of venues. On the 2.0 side, consider these different major venues:

A little explanation:

Wikipedia is mostly about content. The value of that content is strongly impacted by the number of active users (social benefit), and there is some collective action around the development of Wikipedia articles.

Twitter is mostly about connecting to others. The content sucks, and the interaction is simple, but the feeling of empathy with others is high (if debatable).

MySpace is mostly about personal identity within networks. The interactivity in page personalization is high, but primacy is put on engaging with others from your personal profile. You are a “me” in a space of many—a perfect 3 experience.

Flickr is similar to Wikipedia in that the focus is content, but there’s a lot more emphasis on networks (groups, establishing a personal profile with your pics).
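A McCloud-style profile chart for these venues can be mocked up as a score per taxonomy element. The scores below are my own loose, illustrative reading of the descriptions above, not measured data:

```python
# Illustrative McCloud-style profiles across the five taxonomy elements.
# Scores (0-5) are a rough reading of the post's descriptions, not data.
ELEMENTS = ["content", "interaction", "network",
            "social benefit", "collective action"]

PROFILES = {
    "Wikipedia": [5, 2, 1, 4, 3],   # content-heavy, some collective action
    "Twitter":   [1, 2, 5, 4, 1],   # network-heavy, content-light
    "Museum":    [5, 4, 1, 1, 0],   # strong on 1 and 2, little 3-5
}

def chart(venue: str) -> str:
    """Render one venue's profile as a text bar chart, one row per element."""
    rows = [f"{elem:>17}: {'#' * score}"
            for elem, score in zip(ELEMENTS, PROFILES[venue])]
    return "\n".join(rows)

print(chart("Museum"))
```

Rendered this way, the museum bar chart looks like McCloud's American-comics examples: a tall spike on the first elements and near silence on the rest, which is exactly the imbalance the next paragraphs take up.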

Now there’s a new judgment to be made. Is it bad to be unbalanced? Why does it matter that museums are not engaging much in 3, 4, and 5?

Web 2.0 is the reason it matters. Web 2.0 has introduced people to a new way of interacting with web content. The web used to also be heavy on 1 and 2; you go to web pages, you look at stuff, you click on stuff. But now, web usage is different. Where do I get my content? On blogs and aggregators that draw heavily from 3 and 4. How do I search? Google relies on 3 to provide me the results that are most likely to be useful to me. How do I buy stuff? I use Amazon, which provides me 3 and 4 benefits of seeing what other people bought, which then helps me make my purchasing decision.

When we lived in a 1 and 2 world, it was acceptable for museums to rely on 1 and 2. But now, people walk into museums and are aware of the dearth of 3 and 4 elements. They have come to take the experiences that emerge from those elements for granted. Level 2 used to be the hot thing, but now, pushing buttons is old hat. People want to connect, they want network benefits, and they want them in content experiences in museums.

Imbalance also matters because museums are venues that (hopefully) offer diverse experiences that attract and sustain diverse audiences (though I would argue that this is best achieved by offering a variety of powerful singular experiences). Designers strive to make sure there is “something for everyone” when it comes to learning styles and developmental stages. Why not also for social engagement?

And there’s a third reason that imbalance matters. I created this pyramid for an article on civic discourse in museums, which depends heavily on 5. Level 5 is a question mark in my mind. How do we get there? How do we sustain it? Moving through 3 and 4 to get there may not be the answer. When the content is provocative, as in Bodyworlds or gay animals (thanks, Nik), a museum may be able to jump directly from 1 to 5. But for bread-and-butter content, 5 is a mystery. And while no one has cracked the code, there are many web-based 3 and 4 applications (like LibraryThing) that are successfully creating some 5 experiences via 3 and 4, especially 4. Level 4 experiences make you aware of, comfortable with, and appreciative of the extent to which a social experience is better than an individual one. That understanding may then lead to the application of collective action—level 5. As Shulman said, the directionality is situational. How else can we connect to 5-based experiences?