
Tuesday, August 20, 2019

One-Person Bands and Museum Labor as an Access Barrier



When I was little, my uncle drove me to see a real big top circus. I don’t quite remember where it was, somewhere over in the farmlands near Santa Cruz, like Gilroy or Watsonville. Like so much of those valleys, I mostly remember the lush flatness, in this case with the red tent popping up like a mirage. I was younger than school age, and little from that circus visit remains in my memory.
One distinct memory I have, to this day, is of a single one-man, or we might say one-person, band. He wandered around the big top playing music with his jiggered musical contraption attached to his body. As a kid, this one-person band didn’t seem all that extraordinary. The guy after him was on a unicycle juggling rubber chickens, after all. Seemingly without thought, cymbal hand and kazoo mouth sounded in time with keyboard hand and horn foot. Everything ordered, everything in time, everything easy.

But, now as an adult, I am amazed at the guy’s ability to move his limbs in harmonious synchronicity. I can barely drink coffee and read my email some mornings, let alone play a full symphony alone. (Of course, I was four when I saw the guy. It might have been barely a harmony.)

I tell the story of the one-person band because I think many museum professionals feel like him. We are spinning and performing, and most people have no idea of the preplanning it takes to make it look so easy. But, most importantly, few museum professionals have a free hand or moment. We are just doing our best to keep from going off-key.

Last week, I asked what barriers keep us from throwing open the doors. There are plenty. We might think of structural racism or the classism inherent in our funding structures. I hope to hear you articulate your thoughts in the comments or on social media.

Today, I’d like to call out a huge one. We will always find it hard to implement equity and access, metaphorically throwing open the doors, if our leaders don’t spend time thinking about how we do our work. We can’t serve our patrons if we are not thinking about the people doing the work.
Museums rarely have the funding to replicate positions. If the building operations guy is sitting with you in a meeting, there is no second building operations person at his desk. If you have a teen program running, there is no second teen programs person out drumming up business. While we might not play accordions with our feet while shaking maracas, most museum professionals are orchestrating huge amounts of disparate labor, all the while making it look effortless.

As a field, we spend a whole lot of time evaluating patrons’ experiences (hopefully). Museums are for people, after all. But patrons are only a portion of the people in museums. Staff is an important part of the equation. The systems that staff work with can be empowering or entangling. So much of our work is collective, a lifelong group project. But as a field, we don’t always articulate our work norms to each other. Our organizations often have people playing different songs, with earplugs on, instead of finding ways to perform together.

What’s the solution? Well, noticing each other, listening carefully, and trying solutions. We do this for our visitors (hopefully). Why not for staffing functions?

Recently, my amazing colleagues and I have started to articulate and improve many aspects of our work. For example, we are working out what needs a meeting and what can use an email, writing out process documents, and then putting these efforts into action. This is stuff that any workplace does, ad hoc, but we are trying to be purposeful and thoughtful. Why? Because while we want to do the real work, we first have to work out how to best keep our own sanity. If we, as a staff, can decrease the cognitive load of our everyday work, think of how far we can fly. I am humbled by how awesome my colleagues have been to take the leap with me. We’re not quite at the point where we can share all our efforts, though we will eventually. But, in a broad sense, we are trying to be purposeful in how we do our work, so we can free ourselves up to do our work better. BTW, thank you. Thank you, awesome colleagues.

To take it back to this month’s topic, what is holding back our ability to metaphorically throw open the doors? Time and energy are finite resources. Are we using them well? Good work practices can be a boon, helping you do more, and do it better. But efficient and effective work practices take thought and refinement. Most museum workplaces don’t put energy or thought into work practices, focusing their scant resources instead on collections or visitors. If you can spend real time on improving how you work, you might find yourself freed up, both emotionally and in sheer labor. With that freedom, you might feel giddy and free—so free you decide it's time to plan to throw open the doors.

Managers have a huge part in this. Leaders often look at their best staff and think, ‘hey, let’s put them on this project.’ But what they might not realize is that they are potentially destabilizing that employee. They are asking the one-person band to jump on a unicycle. Now, maybe that performer can do that, but he will need time to practice and fall. Similarly, when good employees are asked to take on one more thing, they will need time to fail. Many of our institutional efforts at throwing open the doors add labor for staff. But leaders don’t create the systems to understand how that labor impacts overall work. We are asking our staff to perform without a net, with their hands tied behind their backs. They can’t throw open the doors.

What’s the solution? Leaders need to realize access and equity isn’t solely about visitors. It’s about systems and staff too. They need to think holistically and carefully. They need to put in the effort to support their staff and try to support process improvements. They also need to honor the careful orchestration that happens in every museum in the country, with each museum professional, spinning, dancing, and performing amazing feats every day.

---

Also, please consider passing on your ideas about what keeps us from throwing open the doors. Tag me so I can add your thoughts to this month’s summary post (@artlust on Twitter, @_art_lust_ on IG, & @brilliantideastudiollc on FB).

Thanks to Cynthia Robinson of Tufts University for talking out the one-man/one-person band question. I appreciate her reaching out and discussing it. I was worried one-person band wouldn't work since one-man band is the common idiom. But we agreed one-person works--we are flexible, equitable thinkers after all. I write these things late in the evenings, alone. Without a sounding board in person, I need your voices to help me.

Tuesday, February 20, 2018

Are Participant Demographics the Most Useful Single Measure of Community Impact?

Let's say you want your organization to be rooted in your community. To be of value to your community. To reflect and represent your community. To help your community grow stronger.

What indicator would determine the extent to which your organization fulfills these aspirations?

Here's a candidate: participant demographics. If your participants' demographics match that of your community, that means the diverse people in your community derive value from your organization. The people on the outside are the ones coming in.

We use participant demographics as a core measure at the MAH, where our goal is for museum participants to reflect the age, income, and ethnic diversity of Santa Cruz County. We compare visitor demographics to the county census, which serves as our measuring stick, and we set our strategy based on the extent to which we match, exceed, or fall short of county demographics.
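To make the mechanics concrete, here is a minimal sketch of that comparison in Python. The category names and percentages are invented for illustration; they are not actual MAH or Santa Cruz County figures.

```python
# Compare participant shares to county census shares, category by category.
# All numbers below are placeholders, not real MAH or county data.
county_census = {"Latino": 0.34, "White": 0.58, "Asian": 0.05, "Other": 0.03}
participants = {"Latino": 0.18, "White": 0.74, "Asian": 0.05, "Other": 0.03}

for group, county_share in county_census.items():
    participant_share = participants.get(group, 0.0)
    gap = participant_share - county_share  # negative means under-engaged
    if gap < -0.02:
        status = "under-engaged"
    elif gap > 0.02:
        status = "over-indexed"
    else:
        status = "roughly matched"
    print(f"{group:<8} county {county_share:4.0%}  participants {participant_share:4.0%}  {status}")
```

The census acts as a fixed yardstick: sign-in or survey data from any program can be scored against the same shares, which is what makes a mismatch (and progress on it) visible year over year.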

Is this overly reductive? Possibly. There are at least four arguments against it:

Serving "everyone" shouldn't be the goal. I understand this argument, but I think it's suspect when it comes to demographics (especially income and race/ethnicity). Organizations can and should target programs to welcome different kinds of people for different kinds of experiences. But should those differences be rooted in participants' race or income level? Would anyone say with a straight face that it's OK to exclude people based on the color of their skin or the balance in their bank account? I don't think this holds up.

People are more than their demographics. I agree with this argument, but in my experience, it doesn't invalidate demographic measurement. For years, we focused at the MAH on non-demographic definitions of community, seeking to engage "makers" or "moms seeking enrichment for their kids" as opposed to "whites" or "Latinos." I believe that there are many useful ways to define community beyond demographics. BUT, when we actually started measuring demographics at the MAH a few years ago, we saw that we were engaging the county's age and income diversity... but not the county's ethnic diversity. How could we credibly argue that this wasn't a serious issue for us to address? Was it reasonable to imagine that Latina moms didn't want enrichment as much as their white counterparts? When we saw our race/ethnicity mismatch with the county, we started taking action to welcome and include Latinos. We changed hiring practices, programming approach, collaborator recruitment, and signage. Taking those actions led to real results, helping us get closer to our participants matching the demographics of our county.

Participants matching your community's demographics is insufficient. This is an argument I'm still grappling with. It's an argument advocating for equity instead of equality. Many cultural resources are disproportionately available to affluent, white, older adults. So, to advance equity, your organization should strive to exceed community demographics for groups that may be marginalized or excluded from other cultural resources. This argument encourages organizations to strive for a demographic blend that over-indexes younger, lower-income, more racially diverse participants. This argument is also often linked to related arguments that changing participant demographics without addressing internal demographics of staff and board is inadequate and potentially exploitative. I'm torn on this too. In my experience, you can't achieve community impact without internal organizational change. But the internal changes are a means, not an end. I wouldn't use internal indicators to measure whether we succeeded in reaching community goals.

Attendance is not the same as impact. I'm torn about this argument too. On the one hand, showing up is not a particularly powerful indicator of impact. You don't really know why the person showed up or what they got out of the experience. On the other hand, on a basic level, attendance is the clearest demonstration that someone values your organization. They're only going to invest their time, money, and attention if they think they'll get something worthwhile out of the experience. Attendance may not be a signifier of deep impact, but it is the clearest way that people tell you whether they care or not about your offerings.


What do you think? Are participant demographics a worthy bottom-line indicator of success? Or is another measure more apt?



Wednesday, June 24, 2015

ASKing about Art at the Brooklyn Museum: Interview with Shelley Bernstein and Sara Devine


I’ve always been inspired by the creative ways the Brooklyn Museum uses technology to connect visitors to museum content. Now, the Brooklyn Museum is doing a major overhaul of their visitor experience--from lobby to galleries to mobile apps--in an effort to “create a dynamic and responsive museum that fosters dialogue and sparks conversation between staff and all Museum visitors.” This project is funded by Bloomberg Philanthropies as part of their Bloomberg Connects program.

I’ve been particularly interested in ASK, the mobile app component of the project. The Brooklyn team has been blogging about their progress (honestly! frequently!). To learn more, I interviewed Brooklyn Museum project partners Shelley Bernstein, Vice Director of Digital Engagement & Technology, and Sara Devine, Manager of Audience Engagement & Interpretive Materials.

What is ASK, and why are you creating it?

ASK is a mobile app which allows our visitors to ask questions about the works they see on view and get answers--from our staff--during their visit.

ASK is part of an overall effort to rethink the museum visitor experience. We began with a series of internal meetings to evaluate our current visitor experience and set a goal for the project. We spent a year pilot-testing directly with visitors to develop the ASK project concept. The pilots showed us visitors were looking for a personal connection with our staff, wanted to talk about the art on view, and wanted that dialogue to be dynamic and speak to their needs directly. We started to look to technology to solve the equation. In pilot testing, we found that enabling visitors to ASK via mobile provided the personal connection they were looking for while responding to their individual interests.

Are there specific outcome goals you have for ASK? What does success look like?

We have three goals.

Goal 1: Personal connection to the institution and works on view. Our visitors were telling us they wanted personal connection and they wanted to talk about art. We need to ensure that the app is just a conduit that helps that connection take place.

Working with our team leads and our ASK team is really critical in this--we’ve seen that visitors want dialogue to feel natural. For example, staff responses like “Actually, I’m not really sure, but we do know this about the object” or encouragement like “That’s a great question” have helped make the app feel human.

Goal 2: Looking closer at works of art. We’d like to see visitors getting the information they need while looking more closely at works of art. At the end of the day, we want the experience to encourage visitors to look at art, and we want screens put to the side. We were heartened when early testers told us they felt like they were looking more closely at works of art in order to figure out what questions to ask. They put down the device often, and they would circle back to a work to look again after getting an answer--all things we verified in watching their behavior, too.

Moving forward, we need to ensure that the team of art historians and educators giving answers is encouraging visitors to look more closely, directing them to nearby objects to make connections, and, generally, taking what starts with a simple question into a deeper dialogue about what a person is seeing and what more they can experience.  

Goal 3: Institutional change driven by visitor data. We have the opportunity to learn what works of art people are asking about, what kinds of questions they are asking, and what observations they are making in a more comprehensive way than ever before. This information will allow us to have more informed conversations about how our analog interpretation (gallery labels, for example) is working and to make changes based on that data.

So, success looks like a lot of things, but it’s not going to be a download rate as a primary measure. We will be looking at how many conversations are taking place, the depth of those conversations, and how much conversational data is informing change of analog forms of interpretation.  

You’ve done other dialogic tech-enabled projects with visitors in the past. Time delay is often a huge problem in the promise of interaction with these projects. Send in your question, and it can be days before the artist or curator responds with an answer. ASK is much more real-time. As you think about ASK relative to other dialogic projects, is timeliness the key difference, or is it something else entirely?

How much “real time” actually matters is a big question for us. Our hunch is it may be more about how responsive we are overall. Responsive means many things--time, quality of interaction, personal attention. It’s that overall picture that’s the most important. That said, we’ve got a lot of testing coming up to take our ASK kiosks--the iPads you can use to ask questions if you don’t have or don’t want to use your iPhone--and adjust them to be more a part of the real-time system. Also, now that the app is on the floor we’re testing expectations around response time and how to technically implement solutions to help. There’s a lot to keep testing here and we are just at the very beginning of figuring this out.

That’s really interesting. If the conversations are about specific works of art, I would assume visitors would practically demand a real-time response. But you think that might not be true?

In testing, visitors were seen making a circle pattern in the galleries. They would ask a question, wander around, get an answer and then circle back to the work of art. Another recent tester mentioned that the conversation about something specific actually ended in a different gallery as he walked, but that he didn’t mind it. In another testing session, a user was not so happy she had crossed the gallery and then was asked to take a picture because the ASK team member couldn’t identify the object by the question; she didn’t want to go back. This may be one of those things people feel differently about, so we’ll need to see how it goes.

If we are asking someone to look closer at a detail (or take a photograph to send us), we’ll want to do that quickly before they move on, so there’s a learning curve in the conversational aspect that we need to keep testing. For instance, we can help shape expectations by encouraging people to wander while we provide them with an answer and that the notifications feature will let them know when we’ve responded.

Many museums have tried arming staff with cheerful “Ask me!” buttons, to little effect. The most common question visitors ask museum staff is often “Where is the bathroom?” How does ASK encourage visitors to ask questions about content?

Actually, so far we’ve had limited directional, housekeeping type questions. People have mostly been asking about content. Encouraging them to do more than ask questions is the bigger challenge.

We spent a LOT of time trying to figure out what to call this mobile app. This is directly tied into the onboarding process for the app--the start screen in particular. We know from user testing that an explanation of the app’s function on the start screen doesn’t work. People don’t read it; they want to dive right into using the app, skimming over any text to the “get started” button. So how do you convey the functionality of the app more intuitively? Boiling the experience down to a single, straightforward call-to-action in the app’s name seemed like a good bet.

We used “ask” initially because it fit the bill, even though we knew by using it that we were risking an invitation for questions unrelated to content--“ask” about bathrooms, directions, restaurants nearby--particularly when we put the word all over the place, on buttons, hats, signs, writ large in our lobby.

Although “ask” is a specific kind of invitation, we’re finding that the first prompt displayed on screen once users hit “get started” is really doing the heavy lifting in terms of shaping the experience. It’s from this initial exchange that the conversation can grow. Our initial prompt has been: “What work of art are you looking at right now?” This prompt gets people looking at art immediately, which helps keep the focus on content. We’re in the middle of testing this, but we’re finding that a specific call-to-action like this is compelling, gets people using the app quickly and easily, and keeps the focus on art.



Some of the questions visitors have about art are easily answered by a quick google search. Other questions are much bigger or more complex. What kinds of questions are testers asking with ASK?

It’s so funny you say that because we often talk about the ASK experience specifically in terms of not being a human version of Google. So it’s actually not only about the questions we are asked, but the ways we respond that open dialogue and get people looking more closely at the art. That being said, we get all kinds of questions--about details in the works, about the artist, why the work is in the Museum, etc. It really runs the gamut. One of the things we’ve noticed lately is people asking about things not in the collection at all--like the chandelier that hangs in our Beaux-Arts Court or the painted ceiling (a design element) in our Egypt Reborn gallery.

Visitors’ questions in ASK are answered by a team of interpretative experts. Do single visitors build a relationship with a given expert over their visit, or are different questions answered by different people? Does it seem to matter to the visitors or to the experience?

The questions come into a general queue that’s displayed on a dashboard that the ASK team uses. Any of the members of the team can answer, pass questions to each other, etc. Early testers told us it didn’t matter to them who was answering the questions, only the quality of the answer. Some could tell that the tone would change from person to person, but it didn’t bother them.

We just implemented a feature that indicates when a team member is responding. It’s similar to the three dots you see in iMessage when someone on the other end is typing, but our implementation is closer to gchat: the app displays “[team member first name] is typing.” In implementing the feature this way, we want to continually bring home the fact that the visitor is exchanging messages with a real person on the other end (not an automated system). Now that we’ve introduced names, it may change expectations that visitors have about hearing from the same person or, possibly, wanting to know more about who is answering. This will be part of our next set of testing.

The back-of-house changes required to make ASK possible are huge: new staff, new workflows, new ways of relating to visitors. What has most surprised you through this process?

This process has been a learning experience at every point... and not just for us. As you note, we’re asking a lot of our colleagues too. The most aggressive change is more about process than product. We adopted an agile planning approach, which calls for rapid-fire pilot projects. This planning process is a completely new way of doing business and we have really up-ended workflows, pushing things through at a pace that’s unheard of here (and likely many other museums). One of the biggest surprises has been not only how much folks are willing to go-with-the-flow, but how this project has helped shape what is considered possible.

In our initial planning stages, we would go into meetings to explain the nature of agile and how this would unfold, and I think many of our colleagues didn’t believe us. We were talking about planning and executing a pilot project in a six-week time span--absolutely unreal.

The first one or two were a little tough, not because folks weren’t willing to try, but because we were fighting against existing workflows and timelines that moved at a comparatively glacial pace. The more pilots we ran and the more times we stepped outside the existing system (with the help of colleagues), the easier it became. At some point, I think there was a shift from “oh, Shelley and Sara are at it again” to “gee, this is really possible in this timeframe.”

After two years of running rapid pilots and continuing to push our colleagues (we’re surprised they’re still speaking to us sometimes!), we’ve noticed other staff members questioning why projects take as long as they do and if there’s a better way to plan and execute things. That’s not to say that they weren’t already having these thoughts, but ASK is something that can be pointed to as an example of executing a project--on a large scale and over time--in a more nimble way. That’s an unexpected and awesome legacy.

Thanks so much to Shelley and Sara for sharing their thoughts on ASK. What do you want to ask them? They will be reading and responding to comments here, and if you are excited by this project, please check out their blog for a lot more specifics. If you are reading this by email and would like to post a comment, please join the conversation here.

Wednesday, June 03, 2015

Learn to Love Your Local Data

Last month at the AAM conference, a speaker said, "we should all be using measures of quality of life to measure success at our museums."
I got excited. 
"We should identify a few key community health indicators to focus on."
I got tingly.
"And then we should rigorously measure them ourselves."
Ack. She killed the mood.

Many museums (mine included) are fairly new to collecting visitor data. Especially new to collecting data about broad societal outcomes and experiences. Why the heck would we be foolish enough to do it all ourselves?

The "we have to do it ourselves" mantra is one of the most dangerous in the nonprofit world. It promotes perfectionism. Internally-focused thinking. Inability to collaborate and share. Plus, it's expensive. So when we find we can't afford to do it ourselves, we throw up our hands and don't do it at all.

Here are three reasons to find and connect with community-wide sources of data instead of doing it yourself:

The data already exists.

Want to know the demographic spread of your county? Check the census. Want to know how many kids ate fruits and vegetables, or how many teens graduated high school, or how many people are homeless? The data exists. In some communities, it exists in different silos. In others, someone is already aggregating it. 

When we started more robust data collection at our museum, we wanted a community baseline. We thought about collecting it ourselves (stupid idea). Instead, we found the Community Assessment Project--an amazing aggregation of data from all over our County, managed by a wide range of stakeholders from health and human services. Not only do they aggregate existing data, they do a bi-annual phone survey to tackle questions like "have you been discriminated against in the last year?" and "what most contributes to your quality of life?" We got the data, and we got involved in the project. Now, instead of using our meager research resources to collect redundant data, we can springboard off of a strong data collection project that we access for free. 

You may not have a Community Assessment Project in your community, but you have something. Ask the health department. Ask the United Way. Someone is collecting baseline community data. It doesn't have to be you.

We're stronger together.

Imagine a community with 50 different organizations working to reduce childhood obesity. Would you rather see them each pick a measure of success that is idiosyncratic to their program, or join forces to pick a single shared measure of success?

If your museum is working to tackle a broad societal issue, you're not doing it alone. Your program may exist in its own bubble of the museum, but there are likely many organizations tackling the same big issue from different angles.

Each of you is stronger--in front of funders, in front of advocates, in front of clients--if you can work together towards one shared goal. Even if it doesn't map perfectly to your program, it's worth picking a "good enough" measure that everyone can use as opposed to a perfect measure that only works in your bubble.

For example, one of the outcomes in our theory of change that we care about is civic engagement. We want visitors to be inspired by history experiences at the museum to get more involved as changemakers in our community. Our Community Assessment Project already measures indicators of civic engagement like voting, writing to an elected official, and speaking at a public hearing. Are these the indicators we would choose in a bubble? Probably not. But are they more powerful because we have years of good countywide data about them? Absolutely.

Shared data builds shared purpose.

What happens when those 50 different organizations agree on one indicator for success in reducing childhood obesity? They get to know each other. They understand how their individual work fits into a larger picture. They build new partnerships, reduce redundancies in programming, and fill the gaps. They do a better job, individually and collectively, at tackling the big issue at hand.

That's what we should be using measurement to do. I can't wait to hear a story like this at a conference and fall in love with data all over again.

Are you working across your community to share key indicators of success? Share your story, question, or comment below. If you are reading this via email, you can join the conversation here.

Wednesday, March 25, 2015

Developing a Theory of Change, Part 1: A Logical Process

This is the first in a two-part series about the Santa Cruz Museum of Art & History’s new theory of change. This week, Ian David Moss and I are each writing blog posts about our collaborative process to develop a theory of change at the Santa Cruz Museum of Art & History. Check out his blog post on the Fractured Atlas site. Next week, I’ll share more about what is in our theory of change, and why.

For three years, we hit the gas at my museum—hard. We were pointed in a new direction and knew we had to push ourselves to expand our community impact.

Three years in, the dust settled on the many changes. We had tripled our attendance. Doubled our staff. Experimented, launched, and retired many programs and exhibition formats.

We decided it was time to shift from exploring to deepening. A little over a year ago, we received a three-year grant from the James Irvine Foundation to strengthen our commitment to community engagement. One of the first things we decided we had to do was to grow some roots under the new strategies at the museum. We embarked on a process of “naming and claiming” the work we do.

Where to start? We decided to develop three things:
  • Clear engagement goals that define how we do our work
  • A theory of change to connect what we do to the impact we seek
  • An engagement handbook to provide an overview of our goals, our theory of change, and the programs where they are manifest
I’ve written about the engagement goals before. I’ll write about the handbook soon. This blog post focuses on the theory of change and the process by which we developed it.

A theory of change is a model that connects the activities we do, to the outcomes they effect, to the impact we seek to create in the world.
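As a rough illustration of that structure (not the MAH's actual theory of change, which I'll describe in next week's post), the chain can be sketched as a simple data structure whose links are exactly the assumptions a review can probe. The entries below are placeholders loosely drawn from elsewhere in these posts.

```python
# A toy sketch of the activities -> outcomes -> impact chain.
# Placeholder entries only; not the MAH's actual theory of change.
theory_of_change = {
    "impact": "a stronger, more connected community",
    "outcomes": {
        "social bridging": ["exhibitions", "festivals"],
        "civic engagement": ["history programs", "school tours"],
    },
}

# Walking the model in both directions, as in the board/staff retreat exercise:
for outcome, activities in theory_of_change["outcomes"].items():
    for activity in activities:
        print(f"{activity} -> {outcome} -> {theory_of_change['impact']}")
```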

We wanted to build a theory of change for two reasons:
  1. Externally, we need strong, data-driven arguments for support. We can’t just say, “fund this exhibition and the community will grow stronger.” We have to prove it. Donors want to understand the logic of how their dollars will translate into impact. A strong theory of change can make that case.
  2. Internally, everyone needs to know what “good” looks like and how their work helps contribute to the overall goals of the organization. A clear theory of change helps staff make strategic choices at every level.
We didn’t know how to develop a theory of change. But we knew we wanted to be rigorous about it. So we contracted with Ian David Moss, of Fractured Atlas and Createquity fame, to shepherd us through the process. Besides being brilliant and skilled in this area, Ian came in with an outsider’s perspective, which really helped us get out of the mindset of what we THINK we do and shift into what is actually observable and track-able.

Here’s what we did:
  • Ian came to Santa Cruz, interviewed a bunch of staff, and drafted a very rough theory of change based on what he learned about our programming.
  • We did a board/staff retreat where we built theories of change in two directions: UP from our activities to our intended impact, and DOWN from the intended impact to the activities that fuel it. The “DOWN” side was the most interesting, because it helped us understand the role we could play in our desired impact—and the community partners we would need to engage and support to see the impact realized to its fullest.
  • We worked with Ian to revise the theory based on the retreat.
  • Ian did a social psychology literature review to understand the research grounding the connections we made from activities to outcomes to impacts. We identified areas where the connections were weak and where we have to do more research to ensure that the logic is sound.
  • We developed a final version of the theory in a wonky powerpoint flowchart model.
  • We worked with a fabulous illustrator (Crista Alejandre) to transform the flowchart into an inspiring graphic.
  • We started using the theory of change to focus our programs and partnerships, evaluate our work, and change the way we talk about what we do.
Here are some questions for Ian about his side of the experience working with us on this project. Next week, I’ll write about the actual content of the theory of change and how it is starting to impact our organization.

When you are working with an organization on a theory of change, how do you sort out the organization's aspirations from the reality of its current activities?

Ian: This is always one of the most challenging (and interesting) parts of the engagement. One of the reasons why I find theories of change valuable as a tool for this kind of conversation is that they are really good at making the chain of logic – or lack thereof – between an organization’s activities and goals really clear. That sets up a process where I map out what I perceive the connections to be, and then I run it by the organization to make sure that I’m understanding their thinking correctly. If I spot a place in the logic chain that doesn’t make sense to me, all I need to do is ask some probing questions about it. It could well be that I’ve overlooked something important, in which case I’ll add in whatever’s missing, or it could be that I’ve uncovered something the organization hasn’t thought of, which could spark a much-needed reassessment of what the strategy is or even what the real goal is. (This is exactly what happened to us at Createquity: our theory of change precipitated a global rethinking of our entire content and engagement strategy because we discovered a gap between what we were doing and what our aspirations were.) Either way, the theory of change makes the assumptions embedded in a strategy transparent to everyone and provides a way to put those assumptions to the test.

In our work together, we ended up looking primarily to social psychology research to develop a strong logical basis for the MAH’s theory of change. Do you often find that these projects take you outside of the “arts” field in terms of defining the logic that connects activities to outcomes to impacts?

It depends. I would definitely say that you guys are unusual in how you see non-arts and non-humanities research and practices as not just relevant but central to your work at the museum. But I’d venture to say that it’s a rare arts organization that can’t learn something from how things are done in the wider world, whether that means understanding how and why potential audience members are motivated to make the choices they do, or understanding the policy context for the community-level changes you’re hoping to see, or whatever. I think a very common mistake people make is to draw the frame too narrowly, to say “well, we don’t have any data on this exact thing that we’re looking for, so there’s no point in trying to answer that question.” The reality is that we have many tools to understand and to estimate the way the world works around us, and there are a lot of parallels and inferences to be drawn either from examples in analogous fields or from initiatives that have a general focus that includes the arts but isn’t specific to them.

What do you think is the most challenging part of developing a theory of change?

Different projects present different challenges, but one thing I’ve found to be consistent is that the theory of change process can end up drawing out major differences in thinking styles. There’s a certain type of person who’s really comfortable breaking ideas down into orderly, modular components and analyzing the connections between them. Then there are other folks who are not at all accustomed to thinking that way – they’re much more at home in an open-ended, anything-goes brainstorming session that encourages divergent thinking and untethered creativity. For those people, the process of creating a theory of change or logic model can very easily feel confining if you don’t set it up carefully. What I’ve found is that things go better if I make sure that nobody has unrealistic expectations placed upon them. A lot of people find it easier to have a conversation and then react to a model presented to them than be tasked with having to figure everything out themselves. On the other hand, other folks want to be super involved and that’s great too.

Any words of wisdom about how to build buy-in and encourage use once a theory of change is developed?

A really good way to do this is to include it in training materials for both current and new staff. The more that the theory of change gets talked about, the more likely it is to be used. You can also use it as a reference point for other institutional capacity-building things your organization is doing. So the MAH used it as the basis for a measurement framework for the organization. At Fractured Atlas, it was a key input for a new brand book we developed to guide our internal and external communications. It can be an attachment to grant applications or included in annual reports to donors. And it’s important that the theory of change be periodically revisited to make sure that it doesn’t reflect stale thinking. That all being said, I would emphasize that going into the process with the intention for a theory of change to be useful is the number one predictor of whether it will actually be useful. Furthermore, the best way to build buy-in for a theory of change is by giving people a voice in creating it. That’s why as much as possible I try to involve front-line staff as well as leadership in the process, so that it will feel resonant at all levels of the organization.

Thanks to Ian for collaborating on this process with us. If you are reading this via email and would like to share a comment or question, you can join the conversation here.

Wednesday, February 18, 2015

Data in the Museum: Experimenting on People or Improving Their Experience?

Every few months, a major news outlet does an "exposé" about data collection on museum visitors. These articles tend to portray museums as Big Brother, aggressively tracking visitors' actions and interests across their visit. Even as the reporters acknowledge that museums are trying to better understand and serve their visitors, there's a hint of menace in headlines like "The Art is Watching You."

We're trying to personalize. We're trying to adapt. We're trying to be responsive. But it can still come off as creepy. In a world of iteration, prototyping, and A/B testing, do we need a new ethical litmus test for social experimentation?

I came back to this question as I listened to the most recent RadioLab podcast about Facebook's mass social experiments on users. For years, Facebook has teamed up with social psychologists to perform social experiments through small changes to the Facebook interface. These experiments look a lot like those conducted in social psychology labs, with two big differences:
  • the sample sizes are many tens of thousands of times larger than those in the lab--and a lot more diverse across age, class, and geography. 
  • no one signs a form giving consent to participate. 
I thought this sounded great: better data, useful research. Turns out not everyone thinks this is a good way for us to learn more about humanity. Last year, there was a HUGE media kerfuffle when people were shocked to learn that they had been "lab rats" for Facebook engineers researching how the News Feed content could impact people's moods.

To me, this was surprising. Sure, I get the ick factor when my personal data is used as currency. But I know (mostly) what I'm buying with it. Facebook is a completely socially-engineered environment. Facebook decides what content you see, what ads you see, and your personal ratio of puppies to snow warnings. And now people are outraged to find out that Facebook is publishing research based on their constant tweaking. It's as if we are OK with a company using and manipulating our experience as long as they don't tell us about it.

It seems that the ethical objections were loudest when the intent of the experiment was to impact someone's mood or experience. And then I started thinking: we do that all the time in museums. We change labels based on what visitors report that they learned. We change layouts based on timing and tracking studies of where people go and where they dwell. We juxtapose artifacts to evoke emotional response. We tweak language and seating and lighting--all to impact people's experience. Do we need consent forms to design an experience?

I don't think so. That seems over the top. People come to the museum to enjoy what the invisible hands of the curators have wrought. So it brings me back to my original question: when you are in the business of delivering curated experiences, where is the ethical line? 

Consider the following scenarios. Is it ethical to...
  • track the paths people take through galleries and alter museum maps based on what you learn?
  • give people different materials for visitor comments and see whether the materials change the substance of their feedback?
  • cull visitor comments to emphasize a particular perspective (or suite of perspectives)?
  • offer visitors different incentives for repeat visitation based on behavior?
  • send out two different versions of your annual membership appeal letter to see which one leads to more renewals?
  • classify visitors as types based on behavior and offer different content to them accordingly?
I'd say most of these are just fine--good ideas, probably. I suspect we live in an era where the perceived value of experimentation outweighs the perceived weight of the invisible hand of the experimenter. Then again, I was surprised by the lab rat reaction to the Facebook experiments.
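For what it's worth, the membership-appeal scenario above is a classic A/B test, and the arithmetic behind "which letter leads to more renewals" is simple. Here is a minimal sketch in Python; the mailing sizes and renewal counts are invented for illustration, not drawn from any real appeal.

```python
from math import sqrt

# Hypothetical results from two versions of an annual membership appeal letter.
# These numbers are placeholders, not real renewal data.
sent_a, renewed_a = 1000, 112   # letter A
sent_b, renewed_b = 1000, 138   # letter B

rate_a, rate_b = renewed_a / sent_a, renewed_b / sent_b
print(f"Letter A renewal rate: {rate_a:.1%}")
print(f"Letter B renewal rate: {rate_b:.1%}")

# Standard pooled two-proportion z statistic, to gauge whether the gap is
# bigger than chance would explain (|z| > 1.96 ~ significant at the 5% level).
pooled = (renewed_a + renewed_b) / (sent_a + sent_b)
se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
z = (rate_b - rate_a) / se
print(f"Difference: {rate_b - rate_a:+.1%}, z = {z:.2f}")
```

The ethical question in this post isn't about the arithmetic, of course; it's about whether members would mind being sorted into A and B without being told.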

It's hard sometimes to differentiate what's an experiment on humans and what's an experiment to improve your work for humans. As the Facebook example shows, just claiming your intent is to improve isn't enough. It matters what the humans think, too. 

I guess that's what makes us more than lab rats--we can speak up and debate these issues. What do you think?

If you are reading this via email and would like to share a comment, you can join the conversation here.

Wednesday, February 04, 2015

Audience Demographics and the Census: Do We Have a Match?

When you look at this infographic, do you see a problem to be solved? A snapshot of the market for the arts? Or something else entirely?

About five years ago, I sat in a planning meeting for a museum that was undergoing a major renovation. The director boldly stated that one goal of the remodel was to reconnect with the community. What would success look like? The demographics of the museum visitors would match those of the city at large.

That vision always stuck with me. This goal seemed simple, clear, and important. Now, as a museum director, I'm thinking about that goal less abstractly and more concretely in terms of what a target audience can and should look like.

The first step is to know who is already engaging. Arts audiences, on average, are older, whiter, and more affluent than the American population. Supporting data comes from many corners, but primarily the National Endowment for the Arts (NEA). Since 1982, the NEA has conducted a Survey of Public Participation in the Arts. This survey focuses on attendance at traditional arts institutions--museums, theaters, symphony halls. The data gets sliced and diced in different ways: to explore motivations for participation, to look at trends over time, to dive into data for specific regions or sectors.

When I look at this data, I have one question: what's the target?

In your dream situation, who would participate in your organization? Here are three options:
  1. Everyone. The demographic profile of those engaged would match that of the nation/region/city. 
  2. A subset, targeted for their unique characteristics. That target could relate to ethnicity, or education level, or gender, or age. It could be chosen for reasons related to the institution's mission (for example, a focus on youth empowerment) or for reasons related to the market demographics (for example, a growing number of Latinos).
  3. A subset, self-selected by voluntary engagement. Those who want the experience, come. The demographics are what they are.
Most arts organizations, for a long time, focused on #3. With a few #2 programs sprinkled in. 

At our museum, we've started shifting to #1 as an aspirational goal. This is that vision of inclusion that inspired me years ago. We got our hands on our local census data (free and easy). When we collect demographic data about participants, we measure it against the census figures. 

This helps us with program planning: we know who we are "under-engaging" and can work to involve them. It helps with fundraising: we can talk knowledgeably about how our visitors line up to the age, income, and ethnic diversity of our County. 

But as we've continued working on #1, I started wondering about #2. What if there is a group that is particularly marginalized, underrepresented, or underserved when it comes to the arts? 

For example, there is good research suggesting that school field trips to art museums are disproportionately valuable for students from poor and rural backgrounds. Does this mean that we should try to make school tours disproportionately accessible to these students? If the opportunity for impact is greater, should we go there? If the cost of doing so is more, is it worth the price? 

We're also considering these questions with local data in mind. As we have gotten more involved in data initiatives in our county, we've learned about clear demographic divides in quality of life and enrichment opportunities among specific groups. We're debating whether we should try to "over-engage" some groups relative to the needs and resource allocation in our County. Is matching local demographics "enough"? Is it even realistic or sensible?

I realize that this post is riddled with question marks. I'm sincerely curious about how others are approaching these questions of audience demographics and targets for engagement. 

How do you think about these issues in your organization and your community?


If you are reading this via email and would like to share a comment, you can join the conversation here.

Wednesday, October 08, 2014

Is it Real? Artwork, Authenticity... and Cognitive Science

A farmer says he has had the same ax his whole life--he has only changed the handle three times and the head two times. Does he have the same ax?

This question launches Howard Mansfield's fascinating book about historic restoration, The Same Ax, Twice. Mansfield explores the sanctity and lineage of historic sites, from Japanese Shinto shrines (completely rebuilt 61 times in 1300 years), to igloos (rebuilt annually, oldest documented human dwelling), to the USS Constitution (80-90% rebuilt since it first sailed). 

He argues that these relics are stronger because of their reconstruction. As he puts it: 
So, does that farmer have the same ax? Yes. His ax is an igloo, and a Shinto shrine. He possesses the same ax even more than a neighboring farmer who may have never repaired his own ax. To remake a thing correctly is to discover its essence.
How does this question play out in museums? At the 2013 American Alliance of Museums annual conference, a group of exhibition designers explored authenticity in a session called Is it Real? Who Cares? They explored a huge range of museum objects and grey areas of "realness." They arbitrated replicas, reproductions, models, and props... and the context that enhances or detracts from the perception of authenticity.

While many of their examples came from history and natural science, one of my favorite examples is from art. There are three portraits of George Washington shown at the top of this post: the famous painting by Gilbert Stuart, a copy of it also painted by Gilbert Stuart, and a copy of it painted by his daughter Jane. 

Many artists work with assistants and reproducing processes. Are the reproductions less real than the original? If done by the same hand? If done by another hand? If done by a machine?

Turns out, science has something to say on the topic. 

Cognitive scientists at Yale and University of Chicago researched how people perceive "identity continuity" of an artwork when reproduced. They conducted a simple experiment:
  • People read a story about a painting called "Dawn" created by an artist. There were different versions of the story. In some, the artist produced the original painting. In others, he instructed one of his assistants to paint it.
  • In all versions of the story, the painting was irrevocably damaged by mold. Gallerists hired another artist to reproduce it. 
When asked whether the new work was still "Dawn," about 30% of people said yes--if the artist had made the original with his own hand. If an assistant had painted it, the percentage climbed to 40%+. It was as high as 50% if the original work was commissioned for a commercial (hotel) setting.

The researchers posit that the "personal touch" of the artist plays a key role in people's perception of an artwork's authenticity and value. By this notion, in the George Washington portrait example, Gilbert Stuart could make many copies of his own work at equal value, but his daughter's involvement dilutes its realness. That is, of course, unless you also factor in the "personal touch" of George Washington being in the room live during the portrait's creation--in which case Gilbert Stuart's own copies have diminished value as well. 

Whose soul is stamped on a work of art? On a tool? On a scientific specimen? What does it mean if we conflate realness with human essence?

If you care about authenticity, this research is pretty troubling. Sure, it shows that people value the original artist's hand in his/her work. But more than that, it shows that value is positively correlated with a perception of human touch. That perception can be faked--to both positive and negative ends. Artists imbue anonymous objects with fictional narratives to increase their value. Companies buy up long-lived brands to add a human story to their wares. Spiritualists contact the dead.

In museums, we care about both perceived authenticity and real authenticity. We want the power of the story--and the facts to back it up. This can come off as contradictory. We want visitors to come experience "the real thing" or "the real site," appealing to the spiritual notion that the personhood in the original artifact connotes a special value. At the same time, we don't always tell folks that what they are looking at is a replica, a simulation, or a similar object to the thing they think they are seeing. 

Some of the museum exhibitions that feel the most real are composite reconstructions of reality--true stories told well, with fake bits supporting the narrative. Some museum experiences can be more powerful because of the freedom that replicas afford. And when it comes to art, a forced focus on "the real thing" can mean less access to cultural artifacts. Were those plaster cast collections of the 1800s really hurting people?  

In the Is It Real? conference session, participants ranked a series of case studies of ambiguous museum artifacts from "real" to "fake," from "works" to "doesn't work." 

We live in a world where the commercialization of "fake" and "works" leads to some deceiving ends. The combination of "real" and "doesn't work" isn't a viable alternative. How do we get to "real" and "works" in the strongest way possible?

In other words: how do we remake the ax, tell the story of its reproduction, and honor its value every step of the way?

Wednesday, August 06, 2014

MuseumCamp 2014: Experiments in Social Impact Assessment

You run a program. It changes kids' lives. It builds more responsible environmental stewards. It strengthens your community.

How do you measure that?

This was the question at the heart of last week's MuseumCamp. MuseumCamp is an annual professional development event at the Santa Cruz Museum of Art & History in which teams of diverse, creative people work on quick and dirty projects on a big theme. This year, the theme was social impact assessment, or measuring the immeasurable. We worked closely with Fractured Atlas to produce MuseumCamp, which brought together 100 campers and 8 experienced counselors to do 20 research projects in ~48 hours around Santa Cruz.

We encouraged teams to think like artists, not researchers. To be speculative. To be playful. To be creative. The goal was to explore new ways to measure "immeasurable" social outcomes like connectedness, pride, and civic action.

The teams delivered. You can check out all twenty research projects here. While all the projects are fast, messy, and incomplete, each is like a small test tube of ideas and possibilities for opening up the way we do social impact research.

Here are three lessons I learned at MuseumCamp about research processes:
  • Look for nontraditional indicators. The JerBears group used "passing of joints" as an indicator of tribal affinity at a Grateful Dead tribute concert. The San Lorenzo Levee group used movement of homeless people as an indicator of social disruption. People x (Food + Place) looked at photos taken by children in a park to understand what contributed to their sense of community. Some of these experiments didn't yield anything useful, but some were surprisingly helpful proxies for complex human interactions.
  • Don't (always) call it a survey. Several groups created projects that were somewhere between engagement activity and research activity. Putting stickers on signs. Taking photos. Finishing a sentence mad-libs style. My favorite example of this was the One Minute Art Project group, which rebranded a fairly standard sticker survey into a "fast, fun, free and easy" activity. They had several participants who said "I wouldn't do a survey, but I like doing this."
  • Every active research method is an intervention. It's easy to look at the One Minute Art Project referenced above and see a red flag - maybe people self-select into this because it's "art" instead of "research." But I realized through this process that a survey solicitation is just as active an intervention as an engagement solicitation. There are different biases in who participates and why. But we shouldn't assume that any one research method is inherently "neutral" just because it is more familiar. Many of the most interventionist projects, like the Karma Hat, yielded really interesting information that was not visible in more passive research methods.

And here are three of my favorite findings from the experiments:
  • On depth of bridging among strangers. Two groups dove into the work at the MAH on social bridging - one with the Karma Hat game, and one with a photobooth project. The Karma Hat required people to wear a hat, write their name on it, and pass it on. It was hugely used. On the other hand, a photobooth where people were prompted to take a photo with a stranger they met at the museum was barely used. We saw that people were ready and willing to engage with strangers at the museum, but not necessarily to build relationships on those engagements. This is just a drop in the barrel of exploration we are doing around bridging at the museum.
  • On smartphone usage at natural sites. We Go to 11 studied the difference in mood change for people at a beautiful site overlooking the ocean relative to their smartphone use. They found that people with smartphones used them to go from a state of active negativity (tension, anxiety) to active positivity (energy, joy). People who didn't use smartphones at the same site tended to embody passive positivity (serenity, calm). Not a shocker, but a pretty interesting project.  
  • On the power of programming to spark civic action. This project, measuring the connection between empathy and action at an indigenous solidarity film screening, is full of useful insights. Read their report for thoughts about the challenges of participant observational research, the power of spiritual experiences, and the results of a compelling survey about ignition to action.
I encourage you to explore all the projects and see what insights might connect to your own work and research goals. You can comment on the projects too and share your own ideas. Please bear in mind that these were very quick projects and are more like research sketches than full evaluations.

What did you get out of MuseumCamp? If you didn't attend, what do you want to know more about?

Wednesday, March 12, 2014

The Truth about Bilingual Interpretation: Guest Post by Steve Yalowitz

You know those research studies that make you want to immediately change your practice in some way? I recently read the BERI report on bilingual labels in museums and was blown away by its findings. BERI was an NSF-funded three-year collaborative project co-led by Cecilia Garibay (Garibay Group), Steve Yalowitz (Audience Viewpoints Consulting), Nan Renner (Balboa Park Cultural Partnership, Art of Science Learning) and Carlos Plaza (Babel No More). This guest post was written by Steve Yalowitz, a Principal at Audience Viewpoints Consulting, who has a Ph.D. in Applied Social Psychology and has evaluated and researched informal learning experiences in museums and other visitor institutions for over 20 years.

Bilingualism in the U.S. is a controversial topic, and the same is true in museums. If someone asked you whether museums should or need to have text in more than one language, what would you say? You probably have an opinion, or could come up with one without too much effort. Maybe you are in a country that mandates multiple languages, or at an institution already committed to bi- or multi-lingual interpretation. However, based on my conversations and experiences with many museum professionals, my guess is that many of you are aware of the issue, may think it’s worth discussing, but have limited knowledge about the core issues surrounding bilingual interpretation.

I was co-author of a recently completed research study [PDF] funded by the National Science Foundation, the Bilingual Exhibit Research Initiative (BERI), which strove to better understand bilingual labels from the visitor perspective. This qualitative, exploratory study involved tracking and interviewing 32 Spanish-speaking intergenerational groups in fully bilingual exhibits at four different science centers/museums. We observed and audio recorded the groups, and conducted in-depth interviews in Spanish after they went through the exhibit, with a focus on what the bilingual experience was like for the group.

The BERI study really expanded our thinking about bilingual interpretation, even though we’d been studying the topic for years. One of the main affordances of bilingual interpretation, of course, is that it provides access to content. The BERI study shows that access to content—the most obvious benefit of bilingual labels—is just the tip of the iceberg. Bilingual interpretation expands the way visitors experience and perceive museums, shifting their emotional connection to the institutions.

Here are three affordances that may not be as top-of-mind when we think about bilingual interpretation:
  1. Code-switching – We found lots of evidence of effortless switching back and forth between English and Spanish. We saw kids and adults switch from English to Spanish not only mid-conversation but mid-sentence, both in the exhibition and in the interviews afterwards. Museum professionals often incorrectly assume that if we provide Spanish text for Spanish speakers, they stay in “Spanish mode.” The power of bilingual text is that it’s bilingual – it provides access in two languages, and code-switching lets you understand and express yourself from two different perspectives, with two sets of vocabulary. This was a huge affordance for bilingual groups, especially when some members could not understand English, but it also benefited members who were Spanish-dominant or fully bilingual. 
  2. Facilitation – We researched intergenerational groups, so it’s not surprising that many of the adults saw their role as facilitator as essential to their own and the group’s success in the exhibition. We confirmed what other label studies have previously found: that adults were more likely to read labels than kids. However, this study found that in bilingual groups adults were more likely to read in Spanish, while the kids were more likely to read in English. With Spanish labels available, adults were able to facilitate, guiding the conversations and interactions, showing their children, grandchildren, nieces and nephews where to focus and how to interact. Adults who were previously dependent on their children could now take the lead as confident facilitators. An added benefit of bilingual labels, even for those who could read in English, was that they didn’t feel slower or that they were holding up the group.
  3. Emotional reaction – This study found that the presence of bilingual interpretation had a profound emotional effect on the groups. Groups said they enjoyed the visit more, felt more valued by the institution, and many said having bilingual interpretation changed how they felt about the institution. In our field, if we focus on the emotional aspect of the experience, it’s typically around the content and what we’re hoping people feel when engaging with our exhibits. While some of the reactions were around engagement with content (as would be expected), many of them were really about feeling confident and comfortable–key factors for a satisfying and worthwhile visit. 
When asking whether bilingual interpretation is worth it, we’re often looking at it through the wrong lens. It shouldn’t be about whether it’s worth it for us as an institutional investment, but whether it’s worth it from the visitor perspective. Does it improve the visitor experience in a way that adds value to the visit, providing affordances that don’t exist in monolingual experiences? The answer, from the BERI study findings, is a resounding yes.

BERI was a three-year collaborative effort I worked on with Cecilia Garibay, Nan Renner and Carlos Plaza. When we received the award, we felt a great sense of opportunity and responsibility, since this was the first NSF-funded research study about bilingual families and their experiences in fully bilingual exhibitions. You can download the research report and find out about the research model, methods, analysis and implications for the field.

We saw this study not as the answer to the field’s questions about bilingual interpretation, but as the start of a conversation around better understanding how it works. In the process, we found that it is a much richer and more complicated experience than even we thought. After a recent presentation about the findings, a museum professional told us that the study’s findings helped change how they think about bilingual interpretation. My hope is that some of you out there will continue this important work, and help change how I think about bilingual interpretation.