Thursday, October 18, 2007

Human + coLAB Experiment Post Mortem

Thanks to all who visited the coLAB and participated in the Human + collaborative experiment over the past ten days. For those who didn’t see it, this project was an open conversation about the development of a planned traveling exhibition on human enhancement technology (Human +). The exhibition is being developed by the New York Hall of Science, and I worked with its director, Eric Siegel, to initiate this project. The project is powered by a free web tool called Voicethread. To view the conversation, turn on your speakers and click the play button below. There’s about 30 minutes of content here, but you can flip through the slides and voices as you see fit.

[Embedded Voicethread player]

In terms of numbers, this collaboration was a success. 206 unique people from 131 cities all over the world viewed the site 358 times. There were 54 comments made by 17 people. The project was blogged about on three sites, including by Beth Kanter on her highly regarded non-profit social media site, Beth’s Blog.


Logistically, it was simple. It took Eric and me about 2 hours each to get the site up and running (content plus distribution plan). We each spent another 2 hours throughout the week checking in on the voicethread and responding to comments. There were no financial costs, and there were no problems with spam or inappropriate comments. This was an unmoderated experiment, though I did add additional slides halfway through to open more avenues for contribution.
But impact is what really counts.

Here are some observations from this experiment, gleaned from my impressions and yours:


A lot of you like this technology. Several people were impressed by the sound quality, the personal nature of voice, and the ease of use, and a few indicated that they would use Voicethread in their own institutions. Some of you were more fascinated by the demonstration of the technology than by the specific content (which is fine!).


Participation was high. On this blog, about 0.5% of the people who read a given post comment on it. On the voicethread, 8.5% of those who viewed it made comments, and many came back a second time to see how it had evolved. The participants were diverse, ranging from museum exhibit developers to NPR accessibility engineers to content experts to e-learning professionals. There was some emergent behavior in which content experts previously unknown to Eric or me offered their support to the exhibition.


There was an inverse relationship between time of first view and participation. Participation dropped significantly after the first four days. The conversation reached a critical mass of participants quickly; after that point, many people emailed me to say that it felt unwieldy, or that they perceived it as something already completed. It's hard to browse through lots of audio. As one person put it, “it felt like watching a disjointed play.” There seems to be a sweet spot where just a few people have contributed and the conversation still feels open to you. With too many, it feels overwhelming, or your contribution feels unneeded. Later in the process, it’s easier to look at the voicethread and feel that enough has already been said, which promotes lurking over participating.


The content was interesting, but not always what was asked for. Some (including the creators of the technology) found it varied and fascinating. But there was no easy way to spin off individual “threads” of conversation on a single slide, so a divergent (interesting) point brought up by a couple of people became hard to follow. The content stayed fairly surface-level, though many interesting comments, both personal and professional, were contributed.

The purpose wasn’t totally clear. While Eric and I actively responded to other contributors, I think we could have done more to give people explicit challenges or goals so they could apply themselves concretely to solving a problem. The problem given, related to collaboration, was somewhat open-ended and proved less appealing than the Human + controversies themselves.

There was no clear way to identify the people speaking beyond their name, image, and voice. A few people commented that it would have been nice to see some basic information about each speaker’s expertise and professional interest in the topic. I also would have liked an update function, so that people (myself included) could be notified when a new comment was added to the stream.
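
Voicethread didn’t offer this, but even a simple feed-polling script could do the job. Here’s a bare-bones sketch in Python of the kind of notifier I mean; it assumes the thread exposed an RSS feed of comments, and the feed URL below is hypothetical:

    # Bare-bones comment notifier: poll a feed and report entries we haven't seen.
    # Assumes the thread exposes an RSS feed of comments; the URL is hypothetical.
    # Requires the third-party feedparser library (pip install feedparser).
    import time
    import feedparser

    FEED_URL = "http://voicethread.example.com/thread/123/comments.rss"  # hypothetical

    def watch(feed_url, interval=600):
        """Check the feed every `interval` seconds and print unseen comments."""
        seen = set()
        while True:
            for entry in feedparser.parse(feed_url).entries:
                entry_id = entry.get("id") or entry.get("link")
                if entry_id not in seen:
                    seen.add(entry_id)
                    print(f"New comment: {entry.get('title', 'untitled')}")
            time.sleep(interval)  # wait before polling again

    watch(FEED_URL)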


I left the experiment with a few core questions:

  • How can we encourage sustained participation throughout the life of a project, rather than just at its outset? How do we encourage new users to join partway through?
  • How can we guide collaboration towards a goal? What’s the balance between inviting people to talk about what they want versus what you want?
  • What platforms or technologies humanize rather than dehumanize the process?

What are your questions or comments? I look forward to doing more experiments with other technologies in the future. If you or your institution wants to get involved, let me know.

1 comment, add yours!:

Jason said...

Hearing people's voices and being able to see the little stills of their faces is awesome!
Visually, it's very cool.

Awesome work!

Here're my responses to the questions you pose:

1. Some folks get their alerts through email, some like myspace msging, some comb their aggregators, but most use email. So maybe short, periodic email progress reports will keep 'em participatin'.
& Rewards! Even volunteers need something in return for their energy!

2. Definitely limit the voice time. It's cool, but contributors need to be focused and on-point. If this project had gone on for a couple more days, the process of sifting through commentary would have become too enormous a task to expect of volunteer participants.
Maybe some additional pages or rollover boxes for text elaboration & sources, so a user can gauge their commitment?

3. I've been thinking a bit about the architecture... brainstorming ideas for changes...
Like what if the participants could self-organize in some way, direct their soundbites towards particular goals--the project's head, its hands, its heart, or what-have-you. Research suggestions into one box, design considerations in another.
What if users could enter a couple of ratings for each contribution? They are asked to rate themselves and/or each other on a number of attributes. This idea has 4 stars for simplicity and 2 for outlandishness. Now ideas can be organized across a grid, plotting "Simplicity <-> Complexity" against "Conservative <-> Outlandish." Or whatever four tendencies are deemed most important in the quest for ideas!
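
Something like this, maybe (the idea names and ratings below are made up, and I'm assuming Python with matplotlib just to sketch the grid):

    # Rough sketch of the idea-rating grid; all idea names and ratings are made up.
    import matplotlib.pyplot as plt

    # Each contribution gets two averaged 1-5 ratings from participants:
    # simplicity (1 = complex, 5 = simple) and outlandishness
    # (1 = conservative, 5 = outlandish).
    ideas = {
        "limit voice time": (4.5, 1.5),
        "email progress reports": (4.0, 2.0),
        "self-organizing boxes": (2.5, 3.5),
        "the rating grid itself": (2.0, 4.0),
    }

    fig, ax = plt.subplots()
    for name, (simplicity, outlandishness) in ideas.items():
        ax.scatter(simplicity, outlandishness)  # one point per idea
        ax.annotate(name, (simplicity, outlandishness))  # label the point

    ax.set_xlabel("Complexity <-> Simplicity")
    ax.set_ylabel("Conservative <-> Outlandish")
    ax.set_xlim(0.5, 5.5)
    ax.set_ylim(0.5, 5.5)
    plt.show()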