The Rise of #CitizenScience


By Kathleen Toerpe

From tracking the migration of songbirds to discovering new celestial bodies, amateur scientists may help fill the need for more researchers. Beyond helping “real” scientists collect data, amateurs are becoming better trained, better equipped, and better prepared to contribute to tomorrow’s breakthroughs.

Imagine yourself discovering another Earth. Or identifying an unknown underwater species. Helping the blind to see. Finding a cure for cancer. Proving the existence of extraterrestrial intelligent life. Such visions motivate hundreds of thousands of ordinary people—of all ages, from around the world and all walks of life—to participate in the phenomenon known as “citizen science.”

Also called “crowdsourced science”—or, more formally, “public participation in scientific research”—citizen science is “the systematic collection and analysis of data; development of technology; testing of natural phenomena; and the dissemination of these activities by researchers on a primarily avocational basis,” according to OpenScientist.org. It has evolved from hobby to serious science and is on its way to becoming the favored twenty-first-century model for conducting large-scale scientific research.

First and foremost, citizen science answers the scientific community’s need for more researchers. Academic scientists get an army of motivated and dedicated volunteers who are trained in basic data identification, collection, interpretation, and analysis. Historically, this data collection has been local and done in person, as in the Audubon Society’s popular annual Christmas Bird Count or the highly successful Wells Cooke study of seasonal bird migrations—a 50-year project that yielded an impressive 6 million individual records dating back to the 1880s. Environmental organizations like the Nature Conservancy also have long encouraged their members to volunteer for hands-on, science-based projects that further their policy goals.

Backyard bird studies are still a core interest of many citizen scientists, and Cornell University’s Ornithology Lab has been a leader in shaping modern protocols for citizen science project proposals, training, reporting, and evaluation. However, contemporary projects are decidedly more digital.

SETI@home, one of the first digital citizen science projects, taps into individual users’ computer downtime to search radio telescope data for signals of extraterrestrial communication. This type of “distributed computing” has largely been replaced by more interactive projects.

Tapping into the Science Crowd

The entire citizen science movement has taken on a rather entrepreneurial feel, with the focus on science as crowd-centered and open-sourced. Most citizen science research programs are sponsored by a university or academic institute, frequently in collaboration with a hosting platform that serves as a gateway to individual projects. Yale University’s partnership with the Zooniverse platform is one of the major models here: Zooniverse handles log-in, authentication, project promotion, and visitor tracking for Yale’s Planet Hunters.

Many of today’s most popular projects are conducted start-to-finish online, with volunteers acting as gamers who “win” by finding patterns hidden in the data sets. These massive amounts of aggregate raw data are the product of data-generator behemoths like NASA’s Kepler Mission, the Sloan Digital Sky Survey, and University of California at Santa Cruz’s Genome Browser.

Locale-based projects might involve research at a local lake or wildlife sanctuary, one’s workplace, backyard, or even one’s own home. These projects are more heterogeneous and closed-ended, while mass online research projects, such as astronomy-based Galaxy Zoo, SETILive, and Planet Hunters, and biological projects, such as Foldit and Phylo, are ongoing with an increasingly standard and predictable format. In those projects, volunteers are offered a short tutorial and then shown a series of data sets to evaluate.

It seems we humans are uniquely suited to these tasks. Our brains can discern patterns in raw data sets that are not picked up by computer algorithms, especially novel patterns or multiple, complex ones. A data set might consist of radio frequency signals picked up by SETILive’s Allen Telescope Array, searching for signs of extraterrestrial life, or of variations in starlight, recorded by the Kepler telescope on Planet Hunters, that are produced when an extrasolar planet, or exoplanet, passes in front of its star. Volunteers tag the pattern or anomaly; it is then farmed out to other volunteers for verification and comment, with academic scientists reviewing and interpreting the final results. Collaboration is facilitated through online forums in which participants report unusual findings and speculate on their significance.

Adding elements of gaming to the projects, such as tokens, awards, and points, reinforces motivation, as does recognizing individual effort both on the project Web sites and in the formal papers that announce the discoveries to the academic community for peer review. Going even deeper into the gaming metaphor, the bio-project Foldit boasts top “player boards” and encourages players to “earn” tools to build amino acids into different configurations of folded proteins.

But this is all more than fun and games. For research scientists, who are increasingly long on data and research agendas but short on grant funding and paid degreed assistants, motivating and utilizing citizen scientists to conduct either locale-based or online research has proven to be an efficient and pragmatic alternative to employing cadres of traditional graduate assistants and postdoctoral researchers.

What Citizen Scientists Get from Doing Science

Citizen scientists benefit from knowing they’ve helped advance scientific research and knowledge in a field they are passionate about. Plus, they get to be part of the thrill of discovery.

Galaxy Zoo, which uses volunteers to classify the shapes and characteristics of galaxies, surveyed its members in 2010 and determined that they were motivated by an interest in astronomy, the personal gratification of doing original research, and a sense of awe at the vastness of space.

The contributions of these citizen scientist volunteers can both make a difference and make headlines, as journalist Michael Belfiore recently found when he profiled Jaime Nomen, a Spanish dental surgeon. Nomen used telescopic cameras at the La Sagra Observatory to identify the now-famous asteroid 2012 DA14, which glided to within 17,300 miles of Earth on February 15, 2013. With a scant 1% of the nearly one million Near Earth Objects identified and tracked, there is an all-too-obvious need for more eyes on the skies.

Closer to Earth, there is a citizen science project for every appetite. The diverse range of projects vying for volunteers is impressive and shows the extent to which the academic community has embraced citizen science. Activities include:

  • Documenting numbers of ducks shot by hunters.
  • Measuring levels of industrial pollutants in local water supplies.
  • Monitoring individual microorganisms inhabiting one’s own body.
  • Recording seismic activity in individual homes and offices.
  • Classifying unidentified underwater seafloor organisms.
  • Monitoring local automobile traffic patterns.
  • Mapping retinal connections to understand the mechanisms behind human sight.
  • Recording the number of local outdoor ice rinks to corroborate climate data with observable ice freeze.

Even such subjective projects as monitoring your baby’s laughter or the intelligence of your dog are available.

Late to the party but not to be left out, the humanities are testing the waters of volunteer research assistance. For example, the Ancient Lives project is cataloging and transcribing ancient Egyptian papyri scanned into an online database. Zooniverse is collaborating with the Imperial War Museum in London for a World War I project. Thus, for the citizen scientist, whether teenager or retiree or anywhere in between, citizen science offers a cafeteria of choices and the tantalizing hope of discovery.

Society, too, benefits from this partnership between professional scientists and dedicated amateurs. There is a global demand for increased proficiency in the STEM disciplines (science, technology, engineering, and mathematics). These are precisely the fields most active in recruiting citizen scientists, boosted by initial successes with trained amateurs and stretched thin by mounting backlogs of raw data. With Web site links for volunteers to go deeper into the science behind the games, the projects provide hands-on STEM opportunities in tune with academia’s advocacy of lifelong learning for everyone.

Culturally, these projects tap into the themes of our everyday life. For instance, Cancer Research UK’s Cellslider project is mapping patterns in archived cancer cells, making the race for the cure everyone’s race. Politically, a more scientifically literate electorate might, in democratic countries, result in more vocal support for science initiatives and science-minded candidates at the ballot box.

Overall, from its humble beginnings in bird-watching and environmental activism, citizen science has proven itself to be a win-win-win formula for society, scientists, and the volunteers themselves.

Managing Scientific Conduct and Methodology

Significant challenges lie ahead, however, if citizen science is to reach its full potential in academia and extend eventually to the private sector. With the upsurge in the number and breadth of citizen science projects come concerns about standards of competency, ethics, assessment, and oversight. Locale-based projects, especially, are difficult to monitor for accuracy and integrity of data. Don’t we all think our baby or dog is cuter and smarter than the average? Subjectivity in data collection can derail the best of scientific studies and can be difficult to avoid, even for veteran researchers.

These concerns cut across project lines, and the most efficient solutions may be ones that train and certify volunteers in the overall conduct and methodology of science rather than in specific project content. Two-year community and technical colleges could have a prominent role here, creating Citizen Science 101 mini-courses that could teach the scientific method and the ethical practice of scientific inquiry through participation in specific citizen science projects.

New York’s Bard College sets a broad example for this with its new “Citizen Science” requirement. All Bard freshmen take a three-week intensive introduction to the scientific method, regardless of their anticipated major field. Coursera, Udacity, and edX, emerging leaders in designing massive open online courses (MOOCs), could also contract with gateway sites to provide online certification training that would be simultaneously accessible to tens of thousands of people.

Beyond training, recruitment and retention continue to be problems for projects committed to ongoing research. Zooniverse’s portal has hosted more than 740,000 citizen science visitors from 196 countries since its launch in 2007, but half of those volunteers completed only a few data sets before moving on.

That disappointing retention rate from one of the movement’s largest stakeholders may mean that visitors are more curious than committed, or that they are merely sampling different projects to find a perfect fit. Not a bad thing, necessarily, but it does mean professional staff need to focus much of their time on attracting and retaining volunteers, instead of analyzing data results, formulating conclusions, and disseminating research for review.

Interestingly, ornithology’s citizen scientists are a committed lot; volunteers in the Breeding Bird Survey averaged eight years of involvement, with 10% still active after 25 years. This likely reflects the more established nature of these projects, the central role of organizations like the Audubon Society, and ornithology’s longtime success in attracting backyard bird watchers. If that is the case, then the retention numbers in other projects may stabilize after visitors each find their niche, settle down, and get to work (or start playing!) and as the overall movement matures.

Recruiting and retention, however, would be easier if project designers had a firmer understanding of the average, or even ideal, citizen scientist to attract. Preliminary—and admittedly incomplete and anecdotal—demographic profiling shows citizen scientists to be predominantly middle income and having completed some college. Excluding students who complete citizen science projects as part of classroom assignments, a significant number of volunteers are middle-aged or older, with college degrees in fields unrelated to the projects they are involved in. In the United States, the demographic bubble of baby-boom retirees, many now active as citizen scientists, will continue to pad the numbers, but as this cohort thins, participants may be harder to find. The gaming platform of newer projects may help attract younger participants.

Compounding the problem is the fact that some citizen scientists volunteer on multiple projects concurrently, so a marked reduction in total participants will have a ripple effect across multiple projects. And, of course, since most projects are accessed and reported online—and many completely conducted online—on-demand computer access, basic Internet savvy, and high-speed availability are prerequisites. This effectively shrinks the number of potential volunteers from lower socioeconomic strata, whose work commitments may also leave limited free time.

Gateway providers could partner with community colleges, senior centers, and libraries to use citizen science projects as teaching tools in their computer-learning classes. This can reap benefits in STEM skills all around, as can tying citizen science projects to natural and social science curricula in colleges that serve working-class and returning students.

Still left out of the picture are interested recruits from developing countries where limitations in high-speed access may make online projects difficult—not to mention more underlying social and economic impediments. While the latter challenge is beyond the reach of citizen science, current experiments in creating mobile phone app-based projects may increase access in countries with established cellular networks.

The increasing complexity of the research projects themselves poses its own challenges. Phylo, which has volunteers look for patterns in human gene sequencing, requires more than the cursory background information provided, since terms like “heuristic” are not part of most people’s everyday vocabulary. While MOOCs can provide basic instruction in scientific methodology, training specific to each project will still be required; unnecessary jargon and pedantry are counterproductive.

Increasingly complex projects will require commensurately sophisticated user interfaces, sensing devices, and reporting platforms. As the citizen science movement becomes an industry, these will all be streamlined and standardized. Darlene Cavalier and Robert Cheetham recently reported at the Science Online 2013 conference that baby steps in this direction are under way. Experimental handheld data sensors such as Sensordrone and WildLab use mobile phone apps to collect and transmit data, providing portability and standardization. Other promising innovations in the pipeline include streamlined authentication and log-in protocols, upgraded user interfaces, and standardized online reporting forms.

As the movement gropes its way to consolidation, expect to see more projects come under the banner of fewer gateway providers. While lessening the entrepreneurial feel of the projects, it will allow for efficiency of administrative functions, especially for time-consuming recruiting, retention, and tracking efforts, and will provide a seamless “entry to exit” experience for volunteers. This, in itself, may help boost retention numbers.

The New Collaborative Relationship

The most vexing challenge facing citizen science, however, may be forging a new model of collaboration between scientists and volunteers. At what point do highly trained amateurs become dissatisfied with tallying and collecting data or discerning patterns in data sets and demand a more vocal role in project creation, administration, funding, and even final published output?

The Public Laboratory for Open Technology and Science (PLOTS) is already creating open-source “communities of shared interest” in which participants can create their own local environmental projects, sans scientists. CitSci also encourages individuals to create their own projects or to join the projects designed by other citizen scientists. And at what point are degreed scientists reduced to caretakers, displaced by people who, to be honest, are still comparative amateurs? Is there a pushback scenario to be avoided?

One solution is to invite citizen scientists into the inner circle. Historically, citizen science projects have been informally classed according to the objects, subjects, or locales studied. Projects were critter-based, environment-based, interaction-based, document-based, or space-based, depending on focus.

The Center for the Advancement of Informal Science Education (CAISE) has suggested an alternate ranking based on the depth of scientist-volunteer collaboration:

  • Contributory projects limit citizen scientists to more passive roles in data collection and identification. This represents the majority of available projects today.
  • Collaborative and co-created projects, on the other hand, allow for increasingly autonomous roles for volunteers to define hypotheses, set research protocols, interpret findings, and disseminate conclusions.

This is what CitSci is doing. This is where passive citizen science becomes Citizen Science 2.0. Remember Jaime Nomen, the dentist-turned-astronomer who discovered asteroid 2012 DA14? He is a team leader at La Sagra Observatory, writing grant proposals and receiving funding from The Planetary Society for upgraded equipment, and now spends more time hunting asteroids than pulling teeth.

So, while collaborative and co-created projects may require more up-front preparation, training, and oversight, the payoff to citizens, scientists, and society is tangible and lasting.

What’s Ahead for Citizen Science

Future opportunities await citizen science as the private sphere—e.g., pharmaceuticals, agribusiness, information technology, and energy industries—taps into public concerns over resource shortages, health, and privacy. The challenge will be whether for-profit corporations can inspire volunteers to donate the time and effort required to launch and sustain projects that immediately benefit shareholders rather than society at large. Private corporations will likely take a more proactive role in providing the standardized interfaces, reporting protocols, and project design templates currently lacking.

Moving forward, expect citizen scientists to openly advocate for their positions on science and technology issues, as individuals or through coordinated political action. Signs abound that citizen science is here to stay: The inaugural Public Participation in Scientific Research academic conference was held in August 2012, and the movement has debuted an online journal, the Citizen Science Quarterly.

These are important benchmarks of institutional legitimacy. Citizen scientists have earned a seat at the table and will want a voice as their research is translated into public policy. Darlene Cavalier calls it “participatory technology assessment,” and it integrates the citizen scientist’s roles as researcher, beneficiary, and taxpayer.

The ultimate thrill may still await. The United States Rocket Academy, a private space enterprise that has purchased 10 suborbital flights from XCOR Aerospace, has founded “Citizens in Space.” It is recruiting citizen scientists to create their own research experiments, or payloads, to be conducted in space. Better yet, it will be training ten citizen astronauts to fly those payloads.

From citizen to scientist to astronaut, all from the humble beginnings of backyard bird-watching! Citizen science has proven that there is power in numbers and in commitment. If two heads are better than one, and four heads are better than two, then the exploding phenomenon of citizen science has shown that hundreds of thousands are by far the best yet. And, who knows? You may even be the first to find ET!

About the Author

Kathleen Toerpe is a social and cultural historian who researches the interrelationship between outer space and human societies through the emerging field of astrosociology. She is currently involved with the 100 Year Starship effort to lay the groundwork for future interstellar travel and volunteers as a NASA/JPL Solar System Ambassador. She teaches social sciences at Northeast Wisconsin Technical College and has spent her spare time hunting for exoplanets and extraterrestrials as a citizen scientist. She can be found on Twitter at @ktoerpe.

This article is a preview of her presentation at WorldFuture 2013: Exploring the Next Horizon, the World Future Society’s conference in Chicago July 19-21, 2013.

Selected Resources for Citizen Scientists

Books

  • Citizen Science: Public Participation in Environmental Research, edited by Rick Bonney and Janis L. Dickinson (Comstock Publishing Associates/Cornell University Press, 2012).
  • Reinventing Discovery: The New Era of Networked Science by Michael Nielsen (Princeton University Press, 2011).

Can The “GitHub For Science” Convince Researchers To Open-Source Their Data?

Science has a problem: Researchers don’t share their data. A new startup wants to change that by melding GitHub and Google Docs.

By: Gabe Stein and Matt Mankins

Nathan Jenkins is a condensed matter physicist and programmer who has worked at CERN, the European Organization for Nuclear Research. He recently left his post-doc program at New York University to cofound Authorea, a platform that helps scientists draft, collaborate on, share, and publish academic articles. We talked with him about the idea behind Authorea, the open science movement, and the future of scientific publishing.

How did you come up with the idea for Authorea?

I had left my post-doc. I was going to leave for a year. I just planned on playing guitar and going climbing. Then Alberto [Pepe, Authorea cofounder] came down for a visit to New York. We had a long talk about open science. He mentioned an idea of starting Authorea, which we did not name at the time. The idea was really when you publish a paper, for example, if you write a simulation on the traffic in New York, or, in my case, you fit a superconductor spectrum, you have some source data. You have some analytical code, and you have a model. You represent that model with code, and you apply that to the data. That gives your best fit, which is what you publish.

Every scientist that I know has gone through and picked out the points on the curve with various tools to do this. It’s quite a tedious little task, but you have to use software to do it because people are not sharing their source data.

They’re not sharing the source data as data?

As data, no. They’re sharing the image. Sometimes you can ask them nicely. They don’t have to give it to you. Now, governments are putting pressure on government-funded research to share data. I know examples where people still give only the bare minimum. It’s not required.

Is there a reason they don’t share data?

It’s competitive advantage, I think. Overall, I think the incentives are wrong. In anything, if you get the incentives right, then the behavior’s going to come. If you want scientists to share all their data, the conversation with professors typically goes the same way: Professors don’t want to share their own data because if you went through this hard work of getting some data, then there’s some papers to be published with that data. You don’t want someone else to scoop you on the physics part where it’s just actually thinking about the results and writing up some reasonable opinions and publishing some papers. You want to keep that data as long as possible.

What generally happens is once you’re all finished with the data, it just sits on a hard disk somewhere and dies.

We hear a lot about the open science movement, which is about giving everyone access to this data. Why is it a bad thing that the data just sits there and dies?

This is bad because you might have an idea but you might not have this data set. You might want to look at combinations of data sets. There’s a lot of different things. Even though I’m the one who takes the data, there’s no reason that I have all the ideas on how to analyze it. If you just gave it away for free, it would already be an improvement because any further contributions are just icing on the cake if what we want is to know more in science.

Obviously, people are worried about advancing their career. People are not so much worried about advancing science, but making sure that they have a job.

That’s really happening?

I can give you an example in biophysics. There are maybe 10 groups in the world that matter in this very specific field: protein unfolding.

They don’t share any data between each other. There’s a lot of details in the experiments that don’t get spoken about. I say, “I know you don’t want to share your data. Absolutely not. Under no question. At the same time, I know you would love to have all the data from your competitors.” They say, “Yes, I would love that because then I could do this, this, and this.” Most just give this long list. They say, “Well, if I share my data, I’m giving up an advantage because now everybody else can profit. For me, I don’t get anything out of it.”

If this whole subgroup, if these 10 groups would just get together, which already happens at conferences, and say, “Okay, we’re going to do more sharing, and we need to figure out a way. If it’s your data and I use it, I need to cite you. Maybe your name needs to go on the paper even though you’re my competitor.” This doesn’t happen today, but it doesn’t mean it can’t. It just means that the current paradigm doesn’t incentivize sharing.

Is the appeal of accessing competitors’ data alone enough to get rid of that disincentive?

The question you get at every conference is, “How sensitive is this to the parameters?” You say, “It’s not sensitive at all.” I always find this exact optimal fit. No one ever believes you, but that’s the way it goes.

It’d be great if in the paper, they can see, “Here’s my fit. Here’s my parameters. Now you can change those parameters and refit it and see.” The incentive there is that I can publish now a better paper that’s going to be cited more, and it’s those citations that really matter. If I can set that up, now I’m going to get 30 more citations if I publish an interactive figure, which requires the source data.

So how does Authorea address these issues?

My one-liner is we’re Google Docs meets GitHub for science.

Why does science need a Google Docs or a GitHub?

Most of the hard sciences use LaTeX as a markup language. There have been people who’ve tried to change that in the past. They said let’s modify the PDF, and make all this interactivity possible inside the PDF. Now, you run into a lot of problems because you’re working with a proprietary format. It’s complicated. It’s already made a lot of decisions. PDFs do a lot more than just publishing a research article.

Our main backend is Ruby on Rails. What I really like about it is that there are constraints placed upon you. You follow these rules; you get all this stuff for free. I like this idea of constraint.

LaTeX is totally open-ended. You can do whatever you want in LaTeX. Nobody ever uses any of this, but it means that you can compile your paper, and it might crash, and you get some totally non-obvious error.

It’s complicated, but we’re writing research papers and no one ever needs to write text that goes around some arbitrary vector. It doesn’t happen. If you remove that, things get simpler and you can start thinking of doing more.

That’s the mode we settled on. We said we want interactive figures. Let’s just make the decisions that make that possible, and not make all the decisions right away. The decision that makes that possible means that the figures are no longer included in the LaTeX sources. They’re pulled out. If you import a document into Authorea, that’s a LaTeX file. It’s going to take all the figures and for each figure, it’s going to make a directory. The directory introduces a very simple constraint. There’s a size file. There’s a caption file. There’s either the figure or there’s some HTML that has some JavaScript in it and some data files.
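The per-figure directory Jenkins describes might look something like this. This is a hypothetical sketch pieced together from his description; the exact file and directory names are assumptions, not Authorea’s documented format:

```text
my-article/
├── layout.txt          # ordered list of the article's elements
├── intro.txt           # a text element (LaTeX or Markdown)
└── figure1/            # one directory per extracted figure
    ├── size            # display size
    ├── caption         # figure caption
    └── figure1.png     # the image itself, or HTML + JavaScript + data files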

So everything becomes a structured Git directory?

Everything is stored in Git at the moment. There’s a file that’s called layout. The layout file lists the elements that are going to be in the article. Currently, this means some content or a figure, which can be in LaTeX or Markdown. Two possibilities.

Does Authorea look for these by the file extension, or is it like in a JSON configuration file or something?

It’s by the file extension. It’s in the Git repository and based on the file, like intro.txt. It’s going to look for that. Then it can either be content, text content, or it can be a figure.

And how does Authorea know how to lay out this content into a whole paper?

Super-duper simple. Again, I just thought “what’s the simplest thing?” So it’s just a layout.txt file. I love text configuration. A worry with researchers is that I don’t want to give the impression to people that we can blackmail them for their data. Everything that could be in text should be in text. And it’s a Git repository, which was basically chosen for two reasons. One is that we were using Git for development, and it seemed like a good call to keep it the same. Two, since it’s not a database-based version system, but it’s a file-based version system, you can tell people that, “Look, you can just take your whole repository. We can give it to you. We don’t have to put it all on the database or anything, we can just give it to you as is, and you can have a local copy.”
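The layout-driven assembly Jenkins sketches could be mimicked in a few lines. The following is a speculative illustration, not Authorea’s actual code; the file names (`layout.txt`, `intro.txt`) and the extension rules are assumptions drawn from the interview:

```python
from pathlib import Path

# Hypothetical sketch of layout-driven article assembly, based on the
# interview's description. Authorea's real internals may well differ.

CONTENT_EXTENSIONS = (".tex", ".txt", ".md")  # LaTeX or Markdown content

def classify(name: str) -> str:
    """Guess an element's kind from its file name: text content or a figure directory."""
    return "content" if name.endswith(CONTENT_EXTENSIONS) else "figure"

def assemble(repo: Path) -> list[tuple[str, str]]:
    """Read layout.txt and return (element, kind) pairs in article order."""
    lines = (repo / "layout.txt").read_text().splitlines()
    return [(name, classify(name)) for name in lines if name.strip()]
```

Because everything is plain text in a plain file tree, a tool like this needs no database: the Git repository itself is the canonical store, which is exactly the portability argument Jenkins makes below.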

If it’s totally open like that, how do you make money?

We are charging. I’m saying if you have your stuff on Authorea, you should always be able to get it back. Our philosophy: We want people to be reassured that if we go under, you can get everything back. If we get sold for some reason, you can get everything back. You can jump ship easily. Everybody has problems with lots of web services where you can’t get your stuff. We are trying to be a business. For users, we’re similar to GitHub’s pricing model and philosophy. We limit the number of private articles you can have, but if you write an article from scratch in the open, that’s free forever.

So you have this concept of open articles? In a way, it sounds like you’re trying to be a repository yourself–not in the Git sense, but similar to arXiv or something like that?

That is actually the exact term I was going to use. We want to be a better arXiv. ArXiv is great–it’s been a great service for a long time–but I don’t feel like it’s changing fast enough. You can do a lot more. Once you publish, with traditional publishers, there’s a lot of constraints that they have to live with because they’re big companies. We’re small. We can do whatever we want. As long as users are happy, people find interesting content, that’s great.

In the short term or in the medium term, I think being a better arXiv is valuable. We don’t want to tell people that if you write your article on Authorea, you have to publish on Authorea because it’s important to go and publish in Nature and Science or wherever you publish, but we want to be a better pre-publishing server.

Speaking of scientific publishers, they’re notoriously picky about submission formatting. Will they work with a platform like Authorea that sort of enforces formatting conventions?

Publishers have a very inefficient process right now. They would love to have structured data the way they want it structured, submitted to them from the users. Users are now submitting doc files or LaTeX files with PDFs, that sort of thing. Publishers have to deconstruct all of that. They put it into an external XML representation, they send it off, they outsource it to a typesetter, and they get it back. That’s how it works.

They would love to just get that XML right away. Users worry about formatting in articles. It doesn’t matter at all, because it’s all removed anyway. We want to be that middle ground. It would be great to be a big publisher ourselves. If people would just publish on Authorea, that would be fantastic. But that’s the crazy, long-term dream. What’s reasonable is to say: “Okay, well, now users can collaborate easily on articles together, and if they want to publish in Nature, there’s a little button that publishes to Nature.” Publishers are willing to pay: if you can gain them a little bit of efficiency, they’re willing to pay for that.
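The “structured data” publishers want is typically XML — in scholarly publishing, often the JATS tag suite. A toy sketch of emitting a minimal, loosely JATS-flavored document from structured fields; the tag names only approximate JATS, and the article content is invented:

```python
# Build a minimal, loosely JATS-flavored article from structured fields.
# Tag names approximate the JATS tag suite; content and fields are invented.
import xml.etree.ElementTree as ET

fields = {
    "title": "A Crowdsourced Radiation Survey",
    "authors": ["P. Franken", "S. Bonner"],
    "abstract": "Volunteer-collected radiation readings ...",
}

article = ET.Element("article")
front = ET.SubElement(article, "front")
meta = ET.SubElement(front, "article-meta")
ET.SubElement(meta, "article-title").text = fields["title"]
for name in fields["authors"]:
    contrib = ET.SubElement(meta, "contrib")
    ET.SubElement(contrib, "string-name").text = name
ET.SubElement(meta, "abstract").text = fields["abstract"]

xml = ET.tostring(article, encoding="unicode")
print(xml)
```

The point of the middle-ground pitch is that a platform holding the article in structured form can emit this XML directly, skipping the deconstruct-and-outsource step publishers pay typesetters for today.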

Citizen science versus NIMBY?

There are ten graduate students associated with the Center for Civic Media, half a dozen staff and a terrific set of MIT professors who mentor, coach, advise and lead research. But much of the work that’s most exciting at our lab comes from affiliates, who include visiting scholars from other universities, participants in the Media Lab Director’s fellows program and fellow travelers who work closely with our team.

Two of those Civic affiliates are Sean Bonner and Pieter Franken of Safecast. Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with the limited and questionable information about radiation released by the Japanese government, Joi Ito, Pieter, Sean and others worked to design, build and deploy GPS-enabled geiger counters that concerned citizens throughout Japan could use to monitor alpha, beta and gamma radiation and understand which parts of Japan have been most affected by the Fukushima disaster.


The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18-mile exclusion zone, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes a reading every few seconds and stores it with the associated GPS coordinates for later upload to the server.
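The drive-by readings can be pictured as a stream of (timestamp, GPS fix, counts-per-minute) records. The CSV layout below is a simplified illustration — not Safecast’s actual bGeigie log format — and the readings are made up:

```python
# Parse a simplified drive-by radiation log: one (timestamp, lat, lon, CPM)
# record every few seconds. Format and numbers are invented for illustration.
import csv
from io import StringIO

log = StringIO(
    "timestamp,lat,lon,cpm\n"
    "2013-08-28T17:02:11Z,42.723,-72.956,36\n"
    "2013-08-28T17:02:16Z,42.724,-72.957,38\n"
    "2013-08-28T17:02:21Z,42.725,-72.958,34\n"
)

readings = [(row["lat"], row["lon"], int(row["cpm"]))
            for row in csv.DictReader(log)]
avg = sum(cpm for _, _, cpm in readings) / len(readings)
print(f"{len(readings)} readings, average {avg:.1f} CPM")
```

Aggregating millions of such geotagged records is what turns individual volunteer drives into the map described above.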

It’s hard to know what an appropriate response to the Safecast data is – Safecast is careful to note that there’s no consensus about what’s “safe” in terms of radiation exposure, and that there are questions to be asked both about bioaccumulation of beta radiation and about exposure to gamma radiation. Their work provides an alternative to official government statistics – a check on official measurements that allows citizen scientists and activists to track progress on cleanup and remediation. This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense of what Safecast is trying to do: provide ways for citizens to check and verify government efforts and to understand the complexity of decisions about radiation exposure. This is especially important in Japan, where widespread frustration over TEPCO’s failures to make progress on cleaning up the reactor site has led to anger and suspicion about the larger cleanup process.

For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?

To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility.

Monroe Bridge, MA is 20 miles away from my house, as the crow flies, but it takes over an hour to drive there. Monroe and Rowe are two of the smallest towns in Massachusetts (populations of 121 and 393, respectively) and are both devoid of any state highways – two of 16 towns in Massachusetts with that distinctively rural feature. Monroe, historically, is famous for housing workers who built the Hoosac Tunnel, and for a (long-defunct) factory that manufactured glassine paper. Rowe historically housed soapstone and iron pyrite mines. And both now are case studies for the challenge of revitalizing rural New England mill towns.

Yankee Rowe, prior to decommissioning

But from 1960 to 1992, Rowe and Monroe were best known for hosting Yankee Rowe, the third commercial nuclear power plant built in the United States. A 185-megawatt pressurized water reactor, Yankee Rowe was a major employer and taxpayer in an economically depressed area… and also a major source of controversy. I was in school at Williams College, 13 miles from Yankee Rowe, when the NRC ordered the plant shut down in 1991, nine years before its scheduled license renewal, over fears that the reactor vessel might have grown brittle. The plant was a source of fascination for me as a student – the idea that a potentially dangerous nuclear power plant was so nearby led to a number of excursions, usually late at night, to stare at a glowing geodesic dome (the reactor containment building) from across the Sherman Reservoir.

Since 1995, Yankee Rowe has been going through the long process of decommissioning, with the goal of returning the site to wilderness or to other public uses – the plant’s website features an animated GIF of the disassembly process. But there’s a catch – the fuel rods. Under the Nuclear Waste Policy Act, spent fuel was supposed to start moving from civilian power plants like Yankee Rowe to underground government storage facilities in 1989. That hasn’t happened. Fierce opposition from Nevada lawmakers and citizens to storing the waste at Yucca Mountain, and from people who don’t want nuclear waste traveling through their communities en route to storage facilities, has meant that there’s no permanent place for the waste.

During the decades nuclear waste storage has been debated in Congress, more waste has backed up, and Yucca Mountain would no longer accommodate the 70,000 metric tons of waste that needs storage. The Department of Energy is now planning on an “interim” disposal site, ready by 2021, in the hopes of having a permanent disposal site online by 2048. The DOE needs the site, because companies like Yankee are suing the US government – successfully – to recover the costs of storing and defending the spent fuel in giant above-ground casks. (Yankee’s site has a great video of the process of moving these fuel rods from storage pools into concrete casks, a process that involves robotic cranes, robot welders and giant air bladders that help slide 110 ton concrete casks into position.)

So… at the end of a twisty rural road in a tiny Massachusetts town, there’s a set of 16 casks that contain the spent fuel of 30 years of nuclear plant operation, and those casks probably aren’t going anywhere for the foreseeable future. So I took Joi’s geiger counter to visit them.


I’d been to Yankee Rowe before, and remembered being amused by the idea of a bucolic nuclear waste facility. The folks involved with Yankee Rowe have worked very hard to make the site as unobtrusive as possible – it’s marked by a discreet wooden sign, and the only building on site looks like an overgrown colonial house. Not visible from the road is the concrete pad where the 16 casks reside, but it’s 200 meters from the road and 400 meters from “downtown” Monroe Bridge.

I was curious whether I’d be able to detect any radiation using the Safecast tool. Sean and Pieter pride themselves on the fact that the bGeigie is a professional-grade tool that routinely detects minor radiation emissions – like a neighbor who had a medical test involving radioisotopes. I drove to Yankee Rowe late yesterday afternoon, took the bGeigie off my truck (it had been collecting data since I turned it on in Greenfield, the closest big town) and tried to get as close as I could to the casks.


That turned out to be not very close. Before I had time to read the NRC/Private Property sign, I was met at the gate – the sort of gate you expect to see at a public garden, not a barbed-wire, stay-out-of-here gate – by two polite but firm gentlemen, armed with assault rifles and speaking by radio to the control center that had seen my truck over the surveillance cameras, who made clear that I was not welcome beyond the parking lot.


That said, I got within 300 meters of the casks. And, as you can see from the readings – the white and green circles on the map – I didn’t detect any radiation beyond what I’ve detected anywhere else in Massachusetts. That’s consistent with the official reports on Yankee Rowe – dozens of wells are monitored for possible groundwater contamination, and despite a recent scare about Cesium 137, there’s been no evidence of leakage from the casks.

It would have been a far more exciting visit had I somehow snuck past the armed guards and captured readings from the casks suggesting significant radiation emissions, I guess… though what it would demonstrate is that you probably shouldn’t sneak in and stand too close to those casks. Better might have been to use Safecast’s new hexacopter-mounted drone to fly a bGeigie over the casks, though I can only imagine what sort of response that might have prompted from the guards.

While I’m reassured that there are no measurably elevated levels of radiation at Yankee Rowe, it still seems like a weird state of affairs that Yankee’s waste is going to remain on a hillside by a reservoir for the foreseeable future, protected by armed guards. (The real estate listings for property owned by Yankee Atomic Energy Corporation are pretty wonderful – “Special Considerations: An independent spent fuel storage installation (ISFSI) associated with the previous operation of the Yankee Rowe Plant is located in the former plant area and remains under a U.S. Nuclear Regulatory Commission license. Future ownership of the 300 meter buffer surrounding the ISFSI will be negotiated as part of the property disposition.”)

And there are lots of sites like Yankee Rowe already, with more on the way. The map above, from Jeff McMahon at Forbes, shows sites in the US where nuclear fuel is stored in pools or dry casks. And more plants are shutting down – Yankee Rowe’s sister plant, Vermont Yankee, announced its closure this week amid speculation that nuclear plants aren’t affordable given the low cost of natural gas. Of course, the realization that cleaning up Yankee Rowe has cost 16 times what the plant cost to build – and will continue until the waste is in a permanent repository – might give natural gas advocates pause: will we have similar discussions about remediating fracking sites in a few years or a few decades?

Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a geiger counter or mapping wells in their communities. Much of the data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to trace a path between collecting groundbreaking data and getting involved with deeper civic and political issues – but will collecting data showing that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?

It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine. I suspect my vague unease about the safety of nuclear waste on a hillside is similar to the distaste people feel for casks of spent fuel passing through their towns on the way to a storage site. I feel a lot more comfortable with Yankee Rowe having read up on the measures taken to encase the waste in casks, and with the ability to verify radiation levels near the site. (Actually, being confronted by heavily armed men also reassures me.) I’m more persuaded that regional storage facilities are a good idea than I was before my experiment and reading yesterday – my opinion previously would have been based more on a kneejerk fear of radioactivity than consideration of other options. (The compact argument: if we’ve got fuel in hundreds of sites around the US, each protected by surveillance cameras and security teams, it seems a lot more efficient to concentrate that problem into a small number of very-well secured sites.)

If the straightforward motivation for citizen science and citizen monitoring is the hope of making a great discovery, maybe we need to think about how to make these activities routine, an ongoing civic ritual that’s as much a public duty as voting. Monitoring a geiger counter that never jumps over 40 counts per minute isn’t the most exciting experiment you can conduct, but it might be one that turns a plan like Yucca Mountain into one we can discuss reasonably, not one that triggers an understandable, if unhelpful, emotional reaction of “not in my backyard.”


Edinburgh’s crowdsourced symphony made with MIT apps

Edinburgh is about to host the first performance of Festival City – Europe’s first symphony to be composed using crowdsourced sounds and arrangements suggested by the public via specially-created computer apps.

It is the creation of Tod Machover – a professor from the Massachusetts Institute of Technology (MIT) Media Lab whose team previously helped create computer games Guitar Hero and Rock Band as well as technologies used by musicians Peter Gabriel, Prince and Yo-Yo Ma.

The 12-minute piece is being premiered at the Edinburgh International Festival and will be played by the Royal Scottish National Orchestra (RSNO), led by music director Peter Oundjian. The two men previously worked on a similar project for Toronto.

The BBC was given exclusive access to the rehearsals.

The Government is Offering Money for Videos About How Awesome Obamacare is

This weekend, when you and your friends are making a new Harlem Shake video or whatever it is kids are doing these days, make sure to mention Obamacare. The Department of Health and Human Services is running a contest encouraging people to upload YouTube videos touting the benefits of Obamacare. All in all, the government is giving away $30,000 worth of prizes.

HHS Secretary Kathleen Sebelius announced the contest at a press conference earlier this week. “We’re encouraging folks to create a song, or a graphic, or a video about the law’s benefits,” Sebelius said. “Like staying on their parent’s plan until they’re 26, not being denied coverage because of a pre-existing health condition.”

The first-place prize is a cool $6,500, plus the first 100 applicants will get a “Stay Healthy” kit, which “includes a t-shirt, first aid kit, sun protection kit and water bottle.”

The contest is being run along with the advocacy group “Young Invincibles,” an organization of young people under the age of 34 fighting for healthcare reform. The Obama Administration is making a concentrated effort to sell the Affordable Care Act to the Millennial generation, which doesn’t necessarily see the benefits of health insurance due to its youth and relative good health.

Obviously the idea of a YouTube video contest is designed specifically to reach that young creative class, which is more likely to be uninsured. While Obamacare may not be a particularly mind-blowing topic, the White House is hoping that this competition will spark some creative videos that they can then use to sell the program to other demographics as well.

People will be able to vote for their favorite videos online, with the winners being chosen by a panel of judges at the Department of Health and Human Services.

Dump the Prizes (Stanford Social Innovation Review)

Contests, challenges, awards—they do more harm than good. Let’s get rid of them.

By Kevin Starr | Aug. 22, 2013

I was sitting at my desk the other day, writing a recommendation letter for some prize or another, and I found myself thinking, “Why the hell am I doing this?” It often takes me a couple of hours to write something I feel good about—and for what? The vast majority of contestants don’t win anything, and even when they do, it’s often shamefully small amounts of money and/or the dubious assumption that the attendant publicity will lead to bigger things.

After years of watching and participating in this stuff, I’ve concluded that it does more harm than good—and by “this stuff” I mean the whole contest/challenge/prize/award industry. Yes, this lumps together way too many disparate things; yes, there are exceptions to everything I say here; and yes, it deserves a more nuanced discussion. That’s all true, but on the whole, I think we could dump it all and not miss a thing. Here’s why:

1. It wastes huge amounts of time.

The Knight Foundation recently released a thoughtful, well-publicized report on its experience running a dozen or so open contests. These are well-run contests, but the report states that there have been 25,000 entries overall, with only 400 winners. That means there have been 24,600 losers. Let’s say that, on average, entrants spent 10 hours working on their entries—that’s 246,000 hours wasted, or 120 people working full-time for a year. Other contests generate worse numbers. I’ve spoken with capable organization leaders who’ve spent 40-plus hours on entries for these things, and too often they find out later that the eligibility criteria were misleading anyway. They are the last people whose time we should waste.
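The back-of-the-envelope math above checks out under its stated assumption of 10 hours per entry; the ~2,050-hour work year below is my added assumption, chosen to match the article’s “120 people working full-time” rounding:

```python
# Reproduce the article's back-of-the-envelope waste estimate.
# 25,000 entries, 400 winners, and 10 hours/entry come from the text;
# the 2,050-hour work year is assumed to match its "120 people" figure.
entries, winners = 25_000, 400
hours_per_entry = 10

losers = entries - winners               # 24,600 non-winning entries
wasted_hours = losers * hours_per_entry  # 246,000 hours of unrewarded work
full_time_years = wasted_hours / 2_050   # ~120 person-years
print(losers, wasted_hours, round(full_time_years))
```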

And it’s exploitive. For social sector organizations, money is the oxygen they need to stay alive, so leaders have to chase prizes just like they do other, more sensible sources of funding. Some in the industry justify this as a useful learning process. It’s not. Few competitions (with some notable exceptions) provide even the most rudimentary feedback. Too many of these contests and prizes seem like they are more about the givers than the getters anyway.

2. There is way too much emphasis on innovation and not nearly enough on implementation.

Ideas are easy; implementation is hard. Too many competitions are just about generating ideas and “innovation.” Novelty is fun, but there is already an immense limbo-land populated by successful pilots and proven innovations that have gone nowhere. I don’t want to fund anything that doesn’t have someone capable enough to execute on the idea and committed enough to make it work over the long haul. Great social entrepreneurs are people with high-impact ideas, the chops to execute on them, and the commitment to go the distance. They are rare, and they shouldn’t have to enter a contest to get what they need.

The current enthusiasm for crowdsourcing innovation reflects this fallacy that ideas are somehow in short supply. I’ve watched many capable professionals struggle to find implementation support for doable—even proven—real-world ideas, and it is galling to watch all the hoopla around well-intentioned ideas that are doomed to fail. Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute on someone else’s idea. Much of what captures media attention and public awareness barely rises above the level of entertainment if judged by its potential to drive real impact.

3. It gets too much wrong and too little right.

The Hilton Humanitarian prize is a single winner-take-all award of $1.5 million to one lucky organization each year. With a huge prize like that, everyone feels compelled to apply (that is, get nominated), and I can’t tell you how much time I’ve wasted on fruitless recommendations. Very smart people from the foundation spend a lot of time investigating candidates—and I don’t understand why. The list of winners over the past ten years includes a bunch of very well-known, mostly wonderful organizations: BRAC, PIH, Tostan, PATH, Aravind, Doctors Without Borders. I mean, c’mon—you could pick these names out of a hat. BRAC, for example, is an organization we should all revere and imitate, but its budget in 2012 was $449 million, and it’s already won a zillion prizes. If you gave even a third of the Hilton prize to an up-and-coming organization, it could be transformative.

Too many of these things are winner-or-very-few-take-all, and too many focus on the usual suspects. In any case, the notion that even a smart selection jury can somehow discern which is best from a dozen stellar organizations is kind of silly. Too many juries are composed of unqualified people, and verdicts in this sector can be as capricious as those from an LA celebrity murder trial. There is also an obvious bandwagon effect: The more prizes you get, the more prizes you get. And while juries have their foibles, they are exponentially better than the Internet-based crowd-judging that is currently in vogue. That’s like having the passengers on a 747 vote on how to land the plane, and it has led to some remarkably dumb things rising to prominence.

4. It serves as a distraction from the social sector’s big problem.

The central problem with the social sector is that it does not function as a real market for impact, a market where smart funders channel the vast majority of resources toward those best able to create change. Contests are a sideshow masquerading as a main-stage event, a smokescreen that obscures the lack of efficient allocation of philanthropic and investment capital. We need real competition for impact among social sector organizations, not this faux version that makes the noise-to-signal ratio that much worse.

A lot of people argue that innovation competitions, challenges, and X Prizes are a vital part of that market and that they drive important advances that wouldn’t happen otherwise. I doubt it. There’s no real evidence for it, and I suspect that they do little more than speed things up a bit. The innovators I know innovate to solve problems, not to win prizes. The only in-depth analysis of social impact contests I’ve seen was a 2009 McKinsey report, which began with a contests-are-wonderful perspective and carried on for 100 pages in the same vein without even a whiff of skepticism. Like many discussions of prizes, it confused anecdote with evidence and correlation with causation. We need a real study. More to the point, we need a real market for impact. There may be a role for contests in it, but contests didn’t drive Silicon Valley—it was investors and entrepreneurs playing in a functioning market.

There are a couple of exceptions in the prize/competition world that illustrate by contrast what is wrong with most of them. The first is the serious-investment-disguised-as-a-prize, something exemplified by the Skoll Award given to high-impact social entrepreneurs. I like the folks at Skoll because, among other things:

  1. They give you $1.25 million.
  2. They pick multiple winners annually.
  3. The winners have a clear track record, but most are at a place where the grant could vault them to the next level of size and impact.
  4. There is no application process; Skoll does its own (high-quality) homework.
  5. They give follow-on funding to many awardees.
  6. They work hard to connect awardees to each other and other funders.
  7. They give you $1.25 million.

Another exception is the garden-variety-business-plan-competition. Everyone needs a business plan anyway, and they need to know how to pitch. In these competitions, teams get up and perform in front of a knowledgeable audience for all-too-rare seed funding. They’re judged on the quality of their plans and their pitches, and even for those who don’t win, the process is inherently valuable. They get immediate feedback, and everything they do to prepare is useful going forward.

All things considered, I’d like to see most of this industry go away, but people love their contests and they’re going to prove hard to kill. That being the case, here are a few draft guidelines to improve both the optics and impact of contests:

  1. All prizes must come with a cash award that is at least 20 times the cost of the accompanying award ceremony/dinner, with an additional 30 percent premium if formal wear is involved.
  2. Entrants must create first-round proposals during a lunch break and submit them on a napkin.
  3. All juries must include a majority of judges who’ve at least tried to do the kind of thing they are judging.
  4. A committed adult with a plausible plan must accompany all ideas.
  5. All clever product ideas must come with an equally clever idea for distribution.
  6. Mandatory jail time for crowdsourcing or crowd-judging.

OK, so maybe the napkin thing wouldn’t work, and I might be willing to cut some slack on the tuxedos, perhaps even concede on the jail time. But I’m sticking to the spirit and substance of them, because we at least need to do this better. You can propose your own damn rules; push back if you’re moved to do so. There are exceptions to everything I’ve talked about here, and I’m probably not right about it all, but the truth is we haven’t taken a hard look at any of it, and it’s high time we did. In the meantime, we’ll accomplish a lot more if we all do our homework, find the best stuff already out there, and fund on the basis of real impact that can go to scale.

NASA Reactivating Spacecraft To Hunt For Near-Earth Asteroids #NASAasteroid

NASA is bringing a retired spacecraft back into service to help search for asteroids that could pose a danger to Earth, the space agency announced on Wednesday.

The spacecraft’s three-year mission will begin next month “with the goal of discovering and characterizing near-Earth objects (NEOs), space rocks that can be found orbiting within 28 million miles (45 million kilometers) from Earth’s path around the sun,” NASA said in a statement.

The Wide-field Infrared Survey Explorer, or WISE, had been out of service since February 2011, after a 13-month mission to “look for the glow of celestial heat sources from asteroids, stars and galaxies,” NASA said, calling its work “the most accurate survey to date” of near-Earth objects.

Universe Today says the WISE spacecraft “has been sleeping in a polar orbit around Earth for two years” and “will be turned back on next month to hunt for more potentially hazardous asteroids, and perhaps search for an asteroid that NASA could capture and explore in the future.”

Reuters reports:

“NASA already has found about 95 percent of the near-Earth asteroids that are 0.62 miles or larger in diameter.

“The agency is about halfway through a 15-year effort to find 90 percent of all near-Earth objects that are as small as about 459 feet in diameter.

“The search took on a note of urgency after a small asteroid blasted through the skies above Chelyabinsk, Russia, in February 2013 and exploded with 20 to 30 times the force of the Hiroshima atomic bomb. More than 1,500 people were injured by flying glass and debris.

“Later that same day, a much larger but unrelated asteroid soared closer to Earth than the networks of communication satellites that ring the planet.

“The events prompted Congressional hearings and new calls for NASA and other agencies to step up their asteroid detection initiatives.”