Crowdsourcing contests for the environment

Crowdsourcing contests are increasingly popular with the growth of new crowdsourcing sites. Individuals can host competitions to acquire solutions and seek creative ideas by offering the winner an incentive [0]. Such contests rate and select submissions using a variety of rating mechanisms, including subjective expert ratings and crowd ratings, along with automated objective rating mechanisms (e.g. algorithm-based ratings for coding/programming contests) [1]. This is a typical crowdsourcing contest procedure:

[Figure: a typical crowdsourcing contest procedure, adapted from Chen and Liu (July 2012) [2]]
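To make the "automated objective rating" mentioned above concrete, here is a minimal Python sketch of how a coding contest might score submissions automatically against test cases. It is only my own illustration under simple assumptions (the toy task, the test cases, and the pass-fraction scoring rule are all hypothetical), not any real platform's grading code.

```python
# Sketch of an automated objective rating mechanism for a coding contest.
# The toy task (squaring a number) and the pass-fraction scoring rule are
# hypothetical illustrations, not any real platform's scoring logic.
from typing import Callable, Dict, List, Tuple

# Each test case is an (input, expected output) pair.
TEST_CASES: List[Tuple[int, int]] = [(1, 1), (2, 4), (3, 9), (10, 100)]

def objective_score(solution: Callable[[int], int]) -> float:
    """Return the fraction of test cases the submitted function passes."""
    passed = 0
    for given, expected in TEST_CASES:
        try:
            if solution(given) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that test
    return passed / len(TEST_CASES)

def rank_submissions(submissions: Dict[str, Callable[[int], int]]) -> List[Tuple[str, float]]:
    """Rank contestants by their objective score, highest first."""
    scored = [(name, objective_score(fn)) for name, fn in submissions.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    entries = {
        "alice": lambda x: x * x,   # correct solution
        "bob": lambda x: x + x,     # wrong approach
    }
    print(rank_submissions(entries))  # alice scores 1.0, bob scores 0.25
```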
One such platform in search of environmental solutions is MIT's Climate CoLab.

Climate CoLab aims to harness the collective intelligence of thousands of people from all around the world to address global climate change. With 64 completed projects and 15 ongoing ones, Climate CoLab has built up a pool of resources and careful analyses from its participants.

Climate CoLab uses two types of rating mechanisms [2]:

Expert ratings: On Climate CoLab, judges review all the proposals and select the best ones based on reasoned evaluation. Judges include the former UN Special Envoy for Climate Change, Ms Gro Harlem Brundtland, and the former United Nations High Commissioner for Human Rights, Ms Mary Robinson. These judges are highly experienced and are scouting for individuals who can lead climate change efforts.

Crowd ratings: The crowd can browse the various proposals and cast their votes. On Climate CoLab, the proposal with the highest number of likes wins the "Popular Choice" award. Anyone is free to join and comment on proposals, which helps contestants refine them.
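As a rough sketch of how these two award rules differ, the snippet below (my own illustration, assuming a simple proposal record with judge scores and vote counts; not Climate CoLab's actual code) picks a "Judges' Choice" winner from average expert scores and a "Popular Choice" winner from vote counts.

```python
# Sketch of the two rating mechanisms described above. The data model and the
# use of a mean judge score are my own assumptions, not Climate CoLab's actual
# selection logic.
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class Proposal:
    title: str
    judge_scores: List[float] = field(default_factory=list)  # expert ratings
    crowd_votes: int = 0                                      # number of "likes"

def judges_choice(proposals: List[Proposal]) -> Proposal:
    """Expert rating: the proposal with the highest average judge score wins."""
    return max(proposals, key=lambda p: mean(p.judge_scores) if p.judge_scores else 0.0)

def popular_choice(proposals: List[Proposal]) -> Proposal:
    """Crowd rating: the proposal with the most votes wins."""
    return max(proposals, key=lambda p: p.crowd_votes)

if __name__ == "__main__":
    proposals = [
        Proposal("Carbon pricing plan", judge_scores=[8.5, 9.0], crowd_votes=120),
        Proposal("Urban reforestation", judge_scores=[7.0, 8.0], crowd_votes=310),
    ]
    print("Judges' Choice:", judges_choice(proposals).title)   # Carbon pricing plan
    print("Popular Choice:", popular_choice(proposals).title)  # Urban reforestation
```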

These competitions are advantageous when seeking solutions because:

  1. Seekers have access to a larger pool of solvers, and thus may be able to find better solutions than the ones generated internally.
    • Instead of hearing only from students or professionals at MIT, the platform is open to anyone, which gives a broader perspective and a more diverse set of proposals.
    • A project on a global climate action plan garnered 39 proposals, of which 12 caught the judges' attention.
  2. Seekers can evaluate designs or solutions using the same crowd.
  3. Seekers pay only for successful innovations, not for failures, as the risk of failure is shifted to the solvers [3].
    • Climate CoLab only rewards the proposals that win the "Judges' Choice" and "Popular Choice" awards.
  4. The cost is generally lower for seekers.

Problems with typical crowdsourcing contests:

  1. When contestants are sufficiently risk-averse, the firm may optimally offer more prizes than the number of submissions it actually wants, thus awarding prizes even to submissions it does not eventually use [4].
    • For Climate CoLab, the risk taken by contestants is the amount of time spent preparing a proposal. Less popular topics tend to close with fewer than 10 proposals, and a winner still has to be chosen. The problem arises when even the top proposals are not of a high enough standard, yet the prize is an opportunity to present at summits. Climate CoLab should have a mechanism that protects it from such situations. (A toy calculation after the table below illustrates why risk-averse contestants may need more prizes before they will enter.)
  2. Lack of information about the project may cause the results to be far from the expected outcome:
    • However, Climate CoLab aims to collect quality results by providing teams with a rich resource base, clear project guidelines, special advisors, and a comments page where the public can give feedback.
  3. A problem with specific rating systems is that the quality of arguments may not be the basis on which proposals receive attention.
    1. Crowd ratings can be problematic because individuals may not have the expertise to judge the practicality and feasibility of proposals. Aesthetic cues (e.g. professional-looking reports and graphs) could play a part in swaying their decisions.
    2. These are three factors I found in my research that could cause the best proposals to go unappreciated (see the table below):

[Table: three factors that can cause the best proposals to go unappreciated; sources listed under "Cited in the table" below]
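Returning to point 1 above: a toy expected-utility calculation can show why risk-averse contestants may need the prize budget spread over more prizes before entering. The numbers, the square-root utility function, and the equal-chance assumption below are all my own illustrative assumptions, not the actual model in [4].

```python
# Toy illustration (my own assumptions, not the model in [4]) of why risk-averse
# contestants may need more prizes before entering a contest.
from math import sqrt

N_CONTESTANTS = 20     # assumed number of entrants, each with an equal chance of winning
BUDGET = 10_000.0      # assumed total prize money available to the seeker
EFFORT_COST = 120.0    # assumed monetary value of the time sunk into a proposal

def utility(x: float) -> float:
    """Concave utility models risk aversion (square root is a textbook choice)."""
    return sqrt(x)

def net_benefit(n_prizes: int) -> float:
    """Expected utility of the prize lottery minus the utility value of the effort,
    when the same budget is split into n equal prizes."""
    prize = BUDGET / n_prizes
    p_win = n_prizes / N_CONTESTANTS
    return p_win * utility(prize) - utility(EFFORT_COST)

if __name__ == "__main__":
    # Note: expected prize *money* is BUDGET / N_CONTESTANTS regardless of n_prizes;
    # only the riskiness changes, so a risk-neutral contestant would be indifferent.
    for k in (1, 2, 5, 10):
        print(f"{k:2d} prize(s): net benefit of entering = {net_benefit(k):+.2f}")
    # Output: negative for 1-2 large prizes (no entry), positive once the budget is
    # split into 5 or more smaller prizes, matching the intuition behind [4].
```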

Crowdsourcing contests have really attractive points, but we have to be wary about the type of rating system we use and how we construct each competition. In my next post, I will consider the types of motivators (e.g. prize money, opportunities, etc.) and why people are motivated to join environmental contests.


Citations

[0] Yang, Y., Chen, P. Y., & Banker, R. (2011). Winner determination of open innovation contests in online markets. ICIS 2011 Proceedings, Paper 16, December 4-7, Shanghai, China.

[1] Chen, L., Xu, P., & Liu, D. (2014). Comparing Two Rating Mechanisms in Crowdsourcing Contests. In Proceedings of the Eighth China Summer Workshop on Information Management (CSWIM 2014) (p. 7).

[2] Chen, L., & Liu, D. (2012). Comparing Strategies for Winning Expert-rated and Crowd-rated Crowdsourcing Contests: First Findings. AMCIS 2012 Proceedings, Paper 16. http://aisel.aisnet.org/amcis2012/proceedings/VirtualCommunities/16

[3] Huang, Y., Singh, P., & Mukhopadhyay, T. (2012). How to design crowdsourcing contest: a structural empirical analysis. In Workshop on Information Systems and Economics (WISE) 2012.

[4] Terwiesch, C., & Xu, Y. (2008). Innovation Contests, Open Innovation, and Multiagent Problem Solving. Management Science, 54(9), 1529-1543. http://dx.doi.org/10.1287/mnsc.1080.0884

[5] Archak, N., & Sundararajan, A. (2009). Optimal Design of Crowdsourcing Contests. ICIS 2009 Proceedings, Paper 200. http://aisel.aisnet.org/icis2009/200

Cited in the table:

Chen, L., Xu, P., & Liu, D. (2014). Comparing Two Rating Mechanisms in Crowdsourcing Contests — see [1] above.

Duan, W., Gu, B., & Whinston, A. (2009). Informational cascades and software adoption on the internet: an empirical investigation. MIS Quarterly, 33(1), 23-48.
