Data Collaboratives: Exchanging Data to Improve People’s Lives
Apr 22, 2015 04:34 pm
New Medium Blog by Stefaan Verhulst and David Sangokoya: “In late July 2014, a sick passenger from Liberia traveled to Nigeria and brought the Ebola virus to Lagos, Africa’s largest city, with a population of 21 million. In response, government agencies, universities and hospitals collaborated with private telecommunications companies and healthcare organizations to collect and share data on infected patients and trace those who had come into contact with them. State government health officials also initiated emergency steps to share information on a daily basis among actors involved in stemming the crisis. After two months, the virus was contained in Nigeria and the country declared Ebola-free.
Several of society’s greatest challenges — from addressing climate change to public health to job creation — require greater access to data, more collaboration between public- and private-sector entities, and an increased ability to analyze datasets. This relationship between data and public benefits was vividly demonstrated in case study after case study at the recently concluded Cartagena Data Festival.
Yet for all the potential, a limiting factor is that much of the data valuable for solving public problems actually resides within the private sector — for example, in the form of click histories, online purchases, sensor data, and, as in the case of the above example, call data records. Amid the proliferation of apps, platforms and sensors, data on how people and societies behave is increasingly privately owned. We believe that if we truly want to leverage the potential of data to improve people’s lives, then we need to accelerate the creation and use of “data collaboratives.”
The term data collaborative refers to a new form of public-private partnership in which participants from different sectors — including private companies, research institutions, and government agencies — can exchange data to help solve public problems. In the coming months and years, data collaboratives will be essential vehicles for harnessing the vast stores of privately held data toward the public good….(More)”
How Crowdsourcing And Machine Learning Will Change The Way We Design Cities
Apr 22, 2015 10:29 am
Shaunacy Ferro at FastCompany: “In 2011, researchers at the MIT Media Lab debuted Place Pulse, a website that served as a kind of “hot or not” for cities. Given two Google Street View images culled from a select few cities including New York City and Boston, the site asked users to click on the one that seemed safer, more affluent, or more unique. The result was an empirical way to measure urban aesthetics.
Now, that data is being used to predict what parts of cities feel the safest. StreetScore, a collaboration between the MIT Media Lab’s Macro Connections and Camera Culture groups, uses an algorithm to create a super high-resolution map of urban perceptions. The algorithmically generated data could one day be used to research the connection between urban perception and crime, as well as to inform urban design decisions.
The algorithm, created by Nikhil Naik, a Ph.D. student in the Camera Culture lab, breaks an image down into its composite features—such as building texture, colors, and shapes. Based on how Place Pulse volunteers rated similar features, the algorithm assigns the streetscape a perceived safety score between 1 and 10. These scores are visualized as geographic points on a map, designed by MIT rising sophomore Jade Philipoom. Each image available from Google Maps in the two cities is represented by a colored dot: red for the locations that the algorithm tags as unsafe, and dark green for those that appear safest. The site, now limited to New York and Boston, will be expanded to feature Chicago and Detroit later this month, and eventually, with data collected from a new version of Place Pulse, will feature dozens of cities around the world….(More)”
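The pipeline described above, reducing a street image to visual features and then learning a mapping from those features to a crowd-calibrated 1–10 safety score, can be sketched as a small regression problem. The sketch below is purely illustrative: the feature vectors, weights, and ridge-regression model are invented stand-ins, not Naik's actual algorithm.

```python
import numpy as np

# Hypothetical miniature of the StreetScore idea: each image is reduced to a
# feature vector (e.g. texture/color statistics), and a regression model maps
# features to a perceived-safety score learned from crowdsourced ratings.
# All feature dimensions, weights, and data below are invented.

rng = np.random.default_rng(0)
n_images, n_features = 200, 5
X = rng.random((n_images, n_features))           # stand-in image features
true_w = np.array([4.0, -2.0, 1.5, 0.5, 3.0])    # unknown "perception" weights
scores = X @ true_w + rng.normal(0, 0.1, n_images)  # crowd-derived scores

# Fit ridge-regularized least squares: w = (X^T X + lam*I)^-1 X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ scores)

def predict_safety(features, w=w):
    """Clip the linear prediction into the 1-10 score range."""
    return float(np.clip(features @ w, 1.0, 10.0))
```

With enough rated images, the recovered weights approximate the crowd's implicit preferences, which is the sense in which the method yields "an empirical way to measure urban aesthetics."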
Why a nudge is not enough: A social identity critique of governance by stealth
Apr 22, 2015 09:12 am
Paper by Frank Mols et al in the European Journal of Political Research: “Policy makers can use four different modes of governance: ‘hierarchy’, ‘markets’, ‘networks’ and ‘persuasion’. In this article, it is argued that ‘nudging’ represents a distinct (fifth) mode of governance. The effectiveness of nudging as a means of bringing about lasting behaviour change is questioned and it is argued that evidence for its success ignores the facts that many successful nudges are not in fact nudges; that there are instances when nudges backfire; and that there may be ethical concerns associated with nudges. Instead, and in contrast to nudging, behaviour change is more likely to be enduring where it involves social identity change and norm internalisation. The article concludes by urging public policy scholars to engage with the social identity literature on ‘social influence’, and the idea that those promoting lasting behaviour change need to engage with people not as individual cognitive misers, but as members of groups whose norms they internalise and enact. …(More)”
Big Other: Surveillance Capitalism and the Prospects of an Information Civilization
Apr 22, 2015 08:58 am
New paper by Shoshana Zuboff in the Journal of Information Technology: “This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ Google is to surveillance capitalism what General Motors was to managerial capitalism. Therefore the institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism….(More)”
“Open Knowledge today announced plans to develop Open Trials, an open, online database of information about the world’s clinical research trials funded by The Laura and John Arnold Foundation. The project, which is designed to increase transparency and improve access to research, will be directed by Dr. Ben Goldacre, an internationally known leader on clinical transparency.
Open Trials will aggregate information from a wide variety of existing sources in order to provide a comprehensive picture of the data and documents related to all trials of medicines and other treatments around the world. Conducted in partnership with the Center for Open Science and supported by the Center’s Open Science Framework, the project will also track whether essential information about clinical trials is transparent and publicly accessible so as to improve understanding of whether specific treatments are effective and safe.
“There have been numerous positive statements about the need for greater transparency on information about clinical trials, over many years, but it has been almost impossible to track and audit exactly what is missing,” Dr. Goldacre, the project’s Chief Investigator and a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford, explained. “This project aims to draw together everything that is known around each clinical trial. The end product will provide valuable information for patients, doctors, researchers, and policymakers—not just on individual trials, but also on how whole sectors, researchers, companies, and funders are performing. It will show who is failing to share information appropriately, who is doing well, and how standards can be improved.”
Patients, doctors, researchers, and policymakers use the evidence from clinical trials to make informed decisions about which treatments are best. But studies show that roughly half of all clinical trial results are not published, with positive results published twice as often as negative results. In addition, much of the important information about the methods and findings of clinical trials is only made available outside the normal indexes of academic journals….
Open Trials will help to automatically identify which trial results have not been disclosed by matching registry data on trials that have been conducted against documents containing trial results. This will facilitate routine public audit of undisclosed results. It will also improve discoverability of other documents around clinical trials, which will be indexed and, in some cases, hosted. Lastly, it will help improve recruitment for clinical trials by making information and commentary on ongoing trials more accessible….(More)”
Mo Ibrahim in the Financial Times: “As finance ministers gather this week in Washington DC they cannot but agree and commit to fighting extreme poverty. All of us must rejoice in the fact that over the past 15 years, the world has reportedly already “halved the number of poor people living on the planet”.
But none of us really knows it for sure. It could be less, it could be more. In fact, for every crucial issue related to human development, whether it is poverty, inequality, employment, environment or urbanization, there is a seminal crisis at the heart of global decision making – the crisis of poor data.
Because the challenges are huge and the resources scarce, on these issues more maybe than anywhere else, we need data, to monitor the results and adapt the strategies whenever needed. Bad data feed bad management, weak accountability, loss of resources and, of course, corruption.
It is rather bewildering that while we live in this technology-driven age, the development communities and many of our African governments are relying too much on guesswork. Our friends in the development sector and our African leaders would not dream of driving their cars or flying without instruments. But somehow they pretend they can manage and develop countries without reliable data.
The development community must admit it has a big problem. The sector is relying on dodgy data sets. Take the data on extreme poverty. The data we have are mainly extrapolations of estimates from years back – even up to a decade or more ago. For 38 out of 54 African countries, data on poverty and inequality are either out-dated or non-existent. How can we measure progress with such a shaky baseline? To make things worse we also don’t know how much countries spend on fighting poverty. Only 3 per cent of African citizens live in countries where governmental budgets and expenditures are made open, according to the Open Budget Index. We will never end extreme poverty if we don’t know who or where the poor are, or how much is being spent to help them.
Our African countries have all fought and won their political independence. They should now consider the battle for economic sovereignty, which begins with the ownership of sound and robust national data: how many citizens, living where, and how, to begin with.
There are three levels of intervention required.
First, a significant increase in resources for credible, independent, national statistical institutions. Establishing a statistical office is less eye-catching than building a hospital or school, but data-driven policy will ensure that more hospitals and schools are delivered more effectively and efficiently. We urgently need these boring statistical offices. In 2013, out of a total aid budget of $134.8bn, a mere $280m went in support of statistics. Governments must also increase the resources they put into data.
Second, innovative means of collecting data. Mobile phones, geocoding, satellites and the civic engagement of young tech-savvy citizens to collect data can all secure rapid improvements in baseline data if harnessed.
Third, everyone must take on this challenge of the global public good dimension of high quality open data. Public registers of the ownership of companies, global standards on publishing payments and contracts in the extractives sector and a global charter for open data standards will help media and citizens to track corruption and expose mismanagement. Proposals for a new world statistics body – “Worldstat” – should be developed and implemented….(More)”
A Process Model for Crowdsourcing Design: A Case Study in Citizen Science
Apr 22, 2015 08:01 am
Chapter by Kazjon Grace et al in Design Computing and Cognition ’14: “Crowdsourcing design has been applied in various areas of graphic design, software design, and product design. This paper draws on those experiences and research in diversity, creativity and motivation to present a process model for crowdsourcing experience design. Crowdsourcing experience design for volunteer online communities serves two purposes: to increase the motivation of participants by making them stakeholders in the success of the project, and to increase the creativity of the design by increasing the diversity of expertise beyond experts in experience design. Our process model for crowdsourcing design extends the meta-design architecture, in which a system for online communities is designed to be iteratively re-designed by its users. We describe how our model has been deployed and adapted to a citizen science project where nature preserve visitors can participate in the design of a system called NatureNet. The major contribution of this paper is a model for crowdsourcing experience design and a case study of how we have deployed it for the design and development of NatureNet….(More)”
These researchers want to turn phones into earthquake detectors
Apr 21, 2015 10:04 pm
Russell Brandom in TheVerge: “Early warning on earthquakes can help save lives, but many countries can’t afford them. That’s why scientists are turning to another location sensor already widespread in many countries: the smartphone. A single smartphone makes for a crappy earthquake sensor — but get enough of them reporting, and it won’t matter.
A new study, published today in Science Advances, says that the right network of cell phones might be able to substitute for modern seismograph arrays, providing a crucial early warning in the event of a quake. The study looks at historical earthquake data and modern smartphone hardware (based on the Nexus 5) and comes away with a map of how a smartphone-based earthquake detector might work. As it turns out, a phone’s GPS is more powerful than you might think.
A modern phone has almost everything you could want in an earthquake sensor
Early warning systems are designed to pick up the first tremors of an earthquake, projecting where the incoming quake is centered and how strong it’s likely to be. When they work, the systems are able to give citizens and first responders crucial time to prepare for the quake. There are already seismograph-based systems in place in California, Mexico, and Japan, but poorer countries often don’t have the means to implement and maintain them. This new method wouldn’t be as good as most scientific earthquake sensors, but those can cost tens of thousands of dollars each, making a smartphone-based sensor a lot cheaper. For countries that can’t afford a seismograph-based system (which includes much of the Southern Hemisphere), it could make a crucial difference in catching quakes early.
A modern phone has almost everything you could want in an earthquake sensor: specifically, a GPS-powered location sensor, an accelerometer, and multiple data connections. There are also a lot of them, even in poor countries, so a distributed system could count on getting data points from multiple angles….(More)”
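A minimal sketch of the crowd-detection logic the article implies: any single phone's accelerometer trigger is noisy, but many triggers clustered in space and time are strong evidence of a quake. The grid size, time window, and phone-count threshold below are invented for illustration, not values from the study.

```python
from collections import defaultdict

# Hypothetical crowd-detection sketch: bucket phone trigger reports into
# coarse lat/lon grid cells, then flag any cell where enough phones
# triggered within a short time window. Parameter values are invented.

def detect_quake(reports, grid_deg=0.1, window_s=5.0, min_phones=20):
    """reports: list of (timestamp_s, lat, lon) accelerometer triggers.
    Returns the grid cells where >= min_phones triggered within window_s."""
    buckets = defaultdict(list)
    for t, lat, lon in reports:
        cell = (round(lat / grid_deg), round(lon / grid_deg))
        buckets[cell].append(t)
    alarms = []
    for cell, times in buckets.items():
        times.sort()
        lo = 0
        # slide a window over the sorted trigger times in this cell
        for hi in range(len(times)):
            while times[hi] - times[lo] > window_s:
                lo += 1
            if hi - lo + 1 >= min_phones:
                alarms.append(cell)
                break
    return alarms
```

The threshold on simultaneous triggers is what makes a network of "crappy" individual sensors usable: isolated false positives (a dropped phone, a passing truck) never co-occur across enough devices in one cell to raise an alarm.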
DataShift: “Following a study to better understand the number, type and scale of citizen-generated data initiatives across the world, the DataShift has visualised the resulting data to create an interactive online platform. Users are presented with a definition of a citizen-generated data initiative before being invited to browse the multiple initiatives according to the various themes that they address….(More)”
Secure app could enable people to vote from their smartphone
Apr 21, 2015 03:46 pm
Springwise: “There has been a lot of talk about the outdated nature of voting infrastructures. Citizens can now shop, bank and date online, but are still required to visit a polling station in person to participate in democratic votes. Harvard start-up Voatz hopes to change that with their secure, global mobile voting and campaigning platform.
Voatz could enable members of the public to cast their vote, participate in opinion polls and make campaign donations from their smartphone during elections in the not too distant future. Voters would be required to undergo comprehensive identity verification and use a biometric-enabled smartphone in order to participate in the remote, electronic voting. Voatz hopes the technology can help to make voting more simple and accessible using familiar technology…(More)”
The big medical data miss: challenges in establishing an open medical resource
Apr 21, 2015 03:42 pm
Eric J. Topol in Nature: “I call for an international open medical resource to provide a database for every individual’s genomic, metabolomic, microbiomic, epigenomic and clinical information. This resource is needed in order to facilitate genetic diagnoses and transform medical care.
“We are each, in effect, one-person clinical trials”
Laurie Becklund was a noted journalist who died in February 2015 at age 66 from breast cancer. Soon thereafter, the Los Angeles Times published her op-ed entitled “As I lay dying” (Ref. 1). She lamented, “We are each, in effect, one-person clinical trials. Yet the knowledge generated from those trials will die with us because there is no comprehensive database of metastatic breast cancer patients, their characteristics and what treatments did and didn’t help them”. She went on to assert that, in the era of big data, the lack of such a resource is “criminal”, and she is absolutely right….
Around the same time of this important op-ed, the MIT Technology Review published their issue entitled “10 Breakthrough Technologies 2015” and on the list was the “Internet of DNA” (Ref. 2). While we are often reminded that the world we live in is becoming the “Internet of Things”, I have not seen this terminology applied to DNA before. The article on the “Internet of DNA” decried, “the unfolding calamity in genomics is that a great deal of life-saving information, though already collected, is inaccessible”. It called for a global network of millions of genomes and cited the Matchmaker Exchange as a frontrunner. For this international initiative, a growing number of research and clinical teams have come together to pool and exchange phenotypic and genotypic data for individual patients with rare disorders, in order to share this information and assist in the molecular diagnosis of individuals with rare diseases….
an Internet of DNA — or what I have referred to as a massive, open, online medicine resource (MOOM) — would help to quickly identify the genetic cause of the disorder (Ref. 4) and, in the process of doing so, precious guidance for prevention, if necessary, would become available for such families who are currently left in the lurch as to their risk of suddenly dying.
So why aren’t such MOOMs being assembled? ….
There has also been much discussion related to privacy concerns that patients might be unwilling to participate in a massive medical information resource. However, multiple global consumer surveys have shown that more than 80% of individuals are ready to share their medical data provided that they are anonymized and their privacy maximally assured (Ref. 4). Indeed, just 24 hours into Apple’s ResearchKit initiative, a smartphone-based medical research programme, there were tens of thousands of patients with Parkinson disease, asthma or heart disease who had signed on. Some individuals are even willing to be “open source” — that is, to make their genetic and clinical data fully available with free access online, without any assurance of privacy. This willingness is seen among the participants in the recently launched Open Humans initiative. Along with the Personal Genome Project, Go Viral and American Gut have joined in this initiative. Still, studies suggest that most individuals would only agree to be medical research participants if their identities would not be attainable. Unfortunately, to date, little has been done to protect individual medical privacy, for which there are both promising new data protection technological approaches (Ref. 4) and the need for additional governmental legislation.
This leaves us with perhaps the major obstacle that is holding back the development of MOOMs — researchers. Even with big, team science research projects culling together hundreds of investigators and institutions throughout the world, such as the Global Alliance for Genomics and Health (GA4GH), the data obtained clinically are just as Laurie Becklund asserted in her op-ed — “one-person clinical trials” (Ref. 1). While undertaking the construction of a MOOM is a huge endeavour, there is little motivation for researchers to take on this task, as this currently offers no academic credit and has no funding source. But the transformative potential of MOOMs to improve medical care is extraordinary. Rather than having the knowledge die with each of us, the time has come to take down the walls of academic medical centres and health-care systems around the world, and create a global knowledge medical resource that leverages each individual’s information to help one another…(More)”
Bloomberg Philanthropies Launches $42 Million “What Works Cities” Initiative
Apr 21, 2015 12:16 pm
Press Release: “Today, Bloomberg Philanthropies announced the launch of the What Works Cities initiative, a $42 million program to help 100 mid-sized cities better use data and evidence. What Works Cities is the latest initiative from Bloomberg Philanthropies’ Government Innovation portfolio which promotes public sector innovation and spreads effective ideas amongst cities.
Through partners, Bloomberg Philanthropies will help mayors and local leaders use data and evidence to engage the public, make government more effective and improve people’s lives. U.S. cities with populations between 100,000 and 1 million people are invited to apply.
“While cities are working to meet new challenges with limited resources, they have access to more data than ever – and they are increasingly using it to improve people’s lives,” said Michael R. Bloomberg. “We’ll help them build on their progress, and help even more cities take steps to put data to work. What works? That’s a question that every city leader should ask – and we want to help them find answers.”
The $42 million effort is the nation’s most comprehensive philanthropic initiative to help accelerate the ability of local leaders to use data and evidence to improve the lives of their residents. What Works Cities will provide mayors with robust technical assistance, expertise, and peer-to-peer learning opportunities that will help them enhance their use of data and evidence to improve services and solve problems for communities. The program will help cities:
1. Create sustainable open data programs and policies that promote transparency and robust citizen engagement;
2. Better incorporate data into budget, operational, and policy decision making;
3. Conduct low-cost, rapid evaluations that allow cities to continually improve programs; and
4. Focus funding on approaches that deliver results for citizens.
Across the initiative, Bloomberg Philanthropies will document how cities currently use data and evidence in decision making, and how this unique program of support helps them advance. Over time, the initiative will also launch a benchmark system which will collect standardized, comparable data so that cities can understand their performance relative to peers.
In cities across the country, mayors are increasingly relying on data and evidence to deliver better results for city residents. For example, New Orleans’ City Hall used data to reduce blighted residences by 10,000 and increased the number of homes brought into compliance by 62% in 2 years. The City’s “BlightStat” program has put New Orleans, once behind in efforts to revitalize abandoned and decaying properties, at the forefront of national efforts.
In New York City and other jurisdictions, open data from transit agencies has led to the creation of hundreds of apps that residents now use to get around town, choose where to live based on commuting times, provide key transit information to the visually impaired, and more. And Louisville has asked volunteers to attach GPS trackers to their asthma inhalers to see where they have the hardest time breathing. The city is now using that data to better target the sources of air pollution….
A New Source of Data for Public Health Surveillance: Facebook Likes
Apr 21, 2015 12:03 pm
Paper by Steven Gittelman et al in the Journal of Medical Internet Research: “The development of the Internet and the explosion of social media have provided many new opportunities for health surveillance. The use of the Internet for personal health and participatory health research has exploded, largely due to the availability of online resources and health care information technology applications [1-8]. These online developments, plus a demand for more timely, widely available, and cost-effective data, have led to new ways epidemiological data are collected, such as digital disease surveillance and Internet surveys [8-25]. Over the past 2 decades, Internet technology has been used to identify disease outbreaks, track the spread of infectious disease, monitor self-care practices among those with chronic conditions, and to assess, respond, and evaluate natural and artificial disasters at a population level [6,8,11,12,14,15,17,22,26-28]. Use of these modern communication tools for public health surveillance has proven to be less costly and more timely than traditional population surveillance modes (eg, mail surveys, telephone surveys, and face-to-face household surveys).
The Internet has spawned several sources of big data, such as Facebook, Twitter, Instagram, Tumblr, Google, and Amazon. These online communication channels and marketplaces provide a wealth of passively collected data that may be mined for purposes of public health, such as sociodemographic characteristics, lifestyle behaviors, and social and cultural constructs. Moreover, researchers have demonstrated that these digital data sources can be used to predict otherwise unavailable information, such as sociodemographic characteristics among anonymous Internet users [35-38]. For example, Goel et al found no difference by demographic characteristics in the usage of social media and email. However, the frequency with which individuals accessed the Web for news, health care, and research was a predictor of gender, race/ethnicity, and educational attainment, potentially providing useful targeting information based on ethnicity and income. Integrating these big data sources into the practice of public health surveillance is vital to move the field of epidemiology into the 21st century as called for in the 2012 US “Big Data Research and Development Initiative” [19,39].
Understanding how big data can be used to predict lifestyle behavior and health-related data is a step toward the use of these electronic data sources for epidemiologic needs…(More)”
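As a hedged illustration of that premise, one could treat each user's likes as a binary feature vector and fit a simple Bernoulli naive-Bayes classifier that predicts a health-related label from co-occurring likes. The data, labels, and model choice here are invented; the paper's actual methodology may differ.

```python
import numpy as np

# Invented illustration: X is a (n_users, n_likes) 0/1 matrix of page likes,
# y is a hypothetical binary health label (e.g. reports a chronic condition).
# A Bernoulli naive-Bayes model learns which likes co-occur with the label.

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Return log class priors and Laplace-smoothed per-class like rates."""
    classes = [0, 1]
    log_prior = np.log([(y == c).mean() for c in classes])
    # P(like_j = 1 | class c), smoothed so no probability is exactly 0 or 1
    theta = np.array([
        (X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
        for c in classes
    ])
    return log_prior, theta

def predict(X, log_prior, theta):
    """Pick the class with the highest posterior log-likelihood per user."""
    ll = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    return np.argmax(ll + log_prior, axis=1)
```

Such a model is the simplest version of the targeting idea the authors describe: passively collected likes stand in for survey answers, trading some accuracy for scale and timeliness.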
John Horrigan and Lee Rainie at Pew Internet, Technology and Science: “Government reformers and advocates believe that two contemporary phenomena hold the potential to change how people engage with governments at all levels. The first is data. There is more of it than ever before and there are more effective tools for sharing it. This creates new service-delivery possibilities for government through use of data that government agencies themselves collect and generate. The second is public desire to make government more responsive, transparent and effective in serving citizens — an impulse driven by tight budgets and declining citizens’ trust in government.
The upshot has been the appearance of a variety of “open data” and “open government” initiatives throughout the United States that try to use data as a lever to improve government performance and encourage warmer citizens’ attitudes toward government.
This report is based on the first national survey that seeks to benchmark public sentiment about the government initiatives that use data to cultivate the public square. The survey, conducted by Pew Research Center in association with the John S. and James L. Knight Foundation, captures public views at the emergent moment when new technology tools and techniques are being used to disseminate and capitalize on government data and specifically looks at:
People’s level of awareness of government efforts to share data
Whether these efforts translate into people using data to track government performance
If people think government data initiatives have made, or have the potential to make, government perform better or improve accountability
The more routine kinds of government-citizen online interactions, such as renewing licenses or searching for the hours of public facilities.
The results cover all three levels of government in America — federal, state and local — and show that government data initiatives are in their early stages in the minds of most Americans. Generally, people are optimistic that these initiatives can make government more accountable; even though many are less sure open data will improve government performance. And government does touch people online, as evidenced by high levels of use of the internet for routine information applications. But most Americans have yet to delve too deeply into government data and its possibilities to closely monitor government performance.
Among the survey’s main findings:
As open data and open government initiatives get underway, most Americans are still largely engaged in “e-Gov 1.0” online activities, with far fewer attuned to “Data-Gov 2.0” initiatives that involve agencies sharing data online for public use….
Minorities of Americans say they pay a lot of attention to how governments share data with the public and relatively few say they are aware of examples where government has done a good (or bad) job sharing data. Less than one quarter use government data to monitor how government performs in several different domains….
Americans have mixed hopes about government data initiatives. People see the potential in these initiatives as a force to improve government accountability. However, the jury is still out for many Americans as to whether government data initiatives will improve government performance….
People’s baseline level of trust in government strongly shapes how they view the possible impact of open data and open government initiatives on how government functions…
Americans’ perspectives on trusting government are shaped strongly by partisan affiliation, which in turn makes a difference in attitudes about the impacts of government data initiatives…
Americans are for the most part comfortable with government sharing online data about their communities, although they sound cautionary notes when the data hits close to home…
Smartphone users have embraced information-gathering using mobile apps that rely on government data to function, but not many see a strong link between the underlying government data and economic value…
Cass Sunstein at the New York Times: “Suppose that you value freedom of choice. Are you committed to the mere opportunity to choose, or will you also insist that people actually exercise that opportunity? Is it enough if the government, or a private institution, gives people the option of going their own way? Or is it particularly important to get people to say precisely what they want? In coming decades, these seemingly abstract questions will grow in importance, because they will decide central features of our lives.
Here’s an example. Until last month, all 50 states had a simple policy for voter registration: If you want to become a voter, you have the opportunity to register. Oregon is now the first state to adopt a radically different approach: If the relevant state officials know that you live in Oregon and are 18 or older, you’re automatically registered as a voter. If you don’t want to be one, you have the opportunity to opt out.
We could easily imagine a third approach. A state might decide that if you want some kind of benefit — say, a driver’s license — you have to say whether you want to register to vote. Under this approach, the state would require you to make an active choice about whether to be a voter. You would have to indicate your desires explicitly.
In countless contexts, the government, or some private institution, must decide among three possible approaches: Give people the opportunity to opt in; give people the opportunity to opt out; or require people to make some kind of active choice. For example, an employer may say that employees will be enrolled in a pension plan only if they opt in. Alternatively, it may automatically enroll employees in a pension plan (while allowing them the opportunity to opt out). Or it may instead tell employees that they can’t start work unless they say whether they want to participate in a pension plan.
You may think that while the decision raises philosophical puzzles, the stakes are small. If so, you would be wrong; the decision can have huge consequences. By itself, the opportunity to choose is not all that matters, because many people will not exercise that opportunity. Inertia has tremendous force, and people tend to procrastinate. If a state or a private company switches from a system of opt-out to one of opt-in, or vice versa, it can have major effects on people’s lives.
For example, Oregon expects that its new policy will produce up to 300,000 new registered voters. In 2004, Congress authorized the Department of Agriculture to allow states and localities to automatically enroll eligible poor children in school meal programs, rather than requiring their parents to sign them up. As a result, millions of such children now have access to school meals. In many nations, including the United States, Britain and Denmark, automatic enrollment in pension plans has significantly increased the number of employees who participate in pension plans. The Affordable Care Act builds on this practice with a provision that will require large employers to enroll employees automatically in health insurance plans.
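The force of inertia described above can be made concrete with a small model. This is an illustrative sketch, not from the article: the preference share and "act rate" below are assumed numbers chosen only to show how the default, not the underlying preference, can dominate the outcome.

```python
def participation(default_enrolled: bool,
                  prefer_enrolled: float = 0.7,
                  act_rate: float = 0.3) -> float:
    """Fraction of a population that ends up enrolled.

    prefer_enrolled: share who would choose enrollment under an
                     active-choice regime (assumed: 70%).
    act_rate: share who overcome inertia and act when their
              preference differs from the default (assumed: 30%).
    """
    if default_enrolled:
        # Opt-out: everyone starts enrolled; only the non-preferrers
        # who actually act end up leaving.
        return 1.0 - (1.0 - prefer_enrolled) * act_rate
    # Opt-in: nobody starts enrolled; only the preferrers who act join.
    return prefer_enrolled * act_rate

opt_in = participation(default_enrolled=False)   # 0.7 * 0.3  = 0.21
opt_out = participation(default_enrolled=True)   # 1 - 0.3*0.3 = 0.91
active_choice = 0.7  # everyone must decide, so enrollment tracks preference
print(opt_in, opt_out, active_choice)
```

With identical preferences, opt-out yields more than four times the participation of opt-in, which is exactly why the Oregon and school-meal switches moved hundreds of thousands of people.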
In light of findings of this kind (and there are many more), a lot of people have argued that people would be much better off if many institutions switched, today or tomorrow, from “opt in” designs to “opt out.” Often they’re right; “opt out” can be a lot better. But from the standpoint of both welfare and personal freedom, opt out raises problems of its own, precisely because it does not involve an actual exercise of the power to choose….(More)
Daniel C. Dennett and Deb Roy in Scientific American: “More than half a billion years ago a spectacularly creative burst of biological innovation called the Cambrian explosion occurred. In a geologic “instant” of several million years, organisms developed strikingly new body shapes, new organs, and new predation strategies and defenses against them. Evolutionary biologists disagree about what triggered this prodigious wave of novelty, but a particularly compelling hypothesis, advanced by University of Oxford zoologist Andrew Parker, is that light was the trigger. Parker proposes that around 543 million years ago, the chemistry of the shallow oceans and the atmosphere suddenly changed to become much more transparent. At the time, all animal life was confined to the oceans, and as soon as the daylight flooded in, eyesight became the best trick in the sea. As eyes rapidly evolved, so did the behaviors and equipment that responded to them.
Whereas before all perception was proximal — by contact or by sensed differences in chemical concentration or pressure waves — now animals could identify and track things at a distance. Predators could home in on their prey; prey could see the predators coming and take evasive action. Locomotion is a slow and stupid business until you have eyes to guide you, and eyes are useless if you cannot engage in locomotion, so perception and action evolved together in an arms race. This arms race drove much of the basic diversification of the tree of life we have today.
Parker’s hypothesis about the Cambrian explosion provides an excellent parallel for understanding a new, seemingly unrelated phenomenon: the spread of digital technology. Although advances in communications technology have transformed our world many times in the past — the invention of writing signaled the end of prehistory; the printing press sent waves of change through all the major institutions of society — digital technology could have a greater impact than anything that has come before. It will enhance the powers of some individuals and organizations while subverting the powers of others, creating both opportunities and risks that could scarcely have been imagined a generation ago.
Through social media, the Internet has put global-scale communications tools in the hands of individuals. A wild new frontier has burst open. Services such as YouTube, Facebook, Twitter, Tumblr, Instagram, WhatsApp and SnapChat generate new media on a par with the telephone or television — and the speed with which these media are emerging is truly disruptive. It took decades for engineers to develop and deploy telephone and television networks, so organizations had some time to adapt. Today a social-media service can be developed in weeks, and hundreds of millions of people can be using it within months. This intense pace of innovation gives organizations no time to adapt to one medium before the arrival of the next.
The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before — and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.
The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness….(More at Medium)”
21st-Century Public Servants: Using Prizes and Challenges to Spur Innovation
Apr 20, 2015 06:59 am
Jenn Gustetic at the Open Government Initiative Blog: “Thousands of Federal employees across the government are using a variety of modern tools and techniques to deliver services more effectively and efficiently, and to solve problems that relate to the missions of their Agencies. These 21st-century public servants are accomplishing meaningful results by applying new tools and techniques to their programs and projects, such as prizes and challenges, citizen science and crowdsourcing, open data, and human-centered design.
Prizes and challenges have been a particularly popular tool at Federal agencies. With 397 prizes and challenges posted on challenge.gov since September 2010, there are hundreds of examples of the many different ways these tools can be designed for a variety of goals. For example:
NASA’s Mars Balance Mass Challenge: When NASA’s Curiosity rover plummeted through the Martian atmosphere and came to rest on the surface of Mars in 2012, about 300 kilograms of solid tungsten mass had to be jettisoned to ensure the spacecraft was in a safe orientation for landing. In an effort to seek creative concepts for small science and technology payloads that could potentially replace a portion of such jettisoned mass on future missions, NASA released the Mars Balance Mass Challenge. In only two months, over 200 concepts were submitted by over 2,100 individuals from 43 different countries for NASA to review. Proposed concepts ranged from small drones and 3D printers to radiation detectors and pre-positioning supplies for future human missions to the planet’s surface. NASA awarded the $20,000 prize to Ted Ground of Rising Star, Texas for his idea to use the jettisoned payload to investigate the Mars atmosphere in a way similar to how NASA uses sounding rockets to study Earth’s atmosphere. This was the first time Ted worked with NASA, and NASA was impressed by the novelty and elegance of his proposal: a proposal that NASA likely would not have received through a traditional contract or grant because individuals, as opposed to organizations, are generally not eligible to participate in those types of competitions.
National Institutes of Health (NIH) Breast Cancer Startup Challenge (BCSC): The primary goals of the BCSC were to accelerate the process of bringing emerging breast cancer technologies to market, and to stimulate the creation of start-up businesses around nine federally conceived and owned inventions, and one invention from an Avon Foundation for Women portfolio grantee. While NIH has the capacity to enable collaborative research or to license technology to existing businesses, many technologies are at an early stage and are ideally suited for licensing by startup companies to further develop them into commercial products. This challenge established 11 new startups that have the potential to create new jobs and help promising NIH cancer inventions support the fight against breast cancer. The BCSC turned the traditional business plan competition model on its head to create a new channel to license inventions by crowdsourcing talent to create new startups.
These two examples of challenges are very different, in terms of their purpose and the process used to design and implement them. The success they have demonstrated shouldn’t be taken for granted. It takes access to resources (both information and people), mentoring, and practical experience to both understand how to identify opportunities for innovation tools, like prizes and challenges, to use them to achieve a desired outcome….
Last month, the Challenge.gov program at the General Services Administration (GSA), the Office of Personnel Management (OPM)’s Innovation Lab, the White House Office of Science and Technology Policy (OSTP), and a core team of Federal leaders in the prize-practitioner community began collaborating with the Federal Community of Practice for Challenges and Prizes to develop the other half of the open innovation toolkit, the prizes and challenges toolkit. In developing this toolkit, OSTP and GSA are thinking not only about the information and process resources that would be helpful to empower 21st-century public servants using these tools, but also how we help connect these people to one another to add another meaningful layer to the learning environment…..
Creating an inventory of skills and knowledge across the 600-person (and growing!) Federal community of practice in prizes and challenges will likely be an important resource in support of a useful toolkit. Prize design and implementation can involve tricky questions, such as:
Do I have the authority to conduct a prize or challenge?
How should I approach problem definition and prize design?
Can agencies own solutions that come out of challenges?
How should I engage the public in developing a prize concept or rules?
What types of incentives work best to motivate participation in challenges?
What legal requirements apply to my prize competition?
Can non-Federal employees be included as judges for my prizes?
How objective do the judging criteria need to be?
Can I partner to conduct a challenge? What’s the right agreement to use in a partnership?
Who can win prize money and who is eligible to compete? …(More)
Inaugural Lecture by Aske Plaat on the acceptance of the position of professor of Data Science at the Universiteit Leiden: “…Today, everybody and everything produces data. People produce large amounts of data in social networks and in commercial transactions. Medical, corporate, and government databases continue to grow. Ten years ago there were a billion Internet users. Now there are more than three billion, most of whom are mobile. Sensors continue to get cheaper and are increasingly connected, creating an Internet of Things. The next three billion users of the Internet will not all be human, and will generate a large amount of data. In every discipline, large, diverse, and rich data sets are emerging, from astrophysics, to the life sciences, to medicine, to the behavioral sciences, to finance and commerce, to the humanities and to the arts. In every discipline people want to organize, analyze, optimize and understand their data to answer questions and to deepen insights. The availability of so much data and the ability to interpret it are changing the way the world operates. The number of sciences using this approach is increasing. The science that is transforming this ocean of data into a sea of knowledge is called data science. In many sciences the impact on the research methodology is profound—some even call it a paradigm shift.
…I will address the question of why there is so much interest in data. I will answer this question by discussing one of the most visible recent challenges to public health: the 2014 Ebola outbreak in West Africa…(More)”
David C. Roberts at Quartz: “Every year, outdoor air pollution kills more people worldwide than malaria and HIV combined. People in China, particularly in its largest cities, are some of the most affected, since the country’s rapid economic growth has come at the cost of air quality. This issue remained largely unaddressed until the US embassy in Beijing began to tweet out air quality data in 2008, providing a remarkable demonstration of the transformative power of democratizing data. The tweets sparked an energetic environmental movement that forced China’s leaders to acknowledge the massive scale of the problem and begin to take measures to combat it.
The initiative to publicize air quality data was subsequently expanded to US consulates in several major Chinese cities, providing a wealth of new scientific data. I recently worked with Federico San Martini and Christa Hasenkopf (both atmospheric scientists at the US State Department who are involved in this program) to analyze this data…(More)”
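The embassy and consulate feeds report raw PM2.5 concentrations, which are typically translated into the familiar Air Quality Index before being tweeted. As a hedged sketch of what that translation involves (the breakpoint table is the published US EPA scale; the code itself is an illustration, not the authors' analysis pipeline):

```python
# US EPA breakpoints for 24-hour PM2.5 (µg/m³) → AQI.
# Each row: (conc_low, conc_high, aqi_low, aqi_high)
PM25_BREAKPOINTS = [
    (0.0,    12.0,    0,  50),   # Good
    (12.1,   35.4,   51, 100),   # Moderate
    (35.5,   55.4,  101, 150),   # Unhealthy for sensitive groups
    (55.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very unhealthy
    (250.5, 350.4,  301, 400),   # Hazardous
    (350.5, 500.4,  401, 500),   # Hazardous
]

def pm25_to_aqi(conc: float) -> int:
    """Linearly interpolate an AQI value within the matching band."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError(f"PM2.5 concentration {conc} outside AQI range")

# A reading of 12 µg/m³ sits at the top of "Good"; Beijing's worst
# episodes have pushed well into the "Hazardous" bands.
print(pm25_to_aqi(12.0), pm25_to_aqi(250.0))
```

The same piecewise-linear mapping is what turned a stream of sensor numbers into a single, widely understood index that the public could rally around.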
Sumana Harihareswara at code4lib: “…Before I worked in open source, I worked in customer service. I saw first-hand how design flaws (in architecture, signage, and websites) could frustrate and drive away customers and make more work for me. Every time I participated in an open source project — AltLaw, GNOME, MediaWiki, and more — I’ve brought that experience with me. I found it particularly striking that small changes on Wikipedia could cause large changes in user behavior, as I discuss in this essay, which is adapted from my keynote speech.
This issue goes beyond software, as I explain with the healthcare and banking examples. The spark that caused me to write the speech was reading Professor Lisa J. Servon’s piece in The Atlantic about the usability of storefront check cashing services; I saw a pattern where poor user experience repels people from crucial and empowering services, and decided, in a flash of anger and inspiration, to write “User Experience is a Human Rights Issue.”…
The Last Mile Problem
The largest hurdles we as technologists face are choosing to make the right things in the first place and choosing to make them usable. In the 1990s, telecommunications companies laid down a lot of fiber to connect big hubs to one another, but often it took years to connect those hubs to the actual houses and schools and shops and offices, because it was expensive, or because companies were not creative enough to do it well. This is called the “last mile problem,” and I think usability has a similar problem. We have to be creative and disciplined enough to actually provide services in a way that people can use them.
When we’re building services for people, we often have a lot more practice seeing things from the computer’s point of view or from the data’s point of view than from another person’s point of view. In tech, we understand how to build arteries better than we understand how to build capillaries. Personally, I think capillaries are more interesting than arteries. Maybe it’s just personal temperament, but I like all the little surprising details of how people end up experiencing the ripple effects of big new systems, and how users actually interact with the user interface of a service, especially ones that we don’t really think of as having a user interface. Like taxes, or healthcare, or hotels. All these big systems end in little capillaries, where people exchange information or get healed or get whatever they need. And when those capillaries aren’t working correctly, then those people just don’t get what they need. The hubs are connected to each other, but people aren’t connected to the hubs.
Over and over, in lots of different fields, we see that bad usability makes a huge difference. When choosing between two services, people will make very different choices, depending on which service actually seems designed around the user’s needs….(More)”