
A weekly curation of new findings and developments on innovation in governance.


Beta Release of the NETmundial Solutions Map

Apr 16, 2015 09:32 am

“…the GovLab is pleased to announce the beta release of the NETmundial Solutions Map for further public comment (from April 1 to May 1, 2015). The release is the culmination of a 6-month engagement and development strategy to ensure that the tool reflects input from a diverse set of global stakeholders. The NETmundial Solutions Map is co-developed by the GovLab and Second Rise, and is facilitated by the Internet Corporation for Assigned Names and Numbers (ICANN).

 

[Screenshot of the NETmundial Solutions Map]

The tool seeks to support innovation in global governance toward a more distributed Internet Governance approach. It is designed to enable information sharing and collaboration across Internet governance issues. It will serve as a repository of information that links issues, actors, solutions and resources, and help users understand the current landscape of Internet governance.

Today, information about internet governance is scattered and hard to find. At the same time we need more coordination and collaboration to address specific issues. The Map seeks to facilitate a more collaborative and distributed way of solving Internet governance issues by providing users with a baseline of what responses already exist and who is working on what — Stefaan Verhulst, Co-Founder and Chief of Research and Development of the GovLab.

…This beta version of the NETmundial Solutions Map seeks to explore how to map the Internet governance landscape in a useful and sustainable way. Future revisions will continue to be guided by community feedback.

To this end, we welcome your comments on the following (the comment period runs until May 1):

  • What do you feel works well in the map?
  • What needs improving?
  • How can the map help you in your work?
  • Would you want to be part of the next version as a content provider?”

Full Post: Beta Release of the NETmundial Solutions Map


Thinking Ahead – Essays on Big Data, Digital Revolution, and Participatory Market Society

Apr 16, 2015 12:59 am

New book by Dirk Helbing: “The rapidly progressing digital revolution is now touching the foundations of the governance of societal structures. Humans are on the verge of evolving from consumers to prosumers, and old, entrenched theories – in particular sociological and economic ones – are falling prey to these rapid developments. The original assumptions on which they are based are being questioned. Each year we produce as much data as in the entire human history – can we possibly create a global crystal ball to predict our future and to optimally govern our world? Do we need wide-scale surveillance to understand and manage the increasingly complex systems we are constructing, or would bottom-up approaches such as self-regulating systems be a better solution to creating a more innovative, more successful, more resilient, and ultimately happier society? Working at the interface of complexity theory, quantitative sociology and Big Data-driven risk and knowledge management, the author advocates the establishment of new participatory systems in our digital society to enhance coordination, reduce conflict and, above all, reduce the “tragedies of the commons,” resulting from the methods now used in political, economic and management decision-making….(More)”

Full Post: Thinking Ahead – Essays on Big Data, Digital Revolution, and Participatory Market Society


Inspiring and Informing Citizens Online: A Media Richness Analysis of Varied Civic Education Modalities

Apr 16, 2015 12:16 am

Paper by David Brinker, John Gastil, and Robert C. Richards in the Journal of Computer-Mediated Communication (forthcoming): “Public deliberation on the Internet is a promising but unproven practice. Online deliberation can engage large numbers of citizens at relatively low cost, but it is unclear whether such programs have substantial civic impact. One factor in determining their effectiveness may be the communicative features of the online setting in which they occur. Within a Media Richness Theory framework, we conducted a quasi-experiment to assess the civic outcomes of interventions executed online by non-profit organizations prior to the 2012 U.S. presidential election. The results assess the impact of these interventions on issue knowledge and civic attitudes. Comparisons of the interventions illustrate the importance of considering media richness online, and our discussion considers the theoretical and practical implications of these findings….(More)”

Full Post: Inspiring and Informing Citizens Online: A Media Richness Analysis of Varied Civic Education Modalities


Solving the obesity crisis: knowledge, nudge or nanny?

Apr 15, 2015 11:51 pm

BioMed Central blog: “The 5th Annual Oxford London Lecture (17 March 2015) was delivered by Professor Susan Jebb from Oxford University. The presentation was titled: ‘Knowledge, nudge and nanny: Opportunities to improve the nation’s diet’. In this guest blog, Dr Helen Walls, Research Fellow at the London School of Hygiene and Tropical Medicine, covers key themes from this presentation.

“Obesity and related non-communicable diseases such as diabetes, heart disease and cancer pose a significant health, social and economic burden in countries worldwide, including the United Kingdom. Whilst the need for action is clear, the nutrition policy response is a highly controversial topic. Professor Jebb raised the question of how best to achieve dietary change: through ‘knowledge, nudge or nanny’?

Education regarding healthy nutrition is an important strategy, but insufficient. People are notoriously bad at putting their knowledge to work. The inclination to overemphasise the importance of knowledge, whilst ignoring the influence of environmental factors on human behaviours, is termed the ‘fundamental attribution error’. Education may also contribute to widening inequities.

Our choices are strongly shaped by the environments in which we live. So if ‘knowledge’ is not enough, what sort of interventions are appropriate? This raises questions regarding individual choice and the role of government. Here, Professor Jebb introduced the Nuffield Intervention Ladder.

 

Nuffield Intervention Ladder (Nuffield Council on Bioethics, Public Health: Ethical Issues, 2007)

The Nuffield Intervention Ladder (hereafter ‘the ladder’) ranks intervention types from least to most intrusive on personal choice. In addressing diets and obesity, Professor Jebb believes we need a range of policy types drawn from across the rungs of the ladder.

Less intrusive measures on the ladder could include provision of information about healthy and unhealthy foods, and provision of nutritional information on products (which helps knowledge be put into action). More effective than labelling is the signposting of healthier choices.

Taking a few steps up the ladder brings in ‘nudge’, a concept from behavioural economics. A nudge is any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding options or significantly changing economic incentives. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.


The in-store environment has a huge influence over our choices, and many nudge options would fit here. For example, gondola-end (end of aisle) promotions create a huge uplift in sales. Removing unhealthy products from this position could make a considerable difference to the contents of supermarket baskets.

Nudges could help people make better nutritional choices, but they are unlikely to be enough on their own. We celebrate the achievements of tobacco control policies and smoking reduction; that success relied on a range of intervention types, including many legislative measures – the ‘nanny’ of this presentation’s title….(More)”

Full Post: Solving the obesity crisis: knowledge, nudge or nanny?


Modern Methods for Sentiment Analysis

Apr 15, 2015 04:44 pm

Review by Michael Czerny: “Sentiment analysis is a common application of Natural Language Processing (NLP) methodologies, particularly classification, whose goal is to extract the emotional content in text. In this way, sentiment analysis can be seen as a method to quantify qualitative data with some sentiment score. While sentiment is largely subjective, sentiment quantification has enjoyed many useful implementations, such as businesses gaining understanding about consumer reactions to a product, or detecting hateful speech in online comments.

The simplest form of sentiment analysis is to use a dictionary of good and bad words. Each word in a sentence has a score, typically +1 for positive sentiment and -1 for negative. Then, we simply add up the scores of all the words in the sentence to get a final sentiment total. Clearly, this has many limitations, the most important being that it neglects context and surrounding words. For example, in our simple model the phrase “not good” may be classified as 0 sentiment, given “not” has a score of -1 and “good” a score of +1. A human would likely classify “not good” as negative, despite the presence of “good”.
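
A minimal sketch of this dictionary approach, in Python with an invented toy lexicon (not any real sentiment dictionary), shows how the “not good” failure falls out:

    # Toy lexicon -- the words and scores are invented for illustration.
    LEXICON = {"good": 1, "great": 1, "love": 1,
               "bad": -1, "awful": -1, "not": -1}

    def lexicon_score(text):
        """Sum per-word scores; words outside the lexicon count as 0."""
        return sum(LEXICON.get(word, 0) for word in text.lower().split())

    print(lexicon_score("great service"))  # 1 -> positive
    print(lexicon_score("not good"))       # 0 -> context is lost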

Another common method is to treat a text as a “bag of words”. We treat each text as a 1 by N vector, where N is the size of our vocabulary. Each column is a word, and the value is the number of times that word appears. For example, the phrase “bag of bag of words” might be encoded as [2, 2, 1]. This could then be fed into a machine learning algorithm for classification, such as logistic regression or SVM, to predict sentiment on unseen data. Note that this requires data with known sentiment to train on in a supervised fashion. While this is an improvement over the previous method, it still ignores context, and the size of the data increases with the size of the vocabulary.
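
As a hedged sketch of that pipeline (the four labeled texts are invented; a real model needs far more training data), scikit-learn's CountVectorizer and LogisticRegression fit together like this:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["great product, works well", "love it, very useful",
             "terrible, broke in a day", "total waste of money"]
    labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)  # each row is a 1-by-N count vector

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(vectorizer.transform(["works great"])))  # likely [1]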

Word2Vec and Doc2Vec

Recently, Google developed a method called Word2Vec that captures the context of words, while at the same time reducing the size of the data. Word2Vec is actually two different methods: Continuous Bag of Words (CBOW) and Skip-gram. In the CBOW method, the goal is to predict a word given the surrounding words. Skip-gram is the converse: we want to predict a window of words given a single word (see Figure 1). Both methods use artificial neural networks as their classification algorithm. Initially, each word in the vocabulary is a random N-dimensional vector. During training, the algorithm learns the optimal vector for each word using the CBOW or Skip-gram method….(More)”
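
The review itself stops short of code, but as an illustrative sketch, the gensim library (4.x API shown; the two-sentence corpus is a placeholder) exposes both variants through a single sg flag:

    from gensim.models import Word2Vec

    corpus = [["open", "data", "helps", "governance"],
              ["big", "data", "informs", "policy"]]

    # sg=0 trains CBOW (predict a word from its context);
    # sg=1 trains Skip-gram (predict the context from a word).
    cbow = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=0)
    skipgram = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

    print(cbow.wv["data"].shape)  # (50,) -- each word is now a 50-d vector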

Full Post: Modern Methods for Sentiment Analysis


The Rule of History

Apr 15, 2015 12:56 pm

Jill Lepore about Magna Carta, the Bill of Rights, and the hold of time in The New Yorker: “…Magna Carta has been taken as foundational to the rule of law, chiefly because in it King John promised that he would stop throwing people into dungeons whenever he wished, a provision that lies behind what is now known as due process of law and is understood not as a promise made by a king but as a right possessed by the people. Due process is a bulwark against injustice, but it wasn’t put in place in 1215; it is a wall built stone by stone, defended, and attacked, year after year. Much of the rest of Magna Carta, weathered by time and for centuries forgotten, has long since crumbled, an abandoned castle, a romantic ruin.

Magna Carta is written in Latin. The King and the barons spoke French. “Par les denz Dieu!” the King liked to swear, invoking the teeth of God. The peasants, who were illiterate, spoke English. Most of the charter concerns feudal financial arrangements (socage, burgage, and scutage), obsolete measures and descriptions of land and of husbandry (wapentakes and wainages), and obscure instruments for the seizure and inheritance of estates (disseisin and mort d’ancestor). “Men who live outside the forest are not henceforth to come before our justices of the forest through the common summonses, unless they are in a plea,” one article begins.

Magna Carta’s importance has often been overstated, and its meaning distorted. “The significance of King John’s promise has been anything but constant,” U.S. Supreme Court Justice John Paul Stevens aptly wrote, in 1992. It also has a very different legacy in the United States than it does in the United Kingdom, where only four of its original sixty-some provisions are still on the books. In 2012, three New Hampshire Republicans introduced into the state legislature a bill that required that “all members of the general court proposing bills and resolutions addressing individual rights or liberties shall include a direct quote from the Magna Carta which sets forth the article from which the individual right or liberty is derived.” For American originalists, in particular, Magna Carta has a special lastingness. “It is with us every day,” Justice Antonin Scalia said in a speech at a Federalist Society gathering last fall.

Much has been written of the rule of law, less of the rule of history. Magna Carta, an agreement between the King and his barons, was also meant to bind the past to the present, though perhaps not in quite the way it’s turned out. That’s how history always turns out: not the way it was meant to. In preparation for its anniversary, Magna Carta acquired a Twitter username: @MagnaCarta800th….(More)”

Full Post: The Rule of History


Citizen Science for Citizen Access to Law

Apr 15, 2015 11:29 am

Paper by Michael Curtotti, Wayne Weibel, Eric McCreath, Nicolas Ceynowa, Sara Frug, and Tom R. Bruce: “This paper sits at the intersection of citizen access to law, legal informatics and plain language. The paper reports the results of a joint project of the Cornell University Legal Information Institute and the Australian National University which collected thousands of crowdsourced assessments of the readability of law through the Cornell LII site. The aim of the project is to enhance accuracy in the prediction of the readability of legal sentences. The study requested readers on legislative pages of the LII site to rate passages from the United States Code and the Code of Federal Regulations and other texts for readability and other characteristics. The research provides insight into who uses legal rules and how they do so. The study enables conclusions to be drawn as to the current readability of law and spread of readability among legal rules. The research is intended to enable the creation of a dataset of legal rules labelled by human judges as to readability. Such a dataset, in combination with machine learning, will assist in identifying factors in legal language which impede readability and access for citizens. As far as we are aware, this research is the largest ever study of readability and usability of legal language and the first research which has applied crowdsourcing to such an investigation. The research is an example of the possibilities open for enhancing access to law through engagement of end users in the online legal publishing environment for enhancement of legal accessibility and through collaboration between legal publishers and researchers….(More)”
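
The paper describes a dataset and a goal rather than a specific model, but a hedged sketch of the idea (fitting crowd readability ratings to simple surface features of sentences) might look like this in Python; the sentences, ratings, and features are all invented:

    from sklearn.linear_model import LinearRegression

    def surface_features(sentence):
        words = sentence.split()
        return [len(words),                               # sentence length
                sum(len(w) for w in words) / len(words)]  # mean word length

    sentences = ["The tax applies to all income.",
                 "Notwithstanding subsection (a), the aforesaid provisions "
                 "shall apply mutatis mutandis to every person."]
    ratings = [4.5, 1.5]  # invented crowd ratings (higher = more readable)

    model = LinearRegression()
    model.fit([surface_features(s) for s in sentences], ratings)
    print(model.predict([surface_features("The rule is short and plain.")]))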

Full Post: Citizen Science for Citizen Access to Law


Special Report: 50 Years of Moore’s Law

Apr 15, 2015 10:00 am

IEEE Spectrum: “Fifty years ago this month, Gordon Moore forecast a bright future for electronics. His ideas were later distilled into a single organizing principle—Moore’s Law—that has driven technology forward at a staggering clip. We have all benefited from this miraculous development, which has forcefully shaped our modern world.

In this special report, we find that the end won’t be sudden and apocalyptic but rather gradual and complicated. Moore’s Law truly is the gift that keeps on giving—and surprising, as well….(More)”

Full Post: Special Report: 50 Years of Moore’s Law


Open Data Literature Review

Apr 15, 2015 09:10 am

Review by Emmie Tran and Ginny Scholtes: “Open data describes large datasets that governments at all levels release online and free of charge for analysis by anyone for any purpose. Entrepreneurs may use open data to create new products and services, and citizens may use it to gain insight into the government. A plethora of time saving and other useful applications have emerged from open data feeds, including more accurate traffic information, real-time arrival of public transportation, and information about crimes in neighborhoods. But data held by the government is implicitly or explicitly about individuals. While open government is often presented as an unqualified good, sometimes open data can identify individuals or groups, leading to invasions of privacy and disparate impact on vulnerable populations.

This review provides background to parties interested in open data, specifically for those attending the 19th Annual BCLT/BTLJ Symposium on open data. Part I defines open data, focusing on the origins of the open data movement and the types of data subject to government retention and public access. Part II discusses how open data can benefit society, and Part III delves into the many challenges and dangers of open data. Part IV addresses these challenges, looking at how the United States and other countries have implemented open data regimes, and considering some of the proposed measures to mitigate the dangers of open data….(More)”

Full Post: Open Data Literature Review


Crowdsourcing Pedestrian and Cyclist Activity Data

Apr 15, 2015 09:00 am

Paper by Amy Smith: “This paper considers how crowdsourcing applications and crowdsourced data are currently being applied, as well as potential new uses for active transportation research and planning efforts of various types. The objectives of this white paper are to review crowdsourced bicycle and pedestrian data resources and crowdsourcing tools; discuss potential planning implementations of crowdsourced data for a variety of bicycle and pedestrian project types; and provide examples of how crowdsourcing is currently being used by the planning community. Due to software application turnover, many of the examples provided describe tools that may no longer be in use, have evolved significantly, or have been/will eventually be deprecated with the advance of new technologies. This paper is not intended to be a comprehensive outline of crowdsourcing applications in the transportation planning profession or a dictionary of crowdsourcing system types, but rather a resource for those interested in using crowdsourcing systems in active transportation planning and research. (Full Paper)”

Full Post: Crowdsourcing Pedestrian and Cyclist Activity Data


New surveys reveal dynamism, challenges of open data-driven businesses in developing countries

Apr 15, 2015 08:58 am

Alla Morrison at World Bank Open Data blog: “Was there a class of entrepreneurs emerging to take advantage of the economic possibilities offered by open data, were investors keen to back such companies, were governments tuned to and responsive to the demands of such companies, and what were some of the key financing challenges and opportunities in emerging markets? As we began our work on the concept of an Open Fund, we partnered with Ennovent (India), MDIF (East Asia and Latin America) and Digital Data Divide (Africa) to conduct short market surveys to answer these questions, with a focus on trying to understand whether a financing gap truly existed in these markets. The studies were fairly quick (4-6 weeks) and reached only a small number of companies (193 in India, 70 in Latin America, 63 in South East Asia, and 41 in Africa – and not everybody responded) but the findings were fairly consistent.

  • Open data is still a very nascent concept in emerging markets, and there’s only a small class of entrepreneurs/investors that is aware of the economic possibilities; there’s a lot of work to do in the ‘enabling environment’
    • In many regions the distinction between open data, big data, and private sector generated/scraped/collected data was blurry at best among entrepreneurs and investors (some of our findings consequently are better indicators of data-driven rather than open data-driven businesses)
  • There’s a small but growing number of open data-driven companies in all the markets we surveyed and these companies target a wide range of consumers/users and are active in multiple sectors
    • A large percentage of identified companies operate in sectors with high social impact – health and wellness, environment, agriculture, transport. For instance, in India, after excluding business analytics companies, a third of data companies seeking financing are in healthcare and a fifth in food and agriculture, and some of them have the low-income population or the rural segment of India as an intended beneficiary segment. In Latin America, the number of companies in business services, research and analytics was closely followed by health, environment and agriculture. In Southeast Asia, business, consumer services, and transport came out in the lead.
    • We found the highest number of companies in Latin America and Asia, with the following countries leading the way – Mexico, Chile, and Brazil, with Colombia and Argentina closely behind in Latin America; and India, Indonesia, the Philippines, and Malaysia in Asia
  • An actionable pipeline of data-driven companies exists in Latin America and in Asia
    • We heard demand for different kinds of financing (equity, debt, working capital) but the majority of the need was for equity and quasi-equity in amounts ranging from $100,000 to $5 million USD, with averages of between $2 and $3 million USD depending on the region.
  • There’s a significant financing gap in all the markets
    • The investment sizes required, while they range up to several million dollars, are generally small. Analysis of more than 300 data companies in Latin America and Asia indicates a total estimated need for financing of more than $400 million
  • Venture capital firms generally don’t recognize data as a separate sector and group data-driven companies with their standard information communication technology (ICT) investments
    • Interviews with founders suggest that moving beyond seed stage is particularly difficult for data-driven startups. While many companies are able to cobble together an initial seed round augmented by bootstrapping to get their idea off the ground, they face a great deal of difficulty when trying to raise a second, larger seed round or Series A investment.
    • From the perspective of startups, investors favor banal e-commerce (e.g., according to Tech in Asia, out of the $645 million in technology investments made public across the region in 2013, 92% were related to fashion and online retail) or consumer service startups and ignore open data-focused startups even if they have a strong business model and solid key performance indicators. The space is ripe for a long-term investor with a generous risk appetite and multiple bottom line goals.
  • Poor data quality was the number one issue these companies reported.
    • Companies reported significant waste and inefficiency in accessing/scraping/cleaning data.

The analysis below borrows heavily from the work done by the partners. We should of course mention that the findings are provisional and should not be considered authoritative (please see the section on methodology for more details)….(More).”

Full Post: New surveys reveal dynamism, challenges of open data-driven businesses in developing countries


White House Releases 150 Data Sets to Fight Climate Change

Apr 15, 2015 04:56 am

At GovTech: “To support the president’s Climate Data Initiative, the White House revealed on Tuesday, April 7, a series of data projects and partnerships that includes more than 150 new open data sets, as well as commitments from Google, Microsoft and others to cultivate climate analysis.

The undertakings were released at a White House climate and health conference where John Holdren, director of the White House Office of Science and Technology Policy, pressed the need for greater data to compel decreases to greenhouse emissions.

“This is a science-based administration, a fact-based administration, and our climate policies have to be based on fact, have to be based on data, and we want to make those data available to everybody,” Holdren said.

The data initiative touches multiple agencies — including NASA, the Centers for Disease Control and Prevention, the National Institutes of Health and the Environmental Protection Agency — and is part of the White House proclamation of a new National Public Health Week, from April 6 to April 12, to spur national health solutions and awareness.

The 150-plus data sets are all connected to health, and are among the 560 climate-related data sets available on Data.gov, the U.S. government’s open data portal. Accompanying the release, the Department of Health and Human Services added a Health Care Facilities Toolkit on Toolkit.climate.gov, a site that delivers climate resilience techniques, strategies, case studies and tools for organizations attempting climate change initiatives.

Holdren was followed by White House Chief Data Scientist D.J. Patil, who moderated a tech industry panel with representatives from Google, Microsoft and GIS mapping software company Esri.

Google Earth Outreach Program Manager Allison Lieber confirmed that Google will continue to provide 10 million hours of high-performance computing for climate data projects — down from 50 million in 2014 — and the company will likewise provide climate data hosting on Google Earth….(More)”

Full Post: White House Releases 150 Data Sets to Fight Climate Change


Ready Steady Gov

Apr 15, 2015 02:44 am

Joshua Chambers at FutureGov: “…two public servants in Western Australia have come up with an alternative way of pushing forwards their government’s digital delivery.

Their new project, Ready Steady Gov, provides free web templates based on an open source CMS so that any agency can quickly upgrade their web site, for free. The officials’ templates are based on the web site guidance published by the state: the Web Governance Framework and the Common Website Elements documentation.

The site was motivated by a desire to quickly improve government web sites. “I’m sure you’ve heard the phrase… ‘Everything takes longer in government’. We want building websites to become an exception to this rule,” wrote Jessy Yuen and Vincent Manera, the project’s founders.

They have created five open source templates “which are lightly styled so that you can easily integrate your own branding”. They are responsive so that they fit all screen sizes, and meet the required accessibility standards….(More)”


Full Post: Ready Steady Gov


Government 5D Transparency

Apr 14, 2015 12:09 pm

“The latest ePSI Platform Topic Report focuses on the subject of Government Transparency, exploring the various types of transparency and explaining the interconnections between them.

The report, written by Veronica Cretu and Nicolae Cretu, focuses on data transparency, process transparency, strategic transparency, transformational transparency, and radical transparency, and examines the added value for governments in relation to 5D transparency.

You can access the report by clicking here.”

Full Post: Government 5D Transparency


Innovating for Impact in Public Policy

Apr 14, 2015 10:51 am

Post by Derek B. Miller and Lisa Rudnick: “Political systems across democratic countries are becoming more ideologically and politically divided over how to use increasingly limited resources. In the face of these pressures everyone wants results: they want them cheap and they want them now. This demand for better results is falling squarely on civil servants.

In the performance of their jobs, everyone is being asked to do more with less. This demand comes independent of theme, scope, or size of the public institution. It is as true for those working in transportation as it is for those in education or public health or international peace and security; whether in local government or at UN agencies; or else in the NGOs, think tanks, and community-based organizations that partner with them. Even private industry feels the squeeze.

When we say “do more with less” we mean more impact, better results, and more effective outcomes than ever before with less money and time, fewer people, and (often) less political support.

In taking a cue from the private sector, the public sector is looking for solutions in “Innovation.”

Innovation is the act of making possible that which was previously impossible in order to solve a problem. Given that present performance is insufficient to meet demand, there is a turn to innovation (broadly defined) to maximize resources through new methods to achieve goals. In this way, innovation is being treated as a strategic imperative for successful governance.

From our vantage point — having worked on innovation and public policy for over a decade, mostly from within the UN — we see two driving forces for innovation that we believe are going to shape the future of public policy performance and, by extension, the character of democratic governance in the years to come. Managing the convergence of these two approaches to innovation is going to be one of the most important public policy agendas for the next several decades (for a detailed discussion of this topic, see Trying it on for Size: Design and International Public Policy).

The first is evidence-based policymaking. The goal of evidence-based policymaking is to build a base of evidence — often about past performance — so that lessons can be learned, best practices distilled, and new courses of action recommended (or required) to guide future organizational behavior for more efficient or effective outcomes.

The second force is going to be design. The field of design evolved in the crucible of the arts and not in the Academy. It is therefore a late-comer to public policy…(More)”

Full Post: Innovating for Impact in Public Policy


The End of Asymmetric Information

Apr 14, 2015 10:48 am

Essay by Alex Tabarrok and Tyler Cowen: “Might the age of asymmetric information – for better or worse – be over? Market institutions are rapidly evolving to a situation where very often the buyer and the seller have roughly equal knowledge. Technological developments are giving everyone who wants it access to the very best information when it comes to product quality, worker performance, matches to friends and partners, and the nature of financial transactions, among many other areas.

These developments will have implications for how markets work, how much consumers benefit, and also economic policy and the law. As we will see, there may be some problematic sides to these new arrangements, specifically when it comes to privacy. Still, a large amount of economic regulation seems directed at a set of problems which, in large part, no longer exist…

Many “public choice” problems are really problems of asymmetric information. In William Niskanen’s (1974) model of bureaucracy, government workers usually benefit from larger bureaus, and they are able to expand their bureaus to inefficient size because they are the primary providers of information to politicians. Some bureaus, such as the NSA and the CIA, may still be able to use secrecy to benefit from information asymmetry. For instance they can claim to politicians that they need more resources to deter or prevent threats, and it is hard for the politicians to have well-informed responses on the other side of the argument. Timely, rich information about most other bureaucracies, however, is easily available to politicians and increasingly to the public as well. As information becomes more symmetric, Niskanen’s (1974) model becomes less applicable, and this may help check the growth of unneeded bureaucracy.

Cheap sensors are greatly extending how much information can be economically gathered and analyzed. It’s not uncommon for office workers to have every keystroke logged. When calling customer service, who has not been told “this call may be monitored for quality control purposes?” Service-call workers have their location tracked through cell phones. Even information that once was thought to be purely subjective can now be collected and analyzed, often with the aid of smart software or artificial intelligence. One firm, for example, uses badges equipped with microphones, accelerometers, and location sensors to measure tone of voice, posture, and body language, as well as who spoke to whom and for how long (Lohr 2014). The purpose is not only to monitor workers but to deduce when, where and why workers are the most productive. We are again seeing trade-offs which bring greater productivity, and limit asymmetric information, albeit at the expense of some privacy.

As information becomes more prevalent and symmetric, earlier solutions to asymmetric problems will become less necessary. When employers do not easily observe workers, for example, employers may pay workers unusually high wages, generating a rent. Workers will then work at high levels despite infrequent employer observation, to maintain their future rents (Shapiro and Stiglitz 1984). But those higher wages involved a cost, namely that fewer workers were hired, and the hires that were made often were directed to people who were already known to the firm. Better monitoring of workers will mean that employers will hire more people and furthermore they may be more willing to take chances on risky outsiders, rather than those applicants who come with impeccable pedigree. If the outsider does not work out and produce at an acceptable level, it is easy enough to figure this out and fire them later on….(More)”

Full Post: The End of Asymmetric Information


The International Handbook Of Public Administration And Governance

Apr 14, 2015 07:55 am

New book edited by Andrew Massey and Karen Johnston: “…Handbook explores key questions around the ways in which public administration and governance challenges can be addressed by governments in an increasingly globalized world. World-leading experts explore contemporary issues of government and governance, as well as the relationship between civil society and the political class. The insights offered will allow policy makers and officials to explore options for policy making in a new and informed way.

Adopting global perspectives of governance and public sector management, the Handbook includes scrutiny of current issues such as: public policy capacity, wicked policy problems, public sector reforms, the challenges of globalization and complexity management. Practitioners and scholars of public administration deliver a range of perspectives on the abiding wicked issues and challenges to delivering public services, and the way that delivery is structured. The Handbook uniquely provides international coverage of perspectives from Africa, Asia, North and South America, Europe and Australia.

Practitioners and scholars of public administration, public policy, public sector management and international relations will learn a great deal from this Handbook about the issues and structures of government and governance in an increasingly complex world. (Full table of contents)… (More).”

Full Post: The International Handbook Of Public Administration And Governance


Big Data, Little Data, No Data

Apr 14, 2015 05:31 am

New book by Christine L. Borgman: “‘Big Data’ is on the covers of Science, Nature, the Economist, and Wired magazines, and on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data—because relevant data don’t exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines.

Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure—an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation—six “provocations” meant to inspire discussion about the uses of data in scholarship—Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship….(More)”

Full Post: Big Data, Little Data, No Data


Bloomberg Philanthropies Launches $100 Million Data for Health Program in Developing Countries

Apr 13, 2015 03:42 pm

Press Release: “Bloomberg Philanthropies, in partnership with the Australian government, is launching Data for Health, a $100 million initiative that will enable 20 low- and middle-income countries to vastly improve public health data collection. The World Health Organization estimates that 65% of all deaths worldwide – 35 million each year – go unrecorded. Millions more deaths lack a documented cause. This gap in data creates major obstacles for understanding and addressing public health problems. The Data for Health initiative seeks to provide governments, aid organizations, and public health leaders with tools and systems to better collect data – and use it to prioritize health challenges, develop policies, deploy resources, and measure success. Over the next four years, Data for Health aims to help 1.2 billion people in 20 countries across Africa, Asia, and Latin America live healthier, longer lives….

“Australia’s partnership on Data for Health coincides with the launch of innovationXchange, a new initiative to embrace exploration, experimentation, and risk through a focus on innovation,” said the Hon Julie Bishop MP, Australia’s Minister for Foreign Affairs. “Greater innovation in development assistance will allow us to do a better job of tackling the world’s most daunting problems, such as a lack of credible health data.”

In addition to improving the recording of births and deaths, Data for Health will support new mechanisms for conducting public health surveys. These surveys will monitor major risk factors for early death, including non-communicable diseases (chronic diseases that are not transmitted from person to person such as cancer and diabetes). With information from these surveys, illness caused by day-to-day behaviors such as tobacco use and poor nutrition habits can be targeted, addressed and prevented. Data for Health will take advantage of the widespread use of mobile phone devices in developing countries to enhance the efficiency of traditional household surveys, which are typically time-consuming and expensive…(More)”

Full Post: Bloomberg Philanthropies Launches $100 Million Data for Health Program in Developing Countries


International Statistical Agencies

Apr 13, 2015 03:26 pm

Via Census/BeSpacific – International Statistical Agencies – links to data from around the world. “The U.S. Census Bureau conducts demographic, economic, and geographic studies of other countries and strengthens statistical development around the world through technical assistance, training, and software products. For over 60 years, the Census Bureau has performed international analytical work and assisted in the collection, processing, analysis, dissemination, and use of statistics with counterpart governments in over 100 countries.”

Full Post: International Statistical Agencies


The Governance Lab

Polytechnic School of Engineering
New York University
2 MetroTech Center
Floor 9
Brooklyn, NY 11201
info@thegovlab.com
