
A weekly curation of new findings and developments on innovation in governance.


Encouraging and Sustaining Innovation in Government: Technology and Innovation in the Next Administration

Sep 01, 2016 11:19 am

New report by Beth Simone Noveck and Stefaan Verhulst: “…With rates of trust in government at an all-time low, technology and innovation will be essential to achieve the next administration’s goals and to deliver services more effectively and efficiently. The next administration must prioritize using technology to improve governing and must develop plans to do so in the transition… This paper provides analysis and a set of concrete recommendations, both for the period of transition before the inauguration, and for the start of the next presidency, to encourage and sustain innovation in government. Leveraging the insights from the experts who participated in a day-long discussion, we endeavor to explain how government can improve its use of digital technologies to create more effective policies, solve problems faster and deliver services more effectively at the federal, state and local levels….

The broad recommendations are:

  • Scale Data-Driven Governance: Platforms such as data.gov represent initial steps in the direction of enabling data-driven governance. Much more can be done, however, to open up data and for agencies to become better consumers of data, to improve decision-making and to scale up evidence-based governance. This includes better use of predictive analytics, more public engagement, and greater use of cutting-edge methods like machine learning.
  • Scale Collaborative Innovation: Collaborative innovation takes place when government and the public work together, thus widening the pool of expertise and knowledge brought to bear on public problems. The next administration can reach out more effectively, not just to the public at large, but also through targeted outreach to public officials and citizens who possess the skills or expertise most relevant to the problems at hand.
  • Promote a Culture of Innovation: Institutionalizing a culture of technology-enabled innovation will require embedding innovation and technology skills more widely across the federal enterprise. For example, contracting, grants and personnel officials need a deeper understanding of how technology can help them do their jobs more efficiently, and more people need to be trained in human-centered design, gamification, data science, data visualization, crowdsourcing and other new ways of working.
  • Utilize Evidence-Based Innovation: In order to better direct government investments, leaders need a much better sense of what works and what doesn’t. The government spends billions on research in the private and university sectors, but very little on experimenting with, testing, and evaluating its own programs. The next administration should continue developing an evidence-based approach to governance, including greater use of methods like A/B testing (comparing two versions of a webpage or app against each other to determine which one performs better; a minimal sketch follows this list); establishing a clearinghouse for success and failure stories and best practices; and encouraging overseers to be more open to innovation.
  • Make Innovation a Priority in the Transition: The transition period represents a unique opportunity to seed the foundations for long-lasting change. By explicitly incorporating innovation into the structure, goals and activities of the transition teams, the next administration can get a fast start in implementing policy goals and improving government operations through innovation approaches….(More)”
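
For readers unfamiliar with A/B testing, here is a minimal sketch in Python of the comparison the report describes. The page versions, visitor counts and sign-up numbers are invented for illustration; a real government trial would also need randomized assignment and a pre-specified sample size.

```python
# Minimal A/B test sketch: compare sign-up rates for two versions of a
# hypothetical government web page. All counts below are invented.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return p_a, p_b, z, p_value

# Version A: 480 sign-ups from 10,000 visitors; version B: 560 from 10,000.
p_a, p_b, z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
# A small p-value suggests the improvement is unlikely to be chance alone.
```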

Full Post: Encouraging and Sustaining Innovation in Government: Technology and Innovation in the Next Administration


How the Federal Government is thinking about Artificial Intelligence

Sep 01, 2016 08:22 am

Mohana Ravindranath at NextGov: “Since May, the White House has been exploring the use of artificial intelligence and machine learning for the public: that is, how the federal government should be investing in the technology to improve its own operations. The technologies, often modeled after the way humans take in, store and use new information, could help researchers find patterns in genetic data or help judges decide sentences based on an offender’s likelihood of ending up in prison again, among other applications. …

Here’s a look at how some federal groups are thinking about the technology:

  • Police data: At a recent White House workshop, Office of Science and Technology Policy Senior Adviser Lynn Overmann said artificial intelligence could help police departments comb through hundreds of thousands of hours of body-worn camera footage, potentially identifying the police officers who are good at de-escalating situations. It also could help cities determine which individuals are likely to end up in jail or prison, so that officials can rethink the relevant programs. For example, if there’s a large overlap between substance abuse and jail time, public health organizations might decide to focus their efforts on helping people reduce their substance abuse to keep them out of jail.
  • Explainable artificial intelligence: The Pentagon’s research and development agency is looking for technology that can explain to analysts how it makes decisions. If people can’t understand how a system works, they’re not likely to use it, according to a broad agency announcement from the Defense Advanced Research Projects Agency. Intelligence analysts who might rely on a computer for recommendations on investigative leads must “understand why the algorithm has recommended certain activity,” as do employees overseeing autonomous drone missions (a toy sketch of such an explanation follows this list).
  • Weather detection: The Coast Guard recently posted its intent to sole-source a contract for technology that could autonomously gather information about traffic, crosswind, and aircraft emergencies. The system has built-in artificial intelligence so it can “provide only operationally relevant information.”
  • Cybersecurity: The Air Force wants to make cyber defense operations as autonomous as possible, and is looking at artificial intelligence that could potentially identify or block attempts to compromise a system, among other tasks.
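
To make “explainable” concrete, here is a toy sketch of what an explanation might look like for a simple linear scoring model: alongside each recommendation, the analyst sees how much each input contributed to the score. The feature names and weights are invented for illustration and are not drawn from any DARPA program.

```python
# Toy sketch of an "explainable" recommendation: a linear model whose
# per-feature contributions can be shown to an analyst. All names and
# weights below are invented.
weights = {"contact_with_flagged_account": 0.9,
           "unusual_travel_pattern": 0.4,
           "routine_activity": -0.6}
lead = {"contact_with_flagged_account": 1,   # 1 = feature present, 0 = absent
        "unusual_travel_pattern": 1,
        "routine_activity": 0}

contributions = {f: w * lead[f] for f, w in weights.items()}
score = sum(contributions.values())

print(f"lead score: {score:+.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")  # the "why" behind the recommendation
```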

While there are endless applications in government, computers won’t completely replace federal employees anytime soon….(More)”

Full Post: How the Federal Government is thinking about Artificial Intelligence


How Tech Giants Are Devising Real Ethics for Artificial Intelligence

Sep 01, 2016 08:17 am

For years, science-fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. While science fiction has focused on the existential threat of A.I. to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM and Microsoft have been meeting to discuss more tangible issues, such as the impact of A.I. on jobs, transportation and even warfare.

Tech companies have long overpromised what artificially intelligent machines can do. In recent years, however, the A.I. field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon’s Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say — even its name — have yet to be hashed out. But the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.

The importance of the industry effort is underscored in a report issued on Thursday by a Stanford University group funded by Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions. The Stanford project, called the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of A.I. on society every five years for the next century….The Stanford report attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare…(More)”

Full Post: How Tech Giants Are Devising Real Ethics for Artificial Intelligence


Policy in the data age: Data enablement for the common good

Sep 01, 2016 05:44 am

Karim Tadjeddine and Martin Lundqvist of McKinsey: “Like companies in the private sector, governments from national to local can smooth the process of digital transformation—and improve services to their “customers,” the public—by adhering to certain core principles. Here’s a road map.

By virtue of their sheer size, visibility, and economic clout, national, state or provincial, and local governments are central to any societal transformation effort, in particular a digital transformation. Governments at all levels, which account for 30 to 50 percent of most countries’ GDP, exert profound influence not only by executing their own digital transformations but also by catalyzing digital transformations in other societal sectors (Exhibit 1).

The tremendous impact that digital services have had on governments and society has been the subject of extensive research that has documented the rapid, extensive adoption of public-sector digital services around the globe. We believe that the coming data revolution will be even more deeply transformational and that data enablement will produce a radical shift in the public sector’s quality of service, empowering governments to deliver better constituent service, better policy outcomes, and more-productive operations….(More)”

Full Post: Policy in the data age: Data enablement for the common good


Designing Serious Games for Citizen Engagement in Public Service Processes

Sep 01, 2016 02:44 am

Paper by Nicolas Pflanzl, Tadeu Classe, Renata Araujo, and Gottfried Vossen: “One of the challenges envisioned for eGovernment is how to actively involve citizens in the improvement of public services, allowing governments to offer better services. However, citizen involvement in public service design through ICT is not an easy goal. Services have been deployed internally in public organizations, making it difficult for them to be leveraged by citizens, specifically those without an IT background. This research moves towards decreasing the gap between public service process opacity and complexity on the one hand, and citizens’ lack of interest or competencies to understand them on the other. The paper discusses game design as an approach to motivate, engage and change citizens’ behavior with respect to public services improvement. The design of a sample serious game is proposed; benefits and challenges are discussed using a public service delivery scenario from Brazil….(More)”

Full Post: Designing Serious Games for Citizen Engagement in Public Service Processes


The risks of relying on robots for fairer staff recruitment

Aug 31, 2016 05:18 pm

Sarah O’Connor at the Financial Times: “Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate. Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them….(More)”
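
Mr Lee’s suggestion can be illustrated with a hypothetical sketch in Python with scikit-learn. This is not his model; the column names and toy data are invented, and the closing comment flags the catch the article describes: dropping protected columns does not remove proxies for them.

```python
# Hypothetical sketch: screen CVs with a model trained only on features
# left after stripping protected attributes. All data here is invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

cvs = pd.DataFrame({
    "age":                  [22, 58, 31, 45, 27, 51],
    "gender":               ["f", "m", "f", "m", "m", "f"],
    "years_experience":     [1, 30, 8, 20, 4, 25],
    "qualification_score":  [0.7, 0.9, 0.8, 0.6, 0.5, 0.7],
    "invited_to_interview": [0, 0, 1, 1, 0, 0],   # historical label
})

# Drop attributes the model should not condition on directly.
protected = ["age", "gender"]
X = pd.get_dummies(cvs.drop(columns=protected + ["invited_to_interview"]))
y = cvs["invited_to_interview"]

model = LogisticRegression().fit(X, y)

# Caveat: stripping columns is not enough. "years_experience" is a close
# proxy for age, so bias in the historical labels can re-enter through
# correlated features.
```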

Full Post: The risks of relying on robots for fairer staff recruitment


The SAGE Handbook of Digital Journalism

Aug 31, 2016 05:12 pm

Book edited by Tamara Witschge, C. W. Anderson, David Domingo, and Alfred Hermida: “The production and consumption of news in the digital era is blurring the boundaries between professionals, citizens and activists. Actors producing information are multiplying, but media companies still hold a central position. Journalism research faces important challenges to capture, examine, and understand the current news environment. The SAGE Handbook of Digital Journalism starts from the pressing need for a thorough and bold debate to redefine the assumptions of research in the changing field of journalism. The 38 chapters, written by a team of global experts, are organised into four key areas:

Section A: Changing Contexts

Section B: News Practices in the Digital Era

Section C: Conceptualizations of Journalism

Section D: Research Strategies

By addressing both institutional and non-institutional news production and providing ample attention to the question ‘who is a journalist?’ and the changing practices of news audiences in the digital era, this Handbook shapes the field and defines the roadmap for the research challenges that scholars will face in the coming decades….(More)”

Full Post: The SAGE Handbook of Digital Journalism


Technology can boost active citizenship – if it’s chosen well

Aug 31, 2016 05:06 pm

In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes for Public Protector candidates.

But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.

We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.

Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech….

So what should those working in civic technology do about improving tool selection? From our research, we developed six “rules” for better tool choices. These are:

  • first work out what you don’t know;
  • think twice before building a new tool;
  • get a second opinion;
  • try it before you buy it;
  • plan for failure; and
  • share what you learn.

Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample….(More)”

Full Post: Technology can boost active citizenship – if it’s chosen well


Data and Democracy

Aug 30, 2016 02:24 pm

(Free) book by Andrew Therriault:  “The 2016 US elections will be remembered for many things, but for those who work in politics, 2016 may be best remembered as the year that the use of data in politics reached its maturity. Through a collection of essays from leading experts in the field, this report explores how political data science helps to drive everything from overall strategy and messaging to individual voter contacts and advertising.

Curated by Andrew Therriault, former Director of Data Science for the Democratic National Committee, this illuminating report includes first-hand accounts from Democrats, Republicans, and members of the media. Tech-savvy readers will get a comprehensive account of how data analysis has prevailed over political instinct and experience, along with examples of the challenges these practitioners face.

Essays include:

  • The Role of Data in Campaigns—Andrew Therriault, former Director of Data Science for the Democratic National Committee
  • Essentials of Modeling and Microtargeting—Dan Castleman, cofounder and Director of Analytics at Clarity Campaign Labs, a leading modeler in Democratic politics
  • Data Management for Political Campaigns—Audra Grassia, Deputy Political Director for the Democratic Governors Association in 2014
  • How Technology Is Changing the Polling Industry—Patrick Ruffini, cofounder of Echelon Insights and Founder/Chairman of Engage, was a digital strategist for President Bush in 2004 and for the Republican National Committee in 2006
  • Data-Driven Media Optimization—Alex Lundry, cofounder and Chief Data Scientist at Deep Root Analytics, a leading expert on media and voter analytics, electoral targeting, and political data mining
  • How (and Why) to Follow the Money in Politics—Derek Willis, ProPublica’s news applications developer, formerly with The New York Times
  • Digital Advertising in the Post-Obama Era—Daniel Scarvalone, Associate Director of Research and Data at Bully Pulpit Interactive (BPI), a digital marketer for the Democratic party
  • Election Forecasting in the Media—Natalie Jackson, Senior Polling Editor at The Huffington Post…(More)”

Full Post: Data and Democracy


Nudges That Fail

Aug 30, 2016 02:20 pm

Paper by Cass R. Sunstein: “Why are some nudges ineffective, or at least less effective than choice architects hope and expect? Focusing primarily on default rules, this essay emphasizes two reasons. The first involves strong antecedent preferences on the part of choosers. The second involves successful “counternudges,” which persuade people to choose in a way that confounds the efforts of choice architects. Nudges might also be ineffective, and less effective than expected, for five other reasons. (1) Some nudges produce confusion on the part of the target audience. (2) Some nudges have only short-term effects. (3) Some nudges produce “reactance” (though this appears to be rare). (4) Some nudges are based on an inaccurate (though initially plausible) understanding on the part of choice architects of what kinds of choice architecture will move people in particular contexts. (5) Some nudges produce compensating behavior, resulting in no net effect. When a nudge turns out to be insufficiently effective, choice architects have three potential responses: (1) Do nothing; (2) nudge better (or different); and (3) fortify the effects of the nudge, perhaps through counter-counternudges, perhaps through incentives, mandates, or bans….(More)”.

Full Post: Nudges That Fail


White House, Transportation Dept. want help using open data to prevent traffic crashes

Aug 30, 2016 03:02 am

Samantha Ehlinger in FedScoop: “The Transportation Department is looking for public input on how to better interpret and use data on fatal crashes, after 2015 data revealed a startling 7.2 percent spike in traffic deaths that year.

Looking for new solutions that could prevent more deaths on the roads, the department released the 2015 open dataset about each fatal crash three months earlier than usual. With it, the department and the White House announced a call to action for people to use the dataset as a jumping-off point for a dialogue on how to prevent crashes, as well as to understand what might be causing the spike.

“What we’re ultimately looking for is getting more people engaged in the data … matching this with other publicly available data, or data that the private sector might be willing to make available, to dive in and to tell these stories,” said Bryan Thomas, communications director for the National Highway Traffic Safety Administration, to FedScoop.

One striking statistic was that “pedestrian and pedalcyclist fatalities increased to a level not seen in 20 years,” according to a DOT press release. …

“We want folks to be engaged directly with our own data scientists, so we can help people through the dataset and help answer their questions as they work their way through, bounce ideas off of us, etc.,” Thomas said. “We really want to be accessible in that way.”

He added that as ideas “come to fruition,” there will be opportunities to present what people have learned.

“It’s a very, very rich data set, there’s a lot of information there,” Thomas said. “Our own ability is, frankly, limited to investigate all of the questions that you might have of it. And so we want to get the public really diving in as well.”…

Here are the questions “worth exploring,” according to the call to action:

  • How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
  • How might climate change increase the risk of fatal crashes in a community?
  • How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
  • How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?”…(More)”
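
For readers who want to take up that invitation, here is a minimal sketch of a first pass over the data in Python with pandas. The filename is a placeholder and the column names (STATE, MONTH, FATALS) are assumptions to verify against the FARS documentation.

```python
# First-pass exploration of the 2015 fatal-crash dataset. The filename is
# a placeholder; check the FARS documentation for the actual schema.
import pandas as pd

crashes = pd.read_csv("fars_2015_accident.csv")

# Crashes and deaths per state, sorted by deaths.
by_state = (crashes.groupby("STATE")["FATALS"]
                   .agg(crashes="count", deaths="sum")
                   .sort_values("deaths", ascending=False))
print(by_state.head(10))

# Deaths by month: a starting point for matching against weather or
# economic indicators, as the questions above suggest.
print(crashes.groupby("MONTH")["FATALS"].sum())
```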

Full Post: White House, Transportation Dept. want help using open data to prevent traffic crashes


Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response

Aug 30, 2016 02:56 am

Femke Mulder, Julie Ferguson, Peter Groenewegen, Kees Boersma, and Jeroen Wolbers in Big Data and Society: “The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people and often the original ‘crowd’ are excluded from the information flow, and from the interpretation process of crowdsourced crisis knowledge, as used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data, by explaining the process of inclusion and exclusion during data making, towards more responsive humanitarian relief….(More)”.

Full Post: Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response


Counterterrorism and Counterintelligence: Crowdsourcing Approach

Aug 30, 2016 02:53 am

Literature review by Sanket Subhash Khanwalkar: “Despite heavy investment by the United States and several other national governments, terrorism-related problems are rising at an alarming rate. Lone-wolf terrorism, in particular, has caused 70% of all terrorism-related deaths in the US and the West over the last decade. This literature survey describes lone-wolf terrorism in detail to analyse its structure, characteristics, strengths and weaknesses. It also investigates crowdsourcing intelligence, as an unorthodox approach to counter lone-wolf terrorism, by reviewing its current state-of-the-art and identifying the areas for improvement….(More)”

Full Post: Counterterrorism and Counterintelligence: Crowdsourcing Approach


Rethinking Nudge: Libertarian paternalism and classical utilitarianism

Aug 30, 2016 02:45 am

Hiroaki Itai, Akira Inoue, and Satoshi Kodama in Special Issue on Nudging of The Tocqueville Review/La revue Tocqueville: “Recently, libertarian paternalism has been intensely debated. It recommends employing policies and practices that “nudge” ordinary people to make better choices without forcing them to do so. Nudging policies and practices have penetrated our society, in cases like purchasing life insurance or a residence. They are also used for preventing people from addictive acts that may be harmful to them in the long run, such as having too much sugary or fatty food. In nudging people to act rationally, various kinds of cognitive effects impacting consumers’ decision-making processes should be considered, given the growing influence of consumer advertising. Since libertarian paternalism makes use of such effects in light of the recent development of behavioral economics and cognitive psychology in a principled manner, libertarian paternalism and its justification of nudges attract our attention as an approach providing normative guidance for our action.

This paper has two aims: the first is to examine whether libertarian paternalism can give an appropriate theoretical foundation to the idea and practice of nudges. The second is to show that utilitarianism, or, more precisely, the classical version of utilitarianism, treats nudges in a more consistent and plausible manner. To achieve these two aims, first of all, we dwell on how Cass Sunstein—one of the founders of libertarian paternalism—misconceives Mill’s harm principle, and show that this may prompt us to see that utilitarianism can reasonably legitimate nudging policies (section one). We then point to two biases that embarrass libertarian paternalism (the scientism bias and the dominant-culture bias), which we believe stem from the fact that libertarian paternalism assumes the informed preference satisfaction view of welfare (section two). We finally argue that classical utilitarianism not only can overcome the two biases, but can also reasonably endorse any system monitoring a choice architect to discharge his or her responsibility (section three)….(More)”

Full Post: Rethinking Nudge: Libertarian paternalism and classical utilitarianism


Smart Economy in Smart Cities

Aug 30, 2016 02:39 am

Book edited by Vinod Kumar, T. M.: “The present book highlights studies that show how smart cities promote urban economic development. The book surveys the state of the art of smart city economic development through a literature survey. It uses 13 in-depth city research case studies in 10 countries across North America, Europe, Africa and Asia to explain how a smart economy changes the urban spatial system and vice versa. The book focuses on exploratory city studies in different countries, which investigate how urban spatial systems adapt to the specific needs of a smart urban economy. The theory of smart city economic development is not yet entirely understood and applied in metropolitan regional plans. Smart urban economies are largely the result of the influence of ICT applications on all aspects of urban economy, which in turn changes the land-use system. It points out that the dynamics of smart city GDP creation take ‘different paths,’ which need further empirical study, hypothesis testing and mathematical modelling. Although there are hypotheses on how smart cities generate wealth and social benefits for nations, there are no significant empirical studies available on how they generate urban economic development through urban spatial adaptation. This book, with its 13 city research studies, is one attempt to fill this gap in the knowledge base….(More)”

Full Post: Smart Economy in Smart Cities


Make Data Sharing Routine to Prepare for Public Health Emergencies

Aug 30, 2016 02:35 am

Jean-Paul Chretien, Caitlin M. Rivers, and Michael A. Johansson in PLOS Medicine: “In February 2016, Wellcome Trust organized a pledge among leading scientific organizations and health agencies encouraging researchers to release data relevant to the Zika outbreak as rapidly and widely as possible [1]. This initiative echoed a September 2015 World Health Organization (WHO) consultation that assessed data sharing during the recent West Africa Ebola outbreak and called on researchers to make data publicly available during public health emergencies [2]. These statements were necessary because the traditional way of communicating research results—publication in peer-reviewed journals, often months or years after data collection—is too slow during an emergency.

The acute health threat of outbreaks provides a strong argument for more complete, quick, and broad sharing of research data during emergencies. But the Ebola and Zika outbreaks suggest that data sharing cannot be limited to emergencies without compromising emergency preparedness. To prepare for future outbreaks, the scientific community should expand data sharing for all health research….

Open data deserves recognition and support as a key component of emergency preparedness. Initiatives to facilitate discovery of datasets and track their use [40–42]; provide measures of academic contribution, including data sharing that enables secondary analysis [43]; establish common platforms for sharing and integrating research data [44]; and improve data-sharing capacity in resource-limited areas [45] are critical to improving preparedness and response.

Research sponsors, scholarly journals, and collaborative research networks can leverage these new opportunities with enhanced data-sharing requirements for both nonemergency and emergency settings. A proposal to amend the International Health Regulations with clear codes of practice for data sharing warrants serious consideration [46]. Any new requirements should allow scientists to conduct and communicate the results of secondary analyses, broadening the scope of inquiry and catalyzing discovery. Publication embargo periods, such as one under consideration for genetic sequences of pandemic-potential influenza viruses [47], may lower barriers to data sharing but may also slow the timely use of data for public health.

Integrating open science approaches into routine research should make data sharing more effective during emergencies, but this evolution is more than just practice for emergencies. The cause and context of the next outbreak are unknowable; research that seems routine now may be critical tomorrow. Establishing openness as the standard will help build the scientific foundation needed to contain the next outbreak.

Recent epidemics were surprises—Zika and chikungunya sweeping through the Americas; an Ebola pandemic with more than 10,000 deaths; the emergence of severe acute respiratory syndrome and Middle East respiratory syndrome, and an influenza pandemic (influenza A[H1N1]pdm09) originating in Mexico—and we can be sure there are more surprises to come. Opening all research provides the best chance to accelerate discovery and development that will help during the next surprise….(More)”

Full Post: Make Data Sharing Routine to Prepare for Public Health Emergencies


Achieving Open Justice through Citizen Participation and Transparency

Aug 29, 2016 03:45 pm

Book edited by Carlos E. Jiménez-Gómez and Mila Gascó-Hernández: “Open government initiatives have become a defining goal for public administrators around the world. However, progress is still necessary outside of the executive and legislative sectors.

Achieving Open Justice through Citizen Participation and Transparency is a pivotal reference source for the latest scholarly research on the implementation of open government within the judiciary field, emphasizing the effectiveness and accountability achieved through these actions. Highlighting the application of open government concepts in a global context, this book is ideally designed for public officials, researchers, professionals, and practitioners interested in the improvement of governance and democracy….(More)


Full Post: Achieving Open Justice through Citizen Participation and Transparency


Everyday ‘Placebo Buttons’ Create Semblance of Control

Aug 29, 2016 06:35 am
[Image: Crosswalk buttons, by Peter Kazanjy]

The seemingly disconnected everyday buttons you press may have something in common: it is quite possible that none of them did a thing to influence the world around you. Any perceived impact may simply have been imaginary, a placebo effect giving you the illusion of control.

In the early 2000s, New York City transportation officials finally admitted what many had suspected: the majority of crosswalk buttons in the city are completely disconnected from the traffic light system. Thousands of these initially worked to request a signal change but most no longer do anything, even if their signage suggests otherwise.

Naturally, a number of street art projects have popped up around the humorous futility of pedestrians pressing placebo buttons.

Crosswalk buttons were originally introduced to NYC during the 1960s. At the time, there was less congestion and it made sense to leave green lights on for major thoroughfares until cross traffic came along … or until a pedestrian wanting to cross the street pushed a button.

Today, a combination of carefully orchestrated automation and higher traffic has made most of these buttons obsolete. Citywide, there are around 100 crosswalk buttons that still work in NYC but close to 1,000 more that do nothing at all. So why not take them down? Removing the remaining nonfunctional buttons would cost the city millions, a potential waste of already limited funds for civic infrastructure….(More)”

Full Post: Everyday ‘Placebo Buttons’ Create Semblance of Control


Managing Federal Information as a Strategic Resource

Aug 29, 2016 05:39 am

White House: “Today the Office of Management and Budget (OMB) is releasing an update to the Federal Government’s governing document for the management of Federal information resources: Circular A-130, Managing Information as a Strategic Resource.

The way we manage information technology (IT), security, data governance, and privacy has rapidly evolved since A-130 was last updated in 2000.  In today’s digital world, we are creating and collecting large volumes of data to carry out the Federal Government’s various missions to serve the American people.  This data is duplicated, stored, processed, analyzed, and transferred with ease.  As government continues to digitize, we must ensure we manage data to not only keep it secure, but also allow us to harness this information to provide the best possible service to our citizens.

Today’s update to Circular A-130 gathers in one resource a wide range of policy updates for Federal agencies regarding cybersecurity, information governance, privacy, records management, open data, and acquisitions.  It also establishes general policy for IT planning and budgeting through governance, acquisition, and management of Federal information, personnel, equipment, funds, IT resources, and supporting infrastructure and services.  In particular, A-130 focuses on three key elements to help spur innovation throughout the government:

  • Real Time Knowledge of the Environment.  In today’s rapidly changing environment, threats and technology are evolving at previously unimagined speeds.  In such a setting, the Government cannot afford to authorize a system and not look at it again for years at a time.  In order to keep pace, we must move away from periodic, compliance-driven assessment exercises and, instead, continuously assess our systems and build in security and privacy with every update and re-design.  Throughout the Circular, we make clear the shift away from check-list exercises and toward the ongoing monitoring, assessment, and evaluation of Federal information resources.
  • Proactive Risk Management.  To keep pace with the needs of citizens, we must constantly innovate.  As part of such efforts, however, the Federal Government must modernize the way it identifies, categorizes, and handles risk to ensure both privacy and security.  Significant increases in the volume of data processed and utilized by Federal resources require new ways of storing, transferring, and managing it.  Circular A-130 emphasizes the need for strong data governance that encourages agencies to proactively identify risks, determine practical and implementable solutions to address said risks, and implement and continually test the solutions.  This repeated testing of agency solutions will help to proactively identify additional risks, starting the process anew.
  • Shared Responsibility.  Citizens are connecting with each other in ways never before imagined.  From social media to email, the connectivity we have with one another can lead to tremendous advances.  The updated A-130 helps to ensure everyone remains responsible and accountable for assuring privacy and security of information – from managers to employees to citizens interacting with government services. …(More)”

Full Post: Managing Federal Information as a Strategic Resource


Wikipedia Is A Giant Unfathomable Universe—Now You Can Explore It Like One

Aug 29, 2016 05:33 am

Conceptually, Wikiverse isn’t that hard to grasp. Every star in the Wikiverse is first grouped into constellations, representing the articles it is most closely linked to: for example, in the Wikiverse, you might have a constellation of German romantic poets, or a constellation of transgender athletes. From there, these constellations get pulled together into nebulae, which represent sub-categories, which in turn get gravitationally drawn towards one another to form galaxy categories like philosophy, religion, politics, and more. No matter where you are in the Wikiverse, though, you can always zoom way in, click on a star, and read its associated Wiki entry….Now consider the fact that the Wikiverse is only 1/400,000th the size of the real Universe, and put into perspective exactly how insignificant the “universe” of human knowledge really is.

Play with the Wikiverse for yourself here.”

Full Post: Wikipedia Is A Giant Unfathomable Universe—Now You Can Explore It Like One


The Governance Lab

Tandon School of Engineering
New York University
2 MetroTech Center
Floor 9
Brooklyn, NY 11201
info@thegovlab.com
