

Bridging the Divide between Evidence and Policy in
Public Sector Decision Making:

A Practitioner's Perspective

Max K. Arinder recently retired after 34 years of service to the Mississippi Joint Legislative Committee on Performance Evaluation and Expenditure Review, having served 15 years as chief analyst for planning and support and 19 years as executive director. He holds a PhD in experimental psychology from the University of Southern Mississippi and has served as staff chair of the National Conference of State Legislatures and the National Legislative Program Evaluation Society.

E-mail: [email protected]

Abstract : While policy advocates can help bridge the divide between evidence and policy in decision making by
focusing on ambiguity and uncertainty, policy makers must also play a role by promoting and preserving deliberative
processes that value evidence as a core element in leveling raw constituent opinion, ultimately resulting in a better-
informed electorate. Building on existing research and analytic capability, state legislatures can increase the demand
for and delivery of relevant information, giving the institution the capacity to keep abreast of research in critical public
policy areas. By implementing data and time-conscious evaluative frameworks that emphasize evidence-based decision
making and longitudinal cost–benefit analytics at critical policy-making junctures, the institutional culture can
become less unpredictable and the "rules of the game" can be more transparent. In 2015, Mississippi's legislative leaders created a system to review requests for new programs and funding using such an evidence screen.

Evidence in Public Administration
Kimberley R. Isett, Brian W. Head, and Gary VanLandingham, Editors

Max K. Arinder


As a longtime nonpartisan legislative staffer, I agree with Paul Cairney, Kathryn Oliver, and Adam Wellstead's assertion that there are significant
differences in academic and political cultures and in how academics
and policy makers view and use “evidence” in decision making.
Their article “To Bridge the Divide between Evidence and Policy:
Reduce Ambiguity as Much as Uncertainty” provides a thought-
provoking perspective on how the scientific community could
use elements of public policy theory to better communicate in
a political culture, thus giving more weight to their empirical
observations in a deliberative process.

While I agree that there needs to be a culture shift in the way
scientists advocate for policy, experience tells me that there also
needs to be a culture shift in the policy-making institutions
themselves. Yes, policy makers often balance evidence, emotions,
and beliefs in making a decision. The question is whether we
can create an environment in which an optimum balance can be
achieved among these elements for a given policy question. The
answer can be found in an exploration of the “culture” of a given
policy-making body.

The conclusions drawn by Cairney, Oliver, and Wellstead would
not be foreign in many of the legislative program evaluation shops
around the country. One of the ongoing challenges they face as
nonpartisan legislative staff is to get their work recognized and
used in policy debate. Most provide regular in-service training
on ways to achieve that goal through improved written and oral
communication. I see this as a parallel goal to those espoused for
scientists by Cairney, Oliver, and Wellstead.

A State Legislature as a Policy Culture
Professionals in the legislative arena see daily the swirl of facts,
opinions, feelings, and arguments surrounding policy makers. Amid
this swirl, some legislators routinely rely on empirical evidence in
key policy areas, while others use emotions and beliefs based largely
on pressing constituent concerns. "My people tell me" is often heard
around the capitol, reflecting the importance of constituent opinion
in any decision-making process. This may well be the natural state
of any politically based public environment. Constituent opinion
is a strong voice and will certainly continue to be a part of any
legislative arena. But how well informed is that voice? True, we are
a society awash in information. But how good is that information?
A significant element of a sound policy culture should be the
promotion and preservation of a deliberative process that values
evidence as a core element in leveling raw constituent opinion.
Failure may well negate one of the strengths of representative
democracy itself: controlling tyranny by the majority.

Understanding the deliberative core of a particular political
institution is a critical first step in bridging any divide between
evidence and policy in decision making. The opportunities for
change can be found within the bounds of the long-standing
rules, values, and practices of legislative bodies. Generally, these
institutions will have both expectations for empirical rigor and
latitude for constituent opinion, whether that opinion be rigorously
developed or not. While legislatures can be, as Cairney, Oliver, and
Wellstead describe, “unpredictable policy-making environment[s]
in which attention can lurch from issue to issue,” they do possess

a basic culture that allows them to function fairly efficiently to
keep government service structures operational and to address
perceived needs. If we can understand this natural balance in a
given legislature, we can better see its strengths and understand
its weaknesses regarding the nature and utility of different types
of evidence in the decision-making process and perhaps find
opportunities for constructive change.

Changing the Policy Culture of a State Legislature
While the scientific community can provide an important impetus
to change through actions referenced in the work of Cairney,
Oliver, and Wellstead, a core change in a legislative culture must
ultimately be accomplished by the membership of the institution
itself, optimally with assistance from a sound technical staff
and supported by an informed constituency. Just as constituent
opinion is important in policy making, it is also an important
element in determining the policy culture of a legislature.
Paradoxically, achieving the needed constituent support for change
may depend on the will of policy makers to take the first steps
in developing a transparent, evidence-informed policy process
that increases public knowledge in critical ways. To do so, elected
officials must acknowledge that, while they are elected to represent
their constituents’ opinions on important policy issues, they are
also chosen to help determine whether those opinions, regardless
of how formed, can withstand empirical debate—certainly a
political risk, but one worth taking in the interest of both the
common good and the health of the representative democratic
process.

The wider processes of debate, coalition formation, and persuasion
to reduce ambiguity could be at home in such an environment
because the culture itself prescribes the method of framing policy
questions, demanding scientific evidence as a basic element when
needed. In such an environment, policy advocates would be
expected to meet established standards for rigor and completeness
if they expect their proposal or program to be seriously considered.
Such an environment would help increase the demand for and delivery
of relevant information as well as give the institution the capacity to
keep abreast of research in critical public policy areas.

Can Such an Ambitious Culture Shift Be Achieved?
Since the 1970s, legislatures around the country have recognized
their lack of the research evaluation skills needed to identify
and appropriately use robust evidence. In response to that
need, legislatures have established internal audit and evaluation
capacities that enable them to carry out their role as a coequal
branch of government through legitimate research and analytic
capability. Over the years, this capacity has reached different levels
of development in different legislatures, but all are marked by
more scientifically trained teams that can, as recommended by
Cairney, Oliver, and Wellstead, help build research capacity in
government, reduce the loss of institutional memory, and generate
a clearer research question when policy makers commission
evidence.

In addition, the bipartisan National Conference of State Legislatures
continually highlights the challenges legislatures face in developing
sound responses to pressing public needs. This work is underpinned
by an informal multistate agreement on the core values that should mark any truly representative political body and, if followed, should establish a solid foundation for implementing a more evidence-informed policy culture.

Figure 1. Seven Elements of Quality Program Design

1—Program Purpose
1.a. What public problem is this program seeking to address?
1.b. Briefly stated, how will this program address the public problem identified in Question 1.a?
1.c. Does this proposed program effort link to a statewide goal or benchmark identified in Building a Better Mississippi: The Statewide Strategic Plan for Performance and Budgetary Success? (yes or no)
1.d. If the answer to Question 1.c was "yes," specify the statewide goal or benchmark to which the proposed program links.
1.e. Explain where this program fits into your agency's strategic plan; i.e., specify the agency goal, objective, and strategy that the proposed program seeks to address.

2—Needs Assessment
What is the statewide extent of the problem identified in Question 1.a, stated in numerical and geographic terms?

3—Program Description
3.a. What specific service efforts/activities will you be carrying out to achieve the outcomes identified in Question 5.a?
3.b. Describe all start-up activities needed to implement the program.
3.c. Provide a time line showing when each start-up activity will take place and the date when you expect the program to be fully operational.
3.d. Over the time period for which you are requesting funding:
   i. How many of each of the service efforts and activities identified in Question 3.a do you intend to provide and in which geographic locations?
   ii. How many individuals do you intend to serve?

4—Return on Investment
4.a. What are the estimated start-up costs for this program, by each start-up activity described in Question 3.b?
4.b. Once the program is fully operational, what is the estimated cost per unit of service?
4.c. List each expected benefit of this program per unit of service provided. If known, include each benefit's monetized value.
4.d. What is the expected benefit to cost ratio for this program, i.e., total monetized benefits divided by total costs?

5—Measurement and Evaluation
5.a. What specific outcomes do you expect to achieve with this program? Each outcome must be stated in measurable terms that include each of the five elements specified in the following example:

   Required Elements of a Measurable Outcome | Example of a Measurable Outcome
   1. Targeted Outcome | Infant mortality rate (number of deaths of children less than one year of age per 1,000 live births)
   2. How the Outcome Is Calculated | Number of deaths of children less than one year of age during a specified time period [generally one calendar year, unless otherwise noted] divided by the number of live births during the same period, multiplied by 1,000
   3. Direction of Desired Change (increase, decrease, or maintain) | Decrease
   4. Targeted % Change | 18.5% decrease in the infant mortality rate per 1,000 live births
   5. Date Targeted to Achieve Desired Change | One year from full implementation of program

5.b. In order to establish a performance baseline, for each outcome measure reported in answer to Question 5.a, report the most recent data available at the time of your request and the reporting period for the data.
5.c. For each outcome measure reported in answer to Question 5.a, explain how you arrived at the expected rate of change by the target date.
5.d. How often will you measure and evaluate this program?
5.e. What specific performance measures will you report to the Legislature for this program? At a minimum, you should include measures of program outputs, outcomes, and efficiency.

6—Research and Evidence Filter
6.a. As defined in MISS. CODE ANN. Section 27-103-159 (1972), if there is an evidence base, research base, promising practice or best practice on which your agency is basing its proposed new program, attach a copy of or online link to the relevant research.
6.b. If there is no existing research supporting this program, describe in detail how you will evaluate your pilot program with sufficient rigor to add to the research base as required by MISS. CODE ANN. Section 27-103-159 (1972).

7—Fidelity Plan
7.a. For programs with an existing research base, explain the specific steps that you will take to ensure that the program is implemented with fidelity to the evidence/research/best practice on which it is based.
7.b. If there is no existing research base for this program, explain the key components critical to the success of your pilot program and how you will ensure that these components are implemented in accordance with program design.
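
The arithmetic the figure calls for is worth making concrete. What follows is a minimal sketch, using entirely hypothetical figures, of how the example outcome measure in element 5 (the infant mortality rate and its 18.5 percent targeted decrease) and the benefit-to-cost ratio in question 4.d would be computed.

```python
# Illustrative arithmetic only; all figures below are hypothetical, not Mississippi data.

# Element 5.a example: infant mortality rate per 1,000 live births.
infant_deaths = 350        # hypothetical deaths of children under one year in the period
live_births = 37_000       # hypothetical live births in the same period
baseline_rate = infant_deaths / live_births * 1_000  # deaths per 1,000 live births

# Measurable-outcome element 4: an 18.5% targeted decrease from the baseline.
target_rate = baseline_rate * (1 - 0.185)

# Question 4.d: benefit-to-cost ratio = total monetized benefits / total costs.
total_monetized_benefits = 5_400_000   # hypothetical
total_costs = 3_600_000                # hypothetical
benefit_cost_ratio = total_monetized_benefits / total_costs

print(f"Baseline rate:      {baseline_rate:.2f} per 1,000 live births")  # 9.46
print(f"Target rate:        {target_rate:.2f} per 1,000 live births")    # 7.71
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")                   # 1.50
```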



Finally, with the impetus of multistate participation in the
Pew-MacArthur Results First Initiative, a number of states are
implementing data and time-conscious evaluative frameworks
that emphasize evidence-based decision making and longitudinal
cost–benefit analytics as an important element in the policy-making
process. With Pew-MacArthur's technical assistance, participating
states are able to better use their own technical staffs in employing
methods that will help meet an evolving demand for clearer evidence
at critical points in the policy-making process, thus providing policy
makers with the information needed to justify decisions based on
evidence that can be balanced with constituent opinion.

Developing an Evidence-Sensitive Policy Culture
Through the years, there have been many efforts at introducing
“scientific” management principles into governmental
administrative and budgetary practices. Sound in principle, these
efforts were embraced with great expectations but often fell short for
various reasons, not the least of which has been that they have been
tied more to political cycles and personalities than to foundational
changes in the way we think about program and budgetary
accountability.

This was certainly true in Mississippi 20-plus years ago when
the legislature passed the Performance Budgeting and Strategic
Planning Act of 1994. The act itself is sound, but a retrospective
look at implementation reveals critical failures in follow-through
that compromised its utility as a budgetary tool that could be used
to build a priority and data-driven budget. For example, elements
of the act that would have provided the resources to analyze
and perfect strategic planning and performance data were not
implemented. As a result, legislators were presented with raw data
that often was not helpful in legislative deliberation.

However, Mississippi's current legislative leadership has
acknowledged this long-standing flaw, has embraced the importance
of data analytics to sound policy processes, and has adopted
a strategic view of budgeting that can fundamentally alter the
budgetary culture of the state. Backed by a professional staff
dedicated to establishing and maintaining a framework of evidence,
performance, and cost-based data to support sound policy debate,
the budget and appropriations committees will now be able to
develop clearer budgetary recommendations to fund those programs
and services that help the state reach its overall policy objectives and
eliminate those that do not.

The key elements of this system can be summarized as follows: a
legislatively developed and maintained statewide strategic planning
effort; a comprehensive statewide program inventory; and mastery
of the Pew-MacArthur Results First Initiative as a tool for bringing
data-driven decision making and cost–benefit analytics to bear on
the state's budgetary process.

These three deceptively simple key elements contain other supporting
elements that also need to be developed or implemented. Examples
include a system for keeping the state-level strategic planning process

relevant across election cycles and responsive to executive branch
initiatives without losing its strategic value; a transparent, longitudinal
tracking system to monitor progress in achieving state-level outcomes;
increased capacity to identify program-level costs and monetize
relevant benefits; a needs assessment process that allows cost–benefit
analytics to be better utilized in selecting programs; a strategy for
assessing the efficiency and effectiveness of administrative support and
other nonintervention programs; an evidence-based research focus for
newly proposed intervention programs; routine use of cost–benefit
analytics and performance-based outcomes to identify programs for
possible elimination and resource redirection; and expanded capacity
for fidelity studies and performance evaluations, to name a few.
Creating such a framing process allows the institutional culture to
become less unpredictable and the “rules of the game” to be more
transparent.
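
One supporting element named above, longitudinal cost–benefit analytics, rests on standard present-value arithmetic: benefits and costs realized in different years are discounted before the ratio is taken. The sketch below is a minimal illustration of that calculation; the cash flows and the 3 percent discount rate are assumptions for the example, not figures from Results First.

```python
# Minimal sketch of a longitudinal benefit-cost calculation; the cash flows
# and the 3% discount rate are hypothetical, not taken from Results First.

def present_value(annual_flows, discount_rate):
    """Discount a list of annual flows (year 0 first) back to present value."""
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in enumerate(annual_flows))

annual_costs = [500_000, 200_000, 200_000, 200_000]  # start-up year, then operations
annual_benefits = [0, 300_000, 450_000, 600_000]     # benefits ramp up as the program matures

pv_costs = present_value(annual_costs, 0.03)
pv_benefits = present_value(annual_benefits, 0.03)
print(f"Longitudinal benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```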

While the thoughts by Cairney, Oliver, and Wellstead on the
multilevel, unpredictable nature of policy environments are certainly
noteworthy, critical junctures exist in the policy process where a
disciplined approach to screening policy proposals can become
pivotal in a budget culture, determining in large part the future of
the initiative or program being advocated. It is at these junctures,
most critically in the appropriations committees, that political
leaders have an opportunity to insist that the rules of the game
require vetting of every proposal (regardless of politics) against a
core of critical questions designed to assess the potential and cost of
the program against anticipated benefits relative to the need being
addressed.

For example, in 2015, Mississippi's legislative leaders created a
system to review requests for new programs and funding using an
evidence screen. This process, labeled the Seven Elements of Quality
Program Design (see figure 1), requires agencies to meet certain
criteria to qualify for funding. For instance, agencies must report
whether a requested program has “an evidence base, research base,
promising practice or best practice” model, describe the monitoring
system that will be used to ensure that evidence-based programs
are implemented with fidelity, and explain how they will measure
the results the program is achieving. While this does not guarantee
removal of politics from the process, it does create a point where the
quality of evidence and sound cost–benefit analytics can take center
stage for all to see.
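
To make that vetting step concrete, here is a minimal sketch of how a staff office might represent a funding request checked against the seven elements before it reaches the appropriations committees. This is a hypothetical rendering, not Mississippi's actual review system; the class and field names are invented for illustration.

```python
# Hypothetical sketch of an evidence screen; the class and field names are
# invented for illustration and are not Mississippi's actual review system.
from dataclasses import dataclass

@dataclass
class ProgramRequest:
    purpose_stated: bool           # Element 1: problem defined and linked to statewide plan
    need_quantified: bool          # Element 2: numerical and geographic extent of need
    activities_described: bool     # Element 3: service efforts and start-up timeline
    roi_estimated: bool            # Element 4: costs, benefits, benefit-cost ratio
    outcomes_measurable: bool      # Element 5: five-element measurable outcomes
    evidence_or_pilot_plan: bool   # Element 6: research base, or a rigorous pilot evaluation
    fidelity_plan: bool            # Element 7: fidelity to the underlying evidence base

    def missing_elements(self):
        """Return the elements a request still lacks; empty means it clears the screen."""
        labels = ["program purpose", "needs assessment", "program description",
                  "return on investment", "measurement and evaluation",
                  "research and evidence filter", "fidelity plan"]
        answers = [self.purpose_stated, self.need_quantified, self.activities_described,
                   self.roi_estimated, self.outcomes_measurable,
                   self.evidence_or_pilot_plan, self.fidelity_plan]
        return [label for label, ok in zip(labels, answers) if not ok]

request = ProgramRequest(True, True, True, True, True, False, True)
gaps = request.missing_elements()
print("Clears the evidence screen" if not gaps else f"Incomplete: {', '.join(gaps)}")
```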

This analytic tool reflects the spirit of the times in Mississippi and is
a clear example of how the evidence/policy gap can be reduced in a
real-world decision-making process. With tools like this—and this
is only one example—policy makers now have the option of seeking
advice from an independent broker that they can trust, using the
information called for by the Seven Elements. Such an approach
allows legislative policy makers to ask key questions that are directly
in line with their political and public policy interests.

Conclusion
While I agree that empirical facts cannot be completely separated
from human values, ongoing changes in the policy culture of state
legislatures over the last 40-plus years lead me to believe
that we can make research-sensitive strategies a more important
part of our core value system in the selection and funding of policy
initiatives and programs. To do so, we—scientists, independent


reviewers, and policy makers alike—must all work
to support any efforts made in policy-making
institutions to shape the policy environment in
such a way that the demand for empirical evidence
becomes a required, appropriately targeted part of

the policy-making culture. While decisions may
still be made in light of other considerations, the
actual supporting data in such an environment will
be available and transparent for all to see, with one
desired outcome being a better-informed electorate.

Public Administration Review, Vol. 76, Iss. 3, pp. 394–398. © 2016 by The American Society for Public Administration. DOI: 10.1111/puar.12572.

If there is an area of Public Administration process or practice where you feel there is a mismatch between the accumulated evidence and its use, and the substantive content is salient to today's political or institutional environment, please contact one of the editors of Evidence in Public Administration with a proposal for consideration: Kim Isett ([email protected]), Gary VanLandingham ([email protected]), or Brian Head ([email protected]).

