
The agency is The Community Builders Agency.

The program under evaluation is the Workforce Development Program. This program works to ensure that all residents are either working or in school, and it assists residents with resumes, cover letters, interviewing techniques, and job searches.

A. Description of Intervention (Program/Service) Selected

a. Select a program or service (Intervention) that you are going to evaluate in your community
or from your field placement (agency).

b. Briefly describe the intervention, program, or service that you plan to evaluate and identify
the core components.

B. Problem Identification & Purpose of Evaluation

a. Identify and describe the problem or need your evaluation aims to address. See page 113
table 6.1 in your textbook, which highlights the difference between a problem and a need.

b. Conduct a critical review of empirical literature on the problem or need that your
selected program/intervention addresses; and literature on programs, interventions, or
services that address the problem or need (if available). Include at least three (3)
empirical sources. Justify why it is important to address the problem or need using
evaluation. Please use data and research findings to support your justification. Describe
the diversity of your population (e.g., gender, age, race, ethnicity, and so on) and its
relevance to why you chose the group.

c. State the purpose or aim of the evaluation.

C. Types of Evaluation Chosen

a. Select type(s) of evaluation that you are going to conduct. It could be formative or
summative or both. Note: This may not apply to needs assessment.

b. State why you chose the evaluation type(s) and describe characteristics, including core
components, of the evaluation.

D. Target Population

a. Identify the group or groups affected by the problem or need and/or who will benefit from
the intervention, program, or service being evaluated.

E. Goals & Objectives of the Program/Intervention

a. Describe the goals and measurable objectives of the program.

b. Identify any specific activities/strategies for achieving goals and objectives of the
program.

c. Using the logic model, explain the theory or assumptions guiding your intervention,
program or service to achieve its goals & objectives (e.g., change, prevent, or treat the
specified problem).

F. Evaluation Research Design

a. Specify the research design planned for the evaluation and why it was selected
(e.g., single-system designs, group designs, pre- and post-test designs).

b. Describe how you will conduct your evaluation applying the chosen research design.

c. Describe how the research design will address the goals, objectives, and outcomes.

G. Sampling

a. Describe the type of sampling that will be selected and why it was selected.

b. Identify the inclusion/exclusion criteria used for selecting participants and a rationale for
the criteria.

c. Describe how you will recruit and retain participants.

d. Briefly explain how you will protect the rights of participants (e.g., ethical issues of
informed consent, voluntary participation, and protection of sensitive data).

H. Data Collection

a. Describe how and where you will get your data for the evaluation.

b. Specify whether you will use new, primary data or secondary data.

I. Conclusion

a. Identify the strengths and limitations of the planned evaluation. Please give examples to
support your response.

J. References

References are used appropriately and in APA format.
The paper should be 8-10 pages of text, excluding the title and reference pages. (I included the textbook in the files.)

Social Work Evaluation
Enhancing What We Do

THIRD EDITION
JAMES R. DUDLEY
University of North Carolina at Charlotte
Oxford University Press is a department of the University of Oxford. It furthers the University’s
objective of excellence in research, scholarship, and education by publishing worldwide.
Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.
© Oxford University Press 2020
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, without the prior permission in writing of Oxford
University Press, or as expressly permitted by law, by license, or under terms agreed with the
appropriate reproduction rights organization. Inquiries concerning reproduction outside the
scope of the above should be sent to the Rights Department, Oxford University Press, at the
address above.
You must not circulate this work in any other form
and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Names: Dudley, James R., author.
Title: Social work evaluation : enhancing what we do / James R. Dudley.
Description: Third Edition. | New York : Oxford University Press, 2020. |
Revised edition of the author’s Social work evaluation, [2014] |
Includes bibliographical references and index.
Identifiers: LCCN 2019032564 (print) | LCCN 2019032565 (ebook) |
ISBN 9780190916657 (paperback) | ISBN 9780190916671 (epub) | ISBN 9780190916664 (updf)
Subjects: LCSH: Social service—Evaluation. | Evaluation research (Social action programs)
Classification: LCC HV41. D83 2019 (print) | LCC HV41 (ebook) | DDC 361.3072—dc23
LC record available at https://lccn.loc.gov/2019032564
LC ebook record available at https://lccn.loc.gov/2019032565
Printed by Marquis, Canada
I dedicate this book to my students, who have inspired and encouraged me
over many years. I am deeply grateful to them!
CONTENTS

CSWE's Core Competency Fulfillment Guide: How It Is Covered in the Book
Preface
  New to this Edition
  Other Special Features
  Organization of the Book
Acknowledgments

PART I: INTRODUCTION

Chapter 1  Evaluation and Social Work: Making the Connection
  A Focus on Both Programs and Practice
  Practice is Embedded in a Program
  Introduction to Evaluation
  A Three-Stage Approach
  Different Purposes of Evaluations
  Common Characteristics of Evaluations
  Seven Steps in Conducting an Evaluation
  Defining and Clarifying Important Terms
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART II: ORIENTATION TO THE BIGGER PICTURE OF EVALUATIONS: WHAT'S NEXT?

Chapter 2  The Influence of History and Varying Theoretical Views on Evaluations
  Relevant Events in History
  Varying Views on Theoretical Approaches
  Synthesis of These Evaluation Perspectives
  Key Perspectives for the Book
  Three-Stage Approach
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Chapter 3  The Role of Ethics in Evaluations
  Ethics for Conducting Evaluations
  Diversity and Social Justice
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Chapter 4  Common Types of Evaluations
  Common Program Evaluations
  Common Practice Evaluations
  Common Evaluations and the Three-Stage Approach
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Chapter 5  Focusing an Evaluation
  Important Initial Questions
  Crafting Good Study Questions for an Evaluation as the Focus
  Guidelines for Focusing an Evaluation
  A Practical Tool
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART III: THE PLANNING OR INPUT STAGE

Chapter 6  Needs Assessments
  The Logic Model
  The Link Between Problems and Needs
  The Underlying Causes
  Input Stage and Planning a Proposed Program
  Why Conduct a Needs Assessment?
  Some Purposes of Needs Assessments
  Methods of Conducting Needs Assessments
  Needs Assessments and Practice Interventions
  Suggestions for How to Conduct a Needs Assessment
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Chapter 7  Crafting Goals and Objectives
  Goals for Program and Practice Interventions
  Characteristics of Goals
  Limitations of Goals
  Crafting Measurable Objectives
  Three Properties: Performance, Conditions, and Criteria
  Differences Between Measurable Objectives of Programs and Practice
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART IV: THE IMPLEMENTATION STAGE

Chapter 8  Improving How Programs and Practice Work (James R. Dudley and Robert Herman-Smith)
  Link the Intervention to the Clients' Problems
  Implement the Intervention as Proposed
  Adopt and Promote Evidence-Based Interventions
  Focus on Staff Members
  Accessibility of the Intervention
  Program Quality
  Client Satisfaction
  Evaluating Practice Processes: Some Additional Thoughts
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART V: THE OUTCOME STAGE

Chapter 9  Is the Intervention Effective?
  The Nature of Outcomes
  Varied Ways to Measure Outcomes
  Criteria for Choosing Outcome Measures
  Outcomes and Program Costs
  Evidence-Based Interventions
  Determining a Causal Relationship
  Group Designs for Programs
  Outcome Evaluations for Practice
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART VI: FINAL STEPS IN COMPLETING AN EVALUATION

Chapter 10  Analyzing Evaluation Data (James R. Dudley and Jeffrey Shears)
  Formative or Summative Evaluations and Data Analysis
  Stages of Interventions and Data Analysis
  Summary of Pertinent Tools for Qualitative Data Analysis
  Summary of Pertinent Tools for Quantitative Data Analysis
  Mixed Methods and Data Analysis
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Chapter 11  Preparing and Disseminating a Report of Findings
  Considering the Input of Stakeholders
  Format of the Report
  Strategies for Preparing a Report
  Strategies for Disseminating Reports
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

PART VII: CONSUMING EVALUATION REPORTS

Chapter 12  Becoming Critical Consumers of Evaluations (Daniel Freedman and James R. Dudley)
  Stakeholders Who Consume Evaluation Reports
  Critical Consumption of an Evaluation Report
  The Need for Multiple Strategies on Reports
  Helping Clients Become Critical Consumers
  Summary
  Key Terms
  Discussion Questions and Assignments
  References

Appendix A: American Evaluation Association Guiding Principles for Evaluators: 2018 Updated Guiding Principles
  A. Systematic Inquiry: Evaluators Conduct Data-Based Inquiries That Are Thorough, Methodical, and Contextually Relevant
  B. Competence: Evaluators Provide Skilled Professional Services to Stakeholders
  C. Integrity: Evaluators Behave With Honesty and Transparency in Order to Ensure the Integrity of the Evaluation
  D. Respect for People: Evaluators Honor the Dignity, Well-being, and Self-Worth of Individuals and Acknowledge the Influence of Culture Within and Across Groups
  E. Common Good and Equity: Evaluators Strive to Contribute to the Common Good and Advancement of an Equitable and Just Society
Appendix B: Glossary
Index
C S W E’ S C OR E C OM P E T E NC Y F U L F I L L M E N T
G U I DE : H OW I T I S C OV E R E D I N T H E B O OK
CSWE’S NINE SOCIAL WORK COMPETENCIES
COVERED IN THE BOOK
Competency
Chapters
Competency 1: Demonstrate Ethical and Professional
Behavior
•  Make ethical decisions by applying the standards of the
NASW Code of Ethics, relevant laws and regulations, models
for ethical decision-​making, ethical conduct of research, and
additional codes of ethics as appropriate to context;
•  Use reflection and self-​regulation to manage personal values
and maintain professionalism in practice situations;
•  Demonstrate professional demeanor in behavior; appearance; and oral, written, and electronic communication;
•  Use technology ethically and appropriately to facilitate practice outcomes; and
•  Use supervision and consultation to guide professional judgment and behavior.
Competency 2: Engage Diversity and Difference in Practice
•  Apply and communicate understanding of the importance of
diversity and difference in shaping life experiences in practice at the micro, mezzo, and macro levels;
•  Present themselves as learners and engage clients and constituencies as experts of their own experiences; and
•  Apply self-​awareness and self-​regulation to manage the influence of personal biases and values in working with diverse
clients and constituencies.
Competency 3: Advance Human Rights and Social,
Economic, and Environmental Justice
•  Apply their understanding of social, economic, and environmental justice to advocate for human rights at the individual
and system levels;
•  Engage in practices that advance social, economic, and environmental justice.
xiii
1, 2, 3, 9,
10, 11, 12
2, 3, 12
1, 3, 5, 8, 10, 11
3, 6, 10, 11
3, 4, 5, 8
2, 3, 5, 7, 8
1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12
2, 3, 7, 8, 10, 12
1, 2, 3, 5, 6,
8, 10, 11
1, 2, 3, 6, 7, 8,
9, 11, 12
xiv C S W E’ s C ore C ompetency F ulfillment G uide
Competency
Competency 4: Engage in Practice-​informed Research and
Research-​informed Practice
•  Use practice experience and theory to inform scientific
inquiry and research;
•  Apply critical thinking to engage in analysis of quantitative
and qualitative research methods and research findings;
•  Use and translate research evidence to inform and improve
practice, policy, and service delivery.
Competency 5: Engage in Policy Practice
•  Identify social policy at the local, state, and federal level that
impacts well-​being, service delivery, and access to social
services;
•  Assess how social welfare and economic policies impact the
delivery of and access to social services;
•  Apply critical thinking to analyze, formulate, and advocate
for policies that advance human rights and social, economic,
and environmental justice.
Competency 6: Engage with Individuals, Families, Groups,
Organizations, and Communities
•  Apply knowledge of human behavior and the social environment, person-​in-​environment, and other multidisciplinary theoretical frameworks to engage with clients and
constituencies;
•  Use empathy, reflection, and interpersonal skills to effectively
engage diverse clients and constituencies.
Competency 7: Assess Individuals, Families, Groups,
Organizations, and Communities
•  Collect and organize data, and apply critical thinking to
interpret information from clients and constituencies;
•  Apply knowledge of human behavior and the social environment, person-​in-​environment, and other multidisciplinary
theoretical frameworks in the analysis of assessment data
from clients and constituencies;
•  Develop mutually agreed-​on intervention goals and objectives based on the critical assessment of strengths, needs, and
challenges within clients and constituencies;
•  Select appropriate intervention strategies based on the assessment, research knowledge, and values and preferences of
clients and constituencies.
Chapters
1, 2, 4, 5, 11
2, 4, 6, 7, 9,
10, 11, 12
1, 2, 4, 6, 9, 10,
11, 12
2, 5, 6, 11
4, 6, 8, 11
1, 2, 3, 5, 6, 7,
8, 9, 10, 11, 12
1, 2, 3, 4, 6,
7, 8, 9
2, 3, 4, 5, 6,
8, 12
1, 3, 4, 6, 10, 11
1, 2, 4, 5, 6, 7,
8, 10, 11, 12
1, 2, 3, 4,
5, 7, 11
1, 2, 4, 5, 6, 7,
8, 11, 12
C S W E’ s C ore C ompetency F ulfillment G uide  xv
Competency
Competency 8: Intervene with Individuals, Families,
Groups, Organizations, and Communities
•  Critically choose and implement interventions to achieve
practice goals and enhance capacities of clients and
constituencies;
•  Apply knowledge of human behavior and the social environment, person-​in-​environment, and other multidisciplinary
theoretical frameworks in interventions with clients and
constituencies;
•  Use interprofessional collaboration as appropriate to achieve
beneficial practice outcomes;
•  Negotiate, mediate, and advocate with and on behalf of
diverse clients and constituencies; and
•  Facilitate effective transitions and endings that advance
mutually agreed-​on goals.
Competency 9: Evaluate Practice with Individuals, Families,
Groups, Organizations, and Communities
•  Select and use appropriate methods for evaluation of
outcomes;
•  Apply knowledge of human behavior and the social environment, person-​in-​environment, and other multidisciplinary
theoretical frameworks in the evaluation of outcomes;
•  Critically analyze, monitor, and evaluate intervention and
program processes and outcomes;
•  Apply evaluation findings to improve practice effectiveness at
the micro, mezzo, and macro levels.
Chapters
1, 2, 3, 4, 5, 7, 8,
9, 11, 12
1, 2, 3, 4, 6, 7, 8,
9, 11, 12
1, 2, 4, 6, 8, 11
1, 2, 3, 4, 5, 7, 8
1, 4, 5, 7, 9, 11, 12
1, 2, 4, 5, 7,
9, 10 11
1, 2, 3, 4, 6, 7,
9, 10, 11, 12
1, 2, 4, 5, 7, 8,
9, 10, 11, 12
1, 2, 4, 6, 7, 10,
11, 12
Note. CSWE = Council on Social Work Education; NASW = National Association of Social Workers.
PREFACE

Every social worker is expected to know how to conduct evaluations of his or her
practice. In addition, growing numbers of social workers will also be assuming
a program evaluator role at some time in their careers because of the increasing
demands for program accountability. Yet, many social workers are still inadequately
prepared to design and implement evaluations. Social Work Evaluation: Enhancing
What We Do introduces social workers and other human service workers to a broad
array of knowledge, ethics, and skills on how to conduct evaluations. The book
prepares you to conduct evaluations at both the program and practice levels.
The book presents evaluation material in a form that is easily understood and
especially relevant to social work students. Research is among the most difficult content areas for social work students to comprehend. This is partially because it is difficult to see the applicability of research to social work practice. The statistical and
other technical aspects of research content also tend to be unfamiliar to students
and difficult to comprehend. This book is especially designed to overcome these and
other types of barriers more than other social work evaluation texts do because it
continually discusses evaluation in the context of social work programs and practice
and uses numerous pertinent examples.
The book is organized around a three-stage approach to evaluation. The stages
divide evaluation into activities during an intervention's planning, during its implementation, and, afterward, during the measurement of its impact on the recipients. In addition, the text
describes seven general steps to follow in conducting evaluations. These steps offer
a flexible set of guidelines to follow in implementing an evaluation with all its practicalities. The book also gives significant attention to evidence-​based interventions
and how evaluations can generate evidence as a central goal. Readers are also given
several specific suggestions for how to promote evidence-​based practice.
This book can be used for several research and practice courses in both Bachelor
of Social Work (BSW) and Master of Social Work (MSW) programs. It is designed
for primary use in a one-​semester evaluation course in MSW programs. It can also
be a primary text along with a research methods text for a two-​course research
sequence in BSW programs. The book can also be very useful as a secondary text
in BSW and MSW practice courses at all system levels and policy courses. In addition, it is an excellent handbook for the helping professions in other fields such as
counseling, psychology, and gerontology.
NEW TO THIS EDITION
The entire book has been carefully reviewed, revised, and updated, and summaries
have been added to each chapter. Also, new material has been added in several sections. A strength
of the book is that it covers both program and practice evaluations. In the new edition, greater attention is now given to programs and practice as key concepts and
how the evaluation process offers more understanding of each of them and their
relationship to each other. Evaluations at both levels have much in common. In addition, there is frequently a need to distinguish between these two levels of evaluation.
In the new edition, separate sections are provided for both program and practice
evaluations when there is a need to explain their differences and how each can
be implemented. A symbol has been added to the text to let you know when the
material following the symbol covers only programs or practice.
Accreditation standards of social work mandated by the Council on Social Work
Education (CSWE) are updated and highlighted in a “Core Competency Fulfillment
Guide” at the beginning of the text. These standards are frequently addressed in the
content of every chapter. Content on the six core social work values of the National
Association of Social Workers (NASW) Code of Ethics is also added in the new
edition and elaborated on in the ethics chapter to highlight how they provide the
foundation for the ethics used in evaluations.
Content is expanded on using the logic model as an analytic tool in conducting
evaluations. This gives practitioners the capacity to have continual oversight of
evaluation concerns. Most important, this tool helps remind social workers of the
importance of the logical links among the clients’ problems, needs, and their causes,
their goals, and the interventions chosen to reach their goals. The logic model is also
useful for supporting evidence-​based practice and giving clients greater assurance
that they will be successful in reaching their goals.
The seven steps for conducting an evaluation are emphasized throughout the
book and provide a helpful guide for the readers to follow. An emphasis on client-​
centered change highlighted in earlier editions is strengthened in this edition in
these seven steps. Client-​centered change is promoted through innovative ways of
assisting clients, staff members, and community groups in becoming more actively
involved in the evaluation process. Ultimately, these changes are intended to help
clients succeed as recipients of these interventions. Clients are presented throughout
the book as a key group of stakeholders who are often overlooked in other texts.
A new Teacher and Student Resource website has been added and is available
from Oxford University Press. It will contain all the resources provided with the
book in earlier editions along with some new helpful aids for both teachers and
students.
OTHER SPECIAL FEATURES
Both qualitative and quantitative methods of evaluation are described and
highlighted throughout the book. While quantitative methods are pertinent to both
summative and formative evaluations, qualitative methods are presented as especially relevant to many types of formative evaluations. Criteria are offered for when
to use qualitative methods and when to use quantitative ones, and examples of both
are provided. Mixed methods are also encouraged and often suggested as the best
option.
Many efforts have been made throughout the book to help students and
practitioners view evaluation as being helpful and relevant not only to programs but
also to their own practice. Throughout the book, the evaluation content on practice
interventions offers the readers practical insights and tools for enhancing their own
practice and increasing their capacity to impact their clients’ well-​being.
The planning stage for new programs and practice interventions is presented
as perhaps the most critical stage before new programs and practice interventions
are implemented. Unfortunately, most agencies do not devote nearly enough time,
thought, and resources to the tasks of this critical planning period. The tasks of
planning include clearly identifying and describing the clients' problems and needs to
be addressed, along with the goals for resolving them. In addition, the proposed
interventions need to be carefully developed to uniquely fit the problems and needs
of their clients. Moreover, evidence that these interventions can be effective is paramount to develop and emphasize.
The evaluation process is described as a collaborative effort that encourages the
participation of the clients and other important stakeholders in some of the steps.
A periodic focus on the principles of participant action research is highlighted in
some sections to emphasize how evaluation can be used to promote client involvement, empowerment, and social change. Also, special emphasis is placed on staff
and client involvement in consuming evaluation findings and becoming more active
gatekeepers.
As mentioned earlier, another feature of the text is that it directly addresses all
the current accreditation standards of the CSWE, the national accrediting organization for social workers. The CSWE promulgates minimum curriculum standards
for all BSW and MSW programs, including research and evaluation content. This
book devotes extensive attention to several competencies related to evaluation with
a special focus on three areas: ethics, diversity, and social and economic justice.
Because of the importance of these three competency areas, they are highlighted
in numerous examples and exercises throughout the book. In addition, practice, an
overall competency of the social work curriculum, is often highlighted as it relates
to evaluation. Evaluation is described throughout the book as a vital and necessary
component of practice at both the MSW and the BSW levels.
While a social work perspective is emphasized that helps in understanding
the connections of evaluation with practice, ethics, diversity issues, and social
justice, other human service professionals will also find these topics pertinent.
Professionals in disciplines such as psychology, family and individual therapy, public
health, nursing, mental health, criminal justice, school counseling, special education, addictions, sociology, and others will find this text to be a very useful
handbook.
Technology skills are infused in different parts of the text. Social work
practitioners must know how to use various electronic tools like Google, e-mail,
electronic discussion lists, and data analysis programs like SPSS (Statistical
Package for the Social Sciences). The book includes electronic exercises and other
assignments that involve students using such tools. Emphasis is given to electronic
skills that help students obtain access to the latest information on client populations,
practice and program interventions, information from professional organizations,
relevant articles, and helpful discussion lists.
Another distinguishing aspect of this book is the extensive use of case examples.
It has been the author’s experience that students’ learning is enhanced when they can
immediately see the application of abstract concepts to human service situations.
Specific evaluation studies from professional journals, websites, and books are frequently highlighted to illustrate concepts, findings, data analyses, and other issues.
Numerous examples of evaluations that Dudley has conducted are frequently used.
Exemplary evaluation activities of social work students and practitioners are also
generously included. These illustrations reflect what students will often find in field
placement agencies and social agencies where they are hired. Figures and graphs
are also used and designed to appeal to students with a range of learning styles. The
book also contains a glossary of terms.
In addition, the book is user-​friendly for faculty who teach evaluation courses.
Sometimes social work educators who do not have the time or interest in conducting
their own evaluations teach research courses. Such faculty may often feel less qualified to teach an evaluation course. This text is understandable to both inexperienced
and experienced faculty. Also, discussion questions included at the end of each
chapter can serve as a focus for class discussions, quizzes, and tests.
A chapter, “Becoming Critical Consumers of Evaluations,” is also included
to stress the importance of the consumer role in reading and utilizing evaluation
studies of other researchers. The chapter walks the readers through each of the
seven steps of conducting an evaluation, pointing out strengths and weaknesses of
evaluation reports using a recently published evaluation report as an illustration.
This chapter and others provide guidelines for how to cautiously and tentatively
consider how to apply the findings of someone else’s evaluation to your own practice with clients.
In addition, a Teacher and Student Resource website is an online ancillary
resource that is available with the purchase of the book, available from Oxford
University Press. It elaborates on how the content of the book can be used and
suggests helpful ways to involve students in understanding and using it. The
teacher’s guide includes a sample syllabus, PowerPoint presentations for each
chapter, and a test bank of multiple-​choice exam questions that includes questions
for each chapter.
ORGANIZATION OF THE BOOK
The book is organized into seven parts. Part I, the first chapter, introduces evaluation and how it is described and defined in the book. The chapter begins with a
persuasive rationale for why social workers should be proficient in evaluation. The
concepts of program and practice are introduced along with how they are similar
and different. Definitions of program and practice evaluations, their characteristics
and aims, and the larger social contexts for evaluations are introduced. The misuses
of the term evaluation are also pointed out. Also, evidence-​based interventions are
introduced as an indispensable concept in the context of evaluation.
Part II is an orientation to the bigger picture about evaluations. Chapter 2
highlights key historical events that have helped to shape current public policies and
stresses the importance of conducting evaluations. Also, five different theoretical
perspectives on evaluation are introduced to remind readers that evaluation is not a
monolithic enterprise; to the contrary, its purposes vary widely depending on who
is conducting the evaluation and what they are attempting to accomplish. Aspects of
all these theoretical perspectives contribute to the concept of evaluation adopted in
the book. Chapter 3 focuses on the ethics of evaluation, drawing on the NASW Code
of Ethics and the ethical principles of the American Evaluation Association. The
chapter explains how the accreditation standards of the CSWE can be implemented,
including the ethics of social work and the importance of diversity and social and
economic justice. Chapter 4 introduces readers to several types of program and practice evaluation that are commonly practiced in the settings in which social workers
and other human service workers are employed. They are introduced in this chapter
to help readers be able to identify them in various field settings. These common
evaluations range from client satisfaction studies to outcome studies, licensing
of professionals and programs, quality assurance, and judicial decisions. Finally,
Chapter 5 offers guidelines for focusing an evaluation and presents a tool that can be
used to craft a focus for any evaluation.
Part III covers the first of three stages of evaluation activities, the planning stage,
when a program or practice intervention is being conceptualized and important
details are being worked out. The planning stage is presented as a critical time
for evaluation activities, especially to document the need for a new intervention.
Chapter 6 is devoted to conducting needs assessments, especially during the planning stage. The chapter explains why needs assessments are so important, highlights
a variety of assessment tools, and describes the steps involved in conducting a needs
assessment. Crafting goals and objectives for a new program or practice intervention
are highlighted in Chapter 7. Characteristics of goals, limitations of goals, and the
importance of measurable objectives are highlighted. A practical approach to crafting
measurable objectives is described with several examples and exercises to ensure
that readers can understand how to craft objectives for their own interventions.
Part IV, consisting of Chapter 8, focuses on the second of three stages, the
implementation stage, when numerous types of evaluation activities can occur.
Implementation or process evaluations can address a wide variety of important
issues, and this chapter describes the central ones. The chapter explores an array of
evaluations, including critiquing an intervention based on the logic model, monitoring whether the actual intervention is being implemented as it was proposed,
and focusing on staff issues. Implementation evaluations are also introduced that
investigate the quality of an intervention and its degree of accessibility. Finally, client
satisfaction is introduced at both the program and the practice levels.
Part V, consisting of Chapter 9, covers the third of the three stages, the outcome
stage, when evaluations are used to determine whether an intervention was effective
in helping clients. The chapter portrays outcomes as multidimensional and complex.
Criteria are described for choosing outcome measures. Also, the enormous challenge
of adequately documenting that an intervention is the cause of any improvement in
the clients’ lives is explained in some detail. Several outcome designs are introduced
for evaluating both program and practice interventions, and the advantages and limitations of each design are highlighted. These designs are presented in a practical way
so that readers can easily implement them. Ethical issues in selecting designs are also
discussed.
Part VI discusses the final steps in conducting an evaluation, data analysis
and preparing and disseminating the final report of the evaluation. Data analysis
is an important step to understand and implement as discussed in Chapter 10. The
chapter discusses the many options available for analyzing both qualitative and
quantitative data. Several statistical tools are described for analyzing data for quantitative evaluations, and three different approaches are offered for analyzing data from
qualitative evaluations. Although the principles of data analysis in an evaluation are
similar to those used in a research study, several differences are also evident and
noted in this chapter. Most important, analysis of evaluation data begins and ends
with questions of who needs to know what, when, and why. Chapter 11 addresses the
final steps in conducting an evaluation, preparation and dissemination of the report
of the findings. The chapter emphasizes involving stakeholders in the planning of
the report(s). Several options for report formats are explored. Also, several strategies
are offered to both prepare a report and disseminate it to stakeholders and others.
Chapter 12 is devoted to consuming and using evaluation reports. Several
questions are addressed. How is the consumer role carried out? What do consumers
look for? How can consumers critically consume a report? How can they use it in their
own work as administrators, practitioners, students, clients, or regulatory entities?
All these questions and others are addressed. Clients of social services are likely to
have the most at stake in the critical consumption of evaluation reports. They need
to know if the interventions they receive are effective and a good fit for what each of
them personally needs. Therefore, a special section of the chapter discusses how to
do more to present evaluation reports in a form that clients can understand and use.
ACKNOWLEDGMENTS

Numerous people have graciously assisted in the preparation of this book and
contributed significantly to its conceptualization and organization.
Let’s add the last two reviewers David P. Moxley, University of Alaska, Anchorage
and Michael Cronin, Monmouth University, first for their helpful suggestions;
include their university affiliations in a brief bio.
I am also grateful for Tyan Parker Dominguez, Thomas Meenaghan, Shweta
Singh, Brandon Youker, and Robert Fischer for thoughtful and thorough reviews
in past editions. Scott Wilson, Israel Colon, Cy Rosenthal, Dennis Brunn, and Bill
Perry, former colleagues at Temple University, initially introduced me to the complex enterprise of evaluations and their relevance to social work and social change.
Several colleagues at the University of North Carolina at Charlotte provided support
along the way, particularly at times when it was most needed. Among them, Dennis
Long, Vivian Lord, and Schnavia Hatcher supported work on this project as past
and present directors of the school. Jeff Shears contributed to the writing of one of
the chapters. Robert Herman-​Smith contributed to the writing of another chapter,
and Daniel Freedman contributed to the writing of a third chapter. Janet Baker and
Cheryl Whitley of the administrative staff helped in countless ways, for which I am
grateful.
My many MSW social work students from eight years of teaching evaluation
at UNC Charlotte and numerous other students over a prior eight years of teaching
evaluation at Temple University gave me invaluable feedback and support. They consistently assured me that these evaluation courses were useful and valuable to their
professional development; they also gave me the initial encouragement to develop
a textbook of my own that reflects the multifaceted content areas that I cover in the
courses.
Most important, I deeply appreciate the many ways that the editor, Alyssa
Palazzo, at Oxford University Press, supported me in preparing the new edition.
PART I

Introduction

Chapter 1 introduces foundational material for understanding virtually all the
material in the remaining chapters. The book begins by indicating that both
programs and practice are its focus. The concept of an evaluation and the steps in
conducting an evaluation are introduced. Evaluations are viewed broadly and can
be conducted at three possible stages of development: when programs and
practice interventions are being planned, when they are being implemented, and
during an outcome stage after implementation.
CHAPTER 1

Evaluation and Social Work
Making the Connection
Let’s begin by considering three important questions:
1. Is evaluation an important practice area of social work?
2. Is the evaluator role an important one for social workers?
3. How can evaluations help improve or enhance social work interventions?
These questions may be your questions as you begin to read this book. They are
questions that many social work practitioners and students have pondered. This
book is about evaluation, so the responses to the first two questions, in brief, will
be no surprise to you. Yes, evaluation is an important area of social work. In addition, the evaluator role is an important one for every social worker to prepare to
practice. Think about this. Some social workers will be evaluators of programs, and
virtually every social work practitioner will be an evaluator of their own practice.
It’s like asking whether social workers need to know whether they are doing a good
job. A good job should include knowing whether your interventions are effective
in helping your clients. The third question, asking how evaluation can enhance or
improve social work interventions, is the focus of this text.
The underlying theme driving the book is that evaluation is a vital element of
any social work approach and is critical for ensuring that social work does work!
A reassuring theme is that evaluation is a practice area that BSW and MSW students
and practitioners alike can learn. Social workers and students wanting to maximize
their impact in their jobs will find what they need in the knowledge, ethics, and skills
about evaluations covered in this book. Learning about them will both help you
enhance your practice and have a greater impact on your clients’ well-​being.
This book provides the needed preparation for evaluation in a comprehensive and readable format. The primary emphasis is on the various kinds of small
and mid-range formative evaluations that are often implemented at the local agency
level; less emphasis is placed on the large, complex national and regional evaluations.
These smaller formative evaluations are critical ones that social workers either are
assigned or may want to take on as an opportunity to expand their practice. Such
evaluations can be instrumental in determining whether the programs in which you
are working will continue and possibly expand.
Example of a Small Formative Evaluation
An agency that provides an anger management program to perpetrators of
domestic violence offers a series of 10 psychoeducational group sessions to help
them manage their anger. The agency also conducts an evaluation of this program
that is vital to the agency. An anger management scale is used to measure changes that
occur in the participants’ anger after they have completed all 10 sessions of a
group program. Throughout the series, the specific items of the anger management scale (e.g., being respectful, having self-​control, being self-​aware, learning
alternatives to violent behavior) identify some of the key discussion topics of the
group sessions. In this way, the intervention and its evaluation go hand in hand in
helping practitioners and clients engage in a partnership to meet the goals of the
   program.
A FOCUS ON BOTH PROGRAMS AND PRACTICE
Both programs and social work practice are the focus of the book. While programs
are covered quite extensively in most evaluation texts, evaluation of practice is covered less. Programs can be large or medium-sized entities serving many clients
while practice refers to the activities of a single social worker with one client system.
While program entities typically are much larger than practice, evaluations at both
levels are important. It’s sort of like saying a family system is important, and the parts
played by each family member are as well.
Virtually every social worker is or should be responsible for evaluating their
practice. Based on this reality, all social workers need to have competencies in
conducting practice evaluations if they are to be accountable for the effectiveness of
their practice interventions. This will include evaluations at different system levels
of practice including work with one client, a group of clients, a family, a community,
or an organization.
We need to know what program and practice interventions are and how
they differ before we can understand evaluations. Think of these two concepts
as subunits of something larger. Programs are subunits of a social agency, and the
practice of one worker is a subunit of a program. Their respective definitions are:

Program: A subunit of a social agency that provides a set of interventions with common goals for clients.

Practice: A subunit of a program that provides a set of interventions by one worker to one client system.
Chapter 1 • Evaluation and Social Work
5
Both programs and practice are referred to as interventions in the text and are
often referred to as program interventions or practice interventions. For brevity,
sometimes they are also addressed simply as services. Throughout the text, a
square symbol indicates that the material in that section covers only
program evaluations, and a circle symbol indicates material covering only
practice evaluations. Absent these symbols, the material in the text is largely relevant to both
programs and practice.
Exercise
1. As an exercise, identify a program with which you are familiar. Find out the
program’s interventions to clients (e.g., a home health program may have a social
worker referring clients to several agencies, a nurse assessing a client’s health
indicators, and a nurse’s aide bathing the client). Then try to identify a goal that is
a more general description of all the interventions (e.g., a goal of a home health
program is to help an elderly individual or couple stay in their own home rather
than be referred to a nursing home).
2. Second, identify the practice of one social worker. This may be easier to do.
Practice should be the interventions of one social worker in helping a client system
(e.g., advocating for the repair of the heating system of the apartment of a client
family, referring them to obtain food stamps, or helping them clarify their rights to
certain Social Security benefits). Then see if you can identify a goal that is a more
general description of these interventions (e.g., helping a client family live successfully in their own home).
Programs and practice are interdependent in many ways. Overall, practice
interventions are defined and guided by the program and its goals. Yet, the effectiveness of programs depends upon the effectiveness of the practice interventions.
PRACTICE IS EMBEDDED IN A PROGRAM
A practice intervention is likely to be embedded in a program. This means that
the program is instrumental in informing and shaping practice interventions. The
program context offers numerous ways of informing what the practice interventions
should be. As an example, one of the purposes of a program could be to reach out
and provide services to underserved people in the community such as those who are
the poorest and have the greatest material needs. In this case, the interventions of
the practitioners should include finding out who the underserved are and reaching
out to them in their neighborhoods. For example, the practitioners could begin by
conducting an informal community needs assessment to identify the underserved people and then reach out to them in their neighborhoods to find out if
they need help.
Another way in which practice is embedded in a program is apparent in the
desired program goals chosen for clients. The goals of the program should inform
6 Part I • introduction
what the outcomes of practice should be. If the outcomes of a program, for example,
are to help mental health clients live independently in the community, then the
outcomes of practice would likely be to enhance indicators of living independently
in their communities. These practice indicators or outcomes could include, for
example, finding employment, making new friends, and linking to social supports
in the community such as a social club or a church choir.
A Program Is Dependent on What Happens in Practice
Practice is an important place where you can discover what works and doesn’t work
in programs. If the practice offered by a social worker is not reflecting the program in
content and quality, it may be counterproductive both to the program’s purpose and
its effectiveness. There can be several reasons why a worker’s practice may not reflect
the program’s purposes. Possibly, the worker is not implementing the approach that
supports the program’s purpose. This could be the result of inadequate communication within the agency about the practice approach and how to implement it. Or the
agency may have hired someone who is not qualified to carry out the practitioner
role. And there could be other reasons such as neglecting to monitor the practice
of a worker and thus failing to correct such practice. For example, a residential program for people with a drug addiction directly depends upon each of its
practitioners and the effectiveness of their interventions. Several questions about
practice could be asked, such as: Are the therapy groups run by people who
meet the staff qualifications for these groups? Have staff been adequately trained
in what is expected of them by the agency? Are staff covering the important
topics relevant to program success in group sessions? Ultimately, are they achieving
the goals set by the program? These and other types of evaluation questions will be
discussed more fully later in the text.
Agency Training for Practice Is Important
Practice is not an entity developed in a vacuum or at the whim of a practitioner
or supervisor. As previously indicated, practice interventions need to be aligned
with the program within which they are embedded. Social work practitioners,
once they graduate, usually have a good beginning sense of an overall approach to
helping clients. They have likely been introduced to a variety of practice theories
taught in their professional program such as cognitive-​behavioral therapy, and they
have been able to test some of these theories in their field practicum experiences.
However, their beginning approach is not likely to be designed to serve a specific
client population or at least not the clients in the agency in which they will be
employed.
When a new social worker begins employment at an agency, school, or medical
facility, they will need to consider numerous new ways to prepare for their practice.
For example, their new agency scene will likely require that they make some changes
in their approach to helping. Learning about the client population is an important
part of that preparation. Who are your new clients? What kinds of problems and
needs do they have? What kinds of interventions are known to be most effective in
helping them?
Chapter 1 • Evaluation and Social Work
7
The agency and its programs typically have an approach (or approaches)
that the agency offers to help clients, and the agency is most likely to expect its
practitioners to understand and know how to implement this approach. Mastering
a new approach takes time and experience working with several clients. The agency
approach may be quite explicit and detailed; it may only be described theoretically (e.g., cognitive-behavioral therapy); or it may even be vague, unnamed, and
absent a written description. Practitioners new to an agency will need to find ways
to learn about the agency’s approach and how to implement it. Hopefully they will
have some agency training sessions and assistance from a supervisor who can provide valuable guidance. As new practitioners meet with growing numbers of clients,
they will also come to realize the subtle variations in how to implement an agency
approach and the unique issues of each client that need to be considered.
INTRODUCTION TO EVALUATION
Evaluation is a multifaceted approach that addresses some of the most vital questions
and issues facing programs and practice, such as the following:
• What kinds of clients are the proposed program or practice intervention intended to reach?
• What types of interventions do these clients need?
• How are the interventions to be implemented?
• What impact is the intervention expected to have on the clients?
A THREE-STAGE APPROACH
The author views evaluation broadly as important to consider at any of the stages of
development of a program or practice. The three basic stages that will be examined
in the book are referred to as the planning, implementation, and outcome stages. The
planning stage takes place prior to implementing the intervention and involves identifying the characteristics of the clients who are to be helped as well as their needs,
designing the interventions that will be introduced to help these clients, and crafting
the outcome measures that will be used to determine whether the clients have successfully reached their goals.
During the implementation stage, evaluations are introduced to monitor how
well the intervention is being implemented. Finally, after the clients have completed
receiving the intervention, an outcome stage occurs when it's time to determine how
successful the clients have been in reaching their goals. Figure 1.1 below is an important
framework that helps begin to organize much of the content of the book and will be
presented periodically in appropriate chapters to highlight the focus of each of the
stages as they are being covered. Notice that it all begins with the clients’ needs.
Figure 1.1 Three-stage approach: Needs → Interventions → Outcomes.

DIFFERENT PURPOSES OF EVALUATIONS

Overall, evaluations can have different purposes. Usually, some purposes are emphasized more than others, and this will vary widely. Generally, the usual purposes
of evaluations include investigations of the effectiveness, efficiency, and quality
of interventions (Martin & Kettner, 2010). Ideally, all three of these purposes are
important. However, sometimes they can be at cross purposes with each other. For
example, too much emphasis on efficiency could work against quality and effectiveness. This problem is often evident in some for-profit agencies that tend to concentrate too much on efficiency to maximize their profits and too little on what clients may
need, such as more of the available resources.
Effectiveness
Effectiveness refers to having success in reaching the goals of an intervention. In the
case of programs and practice, effectiveness refers to maximizing positive changes
for clients. As an example, a public health agency that provides the flu vaccine is
effective if it prevents its patients from contracting the flu. An employment referral
agency serving chronically unemployed people will be effective if it finds them
long-term employment in well-paid jobs. Note that the measure of effectiveness for the
employment program adds requirements beyond just any job (long-term
and well-paid).
Effectiveness is evident in the results that are observed and measured after an
intervention is completed. These results are often referred to as outcomes for clients.
They reflect positive changes that have occurred for clients because of an intervention. It is important to note that if an intervention does not result in positive client
changes (e.g., improved communication between a parent and her child), it must be
concluded that the intervention was not effective. In such instances, the intervention
may need to be revised or replaced by another intervention that potentially will be
effective. Typically, a program intervention needs to have documentation that it is
effective; otherwise, the program likely risks losing its funding.
Efficiency
The efficiency of a program or practice intervention refers to the financial cost
and other resources (e.g., staff, offices, and vehicles) necessary to provide the
interventions to clients. Efficiency efforts are concerned with judiciously channeling
available resources to what the intervention needs. Misdirected and wasted resources are to be avoided or minimized. Efficiency is particularly important because
resources for health care and human service programs are always likely to be limited
or even sometimes scarce. The more efficiently that interventions are delivered, the
Chapter 1 • Evaluation and Social Work
9
greater number of clients that can be potentially helped. Also, inefficient programs
and practice, even if effective, are vulnerable to being discontinued. An example of
efficiency could be to choose a less expensive new agency office after the rent goes
up. Another example is to hire one fewer staff member and increase staff workloads.
Quality
Quality refers to how well the program or practice interventions are delivered or
implemented. Are they delivered at a high level of performance? Sometimes this
high level has been referred to as “best practices.” Quality is obviously important
because a program or practice delivered very well will likely have a more positive
impact on the recipients than one that is of a lower quality. Quality depends, among
other things, on the quality of the staff members who are hired to provide the services. For example, do they have the most important credentials for the job such as
a professional degree, appropriate training, and enough prior experience? Also, are
new staff members responsible, ethical, reliable, and good communicators? In addition, does the program support enough home visits or group sessions for clients to
accomplish their goals? Further, are the interventions provided in the most accessible way so clients can easily find and use them? Many other questions could also
be asked, such as: Are the social workers to be hired culturally competent? Is clinical supervision readily available and provided by experienced personnel? These
examples are all likely to be associated with a higher quality of services.
Exercise
Determining quality can be complicated, and yet it is extremely important. As
an exercise, what would you do to ensure the quality of the social worker to be hired in the following situation? A social worker is to be hired to provide intensive counseling to emancipated adolescents who are about to be released from
foster care. The overall goal of the counseling is to help these adolescents begin to
live independently in the community. What information would you want to gather
from applicants, both in their resumes and in interview questions, to select a social
   worker who will likely provide high-​quality counseling with these adolescents?
Effort and Relevance
In addition to the three previously mentioned common purposes of programs and
practice, two more are also important if not crucial: evidence of substantial effort
on the part of the sponsor of an intervention and relevance to the clients and the
community. Evidence of effort is important regardless of the achievements of a
program. Effort refers to the extent of involvement of staff members, volunteers, and
administrators contributing to program and practice interventions. This involvement is most important when the interventions are being implemented. Questions
that can be asked about effort include, How much time was spent in helping clients?
How many home visits or office sessions occurred and what happened during these
sessions? Another important question is whether the effort is sufficiently responsive to the expectations of the stakeholders, including the funding agency, the program director, and the clients.
Relevance is another fundamental quality to consider in evaluating any intervention. Relevance of the intervention to the clients’ needs and the larger social
problems that clients are experiencing is most important. A program intervention
can be carried out with high performance standards, efficient use of resources, and
achievement of specific goals, but if it is not directly relevant to what the clients
need, it is incomplete and misguided. The concept of relevance is about addressing
the causes of the clients’ problems, not merely symptoms or short-term surface issues of little significance. Relevance is also concerned with ensuring that the diversity reflected in the clients who receive the intervention is similar to the diversity in
the larger population that is suffering from the social problem. Overall, relevance
involves seeking social justice for the client group that the intervention serves. Diversity and social
justice issues, as well as other important issues of relevance, are covered extensively
in the text. As examples, both the National Association of Social Workers (NASW)
Code of Ethics and the American Evaluation Association (AEA) code highlight
numerous issues and responsibilities for evaluators to take seriously.
In summary, five important evaluation questions need to be asked in any serious
evaluation of a program or practice intervention. Please keep all of them in mind as
you proceed through the rest of the text.
Five Key Questions
1. Is the intervention effective in helping clients?
2. Does the intervention efficiently use all available resources?
3. Is the intervention being implemented at a high level of quality?
4. Is there evidence of substantial effort by staff members and others in
implementing the intervention?
5. Is the intervention sufficiently relevant to the needs of the clients and the
social problems confronting them?
  
COMMON CHARACTERISTICS OF EVALUATIONS
To fully understand the nature of an evaluation, we need to not only understand
the purposes of evaluation but also its common characteristics. When we examine
an evaluation, we should be looking for manifestations of these common characteristics. The absence of any of them may suggest that an evaluation has important
missing elements or shortcomings. These common characteristics include
1. Be accountable.
2. Use scientific research methods.
3. Use the logic model as an analytic tool.
4. Be open to a diversity of stakeholders and a political process.
5. Be attentive to contextual issues in an agency and its environment.
6. Abide by an ethical code.
   7. Be critical thinkers.
We should keep in mind that each of these common characteristics continually
interacts with and influences the others.
Be Accountable
If there is one overall concept that explains why evaluations are so important, it
is accountability. Partially because of past failings of social programs, all governmental and most privately funded agencies are held accountable for how they use
their funds and what they achieve for their clients. Evaluations have become one
of the most reliable mechanisms incorporated into program proposals for ensuring
such accountability. Agency accountability is now inherent in the jurisdiction of virtually all funding and regulatory agencies, and it has become a key job expectation
of agency and program administrators.
These funding, regulatory, and administrative entities require accountability to
address questions such as the following:
• Is the intervention focusing on the target population with the greatest need?
• Is the intervention designed to meet the specified needs of the target
population?
• Is the intervention being implemented in the way that it was designed and
proposed?
• Is the intervention being implemented with high standards?
• Are the clients and their families satisfied with the intervention?
• Is the intervention achieving its goals and objectives?
• Is the intervention cost-​effective?
Ultimately, it is important for program sponsors to be accountable to the clients
they serve. Because of the power imbalance between an agency and clients, special
attention is needed to bring more balance between these two entities in the form of
greater power and protection for clients. In addition, agencies need to be accountable
to the communities that are intrinsically connected to clients, such as their family
members and the neighborhoods surrounding residential programs. Accountability
to clients and relevant communities often requires the introduction of empowerment strategies, such as client satisfaction surveys and client representation on
agency boards and advisory groups. Another strategy is to encourage agencies to
involve client groups as participants in program evaluations and to share the results
of their evaluations with them. Chapter 2 further elaborates on other empowerment
strategies.
Social workers who work in such programs must be accountable not only to
the agency employing them but also to their own professional groups, such as the
NASW, Political Action for Candidate Election (PACE, a political arm of NASW),
and their state-​level professional licensing boards. In these instances, accountability
refers to abiding by the ethical code of social work conduct, committing to clients’ dignity and well-being, advocating for social justice, and implementing sound, evidence-based professional practice.
Use Scientific Research Methods
Evaluation activities are expected to be scientific and consider using a wide range of
research methodologies (e.g., Dudley, 2011). Scientific research has long-​standing
values and principles that distinguish it from other types of information gathering.
Many of these principles are evident in evaluations.
Principles of Scientific-​Based Evaluations
• The search for something that exists rather than something that is desired
• Use of a methodology that minimizes the influence of biases and involves a
systematic set of steps or procedures that can be flexibly employed
• Adherence to a special code of ethical conduct that includes a commitment to neutrality in conducting research and a demonstrated concern to protect the people studied
• Assumption that the evaluation has a universal stance, representing the concerns of all society, even though it may focus on a few subgroups of people or a narrow topic
• Accurate reporting of the findings regardless of whether they are consistent with the researcher’s viewpoints
While these principles of scientific research are expected for all scientific studies,
they are evident in evaluation studies to varying degrees and along a continuum of
quality. The more an evaluation rigorously fulfills these principles, the more confident one can be that it is based on “good science.”
Use the Logic Model as an Analytic Tool
The logic model is an organizing framework that many evaluators use to analyze
both programs and practice. The logic model helps highlight how the stages of an
intervention mentioned earlier (planning, implementation, outcomes) should be
logically linked. In other words, this model anticipates that what was decided in the
planning stage will be used to inform what happens during the implementation and
outcome stages. In turn, the planning and implementation stages directly influence
what the client outcomes will be and how they will be measured.
First, let’s consider programs at all three stages using the logic model. The linkages among the three stages are important to understand in some depth. Using an illustration, assume that a group of stakeholders have the purpose of setting up and implementing a program to help people who are chronically homeless find semi-independent housing. They begin in the planning stage.
First, they will need to decide who their clients will be. In our illustration, this will
likely be people who are chronically homeless, and possibly they will also consider
other important client characteristics. The stakeholders will also need to decide what
their clients’ desired outcomes will be. This question addresses how these clients
and their social circumstances will be different after they have been helped. In our
example, the stakeholders are interested in helping the clients find and live successfully in a supportive, semi-​independent living arrangement.
This leads to a central question in the planning process. How will they help these
clients reach this outcome? Let’s say that the stakeholders have done some research
about housing for homeless people and have become interested in the Housing First
model that is being implemented many places across the United States. Agencies that
have used this model have accumulated extensive evidence of chronically homeless
people being helped to move directly from the streets and shelters into Housing
First housing arrangements with minimal eligibility criteria. If they have mental
health issues or substance problems, for example, this means that they will still be
able to move into a supportive housing arrangement and receive help from a multidisciplinary professional team on site as soon as they arrive (e.g., Urban Ministry
Center, 2017).
Note that the link or connection between the client outcome measures and
the interventions are critical to the logic model. In our example, this means that
the intervention that is chosen is expected to help clients successfully reach the
client outcomes identified. In other words, the intervention that is selected should
be evidence-​based if possible. Evidence-​based interventions are interventions that
have been implemented before with documented scientific evidence of their effectiveness. In our example, let’s assume that the chosen intervention, the Housing
First model, has been found to be effective in the past in helping chronically homeless people live successfully in semi-​independent housing (e.g., Urban Ministry
Center, 2017).
The implementation stage comes next. This stage involves implementing the
chosen intervention. Evaluations that are important during the implementation
stage primarily consist of monitoring how well the intervention is being carried out
with clients. Important evaluation questions at this stage include exploring whether
the intervention is being implemented in the manner that it was proposed and
observing whether the clients are responding favorably to the intervention’s effect.
In our housing example, the evaluators are attempting to determine if the clients are
receiving the professional assistance that they need such as person-​centered mental
health services and adjusting well to their new living arrangement.
The third stage is the outcome stage. This step comes once the clients have completed the intervention. At this point the evaluators must decide whether the clients
have succeeded in reaching the client outcomes identified in the planning stage. If
so, a claim can be made that the intervention was successful or effective. In our
housing example, positive measures of the outcomes would indicate that the clients
are successfully living in their new apartment building and meeting other important
needs related to successful housing, such as their mental health adjustment. Also, successful outcomes will likely need to reflect efficient use of available resources and a cost no higher than that of any known alternative intervention.
Example of the Use of the Logic Model in Designing a Program
A group of MSW students was asked to design a program that would effectively
help clients overcome substance abuse problems. They used the logic model to
complete this exercise. They began by identifying some suspected causes of substance abuse, including heredity, peer influence, low self-​esteem, social isolation,
and inadequate coping skills. Next, they decided to design a program to address
only the suspected causes that revolved around interpersonal issues, including
peer influence, social isolation, low self-​esteem, and inadequate coping skills.
They decided to offer psychoeducational groups to teach clients the skills needed
to manage these and other personal and interpersonal issues. They decided to
cover specific topics such as how to find positive peer influences and avoid negative ones, how to find and participate in support groups, and some self-​esteem
building exercises. They anticipated that once participants had completed the
psychoeducational group sessions they would be able to identify the factors
that reduced their self-​esteem, identify specific ways to build more positive self-​
esteem, and learn three or four new coping skills. By completion of the program,
participants would also have made two or more visits to a support group in the
   community to help them stop using substances.
Practice should also be evaluated at all three stages using the logic model. Inputs are addressed during the planning stage, including deciding which clients (or client systems) will receive the intervention, identifying the desired client outcomes, and choosing the intervention (approaches and processes) that the practitioner will use. While the practice intervention is being implemented,
monitoring is needed to determine how well the approach and processes are being
implemented. Finally, after the intervention has been completed, evaluations of client
outcomes are completed that determine how much progress the clients have made
after they received help. This is largely determined by the outcome measures that were
selected during the planning stage.
In summary, the logic model helps the evaluator link the documented
problems and needs of the clients to the intervention that will address them, and
the intervention is, in turn, linked to the client outcomes that are anticipated after
the intervention has been implemented. The logic model is elaborated on further in
Chapters 2, 6 and 8. The three-​stage approach is introduced more fully in the next
chapter.
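For readers who find it helpful to see these linkages spelled out concretely, the sketch below expresses the three stages of the logic model as a linked data structure, using the Housing First illustration from this chapter. This is a minimal sketch in Python; the class and field names are assumptions made for illustration, not a standard logic model schema.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Links the three stages: decisions made in planning inform what is
    monitored during implementation and how outcomes are measured later."""
    target_clients: str                                    # planning: who will be served
    intervention: str                                      # planning: ideally evidence-based
    activities: list = field(default_factory=list)         # implementation: what to monitor
    outcome_measures: list = field(default_factory=list)   # outcomes: chosen during planning

# The chapter's Housing First illustration expressed in this structure:
housing_first = LogicModel(
    target_clients="adults who are chronically homeless",
    intervention="Housing First model",
    activities=[
        "move-in with minimal eligibility criteria",
        "on-site support from a multidisciplinary professional team",
    ],
    outcome_measures=[
        "living successfully in semi-independent housing",
        "mental health adjustment",
    ],
)
print(housing_first.intervention, "->", housing_first.outcome_measures)
```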
Be Open to a Diversity of Stakeholders and a Political Process
While basic research is often considered apolitical, evaluations usually involve an
overtly political process, meaning differences of opinion, view, and interest are
present. Historical events and current political considerations need to be considered
when discussing, planning, and implementing an evaluation. Indeed, an evaluation
is a special type of research that intentionally incorporates political considerations
into its execution. An evaluation may have several different stakeholders, and
each one could have special interests that compete with that of other stakeholders.
This is because stakeholders are usually a varied group. They could include governmental funding and regulatory agencies, foundations, public officials, board
members, agency administrators, staff members, citizens, clients, advocacy groups,
accountants and auditors, and representatives of the surrounding community. As
you can imagine, each of them will likely have different interests.
When talking about an evaluation, political issues almost always come into play,
whether explicitly or implicitly. For example, political processes might be involved
in any of the following types of questions that agency administrators, in particular,
might raise.
• How can we help those with the greatest need?
• How can an evaluation help our program survive?
• How can an evaluation improve our chances of obtaining funding to expand
our program?
• How can the results of an evaluation be used to enhance our program’s identity in the larger network of agencies in our field?
• How can we report negative findings from an evaluation without jeopardizing our program’s existence?
Example of a Political Consideration
A graduate student conducted an evaluation of staff morale at her field agency.
She gave the staff members a questionnaire to fill out, asking them how important
they perceived each of several different issues that affected their morale. The issues included salaries, medical benefits, size of caseloads, hours of work, supervision (its quality and frequency), and openness of administration to staff concerns.
The findings revealed that their major concerns about morale related to their
problems with supervisors and administration. Because the administration of the
agency was taken by surprise and unprepared to seriously address these issues,
they decided to ignore them and instructed the graduate student to withhold
   sharing her findings with the staff members.
In contrast, a funding agency might ask very different questions about an evaluation,
such as:
• How can I be sure that this program is fulfilling its responsibilities?
• How can I determine whether this program is more important than another
program that we may want to fund?
• How can I get the most fiscal value out of this program?
• I like this program and its director, but how can I justify funding them when
the proposal falls short of what we want?
Political influences such as these must be considered at all stages of evaluating
interventions, including during planning, implementation, and outcome. An approach that can be used to identify and analyze these contextual forces within and
outside an agency is elaborated on later in the chapter. In general, this approach can
help evaluators consider the political issues and questions that they may need to address or avert in conducting an evaluation before they become serious problems or
ethical dilemmas. This approach helps evaluators identify both constraints that can
create conflicts for an evaluation and potential resources that can help in conducting
the evaluation. The identification of potential constraints and resources before implementation of an evaluation helps address both its feasibility and its ethical standing.
Be Attentive to Contextual Issues in an Agency and Its Environment
A program and its practice interventions do not exist or operate in a vacuum. They
are part of a larger dynamic system of many forces and factors. They include a wide
range of policies, administrative leadership styles, staff and administrative communication patterns, varied perceptions of clients, and financial issues, all of which
theoretically need to be taken into consideration when conducting a program evaluation. Figure 1.2 provides a sketch of many of these factors, large and small, in an
agency and its environment.
These factors and their dynamic interplay can have a major influence on an
evaluation conducted by the agency. Social policies, for example, from several
sources have direct influence on evaluations since they give programs and practice
approaches meaning and purpose related to the problem to be addressed and the
proposed solution. Governmental policies (local, state, and federal) are important
to consider because they dictate what problems and solutions they will fund.
Agency policies also have a direct or indirect influence in a wide range of areas
such as financial matters, hiring, personnel, client admissions, and programmatic
issues. An agency may have a specific policy, for example, about which client groups
to prioritize for receiving services. Or they may take a strong stand supporting evidence-based interventions or client-centered practice. Or they could have a
major commitment to strong fiscal policy and an expectation that benefits be in
line with costs. By the way, agency policies that explain something about the nature
of programs and practice may not be evident to new staff members and may need
to be explained to them.
[Figure 1.2 Contextual factors in evaluations. The figure depicts an agency’s program and practice area nested within the agency and its external environment, surrounded by factors such as agency policies; local, state, and federal government policies; funding agencies, funding requirements, and regulatory agencies; budget and available funds; administrative styles of leadership; the board of directors and advisory groups; formal and informal structures; decision-making processes; professional, clerical, and technical staff; the staff fatigue factor; facilities and technology; clients, former and prospective clients, and community groups; extent of community involvement; key agency and outside change agents; previous evaluation experiences; prior success of programs and practice areas; perceived importance of the proposed intervention; access to outside resources; and degree of support for evaluations.]
Administrative leadership style is another factor of influence and an illustration of this dynamic interplay. Administrators can assume many different styles, including autocrat, collaborator, and delegator. Administrators who are primarily collaborative, for example, are likely to have a different kind of influence on staff members when conducting an evaluation than autocratic administrators.
Also, organizational structures and processes, both informal and formal, are
important factors with respect to their interplay with decision making (Weissman,
Epstein, & Savage, 1983). In this case, while it is usually a good idea to consult
everyone in an organization who is interested in and affected by an evaluation,
some players will be more important to identify, including the people who formally
oversee the program in question and those who have informal influence regardless
of their formal title. These informal players, for example, could be instrumental in
supporting or undermining an agency’s formal structure. They could be, for example,
lower-​ranked employees, such as a highly invested secretary or a popular and outspoken staff member. All in all, evaluators can commit a serious and possibly fatal
error in an evaluation if they overlook informal stakeholders who may be potentially
central to the success of an evaluation but are excluded from evaluation discussions.
Many other contextual factors are also directly relevant to an evaluation. For
example, are the agency leaders well informed and educated about evaluations, as
well as unequivocal supporters of them? Or are they novices who may be cautious and reluctant to pursue a study that can be risky? What is the
agency’s track record in this regard? Some standard questions of this sort could be
asked of agency leaders at the outset to find out:
• What kinds of expertise does the agency have for conducting a variety of
evaluations?
• How cooperative are staff members, both professional and support staff, in
taking on additional responsibilities such as filling out questionnaires or
searching for client records?
• What’s in it for the administration and direct service staff? Are all motives
openly known and transparent, or do some appear to be covert or hidden?
• Are there reasons staff members may be suspicious of the motives for an
evaluation or reluctant to participate for fear of jeopardizing their jobs?
Several contextual factors could also influence the extent to which the agency will
disseminate the findings of an evaluation and implement its recommendations,
including the adequacy of resources, the degree of desire to bring about a change in direction, and openness to risking the program’s future. More attention
will be given to these various forces in later chapters within the context of specific
topics.
Abide by an Ethical Code
Ethical issues are extremely important to identify when addressing political issues.
The way in which decisions are made or not made should be partially based on an
ethical code such as the NASW Code of Ethics (www.naswdc.org) or the ethical
principles of the AEA (www.eval.org). As social workers and other human service
professionals know, those who participate in research and evaluation are obligated
to follow an ethical code. The NASW Code of Ethics is a basic code required of all
social workers. It obligates an evaluator to be well informed about ethical issues and
well versed in how to implement a variety of measures intended to prevent ethical
problems from occurring. In addition, the ethical principles of the AEA are designed
for professional evaluators specifically conducting evaluation studies. These principles are valuable to consult because they are directed toward issues essential for
a professional evaluator to address (the AEA ethical principles are described in
Appendix A).
Ethical problems include such things as physical and psychological harm to
research participants, invasion of their privacy, and misrepresentation of study
findings. Evaluators are obligated to prevent such ethical problems by implementing a
variety of ethical safeguards, including an informed consent protocol, confidentiality,
and selection of evaluators with appropriate credentials and objectivity. Chapter 3
focuses on a more extensive introduction to many of the ethical concerns that are
evident in evaluations and how to address them. It examines the NASW Code of
Ethics and some of the ethical principles of the AEA, particularly as they pertain to
the ethical obligations of social workers conducting evaluations.
Think Critically
Another important characteristic of evaluations is critical thinking. Critical thinkers
are natural skeptics about how well an evaluation is conducted, whether it is someone
else’s evaluation or one’s own. Gambrill and Gibbs (2017) identify several types of
problems that program providers experience when they fail to be critical thinkers:
• Overlooking the people who may need the services of a program the most;
• Not understanding the larger social forces that influence the ways clients
behave;
• Misclassifying or misdiagnosing clients and their problems;
• Focusing on irrelevant factors that are not important in helping clients make
progress;
• Selecting interventions that are weak or inappropriate;
• Arranging for interventions to continue either too long or not long
enough; and
• Being overly preoccupied with financial profit and neglecting the impact of
such decisions on the clients’ well-​being (especially in for-​profit agencies).
The Council on Social Work Education (CSWE) views critical thinking as essential
to the practice of every social worker. Because of the importance of critical thinking,
CSWE refers to it as an element to be implemented in most of the nine required
social work competencies (CSWE, 2015).
A final note needs to be made about how the characteristics of evaluations are
different from other types of research, particularly social science research conducted
in academic settings. A closer look at the previously described common characteristics of an evaluation provides a helpful way of distinguishing evaluations and other
types of research. First, evaluations are conducted primarily to provide accountability to a funding agency, clients, and the larger community that an intervention
works effectively and efficiently, while social science research is not conducted for this purpose. Second,
an evaluation places major emphasis on the logic model while most social science
research does not. The logic model helps evaluators examine the links between
the clients’ problems, an intervention, and success in helping clients address these
problems. Third, successful evaluations seek the involvement of all the important
stakeholders while social science research may not. Evaluation stakeholders typically
include groups with widely varying perspectives such as clients, regulatory agencies,
and the agency administration overseeing the intervention. Fourth, an evaluation
is continually engaged in a political process that attempts to bring together these
widely different stakeholders so that ideally all their views are considered, and all are
participating in harmony with each other. This is not usually an emphasis of social
science research.
SEVEN STEPS IN CONDUCTING AN EVALUATION
A general approach for conducting an evaluation is introduced here and elaborated
on further in later chapters. The steps of this approach apply to both program and
practice evaluations. The approach involves seven general steps based on a modified version of the steps identified in other evaluation approaches (e.g., Bamberger,
Rugh, & Mabry, 2019; Linfield & Posavac, 2019; York, 2009). Since the word “steps”
is also used in other contexts in various parts of the book, whenever “steps” refers to
this approach, it will be identified as the Seven Evaluation Steps.
Seven Evaluation Steps
Step 1: Identify the Problem or Concern to Be Evaluated.
Step 2: Identify and Explore Ways to Involve Stakeholders.
Step 3: Determine the Purpose of the Evaluation.
Step 4: Plan the Evaluation.
Step 5: Implement the Evaluation.
Step 6: Prepare a Written or Oral Report.
   Step 7: Disseminate the Findings.
Step 1: Identify the Problem or Concern to Be Evaluated
During step 1, the evaluator becomes familiar with the problem or concern that an
evaluation will examine. Some general questions to ask include the following: What
is to be evaluated? Is the program working effectively or are there some problems
that are evident? Similarly, on the practice level, is the intervention with the client
working effectively? Why or why not?
During this step, it is also important to begin gathering information about the
context of the problem. It would be helpful to find out more about some of the pertinent components of the program or practice intervention and to identify the client population that is served, the clients’ problems and needs that the intervention addresses, and the goals of the intervention. The services and goods that are provided to reach these goals are also important to identify and understand.
Step 2: Identify and Explore Ways to Involve Stakeholders
A concurrent step with the information gathering of step 1 is to identify and explore
ways to involve the stakeholders of the program. Stakeholders are the people who
are invested in the intervention in some way, such as representatives of the funding
and regulatory groups that finance and set standards for the intervention and the
administrators and board members of the agency sponsoring the program. Some
stakeholders, especially program directors, are likely to be evaluated based on the
intervention’s performance, so they also have an obvious stake in what happens.
Staff members who deliver the goods and services have an obvious stake in
the intervention as well. Their jobs depend on the program’s survival and may also
depend upon the success of their interventions. In addition, clients who are the
recipients of the intervention and their family members have a vital stake in what
happens to the intervention, as their daily functioning and very survival may depend
on how well it performs. Stakeholders are likely to be different for program and practice interventions. The stakeholders of program interventions tend primarily to be macro stakeholders,
such as members of a board of directors, community advisory boards, and others
in the public sector, while the main stakeholders of practice interventions are often
supervisors, practitioners, and client advocates.
Step 3: Determine the Purpose of the Evaluation
At this point, the evaluator needs to become familiar with the program or practice
interventions to be evaluated and any known problems. Concurrently, relationships
need to be developed with the stakeholders. Also, information needs to be gathered
about who wants the evaluation and why. These discussions can help the evaluator
find out how much the stakeholders know about evaluations, their views and past
experiences with them, and whether they have a narrow or broad understanding
of what an evaluation can be. The evaluator should also use these discussions to highlight the potential contributions of an evaluation, such as program
improvements, new opportunities to help clients, or assistance in making important
decisions. Examples of potential contributions to an evaluation are listed next.
Examples of Contributions and Resources to Consider
• Concerns expressed by stakeholders are highly relevant to clients’ well-being.
• Stakeholders have considerable interest in an evaluation.
• Existing policies of the agency are supportive.
• Financial support is evident.
• An awareness of a problem with a program is evident.
• Openness to examine a problem further is expressed.
• Some understanding and respect for evaluations are expressed.
• Openness is evident to the evaluator as a resource.
• Staff are supportive of an evaluation.
These discussions with stakeholders should not only uncover any contributions and
resources that can support an evaluation, but also any apprehensions or doubts of
stakeholders about evaluations generally. For example, could an evaluation be perceived as risky for some reason? Too costly? Take too much time? Interfere with
program or practice operations? Therefore, during this step, it is also important to
help the stakeholders identify any real or potential constraints such as the following.
Examples of Constraints or Resistant Forces
• Limited time is available to conduct an evaluation.
• Costs appear too high.
• An evaluation is not supportive of existing policies.
• Evaluation focus can be too subjective.
• Fears it will open up the need to change.
• Limits are evident about what can change.
• Politics of the system are complex and unpredictable.
• Evaluator lacks expertise.
• The evaluation would have a problem accessing clients for feedback.
• There is no need to justify such an evaluation to the funding agency.
Step 3 becomes complete when all stakeholders and the evaluator have agreed on
the general purpose of an evaluation. If a commonly agreed-​on purpose for an
evaluation cannot be identified, negotiations would likely be delayed or discontinued until one can be found. Identifying the purpose of a program
evaluation is important as it may lead to keeping, expanding, or eliminating a
program, and a practice evaluation may lead to varying an approach with some
types of clients.
Step 4: Plan the Evaluation
Once a general purpose for an evaluation is agreed on, a plan for conducting an
evaluation follows. Background work is needed at this point if not before. For
example, a literature review is often helpful in finding out more about the problem
that is the focus of the evaluation team. Other reports are also important including
those that provide ideas on evaluation methodologies, pertinent program and practice approaches, and, of course, other evaluations on similar topics. Next, several
aspects of a research design need to be developed, including a set of study questions
to explore and hypotheses to test, the data sources to choose (e.g., clients, staff members), the specific data collection instrument to develop, and a data analysis plan. In addition, a plan to protect the human participants of the study should not be overlooked or
minimized.
All aspects of a plan should involve discussions with the stakeholders so that
there is strong support for it. When appropriate, the plan should be prepared as a
readable written proposal or oral presentation understandable to all stakeholders.
With practice evaluations, it is important to engage the clients in understanding
and participating in the evaluation plan even though this will take additional time
and effort. For example, a goal attainment scale may be used in a practice evaluation
to measure the clients’ progress on their outcomes. In this case, the scale should be
described to the clients initially, and clients should be encouraged to help define
the specific outcome measures that fit their circumstances. Goal attainment scales
and the role that clients can play in developing them are described more fully in
Chapter 9.
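As a brief preview of how such a scale can be scored, consider the minimal Python sketch below. The five-point scale from -2 to +2, with 0 as the expected level, is a common goal attainment scaling convention; the goals and ratings shown are invented for illustration.

```python
# Conventional five-point attainment levels often used in goal attainment scaling:
#   -2 much less than expected, -1 somewhat less, 0 expected level,
#   +1 somewhat more, +2 much more than expected.
client_goals = {  # goals and ratings invented for illustration
    "attends a weekly support group": +1,
    "uses two new coping skills": 0,
    "identifies positive peer influences": -1,
}

average_attainment = sum(client_goals.values()) / len(client_goals)
print(f"Average goal attainment: {average_attainment:+.2f} (0 = expected level)")
```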
Step 5: Implement the Evaluation
The evaluation plan, often referred to as the evaluation design, is now ready to be
implemented. Often its implementation may involve several people in an agency,
such as secretaries searching for client case material, staff members interviewing clients individually or in focus groups, and staff members handing out a questionnaire to clients. In a practice evaluation, staff members are likely to
implement one form or another of a single-​system design. Along with implementing
the data collection effort, quantitative data are coded and entered into a computer program for analysis. Qualitative data are usually prepared for analysis in
narrative form.
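To give a concrete sense of what coding and entering quantitative data can look like, the sketch below codes a few hypothetical Likert-type questionnaire responses with the pandas library. The item names, responses, and coding scheme are invented for illustration.

```python
import pandas as pd

# Hypothetical client-satisfaction responses gathered with a questionnaire:
responses = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "satisfaction": ["agree", "strongly agree", "neutral", "agree"],
})

# Code the text responses numerically so they can be analyzed:
likert_codes = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                "agree": 4, "strongly agree": 5}
responses["satisfaction_coded"] = responses["satisfaction"].map(likert_codes)

print(responses)
print("Mean satisfaction:", responses["satisfaction_coded"].mean())  # 4.0
```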
Step 6: Prepare a Written or Oral Report
Once the study has been completed, preparation of a report of the results follows.
Such reports are designed to address the major questions of stakeholders, usually reflected in the initial purpose worked out in step 3. The report can be oral, written, or
both. Report preparation involves several steps, including organizing, analyzing, and
interpreting the findings so that they are understandable; developing conclusions
and recommendations that are useful and practical; and exploring the use of visual
aids, such as tables and graphs, to assist in communication. Reports of program
evaluations are usually prepared for one set of stakeholders (e.g., funding agencies,
administrators, boards of directors, community groups) and the reports of practice
evaluations for another set (e.g., supervisors, program coordinators, clients).
Step 7: Disseminate the Findings
The last step of an evaluation is to disseminate the findings to stakeholders and
others. Unfortunately, this step is often overlooked, minimized, or left up to whoever wishes to review the findings. The results are likely to be disseminated to several different types of stakeholders, some of whom are obvious, such as the funding
and regulatory agencies and the agency administration. Other stakeholders may
be easily overlooked but are also important, including former and current clients
and relevant community groups. A report can be communicated in many forms—​
including oral or written, comprehensive or brief—​and in varied formats, such as
a technical report, a public meeting, a staff workshop, a series of discussions, and a
one-​page summary for clients and their families.
DEFINING AND CLARIFYING IMPORTANT TERMS
Several important terms need to be defined before going further. They are relevant to answering numerous basic questions like, What is a program and how is
it different from services? What distinguishes programs from the practice of individual workers? What are program evaluations and practice evaluations? How are
they similar and different? Finally, what are evidence-​based interventions? Let’s
consider the basic terms: program, program theory, practice, practice theory,
services, interventions, program evaluation, practice evaluation, and evidence-​
based interventions.
As mentioned earlier in the chapter, a program is a subunit of a social agency
that provides clients with a set of goods and/​or services with common goals. These
goods and services are typically provided to a specific population of clients who
either voluntarily seek them or are required to receive them. A program typically
employs more than one and usually several staff members to provide goods and
services.
Chen (2014) expands on this definition by developing the notion of program
theory. Program theory is expected to encompass two important sets of documentation. First, it provides a descriptive documentation of the goals, outcomes, and
interventions of the program based on the perspectives of various stakeholders.
Second, program theory documents the nature of the causal relationship between the
program interventions and the desired outcomes for the target group of recipients. It
does this by offering research evidence that the proposed program model has been
effective in helping a group of clients with a specific set of characteristics in the past.
Also mentioned earlier, practice (or practice interventions) is a subunit of a
program that provides the services of one worker to one client system. Practice
consists of the helping processes provided by social workers and other staff that
help each client reach their program goals. A social worker’s practice can be offered
to an individual, a family, a small group of clients, an organization, a social policy
area, or a larger community. These helping processes of practice are a major focus
of practice courses in professional social work programs and draw from a broad
range of practice theories, such as generalist problem-​solving, cognitive behavioral,
person-​centered, solution-​focused treatments, and social action. In-​service training
programs of agencies are expected to periodically provide helpful updates on such
practice approaches used in the agency. Practice theory can be described like
program theory. In practice, goals, outcomes, and interventions are all documented
as is the causal relationship between a practice intervention and the desired
outcomes for a specific client.
Example of a Program and the Practice Interventions
of a Home Health Agency
A home health agency often sponsors one overall program, the goals of which are
to help medically challenged clients remain in their own homes independently
and prevent placement in a residential program such as an assisted living facility.
Such a program offers several practice interventions or services to clients who are
homebound, including counseling and referrals, nursing, physical therapy, and
occupational therapy. These services are provided by a team of social workers,
nurses, physical and occupational therapists, and others. They exist to help the
program meet its goals. Home health programs also offer goods, such as medical
   supplies and incontinence products.
Distinguishing between programs and practice is important. For example, if you
were to describe a program to someone unfamiliar with what you do, you would
likely begin by referring to its goals and what it attempts to accomplish for clients.
In contrast, practices are embedded in and largely shaped by a program as mentioned earlier in the chapter. If you begin by describing the specific goals of a
practice intervention with one client, your explanation may appear incomplete
and beg for an explanation of why it exists or what it intends to accomplish in
general.
Because this book provides an emphasis on both programs and practice, the
term intervention is often used to refer to either the entire program or the individual practices of each practitioner. As an example, interventions can be evident in
a recovery program of a substance abuse agency as well as in the individual group
practice of one staff member. In the previous editions of the book, services were also
introduced as a term of importance. Services are the activities that both programs
and one practitioner can offer. Services focus on the processes that help clients
reach their goals. Since services do not distinguish between programs and practice
interventions, think of services as a generic term that can be used interchangeably
with program or practice interventions throughout the text.
Using working definitions of these key concepts (programs, practice, and
interventions), we can define program evaluations and practice evaluations.
A program evaluation is a study of a social program that uses the principles and
methods of scientific research. It concerns itself with the practical needs of an organization, not theoretical issues, and it abides by a professional ethical code. The primary purposes of a program evaluation are to provide accountability to its various
stakeholders and to determine how effective the program is in helping clients.
A practice evaluation is a study of a practitioner’s interventions with a client,
which can be at several different system levels (e.g., individual, group, neighborhood). Like a program evaluation, a practice evaluation uses the principles and
methods of scientific research and abides by a professiona…