Fifteen years ago Trudy Banta and her colleagues surveyed the national landscape for the campus examples that were published in the classic work Assessment in Practice. Since then, significant advances have occurred, including the use of technology to organize and manage the assessment process and increased reliance on assessment findings to make key decisions aimed at enhancing student learning. Trudy Banta, Elizabeth Jones, and Karen Black offer 49 detailed current examples of good practice in planning, implementing, and sustaining assessment that are practical and ready to apply in new settings. This important resource can help educators put in place an effective process for determining what works and which improvements will have the most impact in improving curriculum, methods of instruction, and student services on college and university campuses.




Table of Contents

Cover

Title

Copyright

Dedication

PREFACE

THE AUTHORS

PART ONE: PRINCIPLES OF GOOD PRACTICE IN OUTCOMES ASSESSMENT

CHAPTER ONE: PLANNING EFFECTIVE ASSESSMENT

Engaging Stakeholders

Connecting Assessment to Valued Goals and Processes

Creating a Written Plan

Timing Assessment

Building a Culture Based on Evidence

CHAPTER TWO: IMPLEMENTING EFFECTIVE ASSESSMENT

Providing Leadership

Empowering Faculty and Staff to Assume Leadership Roles for Assessment

Providing Sufficient Resources

Educating Faculty and Staff about Good Assessment Practices

Assessing Processes as Well as Outcomes

Communicating and Using Assessment Findings

CHAPTER THREE: IMPROVING AND SUSTAINING EFFECTIVE ASSESSMENT

Providing Credible Evidence of Learning to Multiple Stakeholders

Reviewing Assessment Reports

Ensuring Use of Assessment Results

Evaluating the Assessment Process

PART TWO: PROFILES OF GOOD PRACTICE IN OUTCOMES ASSESSMENT

CHAPTER FOUR: GOOD PRACTICE IN IMPLEMENTING ASSESSMENT PLANNING

Institutions

Putting Students at the Center of Student Expected Learning Outcomes

Planning Assessment in Student Affairs

E Pluribus Unum: Facilitating a Multicampus, Multidisciplinary General Education Assessment Process

Triangulation of Data Sources in Assessing Academic Outcomes

Assurance of Learning Initiative for Academic Degree Programs

CHAPTER FIVE: GENERAL EDUCATION PROFILES

Institutions

Assessing Critical Thinking and Higher-Order Reasoning in Service-Learning Enhanced Courses and Course Sequences

Improvement in Students’ Writing and Thinking through Assessment Discoveries

Assessing Learning Literacies

Using Direct and Indirect Evidence in General Education Assessment

Institutional Portfolio Assessment in General Education

Faculty Ownership: Making a Difference in Systematic General Education Assessment

CHAPTER SIX: UNDERGRADUATE ACADEMIC MAJORS PROFILES

Institutions

Assessing Scientific Research Skills of Physics Majors

E-Portfolios and Student Research in the Assessment of a Proficiency-Based Major

Integrating Student and Program Assessment with a Teacher Candidate Portfolio

CHAPTER SEVEN: FACULTY AND STAFF DEVELOPMENT PROFILES

Institutions

From Assessment to Action: Back-Mapping to the Future

Faculty Learning Communities as an Assessment Technique for Measuring General Education Outcomes

Assessing Course Syllabi to Determine Degree of Learner-Centeredness

Implementing Annual Cycles for Ongoing Assessment of Student Learning

CHAPTER EIGHT: USE OF TECHNOLOGY PROFILES

Institutions

Improving First-Year Student Retention and Success through a Networked Early-Warning System (NEWS)

Organizing the Chaos: Moving from Word to the Web

Multifaceted Portfolio Assessment: Writing Program Collaboration with Instructional Librarians and Electronic Portfolio Initiative

Using Surveys to Enhance Student Learning, Teaching, and Program Performance of a Three-Week Winter Session

CHAPTER NINE: PROGRAM REVIEW PROFILES

Institutions

Ongoing Systematic Assessment: One Unit at a Time

Connecting Assessment to Program Review

Integrating Assessment, Program Review, and Disciplinary Reports

A New Plan for College Park Scholars Assessment

Assessing Diversity and Equity at a Multicampus Institution

CHAPTER TEN: FIRST-YEAR EXPERIENCES, CIVIC ENGAGEMENT OPPORTUNITIES, AND INTERNATIONAL LEARNING EXPERIENCES PROFILES

Institutions

Organization

Using Assessment Data to Improve Student Engagement and Develop Coherent Core Curriculum Learning Outcomes

Using Assessment to Enhance Student Resource Use, Engagement, and Connections in the First Year

A Mixed-Method, Longitudinal Approach to Assessing Civic Learning Outcomes

Assessing International Learning Using a Student Survey and E-Portfolio Approach

CLASSE: Measuring Student Engagement at the Classroom Level

CHAPTER ELEVEN: STUDENT AFFAIRS PROFILES

Institutions

Creating and Implementing a Comprehensive Student Affairs Assessment Program

Career Services Assessment Using Telephone and Web-Based Surveys

Assessing Satisfaction and Use of Student Support Services

Assessing Educational Sanctions That Facilitate Student Learning with First-Time Alcohol Policy Violators

CHAPTER TWELVE: COMMUNITY COLLEGES PROFILES

Institutions

Mission-Based Assessment to Improve Student Learning and Institutional Effectiveness

Living Rubrics: Sustaining Collective Reflection, Deliberation, and Revision of Program Outcomes

General Education Assessment Teams: A GREAT Project

CHAPTER THIRTEEN: GRADUATE PROGRAMS PROFILES

Institutions

Using Reflective Learning Portfolio Reviews for Master’s and Doctoral Students

Making Learning Outcomes Explicit through Dissertation Rubrics

Cross-Discipline Assessment of MBA Capstone Projects

Measuring the Professionalism of Medical Students

CHAPTER FOURTEEN: GOOD PRACTICE IN IMPROVING AND SUSTAINING ASSESSMENT

Institutions

Peer Review of Assessment Plans in Liberal Studies

Assessment of Student Academic Achievement in Technical Programs

Assessing Achievement of the Mission as a Measure of Institutional Effectiveness

Linking Learning Outcomes Assessment with Program Review and Strategic Planning for a Higher-Stakes Planning Enterprise

Building a Context for Sustainable Assessment

RESOURCES

RESOURCE A: INSTITUTIONAL PROFILES BY INSTITUTION

RESOURCE B: INSTITUTIONAL PROFILES BY CATEGORY

RESOURCE C: PROFILED INSTITUTIONS BY CARNEGIE CLASSIFICATION

RESOURCE D: CONTRIBUTORS OF PROFILES INCLUDED IN THEIR ENTIRETY

REFERENCES

INDEX

End User License Agreement

List of Tables

CHAPTER ONE: PLANNING EFFECTIVE ASSESSMENT

TABLE 1.1. PLANNING FOR LEARNING AND ASSESSMENT.

CHAPTER FIVE: GENERAL EDUCATION PROFILES

TABLE 5.1. INFORMATION LITERACY SCORES.

CHAPTER SIX: UNDERGRADUATE ACADEMIC MAJORS PROFILES

TABLE 6.1. RESULTS FROM THE ASSESSMENT

CHAPTER SEVEN: FACULTY AND STAFF DEVELOPMENT PROFILES

TABLE 7.1. RUBRIC FOR DETERMINING DEGREE OF LEARNING-CENTEREDNESS IN COURSE SYLLABI.

CHAPTER EIGHT: USE OF TECHNOLOGY PROFILES

TABLE 8.1. INFORMATION LITERACY SKILLS, 2002–2007: SUMMARY OF PAPERS RECEIVING A RATING OF “2” OR HIGHER.

CHAPTER NINE: PROGRAM REVIEW PROFILES

TABLE 9.1. FIVE MOST COMMONLY CITED STRENGTHS—ACADEMIC UNIT REVIEWS.

TABLE 9.2. FIVE MOST COMMONLY CITED CHALLENGES—ACADEMIC UNIT REVIEWS.

TABLE 9.3. FIVE MOST COMMONLY CITED STRENGTHS—EDUCATIONAL SUPPORT UNIT REVIEWS.

TABLE 9.4. SEVEN MOST COMMONLY CITED CHALLENGES—EDUCATIONAL SUPPORT UNIT REVIEWS.

TABLE 9.5. BEST PRACTICES—COLLEGE PARK SCHOLARS.

TABLE 9.6. COLLEGE PARK SCHOLARS ASSESSMENT PLAN.

CHAPTER FOURTEEN: GOOD PRACTICE IN IMPROVING AND SUSTAINING ASSESSMENT

TABLE 14.1. IONA COLLEGE—MISSION KPI RESULTS: COMPARISON AND TRENDS 2004–2007.

List of Illustrations

CHAPTER FIVE: GENERAL EDUCATION PROFILES

FIGURE 5.1. MULTILEVEL AND MULTIPHASE PLAN FOR ENGAGING FACULTY AND ASSESSING THE FOUR LITERACIES.


Designing Effective Assessment

Principles and Profiles of Good Practice

Trudy W. Banta

Elizabeth A. Jones

Karen E. Black

Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved.

Published by Jossey-Bass, A Wiley Imprint, 989 Market Street, San Francisco, CA 94103-1741—www.josseybass.com

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions.

Readers should be aware that Internet websites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

Jossey-Bass also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data

Banta, Trudy W.

Designing effective assessment : principles and profiles of good practice / Trudy W. Banta, Elizabeth A. Jones, Karen E. Black. p. cm.

Includes bibliographical references and index.

ISBN 978-0-470-39334-5 (pbk.)

1. Universities and colleges–United States–Examinations. 2. Education, Higher–United States–Evaluation. 3. Educational tests and measurements–United States. 4. Education, Higher–United States–Evaluation–Case studies. I. Jones, Elizabeth A. II. Black, Karen E. III. Title.

LB2366.2.B36 2009 378’.01–dc22

2009009809

THE JOSSEY-BASS HIGHER AND ADULT EDUCATION SERIES

To Holly, Logan, and T. J.
Father, Mother, and Debbie
Marie, Mary, Earl, Joe, Mary Anne, Beth, Ryan, Brett, Claire, and Moses

And special thanks to Shirley Yorger

PREFACE

“Please send me some examples of assessment in general education.” “I need examples of assessment in engineering and business.” “How can we encourage faculty to engage in assessment?” “Can you name ten institutions that are doing good work in assessment?” These are the questions colleagues around the globe send us via e-mail or ask us at conferences or during campus visits. These are the questions that motivated the three authors of this book to develop its content on outcomes assessment in higher education.

Two of us—Karen Black and Trudy Banta—were involved in a similar project in the mid-1990s. With colleagues Jon P. Lund and Frances W. Oblander, we edited Assessment in Practice: Putting Principles to Work on College Campuses (Banta, Lund, Black, & Oblander, 1996). That book began with chapters on each of ten principles of good practice that had emanated from assessment experience prior to 1995 and continued with a section containing 86 short case studies of campus assessment practice categorized by the focus of assessment in each, such as general education, student development, or classroom assessment. The principles and the cases in that 1996 publication are as relevant and useful today as they were then. In fact, two of us are still using the book as a reference and some of the cases as examples in the courses we teach for students enrolled in doctoral programs in higher education. Nevertheless, we decided that a new book organized similarly would give us even more examples to share when we are asked questions like those noted earlier.

First we posted a request on the ASSESS listserv for brief profiles of good practice in assessment. In addition, we sent some 800 e-mail requests to individuals who had contributed to Assessment in Practice, or to the bimonthly Assessment Update, or who had presented at the Assessment Institute in Indianapolis in recent years. We received approximately 180 expressions of interest in contributing a profile. We then wrote to these 180 individuals and asked them to prepare a 1,500-word profile using an outline we provided.

The outline we used for case studies for Assessment in Practice contained just four headings to guide authors in developing their narratives: Background and Purpose (of the Assessment Activity), Method, Findings and Their Use, and Success Factors. Now that more than a decade has passed, we wanted to know if the use of our findings had had a noticeable or measurable effect on practice, and more important, on student learning and success. We also were interested in details such as the years of implementation, and the cost of the assessment initiatives. Therefore, our outline for authors of profiles for this book contains the following headings: Background and Purpose(s) of Assessment, Assessment Method(s) and Year(s) of Implementation, Required Resources, Findings, Use of Findings, Impact of Using the Findings, Success Factors, and Relevant Institutional Web Sites Pertaining to This Assessment Practice.

We were surprised and pleased that a large proportion of the early expressions of interest we received led to the development of full profiles. By our deadline we had received 146 of these. After reviewing them we wrote Part One of this volume, illustrating the principles of good practice in assessment that we consider essential with examples from some of the 146 profiles. We used as the primary reference for the principles a section titled “Characteristics of Effective Outcomes Assessment” in Building a Scholarship of Assessment (Banta & Associates, 2002). That listing was based on work by Hutchings (1993); Banta and Associates (1993); Banta et al. (1996); American Productivity and Quality Center (1998); and Jones, Voorhees, and Paulson (2002).

For Part Two of this volume we selected for inclusion in their entirety 49 of the most fully developed of the profiles we had received. As in Assessment in Practice, we placed each of the profiles in a category based on its primary focus, such as general education, academic major, or program review. The profiles in each category are preceded by a narrative that explains their most important features.

Initially we were quite frustrated by the fact that although we had received so many good profiles, we were able to use only a third of them due to space limitations. But then, after securing permission, we decided to list in Resource A all of the institutions and authors from the collection of 146 profiles. In almost every case we have provided a Web site that may be consulted for further information about the assessment practices under way at the institution identified. In Resource B all the profiles are categorized to make it easier for readers to find the type of assessment (general education or graduate programs) they seek. Resource C presents a list of institutions by Carnegie Classification for the 49 profiles used in their entirety. Resource D contains the titles of the authors of the 49 full profiles.

The institutional profiles of assessment practice that we received represent a range of public and private institutions, from community colleges to research universities. Representation is also national in scope: profiles were received from institutions in California and Massachusetts, Florida and Oregon, and many states in between. As is clear from reading the “Background and Purpose” sections of the profiles, accreditation, both regional and disciplinary, has been a major driving force behind assessment at many of these institutions. State requirements for public institutions also played a role in some of the examples.

As we know so well, state and national legislators and federal policy makers are calling on colleges and universities to furnish concrete evidence of their accountability. Many of our constituents believe that standardized test scores will provide the evidence of student learning that is needed, and tests of generic skills such as writing and critical thinking are being suggested as the sources of such evidence. The profiles we have reviewed will disappoint decision makers in this regard. In almost all cases where standardized tests of generic skills have been used at these institutions, the test scores are not being reported as a single source of evidence of student learning. Faculty who have studied the scores over several years with the intention of using them to provide direction for improvements have determined that test scores alone are not adequate to the task of defining what students learn in college, nor are they illuminating and dependable guides for making decisions about improvements in curriculum and methods of instruction that will enhance student learning. Where standardized tests of generic skills have been tried, in most cases they have been supplemented with indirect measures such as questionnaires and focus groups and/or faculty-developed direct measures such as classroom tests or capstone projects.

Few of these assessment profiles contain the kind of quantitative data that could be reported simply and grasped easily by external audiences. Moreover, the information in the section “Impact of Using Findings” is seldom expressed in measurable terms. But we have assembled a wealth of information we can use to respond to that oft-asked question of how to engage faculty in assessment. And the evidence of student learning, engagement, and satisfaction that has been amassed has, in fact, been used to add courses and other learning experiences to the curriculum, to educate faculty about better ways to teach, and to improve student support services such as advising. Faculty time and administrative leadership are the chief resources identified as critical to the success of assessment initiatives.

We sincerely hope that this book will be regarded by faculty, staff, and administrators as the rich resource of principles and profiles of good assessment practice that we envision.

September 2008

Trudy W. Banta
Elizabeth A. Jones
Karen E. Black

THE AUTHORS

Trudy W. Banta is professor of higher education and senior advisor to the chancellor for academic planning and evaluation at Indiana University–Purdue University Indianapolis. She has developed and coordinated 21 national conferences and 15 international conferences on the topic of assessing quality in higher education. She has consulted with faculty and administrators in 46 states, Puerto Rico, South Africa, and the United Arab Emirates and has by invitation addressed national conferences on outcomes assessment in Canada, China, England, France, Germany, Spain, and Scotland. Dr. Banta has edited 15 published volumes on assessment, contributed 26 chapters to published works, and written more than 200 articles and reports. She is the founding editor of Assessment Update, a bimonthly periodical published since 1989. She has been recognized for her work by the American Association for Higher Education, American College Personnel Association, American Productivity and Quality Center, Association for Institutional Research, National Council on Measurement in Education, and National Consortium for Continuous Improvement in Higher Education.

Elizabeth A. Jones is professor of higher education leadership at West Virginia University (WVU). She has conducted assessment research supported by the National Postsecondary Education Cooperative that resulted in the publication of two books.

She served as the principal investigator of a general education assessment project supported by the Fund for the Improvement of Postsecondary Education. She has chaired the general education assessment committee at WVU and offered numerous professional development seminars to both student affairs staff and faculty members. Dr. Jones has published numerous articles pertaining to assessment and has presented at national conferences. She is currently the editor of the Journal of General Education published by the Pennsylvania State University Press.

Karen E. Black is director of program review at Indiana University–Purdue University Indianapolis where she teaches in the organizational leadership and supervision department and is an adjunct faculty member in University College. She is managing editor of Assessment Update.

PART ONE: PRINCIPLES OF GOOD PRACTICE IN OUTCOMES ASSESSMENT

We introduce this volume with a set of principles for good practice in assessing the outcomes of higher education that have been drawn from several sources, principally from the “characteristics of effective outcomes assessment” in Building a Scholarship of Assessment (Banta & Associates, 2002, pp. 262–263). This collection of principles is by no means exhaustive, but it covers many of the components considered by practitioners to be essential to good practice. The principles are presented in three groups, each associated with a phase of assessment: first planning, then implementing, and finally improving and sustaining assessment initiatives. Current literature is cited to provide a foundation for the principles, and brief excerpts from some of the 146 profiles submitted for this book are used to illustrate them.

In Chapter 1, “Planning Effective Assessment,” we present the following principles as essential:

Engaging stakeholders

Connecting assessment to valued goals and processes

Creating a written plan

Timing assessment

Building a culture based on evidence

In Chapter 2, “Implementing Effective Assessment,” these principles are identified and discussed:

Providing leadership

Empowering faculty and staff to assume leadership roles for assessment

Providing sufficient resources

Educating faculty and staff about good assessment practices

Assessing processes as well as outcomes

Communicating and using assessment findings

In Chapter 3, “Improving and Sustaining Effective Assessment,” the following principles are described and illustrated:

Providing credible evidence of learning to multiple stakeholders

Reviewing assessment reports

Ensuring use of assessment results

Evaluating the assessment process

CHAPTER ONE: PLANNING EFFECTIVE ASSESSMENT

Effective assessment doesn’t just happen. It emerges over time as an outcome of thoughtful planning, and in the spirit of continuous improvement, it evolves as reflection on the processes of implementing and sustaining assessment suggests modifications.

Engaging Stakeholders

A first step in planning is to identify and engage appropriate stakeholders. Faculty members, academic administrators, and student affairs professionals must play principal roles in setting the course for assessment, but students can contribute ideas and so can trustees, employers, and other community representatives. We expect faculty to set broad learning outcomes for general education and more specific outcomes for academic majors. Trustees of an institution, employers, and other community representatives can review drafts of these outcomes and offer suggestions for revision based on their perspectives regarding community needs. Student affairs professionals can comment on the outcomes and devise their own complementary outcomes based on plans to extend learning into campus environments beyond the classroom. Students have the ability to translate the language of the academy, where necessary, into terms that their peers will understand. Students also can help to design data-gathering strategies and instruments as assessment moves from the planning phase to implementation. Finally, regional accreditors and national disciplinary and professional organizations contribute ideas for the planning phase of assessment. They often set standards for assessing student learning and provide resources in the form of written materials and workshops at their periodic meetings.

Connecting Assessment to Valued Goals and Processes

Connecting assessment to institution-wide strategic planning is a way to increase the perceived value of assessment. Assessment may be viewed as the mechanism for gauging progress on every aspect of an institution’s plan. In the planning process the need to demonstrate accountability for student learning may become a mechanism for ensuring that student learning outcomes, and their assessment, are included in the institutional plan. However assessment is used, plans to carry it out must be based on clear, explicit goals.

Since 1992 assessment of progress has been one of the chief mechanisms for shaping three strategic plans at Pace University (Barbara Pennipede and Joseph Morreale, see Resource A, p. 289). In 1997 the success of the first 5-year plan was assessed via a survey of the 15 administrators and 10 faculty leaders who had been responsible for implementing the plan. In 2001, in addition to interviews with the principal implementers, other faculty, staff, and students, as well as trustees, were questioned in focus groups and open meetings and via e-mail.

By 2003 the Pace president had decided that assessment of progress on the plan needed to occur more often—annually rather than every fifth year. Pace faculty and staff developed a strategic plan assessment grid, and data such as student performance on licensing exams, participation in key campus programs, and responses to the UCLA freshman survey were entered in appropriate cells of the grid to be monitored over time.

Likewise, at Iona College 25 dashboard indicators are used to track progress on all elements of Iona’s mission (Warren Rosenberg, see p. 262). Iona’s Key Performance Indicators (KPIs) include statistics supplied by the institutional research office on such measures as diversity of the faculty and student body (percentages of female and nonwhite constituents), 6-year graduation rates, and the percentage of graduates completing internships. Student responses to relevant items on the National Survey of Student Engagement (NSSE) are used to monitor progress toward the mission element stating, “Iona College graduates will be sought after because they will be skilled decision-makers … independent thinkers … lifelong learners … adaptable to new information and technologies.”

According to Thomas P. Judd and Bruce Keith (see p. 46), “the overarching academic goal” that supports the mission of the U.S. Military Academy is this: “Graduates anticipate and respond effectively to the uncertainties of a changing technological, social, political, and economic world.” This broad goal is implemented through ten more specific goals such as ensuring that graduates can think and act creatively, recognize moral issues and apply ethical considerations in decision making, understand human behavior, and be proficient in the fundamentals of engineering and information technology. Each of these goals yields clear, explicit statements of student outcomes. Faculty at West Point set performance standards for each outcome and apply rubrics in assessing student work. The ten goals provide guidance for the development of 30 core courses that are taken by all students at the Military Academy.

Outcomes assessment cannot be undertaken solely for its own sake. Assessment that spins in its own orbit, not intersecting with other processes that are valued in the academy, will surely fail the test of relevance once it is applied by decision makers. Assessment will become relevant in the eyes of faculty and administrators when it becomes a part of the following: strategic planning for programs and the institution; implementation of new academic and student affairs programs; making decisions about the competence of students; comprehensive program (peer) review; faculty and professional staff development; and/or faculty and staff reward and recognition systems.

Creating a Written Plan

As Suskie (2004, p. 57) puts it, planning for assessment requires “written guidance on who does what when.” Which academic programs and student support or administrative units will be assessing which aspects of student learning or components of their programs each year? Who will be responsible for each assessment activity?

A matrix can be helpful in charting progress. As illustrated in Table 1.1, we first set a broad goal or learning outcome in which we are interested, then develop aspects of the goal in the form of specific measurable objectives. A third consideration is where the objective will be taught and learned. Then how will the objective be assessed? What are the assessment findings, and how should they be interpreted and reported? How are the findings used to improve processes, and what impact do the improvements have on achieving progress toward the original goal? Since 1998, a matrix similar to that in Table 1.1 has been used in assessment planning and reporting by faculty and staff in individual departments and offices at Indiana University–Purdue University Indianapolis (see www.planning.iupui.edu/64.html#07).

TABLE 1.1. PLANNING FOR LEARNING AND ASSESSMENT.

1. What general outcome are you seeking?

2. How would you know it (the outcome) if you saw it? (What will the student know or be able to do?)

3. How will you help students learn it? (in class or out of class)

4. How could you measure each of the desired behaviors listed in #2?

5. What are the assessment findings?

6. What improvements have been made based on assessment findings?

7. What has been the impact of improvements?

Walvoord (2004) has provided a useful set of standards for judging an effective assessment plan. She envisions the plan as a written document that

embeds assessment in high-stakes and high-energy processes.

considers audiences and purposes.

arranges oversight and resources.

articulates learning goals.

incorporates an assessment audit of measures already in place and how the data are used in decision making.

includes steps for improving the assessment process.

includes steps designed to improve student learning (p. 11).

The assessment plan at St. Norbert College embodies these standards. It was developed with support from a Title III Strengthening Institutions Grant after insufficient progress in implementing assessment was identified as “an urgent institutional need” (Robert A. Rutter, see Resource A, p. 290). College administrators established the Office of Institutional Effectiveness and the assessment committee was expanded to include campuswide representation. The assessment committee produced the “Plan for Assessing Student Learning Outcomes at St. Norbert College,” which was subsequently endorsed by every division of the college as well as the Student Government Association. The institution’s mission statement was revised to include student learning outcomes, a comprehensive review of the general education program resulted in a continuous evaluation process that repeats on a four-year cycle, and a rigorous program review process was implemented for academic units. As a result of assessing learning outcomes in general education and major fields, general education course offerings in some areas have been refocused, major and minor programs have been reviewed and improved, a few programs have been terminated, new strategies to support and retain students have been implemented, and a student competence model in student life has been developed.

Timing Assessment

Timing is a crucial aspect of planning for assessment. Ideally, assessment is built into strategic planning for an institution or department and is a component of any new program as it is being conceived. If assessment must be added to a program or event that is already under way, time is needed to convince the initiative’s developers of the value of assessment for improving and sustaining their efforts. Finally, because effective assessment requires the use of multiple methods, it is not usually resource-efficient to implement every method right away or even every year. A comprehensive assessment plan will include a schedule for implementing each data-gathering method at least once over a period of three to five years.

At the University of Houston main campus every academic and administrative unit must submit an institutional effectiveness plan each year. Institutional research staff assist faculty with program reviews, surveys, and data analysis. Part-time and full-time assessment professionals are embedded in the colleges to provide day-to-day support. Libby Barlow (see Resource A, p. 293) describes the evolution of the current plan as slow, but asserts that “genuine assessment … takes time to take root. Higher education is a slow ship to turn … so pushing faster than faculty are willing to go will inevitably cause backlash and be counterproductive. Time has allowed us to go through several structures to discover what would work.”

Building a Culture Based on Evidence

Outcomes assessment can be sustained only if planning and implementation take place in an atmosphere of trust and within a culture that encourages the use of evidence in decision making. Bresciani (2006) notes the following characteristics of such an environment:

Key institutional leaders must demonstrate that they genuinely care about student learning issues.

Leaders must create a culture of trust and integrity through consistent actions that demonstrate a commitment to ethical and evidence-based decision-making.

Connections must be established between formative and summative assessment and between assessment for improvement and assessment for accountability.

Curriculum design, pedagogy, and faculty development must be connected to delivery and evaluation of student learning.

Faculty research and teaching must be connected so that they complement each other in practice and in the campus reward structure (pp. 144–146).

At Agnes Scott College the faculty-staff Committee on Assessing Institutional Effectiveness recommended that the president integrate a report on assessment activities in the template for annual reports that all academic and administrative units must submit. Laura Palucki Blake (see Resource A, p. 280) believes this integration of assessment in a report long expected of each unit helps to create a positive culture for assessment. If the president expects it, assessment must be important. Moreover, because each vice president sees the reports from his or her units, assessment evidence takes on added importance in decision making at Agnes Scott.

In subsequent sections of this volume we will describe additional characteristics of the culture in which assessment can thrive.

CHAPTER TWO: IMPLEMENTING EFFECTIVE ASSESSMENT

The most carefully crafted plans will not produce desired results if not implemented in good faith by appropriate people who have the proper knowledge and skills and who are supported by organizational leaders. Assessment scholars (Walvoord, 2004; Suskie, 2004; Palomba & Banta, 1999) have written entire books on specific ways to conduct assessment. Each has offered sound general and step-by-step advice. These authors provide evidence that key principles undergirding successful implementation include providing knowledgeable and effective leadership, with opportunities for faculty and staff development; emphasizing that assessment is essential to learning, and therefore everyone’s responsibility; educating faculty and staff about good assessment practices; providing sufficient resources to support assessment; and devolving responsibility for assessment to the unit level. We expand on several of these principles in the paragraphs below.

Providing Leadership

Leadership at all levels is critical for successful assessment programs (Maki, 2004; Suskie, 2004; Peterson & Vaughn, 2002). Academic leaders—including presidents, provosts, deans, department chairs, and leaders in student affairs—must be public advocates for assessment and provide appropriate leadership as well as support for the faculty and staff closest to the assessment process. Through public and private statements and actions, these leaders can enhance the likelihood that the assessment process will be valued and sustained. Such leaders often foster innovations by providing meaningful incentives for participants. Leaders should clearly articulate the need for and importance of a credible and sustainable student outcomes assessment process, but faculty and staff also must commit time and talent to the process.

The effort to revitalize a dormant assessment process at the University of Central Florida has succeeded first and foremost because of the commitment and support of the president and senior administrators. The president’s sustained attention to the question of how the institution can do better has produced a stronger assessment program and ultimately led to external validation through successful accreditation visits (Julia Pet-Armacost and Robert L. Armacost, see Resource A, p. 293).

Empowering Faculty and Staff to Assume Leadership Roles for Assessment

Faculty and staff routinely take on campuswide and department-level leadership roles—for example, by leading assessment committees or by joining formal or informal research or practitioner groups to discuss and analyze data and to encourage and offer support for their colleagues. Faculty are involved in the design and implementation of student learning activities and the curriculum and thus are the most knowledgeable about goals for student learning in these areas. Likewise, student affairs professionals and advisors are the experts in setting student learning goals for campus activities and advising. All of these individuals must play critical leadership roles in assessing the outcomes of these activities at both the campus level and within colleges, schools, divisions, and departments.

Although leadership is imperative at all levels, assessment has the most impact when responsibility for carrying out assessment resides primarily at the unit level. Because unit faculty and staff have developed the goals for student learning, they must assess student achievement of those goals. The learning that takes place in the process of assessing the degree to which goals are achieved is most useful at the unit level where the principals can take that understanding and apply it in improving curriculum and instruction. Receiving a report from a central office is informative, but results take on new meaning when the persons responsible for the program or process engage in assessment design, implementation, and analysis. And regardless of who collects and analyzes the data, actions based on assessment findings must be taken at the unit level. If individuals in a unit are to embrace the responsibility for taking the action, they must own the assessment process.

Central assessment or institutional research offices can provide leadership not only by collecting and analyzing data and reporting results but also by leading processes. In addition, many academic units such as colleges of business or colleges of education have a staff member or faculty member serving as a full-time assessment leader. At St. Cloud State University, the Assessment Peer Consulting Program, which trains peer consultants to assist units engaged in assessment, is led by staff in the Assessment Office (James Sherohman, see Resource A, p. 290). Based on the strengths of the consultants and the nature of the request, staff assign two campus consultants to assist each unit seeking help with an assessment process. When the work is finished, the requesting unit provides an evaluation of the facilitation process. Sherohman reports that this process has strengthened individual unit assessment processes and has resulted in greater assessment capacity throughout the campus.

Ownership by faculty and staff in learning communities such as Hocking College’s Success Skills Integration project has been enhanced by their participation in the process as they struggle to find suitable metrics for measuring student learning in general education courses. As a result of this struggle, faculty are looking for more varied learning opportunities for students. Success of long-term faculty and staff initiatives in general education such as the one at Hocking is attributed to the key roles these individuals have played in developing, implementing, and assessing the program (Judith Maxson and Bonnie Allen Smith, see p. 258).

Providing Sufficient Resources

In a national survey of institution leaders and an extensive literature review, Peterson, Einarson, Augustine, and Vaughan (1999) report that assessment proponents argue for the commitment of resources to assessment initiatives. This comprehensive study, based on nearly 1,400 responses (from approximately 2,500 questionnaires distributed) from institutions across the country, found that 49 percent of institutions had established budget allocations “to support their student assessment activities” (p. 94). However, the commitment varied greatly by institution type: baccalaureate institutions were the most likely to have explicit budget allocations, and research universities the least likely to do so.

In addition to the traditional budget allocations for staff time and relevant materials, leaders must provide resources for developing appropriate methods, giving faculty and staff opportunities to hone their assessment skills, and rewarding those who engage in assessment, whether that be through the traditional promotion and tenure process and staff advancement or through other means, such as assessment grants or awards. Faculty and staff can contribute to the resource base by competing for external grants or awards.

Obtaining external grants has proven to be a useful way to launch an assessment program, but sustaining the program with soft money is risky and should be viewed as a temporary measure. According to Robert A. Rutter (see Resource A, p. 290) federal grants such as the Title III funding received by faculty at St. Norbert College can provide interim support until permanent resources are available for infrastructure. In addition, such funds can be used for faculty development in the form of conference attendance. Partly as a result of what faculty have learned at national meetings, the assessment activity at St. Norbert has matured, as evidenced, for example, by the revisioning of the general education program and its assessment.

A grant from the Bush Foundation was used to fund a longitudinal study at the University of North Dakota. Kelsch and Hawthorne (see Resource A, p. 294) report that these funds were used to provide stipends to individuals to interview students and transcribe their comments, then participate in data analysis days during which faculty considered implications of the data and planned the next year’s interviews. During the interviews, students were asked how they experienced the general education curriculum and their learning. Faculty were assigned 10 to 12 students each to interview and were paid $1,000 to $1,500 per year; student participants were given $25 per interview.

Educating Faculty and Staff about Good Assessment Practices

To help faculty and staff understand the potential range of effective assessment practices and how to implement them, many colleges and universities offer special programming through a center for teaching and learning or a faculty-staff development office. Though most of the profiles addressing professional development in this book are focused on academic affairs, it is crucial to provide similar programming for student affairs leaders and staff. Such programming can be designed as an integrated set of learning experiences that take place over several semesters. Aloi, Green, and Jones (2007) discuss the specific nature of six professional development seminars that were offered to all student affairs leaders and staff at West Virginia University. These seminars helped student affairs units develop learner-centered assessment plans. A significant challenge to leaders of professional development initiatives that involve planning and implementing assessment processes is sustaining the programs’ effectiveness. Research suggests that one-time, single-session workshops or interventions often have little effect on behavior (Licklider, Schnelker, & Fulton, 1997).

Creating development opportunities for instructors is difficult without knowing what types of help faculty need to assess student outcomes. At Widener University, a special task force was appointed to develop and conduct a survey of faculty needs. The results indicated that the following areas needed attention: “developing student-centered learning outcomes, creating assessment criteria, reporting results, and using results to improve teaching and learning” (Brigitte Valesey, see p. 128).

Needs assessments like the one used at Widener can help academic leaders identify which assessment topics need attention and suggest how to offer educational opportunities for faculty. Topics with which most faculty need assistance include how to write student-centered learning outcomes, how to choose the best assessment methods, and how to interpret and use the results of assessment to make targeted improvements.

Faculty learning communities provide an example of a more sustained initiative that may have a greater impact on instructors. In learning communities, instructors typically work together for a semester or more on a specific project. At Texas Christian University, several campus units provided funding to support the creation of six faculty learning communities (FLCs), each representing a part of the core general education curriculum. The FLCs are designed to: “(1) create and maintain appropriate assessment strategies for the category, (2) share the results of the assessment process with faculty who teach in that category, and (3) enhance discussion on teaching within that particular core category” (Catherine Wehlburg, see p. 114).

Ideally faculty development opportunities are provided during the entire assessment cycle—from the very beginning as plans are developed, through the implementation of assessment and interpretation of results, to understanding how to use the results to make improvements.

Faculty members at the University of Northern Iowa conceptualized a professional development plan that addressed the entire assessment process. Developing clear and measurable learning outcomes is an essential early step in that process, and the linkages between program-level learning outcomes and individual course-level outcomes can then be traced through curriculum maps. Faculty initially were offered curriculum mapping workshops “to identify gaps and redundancies in the program and improve the articulation of program outcomes across all segments” (Barry Wilson, see p. 111). These workshops focused “primarily on articulating learning outcomes for teaching candidates in the areas of diversity, assessment of learning, and classroom management, which had been identified as in need of improvement in a recent accreditation visit.” A second series of workshops at Northern Iowa oriented faculty to assessing student learning outcomes at the course level. In the final wave of professional development, the provost canceled classes for an entire day so that faculty and administrators could devote time to the study and interpretation of data, and then develop action plans for change.

Joanne M. Crossman tells us that faculty at Johnson and Wales University use multiple approaches to professional development in the Master of Business Administration program (see p. 243). Senior faculty formally mentor full-time and adjunct instructors, helping them understand how to teach courses and measure student learning in alignment with program learning outcomes. Faculty can participate in workshops that assist them in designing and using rubrics and in applying the criteria consistently to increase interrater reliability. In addition, faculty create portfolios to document their assignments and rubrics. The rubrics make faculty intentions very explicit and public so that students gain a better understanding of key expectations for individual courses.

Cognitive peer coaching is another strategy wherein faculty colleagues form pairs to improve instruction and assessment over a sustained period of time. Each pair enters into a formal written contract in which partners agree how they will help each other. Faculty at Southern Illinois University-Edwardsville have used this approach and have engaged in: “direct observation of class meetings (including pre- and post-observation meetings); Group Instructional Feedback Techniques (GIFTs, including pre- and post-GIFT meetings); review of syllabi, assignments, exams, and other course materials, with special attention paid to relevance to course objectives; and review of student work samples and grading policies” (Andy Pomerantz and Victoria Scott, see Resource A, p. 291).

The preparation and education of faculty and staff to consider and plan assessment is a crucial element of the process of implementing assessment (Jones, 2002). As leaders thoughtfully plan and develop a series of ongoing professional development learning experiences, participating instructors and staff learn how to conceptualize new ideas and receive constructive feedback from their peers regarding needed improvements.

Assessing Processes as Well as Outcomes

If the processes that lead to student learning outcomes are not examined, those outcomes cannot truly be improved; measuring a desired outcome will do little without a look at the processes that produced it. As Banta (Banta & Associates, 2002) has reminded us, “a test score alone will not help us improve student learning” (p. 273). What students and faculty do makes a difference. Thus, student engagement has been described as a key to student success (Kuh, Kinzie, Schuh, Whitt, & Associates, 2005). Student engagement is commonly assessed using surveys such as the College Student Experiences Questionnaire and, more recently, the National Survey of Student Engagement (NSSE), as well as locally developed instruments.

Faculty and staff at North Carolina State University developed the First Year College Student Experiences Survey (SES) to assess involvement by asking students about the types of organizations in which they are involved; the amount of time they spend on certain types of activities; how often they use specific campus resources; and interactions with faculty, peers, and residence hall peer mentors (Kim Outing and Karen Hauschild, see p. 180). Faculty support, shown by willingness to administer the survey in the classroom, and the brevity and online availability of the survey instrument contributed to the success of this practice and ultimately to an expansion of first-year programming.

Both a national survey and a locally developed survey were employed to gauge the level of student engagement at Ohio University (Joni Y. Wadley and Michael Williford, see Resource A, p. 288). NSSE responses revealed that freshman students were less engaged than their peers at other universities. Discussions stimulated by presentations of the NSSE data to deans, chairpersons, and faculty led to the realization that there was not a common learning or engagement experience for first-year students. Further, a locally developed faculty engagement instrument provided insights into instructional issues and faculty practices that contributed to the level of student engagement. A resulting two-year study of the first-year experience produced 33 recommendations, of which 17 have been successfully implemented. Another important development is that additional resources have been put into first-year programs, including the establishment of an office that focuses on student success in the first year.

In 1991 Pascarella and Terenzini reviewed over 2,600 studies on the influence of college on students (Pascarella & Terenzini, 1991) and again in 2005 they reviewed some 2,500 studies that had been conducted since the 1991 publication (Pascarella & Terenzini, 2005). Evident in both reviews is the important influence that teacher behavior has on student learning. Specifically, faculty organization and preparation have a positive influence on student learning. These studies confirm the notion that process, or how we arrive at an outcome, is essential to good results.

Consistent with the concept that process is critical to outcomes, faculty at many institutions pay attention to techniques that are found to improve student learning. Medical and dental schools in the United States and Canada have for many years used problem-based learning (PBL). According to Natascha van Hattum-Janssen (see Resource A, p. 289), the University of Minho in Portugal employs a similar process called project-led education (PLE) in engineering courses. In this process, faculty act as tutors for teams of students who work on problems like those they will face as they enter the profession, producing a model, report, or other such product.

The pervasiveness of PLE at the University of Minho has led faculty to rethink and redesign the faculty evaluation process. Because the role of the faculty member now resembles that of a facilitator rather than “sage on the stage,” older faculty evaluation forms are not useful in understanding the success of this more student-centered process. Older forms ask questions about students’ expectations and perceptions of the instructor. Scales in a newer version assess faculty knowledge of the subject, faculty attitudes toward the PLE process, success of the project, student critical thinking and problem-solving skills, student attitude toward team work, and student perceptions of their learning. Results from the new instrument have helped instructors reflect on their areas of strength as well as the overall PLE process and the relatively new role of facilitator. Instructors have widely varying interpretations of the tutoring or facilitating role that the faculty member plays. This finding has suggested the need for more training for faculty in an effort to close this gap and to improve the process.

Although classroom processes are critical to student learning, the assessment process itself is equally important. Assessing and reporting results may coincide with improved student learning, but they can just as easily coincide with no improvement if the process is not sound, or is not viewed as sound. Continuously reviewing and exploring new ways to assess student learning is therefore critical. During the evaluation process at St. Mary’s College of Maryland, members of the Core Curriculum Implementation Committee recognized the need to develop coherence among, and to evaluate, the various missions of the college, the core curriculum, and the assessment process itself. Click, Coughlin, O’Sullivan, Stover, and Nutt Williams (see p. 176) stress that these connections are necessary for the success of the core curriculum. Indeed, this kind of strong linkage is vital to any successful program.

Communicating and Using Assessment Findings

One of the tenets of good research has always been that results should be communicated and vetted so that the research can benefit others as they pursue similar studies. Those assessing student learning should be held to the same standards and provided the opportunities to learn from colleagues engaged in this process. March (2006) reminds us of the importance of communicating the results of assessment but points out that this step is often considered last and is frequently ignored.

For many years now, assessment practitioners and researchers have pleaded with faculty and staff to make assessment an ongoing process, one that informs the campus community about learning outcomes and educational processes and about how well those processes are working to improve student learning and development. Those charged with compiling assessment results at the campus level must find ways to share findings that can improve teaching and programming with those who teach, design, and carry out programs at the unit level.

Assessment leaders at the United States Military Academy describe what they fondly call “state of the union addresses,” in which course directors give updates on the assessment of the core mathematics courses and relate the findings to program goals (Graves and Heidenberg; see Resource A, p. 293). In years past these reports appeared only in traditional print form. Though attendance at these briefings is not mandatory for faculty, the audience has grown steadily. The briefings have proved to be a useful communication mechanism for course directors to share information about program strengths, issues and concerns, new initiatives, and, most important, student learning. Many more faculty are hearing about best practices and improvement ideas through these informal conversations.

Likewise, after instituting a new assessment program at Florida A&M University, Uche O. Ohia (see p. 83) reports that the success of this approach has led to its becoming an accepted framework for linking assessment results to planning and budgeting. Instrumental in the success of this initiative has been open and consistent communication about the process, the results, and best practices to deans, directors, chairpersons, and vice presidents through orientations, newsletters, roundtable discussions, and the usual printed progress reports.

These first two chapters of Part One describe characteristics of successful assessment initiatives through their planning and implementing phases. In the third chapter we explore ways to improve and sustain assessment programs. We provide examples of successful efforts to review and use assessment results as well as to evaluate the assessment process itself and the outcomes this process seeks to improve.

CHAPTER THREE: IMPROVING AND SUSTAINING EFFECTIVE ASSESSMENT

Many college and university faculty and staff have developed and implemented assessment plans. In this section, we initially review how faculty and staff can provide credible evidence of student learning to relevant internal and external stakeholders. We also examine how academic leaders and those engaged in assessment can use the information gleaned from various assessments to make targeted improvements. Such improvements can include making changes to the overall curriculum or academic program, revising individual courses, or adding new services with additional funding to address students’ needs. A formal review of assessment reports can reveal trends or patterns in how faculty and staff are using assessment results to make enhancements. Finally, the formal assessment plan should be evaluated as it is implemented so that appropriate changes can be made to strengthen the assessment measures or the assessment process itself. If assessments yield meaningful results that faculty and staff can use to identify necessary changes, there is greater likelihood that the overall assessment process will be sustained over time.

Providing Credible Evidence of Learning to Multiple Stakeholders

Many faculty and staff members collect relevant and meaningful assessment information pertaining to their students. Often they use multiple assessments over a period of time to determine how well their students have mastered the intended learning outcomes. As Maki (2004) notes, multiple assessment methods are crucial for the following reasons. They

provide students with multiple opportunities to demonstrate their learning that some may not have been able to show within only timed, multiple choice tests;

reduce narrow interpretations of student performance based on the limitations often inherent in one particular method;

contribute to comprehensive interpretations of student achievement at the institution, program, and course levels;

value the diverse ways in which students learn; and

value the multiple dimensions of student learning and development (pp. 86–87).

Though some assessment leaders may be tempted to rely mainly or solely on indirect methods (those that capture students’ perceptions of their learning and the campus environment), this approach does not generate enough meaningful information. Most assessment plans incorporate a combination of indirect assessments and direct assessments (those that provide a direct understanding of what students have learned). According to Thomas P. Judd and Bruce Keith (p. 46), the United States Military Academy (USMA) is an example of an institution that draws on course-embedded assignments (including projects, papers, and tests) to gather direct evidence of student learning in relation to the USMA’s ten specific academic goals. Faculty also survey students at least three times and conduct focus groups with graduates’ employers. Judd and Keith report that the results gleaned from these multiple assessment methods provide a comprehensive picture of student achievement and development.

A major challenge is to provide evidence of student learning that is credible and meaningful to a variety of stakeholders, including professional and regional accreditors who have explicit standards related to assessment. Most professional accrediting organizations expect faculty within accredited academic programs to demonstrate accountability regarding student performance on a continuous basis. Accreditors want evidence that faculty and staff “identify the knowledge and skills required of all students receiving a degree and determine in advance the level of student performance that will be acceptable” (Diamond, 2008, p. 19).