Minutes of the 3/29/06 Meeting of the College of Arts & Sciences

 

Proxies were held by:

Jesús Escobar for Marti LoMonaco

Phil Lane for Kathy Nantz

Susan Rakowitz for Irene Mulvey and Rick DeWitt

Kraig Steffen for Matt Kubasik

 

Call to order: Prof. Crabtree called the meeting to order at 4:07.

 

Approval of the minutes of 1/25: Prof. Lane moved to approve, seconded by Prof. O'Neill. The minutes were approved unanimously with 7 abstentions.

 

Announcements

 

Prof. Boryczka announced a call for papers for an upcoming conference on Jesuit and feminist education. She pointed out a handout with the details and noted that she or Prof. Petrino could answer any questions about the conference.

 

Crabtree requested that we invert the order of the next two agenda items; there were no objections.

 

Assessment of Student Learning Outcomes

 

Prof. Naser began a presentation by distributing a document that had already been sent to the chairs. He explained that he is co-chairing the self-study that precedes our upcoming NEASC re-accreditation. Assessment is an important part of the NEASC standards, so the College needs to address it.

He argued that we need to engage in assessment in order to demonstrate that our students are learning what we say we're teaching them. We need to back up our claims of educational effectiveness and improve the educational services we provide. He acknowledged that initial attitudes toward assessment are often negative, but noted that discussions of teaching can be interesting and valuable. He also emphasized that NEASC requires assessment and that we have $18 million in federal aid riding on being re-accredited. Finally, there is an ongoing battle between accrediting organizations for higher education and the federal government: the government is starting to press for standardized testing in higher education, so accrediting organizations need assessment data to counter those pressures.

Naser went on to explain what assessment is. Indirect assessment includes things like surveys of graduates and standardized tests. Such instruments are useful, but limited. Direct assessment entails the measurement of actual student work that has been submitted for credit. Such assessment takes place independently of the instructor, because assessment is not the same thing as grading (although some forms of assessment can be used in grading). Furthermore, assessment of student learning is not a system for evaluating faculty; trying to use assessment in that way would be a very labor-intensive process.

Naser explained that assessment involves 5 steps. The first task is to identify educational goals and objectives. He noted that the sciences have already done this with regard to core science courses. The second step is to see how the goals and objectives are mapped across the curriculum. Then samples of assignments and student work need to be collected. The fourth step is to determine how well the student work meets the learning objectives. This step requires that faculty set up rubrics and criteria for evaluating student work and then, perhaps, spend a weekend plowing through several hundred student papers in light of these rubrics. At this point, an audible groan arose from the attendees. The final step is to reflect on the outcomes of the fourth step and consider appropriate changes.

Prof. McFadden asked whether this process will be conducted within departments or across core areas, or both. Naser responded that the initial structure involves departmental representatives, but eventually some of this work will be done divisionally. He also noted that some of the work will build on the goals and objectives laid out in the 1999 Mission of the Core document.

Prof. Abbott observed that all of the examples offered are matters of degree, and wondered how minimum standards would be set. Naser said that the workshops organized to start this process will be addressing these questions. He also indicated that it is important to have multiple people review the same materials in order to maintain common standards. Prof. Bowen followed up by clarifying that any minimum criteria for assessment would not be related to individual students. Naser confirmed that students' grades are set; this process is independent of evaluations of individual students. Prof. Davidson asked whether there are guidelines for collecting student work. Naser said that work can be sampled, and the committees working on assessment will determine how that sampling takes place. Some models include capstone project assessment, student portfolios, or embedded assessment, i.e., assessment of student work submitted for course assignments.

Crabtree reminded the faculty that an assessment plan needs to be in place by the NEASC visit in the Fall of 2007, but data do not have to be gathered and analyzed by then. Naser concurred that the accrediting body wants to see a permanent assessment program in place along with the beginnings of implementation and evidence that the process will be ongoing.

Prof. Mielants worried about comparability. For example, some departments rely heavily on adjuncts, especially for introductory level courses. Would there be pressures to require common syllabi for such courses? Naser said that he was opposed to imposing syllabi, but that there needs to be some commonality in terms of defining learning objectives.

Dean Snyder pointed out the danger of worrying too much about NEASC and what they want us to do as opposed to how we can make use of this process. For example, we use more adjuncts than we would like, so we should use the assessment process to look at the implications of adjunct use for learning outcomes. Naser agreed that NEASC will not specify precisely what we should do; they want us to do what's best for us.

At this point, Prof. Lang, acknowledging his own surprise at what he was about to say, announced that he agreed with the Dean. The Earth apparently continued its motion. Lang explained that he had just returned from an AAUP meeting, part of which addressed accreditation. He echoed Naser's comments that political pressures are leading to greater emphasis on assessment. He said that representatives from accrediting bodies indicated they would encourage assessment, but they aren't going to be hard-nosed about it. He opined that if Fairfield didn't meet accreditation standards, then only 4 or 5 schools in New England would. In other words, we should consider how assessment will best serve our interests, rather than focusing on NEASC. He suggested that people read the 1991 AAUP statement on mandated assessments, and then warned of potential dangers: the process entails a lot of work; the claim is that assessment will not be used to evaluate faculty, but given our history with student evaluations, that claim is suspect; we need to be sure that rubrics aren't used to establish curricula and thereby infringe on academic freedom; and a focus on measurable objectives may lose the richness of all the things we do that are not easily measurable.

With regard to the last point, Naser said that he shared Lang's fear, but he learned from a workshop led by D. Eder (who will be leading a workshop at Fairfield in April) that some things one would think are not measurable can actually be measured. He agreed that we need to be vigilant about the possibility of assessment driving the curriculum.

Prof. L. Miners said that 2 keys to the process are flexibility and willingness to change. The core initiative is happening in parallel with the assessment initiative. We own these processes and should be flexible enough to change with the findings. He heard a talk last year by faculty who had reviewed bodies of student work and learned a great deal from the process. Like the Dean, he argued for a positive attitude, suggesting, for example, that if samples of work from adjunct courses were weak, we could use that evidence to argue for more full-time faculty.

Prof. Escobar pointed out that last year the department of Visual and Performing Arts worked with Prof. Dohm from GSEAP and her graduate students to apply NEASC standards to their curriculum. They found that they were already where they needed to be, so the process may not be as scary as it might sound.

Naser continued his presentation, turning to the question of where we begin. He said that the process will require work, training, and a culture shift. We will start with the core, both because of goal 1 in the Strategic Plan and because the core is the centerpiece of the College. We will then extend assessment to the majors and interdisciplinary programs.

The structure revolves around the College Assessment of Learning Committee (CALC), which includes 1 representative per department. This body will participate in 3 training workshops, one of which will be led by D. Eder, a biologist who is very good at addressing these issues. (That workshop will begin right after the April College meeting.) Naser's hope is that the CALC representatives will begin discussions in their departments over the summer or during the fall. CALC members will be responsible for instituting assessment programs for core courses within their departments. Naser hopes there will be ongoing funding to supplement the initial stipend offered to CALC members. As departments are trained by their CALC representatives, they will be expected to expand their assessment efforts into their major, minor, and graduate programs. Faculty will also be expected to do assessment in interdisciplinary programs. Assessment in the College will be overseen by the College Assessment Steering Committee (CASC), consisting of Profs. Harriott, Hohl, L. Miners, Nantz, Naser, and Torosyan, and Deans Perkus (UC) and Snyder.

Prof. Schwab noted that we already have people trained in assessment in GSEAP. She asked whether there are plans to work with these people. Naser responded that he has talked a bit with Prof. Calderwood, but not yet with Dohm. He said that there were no clear plans to involve GSEAP faculty. Schwab said that VPA had found the graduate students from GSEAP to be very helpful. Naser agreed that it would be great to have graduate students help other departments as they had helped VPA. He added that GSEAP has its own accrediting body.

At this point, Crabtree moved to the next item on the agenda.

Grades and Course Evaluation Databases

Naser explained that Snyder had asked him to develop some analyses of course evaluation data. AVP Grossman authorized passing these data on to Naser. Naser noted that he had consulted with Profs. Fine and Schlichting and that what's accessible on his website is experimental. Snyder emphasized the experimental nature of the data, noting that the goal is to construct the data in useful ways, but the danger is in the possibility of creating misleading data.

Naser explained that he and Prof. Nantz had recently attended a conference on evaluations. He learned that our student evaluation form is clearly problematic, but that doesn't mean that all of its data should be dismissed out of hand. He then explained the "weighted sum" he had created and included on his website. While many people use our form by adding together the "agree" and "strongly agree" responses to create a measure of agreement, he argued that this method loses distinctions between agree and strongly agree. His weighted sum, therefore, multiplies strongly agree by 2, agree by 1, disagree by -1, and strongly disagree by -2, sums these numbers, and divides by 2. In this way, the distinction between agree and disagree is larger than that between agree and strongly agree.
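
For concreteness, the calculation can be written out as follows. This is a reconstruction from the description above, and the notation is ours rather than Naser's: let n_SA, n_A, n_D, and n_SD denote the number of "strongly agree," "agree," "disagree," and "strongly disagree" responses to a given question. Then

    weighted sum = (2·n_SA + n_A - n_D - 2·n_SD) / 2

For example, a hypothetical class giving 10 "strongly agree," 6 "agree," 3 "disagree," and 1 "strongly disagree" response would yield (20 + 6 - 3 - 2) / 2 = 10.5.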

Lang argued that there are several problems with this manipulation. First of all, the form was originally designed to avoid converting evaluations into a single number, yet that's what Naser has done. Rank and Tenure looks at the overall pattern of responses, rather than looking for a single number. Furthermore, the difference between agree and strongly agree isn't a statement about the faculty, but is instead a statement about how strongly a student feels about his or her opinion. For example, the question on teaching effectiveness does not ask whether the teacher is effective or very effective, it asks how strongly the student feels about the teacher's effectiveness, so the weighted sum is misleading. The form was designed without weighting or averaging and should be left that way. Prof. Dennin, just finishing up a term on Rank and Tenure, said that the Rank and Tenure committee looks at the raw data from the evaluations. Any formula for combining numbers loses information. Naser responded that no one is required to use his weighted sum. Dennin argued that the number's presence encourages its use.

Naser demonstrated how patterns of responses across semesters and within departments can be viewed. Dennin argued against averaging the responses because the difference between 1 and 2 on the scale isn't only a difference of 1, but also a multiple of 2. Naser replied that almost all commercial evaluations average responses, even when they use the type of scale that our form uses. He said there's evidence that students interpret these scales as interval scales. Prof. Schlichting added that when some of the evaluation questions used to go to FUSA to be printed in The Mirror, the responses were averaged even though they shouldn't have been.

Prof. Salafia contended that before we accept any manipulation of the data, we need to look at the survey and what it was intended for. It was constructed with the clear understanding that it would never be used by the administration or for comparisons. It was supposed to go only to individual faculty for self-improvement. If comparisons are to be made with student evaluation responses, we need to start by changing the form. Snyder replied that the problem is that evaluations need to be used right now for merit. The AVP had suggested in 2001 that faculty consider changing the student evaluation form. We can't now simply throw the form out; we have to use it until we change it. Naser said that even if the data are only formative, it's helpful to see the distributions. He noted that his website only makes comparisons within departments because there are known differences nationally between departments.

Dennin asked whether we are better off with no data or with bad data. He suggested that these are bad data, so we would be better off with no data. Naser argued that the fact that these data are consistent means they are not necessarily bad.

Bowen asked the Dean to clarify an earlier comment that departments could opt not to use these data. Snyder said that departments could choose other forms of evidence of teaching effectiveness for their merit assessments.

Request for Books and Scholarly/Creative Displays

At this point, Crabtree reminded the assembly that the next meeting is the last CAS meeting of the semester. It will celebrate faculty accomplishments, so faculty are asked to send books published in the past year to Jean Daniele, and to send articles, grants, reviews of art, etc. to Crabtree for display. The fourth annual College of Arts and Sciences Distinguished Teaching Award will also be presented at this meeting. Nominations for that award are due by 4/10; the selection committee consists of Profs. Crabtree, Dallavalle (newly added as the only nominee from the Humanities), Gardner, and Phelan, as well as Dean Snyder. Most importantly, Crabtree noted that the reception following the final meeting of the year features upgraded refreshments.

Prof. McSweeney, seconded by Dennin, moved to adjourn. The meeting was adjourned at 5:00.

 

 

                                                                        Respectfully submitted,

                                                                        Susan Rakowitz