A provocative essay on administrative assessment-mania, written by a friend of the blog, Alex. His take focuses on universities, but K-12 fell prey to the same trend long ago.
Thanks for the promotion!
That comments section is… interesting! Some mid-level administrators are worried they might lose their jobs or something?
We had a series of introductory meetings as new faculty, and one of them was about assessment. Being naive about the whole thing, I had assumed they would cover how to build good tools for assessing student progress in my course (formative and summative, at least I knew that much). AKA, how to write and grade good assignments and exams so I can assess students’ progress on the various topics they need to learn in the course.
Nope! Apparently what I was supposed to be doing was to make a list of things I wanted the students to learn and then, after I teach the course, fill out a matrix stating what percentage of my students achieved different levels of mastery of these outcomes.
And I’m like… I already do that… it is called writing exams which cover the appropriate material (outcomes) and assigning grades based on student performance… how is this not totally redundant busywork?
But I kept my mouth shut because I am a first-year faculty member and I would like to get tenure, thanks.
Alex is a gem.
Jojo—they get REALLY upset when you suggest that exams serve as assessment. Even if you teach a math class. They like the word “portfolio” though I’m not clear what that means in a non-humanities context.
I truly believe there’s good and useful assessment and there’s value in checking that students are getting the learning outcomes they’re supposed to be getting in a major/program, but the way the accreditation people want it done has no value.
“Jojo—they get REALLY upset when you suggest that exams serve as assessment. Even if you teach a math class. They like the word “portfolio” though I’m not clear what that means in a non-humanities context.”
Well I’m glad I kept my mouth shut other than just to ask for clarification.
But what do they think I give exams for, if not to assess student learning!?! And why are final papers that students write (presumably that’s what’s in a portfolio, or is it something else?) acceptable, but exams are not? Huh???
What is happening…
Seriously though I would be curious to read something describing what exactly it is they want us to do (we didn’t talk about “portfolios”) so I can at least check the correct boxes for tenure.
I mean… I tell my students ahead of each exam, I will write a question for each learning goal that is listed on the course website, and which we have covered in lecture and discussion. How is that not assessment of their learning of these goals if in fact they correctly answer these questions?!?
The least bad argument for differentiating grading from assessment is that somebody could always ask “OK, but does this grade actually mean the students learned something?” or “If they got a C, does that mean they achieved middling proficiency at everything or did really well on some aspects of the class and terribly on other parts?” Those are fair questions, but anything more than a superficial treatment requires a discussion with other people in the field. Yet we’re supposed to prepare reports that will be read by people from every field on campus and then aggregated into some sort of institutional report for accreditors.
If they were serious about this they’d invite peers from the same field at other institutions to review our curriculum. That’s often part of an external site visit, but usually not the biggest part. If they took this seriously, peers would review our tests, project assignments, etc., instead of having us submit charts showing that we used “action verbs” and that “Learning Objectives” align with Bloom’s Taxonomy (or whatever is currently in favor).
ABET accreditation of engineering programs does involve the detailed review that Alex suggests, but it comes at a very high cost. Preparing the paperwork for accreditation takes about two faculty-years, if all the faculty cooperate in providing the necessary information. If there is resistance from physics, math, and other faculty providing required courses, the paperwork takes even longer to prepare.
ABET accreditation is worth doing once for the detailed look at the curriculum, but the renewal every 6 years is way too much effort for a small department, particularly in newer engineering fields (like computer engineering and biomolecular engineering), where employers don’t care about the ABET approval.
A “portfolio” in a science or engineering class should consist of problem sets, lab reports, exams, and a paper (based either on library research or a more detailed lab/design report). Luckily, our institution does not require portfolios (which no one ever looks at anyway).
Exams would not provide much evidence of meeting the learning outcomes of my courses, which include goals like the following [copied from my syllabus]:
Students will be able to
• draw useful block diagrams for amplifier design.
• use simple hand tools (screwdriver, flush cutters, wire strippers, multimeter, micrometer, calipers, … ).
• hand solder through-hole parts and SOT-23 surface-mount parts.
• use USB-controlled oscilloscope, function generator, and power supply.
• use python, gnuplot, PteroDAQ data acquisition system, and Waveforms on own computer.
• do computations involving impedance using complex numbers.
• design single-stage high-pass, band-pass, and low-pass RC filters.
• measure impedance as function of frequency.
• design, build, and debug simple op-amp-based amplifiers.
• draw schematics using computer-aided design tools.
• write design reports using LaTeX and BibTeX.
• plot data and theoretical models using gnuplot.
• fit models to data using gnuplot.
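As a concrete illustration of the “computations involving impedance using complex numbers” goal above, here is a minimal Python sketch of the kind of calculation those courses expect: the gain of a single-stage RC low-pass filter, computed from the capacitor’s complex impedance. The component values are made up for the example, not taken from any actual assignment.

```python
import cmath
import math

# Assumed example component values (not from any real problem set).
R = 10e3       # series resistance, ohms
C = 100e-9     # shunt capacitance, farads

def lowpass_gain(f):
    """Voltage-divider gain |Vout/Vin| of a series-R, shunt-C low-pass filter."""
    Zc = 1 / (2j * math.pi * f * C)   # capacitor impedance, a complex number
    return abs(Zc / (R + Zc))         # complex voltage divider, then magnitude

# Corner (cutoff) frequency where |Zc| = R; about 159 Hz for these values.
fc = 1 / (2 * math.pi * R * C)
print(f"corner frequency: {fc:.1f} Hz")
print(f"gain at corner:   {lowpass_gain(fc):.3f}")  # ~1/sqrt(2), i.e. -3 dB
```

An exam question tied to this learning goal might ask exactly this: given R and C, find the corner frequency and the gain there, which is precisely the computation the grade would then assess.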
Engineering fields need to show accreditors that they are rigorous. Engineers have gotten away with that because the filtering has been done largely (though not solely) by the physics and math departments, sometimes the chemistry departments, and maybe even the SAT (via the admissions office, depending on campus rules for freshmen declaring majors).
Nowadays the Retention Uber Alles types are looking at physics and math and saying “Hey, you guys teach weed-out classes! Weeding out is bad!” So now we filter less than we used to. So engineers now have sophomore classes with kids who never would have lasted that long before. So classes like statics and whatnot become filters.
So then these kids get D’s and F’s in statics or whatever and they think “Well, I got C’s in physics and calculus, so maybe I should major in…”
And we get those kids. And they’re juniors. And the institution wants people to finish their degrees.
I just don’t see anyone ACTUALLY asking us whether a C in senior-level physics classes means something. Either they wouldn’t like the answer, or they wouldn’t like the graduation rate that accompanies that answer.
“OK, but does this grade actually mean the students learned something?”
The grade means that they were able to demonstrate X level of proficiency / they knew Y material on exams and assignments at the time they were completed.
If “actually learn” means longer-term learning gains, then I suppose you could retest graduating seniors on a few core concepts from each required course and see how they do…
“If they got a C, does that mean they achieved middling proficiency at everything or did really well on some aspects of the class and terribly on other parts?”
Well, since each question is tied to a learning goal (goals being more detailed versions of the learning objectives in my course), I do have this information. But this is all part of grading / assigning grades as far as I’m concerned.
I guess the statement “grading doesn’t mean assessment” assumes we are literally talking only about the letter grades assigned at the end of the course, whereas I’m talking about the grades assigned to each assignment and exam, and, within that, the distribution of success on each question as well.