I scrolled across the image below on my Facebook feed. It was posted by a group I follow called We Are Teachers. It shows one method that students use to “outwit” the teacher and cheat on tests.
The solutions listed in the comments included…
“Only clear bottles permitted in our exams…”
“I don’t allow my students to have anything on their desks except a pencil when taking a test. I’ve seen cheat notes on water bottles, erasers, and glass cases before. SMH😕”
Other comments included…
“Too bad they can’t use their genius to actually study.”
“After typing all the info for the test, he shouldn’t even need it. Haha”
What’s Really the Problem Here?
A lot of the comments focus on the bottle as the problem – a place to store notes incognito. And there are dozens of other ways to do this too.
Some comments focused on the student – his or her great effort to hide the notes rather than simply putting the work into memorizing the information so it could be applied on a test.
But what about the one subject in this equation that 95% of the comments (out of 172) failed to acknowledge? THE TEST ITSELF. I argue that’s the center of our problem.
Where is Real Knowledge Constructed?
In many talks with teachers, I bring up Lorin Anderson’s Revised Bloom’s Taxonomy (see image) to illustrate where students should be spending the majority of their time constructing knowledge. I wrote an earlier blog post, What Does Engagement Mean in the Classroom, where I reference this idea. Brain science indicates that lasting knowledge is constructed when students operate in the upper levels (Analyzing, Evaluating, and Creating) while developing new understandings of the content. Shouldn’t this be where we assess them, then?
Formative vs. Summative Assessments…
Now, when I argue that students are being assessed incorrectly, I am referring to summative assessments that test a student’s whole knowledge of a concept learned in class. I am not referring to a “checking for understanding” kind of assessment. Those would be more formative and might require students to perform in the lower levels (Remembering, Understanding, and Applying). However, these kinds of assessments should not hold as much weight as summative ones.
Summative Assessments = Upper-Level Blooms
The point I’m trying to reach is this: summative assessments should require students to demonstrate upper-level thinking.
If I ask a student to Analyze, Evaluate, or Create something with their new understanding, they can’t cheat (or at least it’s far less likely) because they can’t “fake” upper-level thinking. If I give a unit test that is primarily multiple choice, matching, and even some short answer, students will be enticed to cheat because the stakes are higher and they know it is for a grade. Plus, the “knowledge” they need to produce comes entirely from the lower levels.
I know I’ve been guilty of giving these kinds of tests, and I wish I could go back and correct my teaching error.
This Goes for Tech Too!
When the Apple Watch came out, universities banned it during tests to prevent cheating. So the water bottle from the example above has now been replaced by a new piece of technology on the wrist. But my argument remains the same for ANY piece of technology. Nicholas Provenzano also reiterated this in his blog post.
If I ask students to prove their knowledge with lower-level thinking, then technology is a problem (like the water bottle or any other method of cheating).
If I ask my students to prove their knowledge with upper-level thinking, then technology can aid in the process – specifically in Creating.
When looking at the issue of cheating, we could choose to focus on 1) the method (e.g., the water bottle), 2) the student, or 3) the assessment itself.
Before pointing the finger at the first two, I recommend taking a more critical look at the latter.
P.S. What are your thoughts on the matter? Respond in the comments below.