What would that look like?
It means you might focus on one particular chapter or lesson or concept, then gather data about how your students respond to it. I have an evolving example of this—
One of the classes I have taught for many years is Advanced Composition, and one of the sections in this course that I feel strongly about covering has to do with logical fallacies. Our culture is rife with them, and I want my students to be able to identify them for what they are and discard the faulty logic behind them. When I first started teaching this unit, I just assigned them the fallacies to learn, gave them a few online resources, pointed them to the chapter in their text, announced a test date, and then went on with my regularly scheduled programming.
The test was a disaster.
Not one student passed. I had even scheduled a review the class before the exam, but it was clearly not enough to make up for having let the leash out too far. They didn’t understand; they wanted more practice. They had not acquired the critical thinking skills that I assumed they had coming into the class. The next semester, I retooled and tried something else: I assigned each student a logical fallacy and told them to teach it to the class. Their presentation needed at least two examples, some kind of visual element, and a handout.
The results from this were only marginally better.
I gave more detailed instructions on what the teaching lesson should include: at least three samples of the fallacy and some additional resources for those who didn’t understand it thoroughly. I created a folder on our Blackboard site where they could store their presentations so the other students could view them and use them as study tools. I also started having them read articles from the internet and then discuss the logical fallacies we found in them.
There was improvement, but not enough to stop there.
I added a discussion after each presentation, course-correcting presentations that didn’t quite hit the mark and giving extra credit to those who included games and other activities in their teaching. I also added more teaching videos from YouTube to the folder on Blackboard.
A smidge better.
The most recent iteration also had me spending an entire class period reviewing all of the fallacies: each student brought two examples of their fallacy on little slips of paper, put them in a bucket I provided, and I drew them out and read them aloud, letting the class guess. I had even written the names of all of the fallacies on the dry erase board so they didn’t have to remember them, only recognize them. The average score on the exam was a C-, up from the abject failure of even the most dedicated student. I’m getting there. It’s still a work in progress. When pressed about why the exam was so hard, most of them said they didn’t understand how to differentiate between the various fallacies, and that they’d heard these ideas (like “America: love it or leave it”) so often that it was hard to recognize them as wrong.
A willingness to change, to rethink, to use the data we gather every day interacting with our students is imperative to good teaching.
While I am loath to insert business-think into the teaching paradigm, business does get something right: it is constantly assessing the market, changing, growing, taking its temperature. If we want to be sure information transfer is happening, we need to check our signal strength as a matter of course.
For more information about any of the 3D’s, check the resources at the end of the article.