Exams have two major purposes: measuring how much students know about a subject and incentivizing studying. A good in-person exam can satisfy both purposes, and two-stage exams add icing to the cake by increasing the amount students learn during the exam itself. I’m going to set aside the substance of the exam: it’s a very important issue, but it matters equally for in-person and online exams. Let’s assume for the moment that we’re reasonably happy with how our in-person exams measure the skills we’re trying to teach. Right now, we’re just trying to figure out the best way to adapt them for online.
The primary challenge with online exams is ensuring academic integrity, since it’s very hard to control or even observe students’ behavior during this kind of exam. It can be done on a small scale with proctoring services like Examity, whose software locks down a student’s computer while humans (or AI) watch students through their webcams. These services are expensive, a little creepy, and just aren’t practical when you have more than 10-20 students.
I believe many long-time online instructors solve the integrity problem by giving students multiple choice tests that are randomly generated from a large question bank. If students have a limited number of minutes to complete the test, they just won’t have time to get help, and because they get different exams, they can’t just gather in a Zoom chat room and solve the same questions together. Just make sure you don’t ask lots of questions where the answers can be quickly looked up in the textbook. This approach works well when you either already have a large question bank or have the serious resources required to build such a bank: Developing good multiple choice questions is way harder than it looks.
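If you’re curious what “randomly generated” amounts to, here’s a minimal sketch of the idea in Python (the bank, exam size, and seeding scheme are all invented for illustration; in practice your LMS’s quiz tool handles this, plus timing and delivery, for you):

```python
import random

def make_exam(question_bank, n_questions, student_id):
    """Draw a per-student exam from a shared question bank.

    Seeding the generator with the student's ID makes each draw
    different across students but reproducible, so you can regenerate
    any student's exact exam later for grading or regrade requests.
    """
    rng = random.Random(student_id)  # per-student, deterministic
    return rng.sample(question_bank, n_questions)

bank = [f"Question {i}" for i in range(1, 201)]  # a 200-question bank
print(make_exam(bank, 10, "student_netid_123"))
```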
This semester I don’t have an existing question bank or a lot of help to write such questions. Additionally, my applied econometrics course teaches students to build econometric models and analyze results, and it’s particularly challenging (though not impossible) to test these skills with multiple choice questions.
For my recent midterm exam, I chose a very different approach to assessment. I posted a PDF to the course website that contained the same kind of pencil-and-paper, multi-part problem exam I usually give in person. Students had a 24-hour period to download and complete the exam. I made the exam open-book and open-note because I had no way to prevent them from looking through books and notes during the test. My in-person exams are mostly open-note, so this wasn’t a big change. I thought about asking them to work alone, but because I couldn’t enforce this either, I felt a non-negligible number of students would collaborate anyway. This would put the law-abiding students at a big disadvantage, and that wouldn’t be fair. Instead, I allowed collaboration between members of the small groups they have been working in all semester.
I strongly encouraged individuals to complete the exam on their own first and then meet to discuss their solutions. If you think this sounds an awful lot like a two-stage exam, you’re right! After the fact, many students told me they learned a lot during the exam through this collaboration process.
Scores on the exam were (not surprisingly) very high, but I worry about how well this structure measured individual learning and motivated studying. I believe some (though far from all) students simply copied the work of their teammates, and other students engaged in “just-in-time” studying figuring they could learn what they needed during the exam.
One tweak that could help would be to allow groups to start the exam at any point during the 24-hour period, but then give them just a couple of hours to complete it. I can’t do that this semester because many of my groups have students in very different time zones. In the future I could make sure group members resided in similar time zones, but I didn’t want to break up groups that have built up quality social capital all term.
For me, the ten-million-dollar question is what to do for the final exam. I’m seriously considering a hybrid approach where I first ask them to take a short (say, 10-question / 30-minute) randomly generated multiple choice exam on their own. Then they take a collaborative exam like the one I gave as a midterm. It would be much less work than a full-on multiple choice setup, but it would still let me identify those students who have no idea what’s going on and free-rode on the midterm. The collaborative piece would let me ask tougher questions and keep all the learning that happens during the exam. Their score would be a weighted average of what they get on the two parts.
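In code, the grading scheme is just one line; here’s a sketch with a purely illustrative 40/60 split (I haven’t actually chosen the weights yet):

```python
def final_score(mc_score, collab_score, mc_weight=0.4):
    """Weighted average of the two parts; the 40/60 split is a placeholder."""
    return mc_weight * mc_score + (1 - mc_weight) * collab_score

print(final_score(80, 95))  # 0.4 * 80 + 0.6 * 95 = 89.0
```

Putting enough weight on the individual multiple choice piece is what preserves the incentive to study alone; too little and free-riding comes right back.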
One thing I know for sure is that I’ll talk with my students before making any decisions. They always bring up things I haven’t thought of. And when I do decide what to do, I’ll explain exactly why.
Got more tips? Feel free to contribute in the discussion below or tweet @TeachBetterCo!
When colleges around the world (including my own) sent students home this spring and switched into online teaching mode, I wanted to replicate as much of the in-person experience as possible. This might sound familiar to regular readers of this site as that was my plan way back in 2014 when I taught a much smaller version of this class using Zoom in Yale’s Summer Session.
Cornell classes started up again last week, and we finally got to reconnect with our students. I have 130 scattered all over the world, and about 100 showed up for two live sessions. Overall it went smoothly, but there were definitely lessons learned:
Live Zoom classes really can be extremely similar to in-person lectures, even with lots of students. They are dynamic in ways pre-recorded lectures aren’t, and they allow for lots of instructor-student and student-student interactivity. It might take some time to get used to, but it’s totally worth the effort.
Update: I added a few more tips for Zoom teaching.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
1:21 ⏯ Welcoming Kevin Gannon. Why is there never a center for competency in teaching? The students who come to office hours often don’t need much help. Students who don’t care.
4:50 ⏯ Students who influence you? What happens when you’re the youngest person in the class–younger than all your students? Learning from adults. Teaching the middle ages through Game of Thrones. Or Monty Python and the Holy Grail.
9:11 ⏯ Hearing the students’ voices. The Zen idea of the “beginner’s mind.” Doug’s search for the perfect explanation–which never means the same thing to the student. And we’re not like most of our students, because we were the invested ones.
11:48 ⏯ Kevin’s from a family of teachers and wanted to be a teacher since high school. Observing high school teaching cured him of part of that. Adjuncting Communications 101 to get rich. But now Kevin works with middle and high school students around Des Moines.
15:47 ⏯ The documentary 13th (2016) and the prison system in America as ‘slavery by other means.’ College and high school students find Kevin online, and he Skypes into classrooms, and more. History as something more than names and dates.
20:11 ⏯ Coming into someone else’s learning space. Knowing who the learner is. Vs. thinking of our students in terms of deficits. Students as knowledge creators. Active learning doesn’t just mean being active: It means making knowledge, and students shouldn’t have to wait until senior year to do it. The freshman major course is a gateway, not a death march.
24:20 ⏯ Working your way through school as a pinsetter mechanic in a bowling alley. Kevin didn’t read a book and do quizzes. But do avoid running 220 volts through your body. Active learning also means a variety of ways of learning.
27:50 ⏯ Role models for high school and college. It matters that the teacher likes the students and enjoys teaching. Just because you enjoy a good lecture doesn’t mean you can do one. Letting the subject matter be complex.
32:32 ⏯ Where is there room for improvement? Where do you want to be in ten years? Teaching one class a year but trying NOT to make it a masterpiece. Still relying on lectures, however short. Getting quicker at giving feedback. Giving feedback on writing by focusing it on one area.
40:20 ⏯ Incorporating student reflection. Asking students how they might improve–and following through. Letting students earn back points on an exam by identifying what they did that did NOT work. Helping students get over the idea that “time on task” determines learning. The New Science of Learning: How to Learn in Harmony With Your Brain by Doyle and Zakrajsek
44:03 ⏯ A teaching mistake involving a table. But it did set the tone. Thanks and signing off.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
0:38 ⏯ Welcoming Robert Talbert and introducing his paper with Anat Mor-Avi: “A Space for Learning: A Review of Research on Active Learning Spaces,” commissioned by Steelcase.
5:00 ⏯ Defining active learning: The students need to do more than pay attention and take good notes. The importance of the instructor in active learning. The instructor in an Active Learning Classroom (ALC) may be quiet but may take 5,000 steps in 50 minutes. Active Learning can include lectures; they’re just not long and they play a different role. “The professor’s just there to resolve the chord.”
11:28 ⏯ What does a classroom effectively customized for Active Learning look like? Good ALC design facilitates collaboration and physical movement. Supporting the flow of information around a classroom: whiteboards vs. screens. A polycentric space: there is no ‘front of the room.’ A good ALC gives instructors and students many ways to do the same thing.
18:21 ⏯ The challenges of researching the literature on active learning classrooms: different names for the same thing, similar names for different things. ALCs have an impact on the institutions themselves. Professional development and what triggers it. Working with Steelcase: no pressure to shade the results. Steelcase likes ALCs to have glass walls–and indeed, this seems to have some tangible benefit. Anecdata.
27:02 ⏯ Analog tools as ‘reducing the amount of friction between a student and her own thoughts.’ On the virtue of portable whiteboards. The on/off switch is the first thing to go. The need for low ignition costs for capturing ideas.
32:35 ⏯ Spaces designed for active learning (AL) foster AL strategies even when instructors are told NOT to do AL. Students make choices in all learning spaces.
37:24 ⏯ Equity issues and the ‘bowling alley’ classroom space. Sharing teaching resources with learners to downplay the centrality of a single screen. The big research challenge: comparing the same pedagogy in two different spaces. What’s the active ingredient–the AL or the ALC?
42:30 ⏯ Architectural features make certain AL behaviors easier, and those behaviors support things we know support learning: engagement, motivation, etc. But what’s left on the table by not having the best room? How much are instructors missing out on when they do AL in a sub-optimal room? ALCs make instructors more excited about AL practices. But then AL research becomes less convincing than AL anecdotes.
48:12 ⏯ Initiative fatigue: the flipped classroom, writing across the curriculum, time-on-task, and the zone of proximal development. But can you get access to the room?!
52:26 ⏯ Two of Robert’s teaching mistakes: one involving a whiteboard.
56:13 ⏯ How many students can you fit into an ALC? An experiment in getting more students in a smaller space by rotating its use and making the rest of the work online. Applying self-determination theory to learning: supporting competency, connectedness, and autonomy.
1:00:18 ⏯ At UBC, they have a space optimized for doing active learning with 180 students. It’s called the Hennings 200 Active Learning Theatre. Thanks and sign-off.
Machine learning is very good at prediction. It identifies which combinations of values of a large number of variables are associated with particular outcomes, e.g., males aged 12-18 who play video games are likely to enjoy Marvel movies. These predictions, while highly accurate, are often not easily phrased in human language. It’s as if the algorithm says “Trust me—I know from looking at the data that people with characteristics like Bob’s prefer Marvel movies to DC movies. Just don’t ask me why.” We’re only slowly figuring out how to summarize these patterns in ways that are useful beyond pure “trust me” predictions. Without insight into “why?” I’m not sure how much we can learn about student learning.
The bigger problem is that correlation does not equal causation. Doctors talk about risk factors for a disease. They don’t explicitly say that old age, fatty foods, and a passive lifestyle cause heart disease, even though they are strong predictors. Social scientists work extremely hard to figure out when an observed correlation is a causal effect. Vitamin D is unambiguously associated with great health outcomes, but a large study recently found that the relationship isn’t causal. Instead, people with high levels of vitamin D are those who spend more time outside, and it seems to be the outdoor physical activity that has the positive causal effect on health. That is, even though the positive association exists, supplementing people’s diets with vitamin D has no effect on their health.
In my own classes, the students who spend the most time studying are often not earning the highest exam scores. If I were to interpret this as a causal effect, I would want to discourage them from studying so much. This doesn’t take into account the fact that the students who study the most are often starting with weaker skills than other students, and they are studying hard in order to catch up. It’s also possible that the students who study most are studying inefficiently.
Here’s another example: Students who attend my scheduled office hours tend to do better on my exams. It’s so seductive to interpret this as evidence of the value of my one-on-one teaching, but that would ignore what econometricians call selection bias: The students who attend office hours are often the most curious and hard-working and they would do better than other students even if I wrote gibberish on my blackboard and recited bad poetry when they came to my office.
The best case scenario is that unleashing machine learning on student data identifies students at risk and allows us to focus our teaching energy on figuring out what those students need in order to succeed. It’s also possible that as the technology improves it will generate interesting hypotheses about the causal determinants of academic success. But we will need to be very careful not to over-reach. What if we find that students who regularly interact with the LMS during the semester are more likely to get A’s? Does this mean we should push all students to do so? If it’s causal, then yes. Perhaps this spaced interaction induces more learning than cramming right before an exam. But it’s equally likely that students who have lots of other good study habits are the ones driving this positive association, that it’s these other good study habits (which we don’t observe) that actually induce more learning, and that just encouraging (or forcing) students to interact with the LMS more regularly would have no effect at all. It could even have a negative effect if students shift their effort away from more constructive activities.
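To make the confounding concrete, here’s a minimal simulation sketch (the variable names and effect sizes are invented for illustration): LMS use has zero causal effect on grades, yet the two are strongly correlated because both are driven by an unobserved study-habits variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Unobserved confounder: general study habits
habits = rng.normal(size=n)

# LMS use is driven by habits...
lms_use = 2.0 * habits + rng.normal(size=n)

# ...and so are grades. LMS use itself has zero causal effect here.
grades = 3.0 * habits + rng.normal(size=n)

# The naive correlation looks impressive anyway
print(np.corrcoef(lms_use, grades)[0, 1])  # about 0.85, purely spurious
```

In this toy world, nudging every student to log in more would move lms_use but leave grades untouched—exactly the kind of over-reach to worry about.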
At the beginning of the term most of my students walk in the door of my econometrics classroom knowing that correlation does not always equal causation. They spend the next several weeks learning methods that can tease out the difference through carefully designed experiments or careful analysis of observational data. Machine learning is great for prediction, but right now it’s lousy for learning how causal processes work. And it’s knowledge of how the learning process actually works that we need to improve our teaching.
Alongside my high-stakes exams, I also give a low-stakes multiple choice standard assessment of learning at the beginning and end of every semester. This is more common in other STEM disciplines, but I’m working with several colleagues to develop a suite of assessments (e.g., ESSA and AESA) that can be used in economics courses so that we too can have objective measures of student skills.
We’ve implemented these assessments as online Qualtrics surveys, but we usually give them in a classroom environment with bubble sheets where students have fewer distractions. To save money and give us total control over the scanning process, we initially used the open source FormScanner software. We printed custom bubble sheets, scanned them to PDFs, and processed them, but the process turned out to be quite sensitive to exactly how we printed and scanned. More often than not we ended up doing a fair bit of hand-tweaking.
ZipGrade is a bundled app and service that lets us print custom bubble sheets, scan answers using a phone (or tablet), and download the results. I was pretty skeptical that it would work well with large (100+) classes compared to something that used a sheet feeder, but it’s been fantastic.
At the beginning of every one of my exams, students answer a short set of questions about how they prepared for it. These “exam wrappers” include questions like “How many lectures did you attend?” “How many hours did you study specifically for this exam?” and “How many hours per week do you normally study for this class?” The questions are all multiple choice, and up until now the data entry has been tedious. Automating the process over the past couple weeks has been a great low-stakes opportunity to try ZipGrade.
ZipGrade provides three “standard” bubble sheets with room for 20, 50, and 100 questions, but I created a custom bubble sheet in about two minutes through their online wizard. My custom sheet has room for 30 questions and asks students to bubble in their 7-digit Cornell student ID. They also write their name on the sheet.
I’ve learned it’s well worth the effort to upload my student roster (with names and IDs) into ZipGrade before fielding an exam because it lets ZipGrade tell me when a scanned “quiz” does not match. This isn’t common, but some students bubble in their ID incorrectly or just skip the bubbling altogether.
The magic happens when my students start handing in their tests. I made the bubble sheet the top page so I didn’t have to turn any pages before scanning. I put the app in scanning mode and just hold it over an exam to start. As soon as the little green squares on the screen line up with the little black squares on the exam, the phone buzzes and shows the matching student’s name and ID. And without my pressing a single button, it’s ready to scan the next exam. Scanning is literally moving exams from one pile to another with one hand while lining up squares. I can scan about 20 per minute, and if you have a few helpers (e.g., teaching assistants), you can all scan simultaneously.
Before I used ZipGrade, I had another fairly time-consuming process where my TAs and I would alphabetize all the physical exams and manually match them to the roster to identify any students who didn’t hand in an exam. I would follow up with those missing students to make sure they were okay.
ZipGrade radically speeds up the accounting process by showing the scanned names of the students whose IDs didn’t match any students in the class roster. I can then manually associate the exam with the correct student by tapping on the student’s scanned name and choosing “Change Student” from the review menu. The final step is to simply look at the class Grade Book Report on the ZipGrade website to see which students didn’t hand in exams.
The scanning technology behind ZipGrade is amazing: its accuracy and speed enable use scenarios I didn’t think were feasible with an app that runs on a phone. The service piece of the system is easy to overlook, but the UI is clean and the functionality is solid. To me, this system is worth FAR more than the crazy low $7/year that ZipGrade charges. I understand that this might be a reasonable price for the underfunded K-12 teacher market, but I also believe most folks in higher ed would be happy to pay substantially more. I know I would.
Over the last two years, George Orlov and I have developed and refined six invention activities that we’ve been using to teach key concepts in applied econometrics. These include Ordinary Least Squares, controlling for a categorical variable, interacting explanatory variables, difference-in-differences, regression discontinuity, and fixed effects. I’m thrilled to announce that we’ve written “Using Invention Activities to Teach Applied Econometrics,” which includes the details of all six activities as well as our arguments for why you should use them.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Kids in the background, watching THE INCREDIBLES.
1:49 ⏯ The best teacher Doug knows, bar none. Doug McKee welcoming Doug Robertson. Talking about how the fundamental principles of teaching apply equally to K12 and higher ed. You can’t be an overexposed K12 teacher.
5:10 ⏯ How long Doug Robertson’s been teaching and how many students he teaches. Classroom management as engagement. Engagement online vs. face-to-face.
9:39 ⏯ Student engagement comes from great lesson plans AND the teacher’s personality–Neither works alone. Doug Robertson stands on desks, uses puppets. Other great teachers take a calmer more conventional approach. Teaching like Paul McCartney (and Stevie Wonder) vs. teaching like KISS.
13:00 ⏯ Taking students out of the comfort zone. They’ve learned to play school, but some of that they must unlearn. Being a freeform teacher but giving the students an end goal. Struggle as necessary for learning. “I wanna see you do it wrong, and then I’ll help you.” Type A 4th graders vs. Type A college freshmen. Don’t scaffold so much that failure never happens. The minimalist program in instruction. Edutwitter’s ideology of freedom.
19:37 ⏯ Wanting to be a teacher. High school Doug McKee’s ambiguous answers. From lifeguarding to teaching: helping people learn to do something you love, and all of the excitement and energy. Developing a new skill–to get in touch with learning. Doug Robertson’s mother was a teacher. Teaching is the job most likely to be inherited. Doug McKee is a third-generation teacher.
27:10 ⏯ Doug McKee’s structure for the conversation: using Susan Ambrose et al’s How Learning Works. Comparing K12 vs. a college a classroom. Meeting students where they are. Seeing students as functional human people. Spending 180 days with students 6 hours a day. First seeing their unique quirks, and then seeing them as having a rich inner life. Being The Authority and yet helping students question appropriately. “I Am The Man. And there is no man.”
35:39 ⏯ Watching people listen to music they don’t know much about. The Reaction Video or ‘Watching Watching’ Youtube genre as learning: encountering something new and unfamiliar, the discomfort of re-shaping your mental paradigms. “Lost in Vegas” YouTube Channel
39:44 ⏯ Helping students organize knowledge and giving students freedom. A presentation for Doug Robertson isn’t Google Slides: it isn’t a means, it’s aiming for a certain goal. Doug Robertson stumbling across his core teaching philosophy: the student who wouldn’t stop doodling. Not expecting the students to do the thing you want the way you want. Our brains encode information in multiple systems.
46:29 ⏯ Segue to thinking about how to motivate students. Motivation and entertainment, edutainment vs. motivating students to motivate themselves. “I believe in you, Sweetie” vs. giving students concrete reasons why they can do it. Not being scared of students doing crappy work: “This thing will be bad first. It’s okay.” A payoff of developing trust with your student: the student knows you know them. Combatting “I’m not a math person.” “You’re not good at math…yet.”
54:29 ⏯ Doug Robertson’s Hobby Project. Learn ventriloquism. Or to juggle. In three weeks. Document what you did & how it felt. “Grandma taught me to knit.” Encouraging students to try. Supporting good wrong answers. Accepting the fear of failure to banish it. “Goal-free problems” is the technical term for asking for many relevant answers vs. the one right answer. See Sweller (1988), “Cognitive load during problem solving,” Cognitive Science 12:257-285.
1:02:38 ⏯ The Hobby Project was: teaching students how to learn. A memorable failure. Teaching persuasive writing through an unpersuasive topic. Then changing it to a hot topic. Students having conversations they care about. Hope for the future.
1:08:40 ⏯ Thanks and sign-off. Doug Robertson promotes his books, twitter feed, and blog.
There isn’t a huge amount written down about two-stage exams, so I thought it would be useful to provide links to the resources I know of. In the comments below, please share papers/pages I’ve missed or burning questions that aren’t answered!
In February of 2017, I got my dream job when the Cornell Department of Economics won an internal grant to transform our entire undergraduate core curriculum using evidence-based active learning methods. We had written a comprehensive proposal that detailed the process we would use: pairing teaching-focused postdocs with faculty to transform one class at a time and carefully estimating the impact of the new methods on student learning. Since we started work in earnest that summer, our students have learned a ton, and so have we:
Hire great postdocs. Our two postdocs (George Orlov and Daria Bottan) have been amazing. The key was to look for someone who has a real passion for teaching, has some knowledge of modern pedagogy, and has good quantitative skills. Specialized expertise in the area of the classes they will be transforming is a plus. This kind of candidate is absolutely out there.
Choose the right first class. If I could do it again, I’d pick a class that’s taught as a pure lecture by a faculty member who is open and even excited about trying active learning. This maximizes the probability of seeing real improvements in student outcomes, and it’s always nice to start with some success.
Draft learning goals for courses early and get a broad group of department faculty involved in reviewing those learning goals. It turns out faculty often have strong opinions about what should be taught in a course, even if they don’t normally pay much attention to it.
Communicate clearly with the instructor early on about the transformation process. A lot of measurement happens during the semester (e.g., COPUS, focus groups, student assessment), and it’s important to get instructor sign-off before it starts.
Assess entering students’ skills as best you can at the beginning of the semester. This is critical for controlling for differences when comparing outcomes across classes. We invested heavily in developing an economic statistics assessment that we gave our Applied Econometrics students. We gave our math-intensive introduction to economic statistics students a basic math skills assessment. And we gave our intermediate micro students the same math assessment as well as a micro principles assessment that we developed.
Assess exiting students’ skills as best you can. This is primarily how we measure the impact of our transformation effort. We’ve developed assessments for the Applied Econometrics course as well as the math-intensive intro stats course. Our intermediate micro assessment is still in development. Our plan is to eventually publish all of our assessments so instructors everywhere can use them.
Get demographic data on students. You can either collect your own or (if possible) get access to university administrative data. This is crucial for controlling for differences in student populations as well as estimating sub-population-specific effects. There’s evidence that active learning is even more effective for URMs and women, and that using these teaching methods will reduce performance gaps. This data lets us see if that’s happening in our classes (see the sketch below).
Create an explicit transformation plan for the class you’re transforming so it’s clear to everyone involved what’s going to be done. People have very strong (and different) ideas about what active learning is.
Communicate regularly with the whole department. The past few months I’ve been maintaining a shared folder of ALI documents for the whole department and sending a monthly update email. We also give occasional 5 minute updates at faculty meetings.
Share what you’re doing and what you’re finding with folks outside the university (e.g., conferences, invited talks and journals). This lends the effort credibility, and that’s crucial when you are trying to convince faculty to do something uncomfortable. Also, once you have good measures of student learning and are gathering all this other data, it enables really interesting research projects. We ended up submitting abstracts for four new projects to the AEA Conference on Teaching and Research in Economic Education (CTREE) this year.
Get the best TAs you can in your ALI classes, especially if you are changing what happens in discussion sections. A TA in an active classroom doesn’t just sit there—they spend a lot of time interacting with students and guiding them in activities.
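For concreteness, here’s a minimal sketch of the kind of analysis the entry assessments and demographic data enable (the column names, specification, and toy data are all my own invention, not the initiative’s actual code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for real student records
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "post_score": rng.normal(70, 10, n),  # exit assessment
    "pre_score": rng.normal(60, 10, n),   # entry assessment
    "active": rng.integers(0, 2, n),      # 1 = transformed (active learning) section
    "female": rng.integers(0, 2, n),
    "urm": rng.integers(0, 2, n),
})

# Entry skills and demographics as controls; the coefficient on
# `active` estimates the transformation's effect on exit scores
impact = smf.ols("post_score ~ active + pre_score + female + urm", data=df).fit()

# Interactions ask whether the effect differs by sub-population,
# i.e., whether active learning is closing performance gaps
gaps = smf.ols("post_score ~ active * (female + urm) + pre_score", data=df).fit()
print(impact.params["active"])
```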
While we’ve learned a lot and accomplished a lot in our first 18 months, our focus has been on process and measurement. The next 18 months will be about reaping the benefits, and I couldn’t be more excited!
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
1:09 ⏯ What’s this episode all about? Looking back at the courses we taught this fall. What were our challenges? Podcaster, teach thyself. The challenges of educational research.
4:33 ⏯ Doug tries to re-work a course he’s taught many times: applied econometrics, the first economics class to be transformed through Cornell’s Active Learning Initiative. They measure in a baseline semester and then try new things in following semesters.
7:22 ⏯ Edward designs, writes, and teaches online–so he can help others with the same process. He migrated an online film history course that he teaches for Lake Tahoe Community College to the Canvas Learning Management System and incorporated student blogging, but now he wanted to revise it.
10:44 ⏯ Edward’s big challenges were: explaining what historians do, helping students avoid making the same mistakes again and again, helping them to write better, re-organizing the course while it was running, and rethinking assessment.
14:37 ⏯ Doug wasn’t changing the technology in his course: a laptop and an iPad work together so Doug can avoid being stuck behind a lectern. The iPad hybridizes the chalkboard and slides. Doug’s challenges were two-fold:
20:49 ⏯ Edward also needed to do a lot of scaffolding, as it turned out. His students needed to find and then analyze primary and secondary historical documents, journalistic and academic criticism–as well as other sorts. To scaffold the distinctions, he gave quizzes on primary vs. secondary and journalistic vs. academic, and he started by giving the students documents to use. Then rethinking assessment based on “habits” rather than criteria: writerly, scholarly, critical, and historical. And scaffolding these habits by expanding the rubric week by week.
31:06 ⏯ Doug and his postdoc (George Orlov) refined their invention activities and plan to publish them this year. Sidetrack #1: Why do we create our own courses and course materials? Why don’t we share and publish materials to be reviewed and evaluated? Doug also gave up on pre-reading quizzes before the lecture, and moved the quizzes and reading to after the lecture–and he made the quizzes harder. Sidetrack #2: Using quizzes for a very focused purpose: to help the students go deeper into the material.
36:01 ⏯ The surgical use of quizzes. Edward’s good news: The students’ writing improved dramatically. Once an auto-graded quiz is written, you can use it forever. You can configure the quiz so the students don’t simply search for the right answers. Three bits of bad news:
46:09 ⏯ Sometimes our assessment just consists of telling students they did the prep work correctly. Using points rather than letter grades. You can’t assume the students know how to do some of these basic things.
50:11 ⏯ Focusing on what students achieve rather than whether or not they do all the work. Encouraging students to do the work when they may not have the greatest study skills. The final project or exam may show what the student learned–but it may not. That’s why our grading/weighting schemes are what they are.
57:17 ⏯ What worked for Doug: The revised invention activities worked much better. Some students responded to clicker systems from their dorm rooms. The quizzes were much harder–but the grades were higher than expected. The students had time to work on them, and they wanted to get high scores. The surgical uses of quizzes (again). A sample Evernote Dossier on BLAZING SADDLES
1:01:14 ⏯ Edward would like to use peer assessment, give space for students to revise their writing, and separate out dates and times from other forms of content (to avoid misunderstandings about due dates).
1:04:44 ⏯ Doug wants to focus next on group work and what kinds of standards and criteria students can use to rate each other, ways to improve the students’ poster presentations, and dive into the assessment data and see where the teaching could be improved.
1:09:05 ⏯ Try debriefing your semester with a colleague over coffee. What challenges did you face? What did you do to address them? What worked and what didn’t? What do you plan to do next? Signing off with “We Three Kings” recorded by Ben Devries
Last night we had an awesome poster session with my 86 Applied Econometrics students, and I thought it would be useful to write down some recent lessons learned:
Was the poster session perfect? No way! But it was absolutely more productive and fun than ever. Given the logistics are in pretty good shape, I think I’ll be focusing on improving the actual content of the posters next time around. Progress!
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Introducing and welcoming Jose Vazquez. Trying to focus on peer assessments. How Jose came to focus on teaching economics: bringing together research, social benefit, and teaching.
5:44 ⏯ The illusion that active learning methods means giving up control: if anything, it requires a high level of comfort. Putting the lectures on video–and putting his best economic jokes in the videos.
9:19 ⏯ Active learning is not just students working problems together. Some short pieces of lecture are usually needed. Jose: “You need to know the kinds of questions and misconceptions students are going to have.” Helping students identify their problems–so they can correct them. Comparing lecturing and lecture capture with listening to Miles Davis live and on CD: One’s better, the other is still pretty good!
15:56 ⏯ There’s still a lot we don’t know about classroom instruction. But we are starting to have the tools to collect the data we need. Teaching large classes also give you more data.
19:41 ⏯ In 50 minutes, there’s not a lot of cognitive change. It’s wiser to focus on motivation. The illusion we sometimes have that they’re learning when we’re talking. Plus a similar notion that there’s one best way of expressing any idea. Uta Hagen writing a book that can’t be misunderstood
25:10 ⏯ Benefits and challenges of peer evaluation: students giving each other either A’s or F’s. Using a very detailed rubric for the purposes of evaluating the results.
32:32 ⏯ Simplifying the rubric by getting rid of rows and columns: using a holistic rubric with only three levels. In favor of simpler rubrics: Doug’s rubrics for peer evaluation of project.
38:24 ⏯ Can we reduce teaching to a science? It’s not just about learning objectives: it’s also about inspiration. Not everything can be quantified, and the process itself matters.
44:44 ⏯ Teaching fails. Bringing back the lecture–instead of pure active learning. Lecturing has a high cost (in terms of time), but it’s sometimes worth it. Taking student feedback seriously: instructor humility.
52:22 ⏯ Quality control in peer grading with Moodle: giving students a sample to grade, or having someone spot check the grades. Asking students to explain their score improves the results. (Also: students grading a few pieces of peer work reduces the effect of fatigue on grades.)
57:24 ⏯ Thanks and signing off.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ A big day for a superfan as he shares a couple of his favorite recent episodes:
2:11 ⏯ Getting going. Welcoming Justin Cerenzia. St. George’s School has a yacht where students can go to sea and take courses remotely.
7:32 ⏯ Learning from K-12 education: ‘Stations’ in the classroom to promote interdisciplinarity.
11:08 ⏯ Historians learning from chemists about teaching. At St. George’s, the classrooms aren’t all that different from each other. How Justin got there: his education and teaching experience.
14:59 ⏯ Some ideas work across disciplines: the micro/macro distinction in economics and history. How the scope of the time frame helps make different things significant. The History Manifesto by Armitage and Guldi.
18:47 ⏯ Is flipping the classroom a big deal in the humanities? Justin thinks about how best to help his students learn.
21:25 ⏯ Stanford’s Sam Wineburg and history education: critical of the sage-on-the-stage approach. Do Edward and Justin actually use learning objectives in the humanities? Oh yes.
26:52 ⏯ Can humanists use multiple choice questions? The AP History exam does–so Justin does. And a USC law school professor (Tom Lyon) uses clickers in a law school course to help students practice legal thinking live in class.
35:12 ⏯ Think-pair-share in the high school history classroom and digital tools: Quizzoodle and Backchannel Chat and the Digital Learning Farm concept. Laptops and cellphones in the classroom
40:43 ⏯ How Justin teaches metacognition. Using non-digital games in the classroom to help students define concepts. When a student fails to learn is a good opportunity to think about metacognition. Humanity and teaching.
48:50 ⏯ Teaching and motivation. The ABC’s of How We Learn. Intrinsic motivation vs. self-efficacy. Some up’s and down’s of students collaborating with peers.
54:59 ⏯ Justin’s teaching mistake: personalized final exam questions.
58:07 ⏯ Thanks. Faculty development at St. George’s. Evaluating Professional Development by Thomas R. Guskey.
Please do let us know if you have any questions at all in the comments below.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
0:34 ⏯ Welcoming Marilyne Stains. What are people doing in the classroom? The culture of privacy around teaching. Faculty don’t automatically observe each other. But there is a lot to gain–for both the observed and the observer.
4:45 ⏯ Videotaped observations can provide fodder for rich faculty development conversations.
5:25 ⏯ COPUS (Classroom Observation Protocol for Undergraduate STEM) is an objective measure of instructor and student behaviors. It’s NOT a measure of teaching quality. RTOP (Reformed Teaching Observation Protocol) measures the degree to which the instructor is doing inquiry-type teaching. The data collection gives the opportunity to, for example, measure the impact of faculty professional development. And these may be used for formative teacher assessment. COPUS has high inter-rater reliability, and it’s easy to train undergraduates to use it.
8:28 ⏯ Marilyne uses COPUS for research on teaching. But she has colleagues who use COPUS for faculty development and formative feedback. How might you observe a peer or be observed for faculty development? Have a conversation first. Doug has been observed using COPUS ten times with an interval between, and he got valuable data. You may think you’re active, but you’re not–or vice-versa. COPUS is objective.
11:58 ⏯ There are lots of things going on in a classroom that COPUS doesn’t capture. The OTOP (Oregon Teacher Observation Protocol) is also useful as a more qualitative measure. It’s a problem in general when the observer doesn’t know much about teaching. Summative teaching observation evaluation is a tough nut to crack. We often assume that experience teaching translates into knowledge about teaching. Newer faculty might actually have more exposure to how learning works. And senior faculty can ‘freak out’ when they visit an active learning classroom. Some faculty can imagine that active learning is less challenging: they may see it as entertainment.
17:28 ⏯ Humanities teachers have done active learning for a long time. It’s not just seminars, it’s group work, projects, peer instruction, etc. Faculty who want to do active learning may not know what to do. POGIL (Process-Oriented Guided Inquiry Learning) for chemistry, biology, and math is one set of resources. But seminars can still be run in a teacher-centered way. It’s the bad Socratic Method: “Everyone Socrates talks to is stupid.” Classroom observation doesn’t measure any of the teaching that happens outside the classroom.
22:43 ⏯ We may spend more time teaching outside the classroom than in–but we don’t know. Many organizations set standards for and evaluate the quality of online courses, but there is no equivalent for face-to-face courses.
25:23 ⏯ Who should be doing the classroom observation? Only senior faculty observing pre-tenure junior faculty is just not enough. Options include faculty outside the department from neighboring disciplines. There are issues with observers with no disciplinary knowledge. So ultimately we may need different people who bring different lenses. There are dangers in faculty feeling judged by colleagues who do research on teaching. Observation is ideally one part of a conversation and driven by faculty questions and goals.
32:34 ⏯ Doug’s experience observing at another institution as part of a tenure review. Marilyne gave Doug the NSF report on teaching quality evaluation, which referred to the OTOP (Oregon Teacher Observation Protocol), and Doug found that useful. Some protocols focus on behaviors, others on how students participate in producing knowledge.
36:23 ⏯ COPUS is built on the TDOP (Teaching Dimensions Observations Protocol). Marilyne had trouble establishing inter-rater reliability with TDOP and RTOP and found that easier with COPUS. Doug and Edward go meta on observation protocols.
38:55 ⏯ Marilyne has been using COPUS to monitor change before and after participation in faculty development programs–and longitudinally. They found that faculty behaviors do change, although the new behaviors show a downward movement over the long term. Doug and Marilyne discuss experimental design, and Marilyne shares the actual process of recruiting research subjects. Those who DON’T sign up have higher self-efficacy about teaching–which is probably why they didn’t sign up. But the workshop raised the participants’ self-efficacy about teaching in just two days to the level of non-participants. They tried to create communities to maintain the teaching behaviors, but they have not had luck.
45:05 ⏯ COPUS adoption across whole departments seems to be rare. Doug suggests COPUS could be used to compare departments, not evaluate instructors. There is little research to link COPUS behaviors to learning outcomes. UBC has seen trends. The data may be noisy, like the correlation between a certain vitamin and certain health outcomes. Doug worries about teachers trying to ‘game’ COPUS by pseudo-active learning. Marilyne emphasizes that COPUS does NOT measure teaching quality. People ask Marilyne what good teaching is, and she says there are too many factors. Qualitative evaluation is hard–as you recognize when grading papers.
52:13 ⏯ A mistake in the faculty development classroom. Faculty don’t credit education research. “That’s not true in my classroom.” Using principles and ‘case studies’ instead of data. Faculty believe in ‘personal empiricism’: “I tried it once, and it didn’t work.” Using personal anecdotes to think about learning. Building a culture of collecting and analyzing data in order to talk about learning. The case for student portfolios.
1:03:20 ⏯ Thanks and signing off.
While reading the essay it occurred to me that this is exactly what the Math Club at my daughters’ elementary school brings to the table. Tom and I present new concepts and problems, and we let the kids play with them. The journey and the attitude are way more important than getting to the solution.
Playing with math is a pretty common activity in my home too (see Potty Math), and over the last year the best example is something we call Math Time. Several times a day one of us will spontaneously yell out “Math time!” Everyone else stops what they are doing to stare at the (digital) clock to see how the time can be turned into an equation.
At the beginning, the equations involved simple addition and subtraction:
3:12 → 3 = 1 + 2 or 3 - 1 = 2
Then we added multiplication and division:
12:34 → 12 = 3 x 4 or 12 / 3 = 4
These days it can get a little crazy:
3:29 → 3² = 9
or even better:
12:05 → 11:65 → 11 = 6 + 5
Whenever I think we’ve played it out, one of us comes up with a new variant. And when the math time well does eventually run dry, we will find something else because the math well will never run dry.
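If you want to see how quickly the variants multiply, here’s a tiny sketch of just one family of Math Time equations, where the hour is the target and the minute digits are the operands (the game itself is freeform, so this barely scratches the surface):

```python
def math_time(clock):
    """Find equations of the form hour = digit1 (op) digit2."""
    hour, minutes = clock.split(":")
    target = int(hour)
    a, b = (int(d) for d in minutes)
    hits = []
    if a + b == target: hits.append(f"{target} = {a} + {b}")
    if a - b == target: hits.append(f"{target} = {a} - {b}")
    if a * b == target: hits.append(f"{target} = {a} x {b}")
    if b and a / b == target: hits.append(f"{target} = {a} / {b}")
    return hits

print(math_time("3:12"))   # ['3 = 1 + 2']
print(math_time("12:34"))  # ['12 = 3 x 4']
```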
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
0:41 ⏯ Welcoming Monroe Weber-Shirk. The AguaClara Cornell clean water project and how it started. Monroe’s experience with education abroad: Honduran refugee camps. Goshen College (Monroe’s alma mater) requires all its students to spend some time studying outside the US. A long-held belief in the importance of experiential and engaged learning.
6:22 ⏯ A project for Engineers for a Sustainable World at Cornell. But the first project actually didn’t work–and why–and stumbling across a better project.
10:27 ⏯ How the student experience has changed. Scaling engaged learning from 25 students to 80 by finding student leaders. This is not a one-semester project: students may participate for four years. Students work in teams; the teams have leaders; more experienced students become experts and serve as research advisors.
15:02 ⏯ Learning about leadership. The team leads organize the syllabus, design and create tutorials, deliver content. The instructor only gives two lectures per semester. Two lab sessions per week, sometimes a Monday night lecture, one symposium of student presentations. All the students are in upstate New York designing solutions for people in Honduras. Why not bring students to Honduras? The importance of partner organizations and student accountability.
22:40 ⏯ What do employers say they want in engineers? The student is not a widget. Going beyond a mechanistic view of how engineering solves problems. Both conventional and low-tech/appropriate designs have their problems. Complex systems fail. e.g., Jacques Tati’s Monsieur Hulot character often battles with modern technology. Even “appropriate technologies” like slow-sand technology are expensive and don’t work with dirty water. Monroe’s students design non-electrical systems with no moving parts, save for levers.
29:58 ⏯ Solving the right problem and designing solutions that can be maintained. The first plant failed. The second plant is ten years old and has been upgraded multiple times. Employers also want teamwork and leadership skills. The typical college course has a false model: that the student is a tabula rasa. Teaching servant leadership. Innovation requires failure. Some students go into careers that involve social justice. More people lack safe drinking water now than at any other time in history. Flint, Michigan and Ithaca public schools.
38:47 ⏯ The Big Question: Is there still a place for didactic lecture courses? The challenges of teaching in domains where the existing knowledge is not sufficient: Monroe’s focus is flocculation. Edward’s intro psych course and Doug’s two graduate micro-economics courses.
48:54 ⏯ Should we teach problems? Or theories? Monroe’s students have trouble applying theories they know well to a practical problem. Edward on teaching creative activities.
52:10 ⏯ Monroe’s teaching fails? Every course is an experiment. His students want better explanations in his lecture slides. On taking a knee during a lecture while a protest was taking place. Edward’s motto: It helps to care.
57:53 ⏯ Thanks and signing off.
You can subscribe to the Teach Better Podcast through your favorite podcast app or simply subscribe through iTunes if you don’t have one yet.
0:00 ⏯ Intro
0:35 ⏯ Welcome Mac Stetzer. The benefits of comparing teaching K-12 and college. Flashing back to Episode 11: Teaching Undergraduates and Preschoolers with Carla Horwitz. In K-12, professional development is required, while in higher ed it’s optional. Being a resource for more colleagues without pushing things on them. Mac’s position and UMaine’s faculty incentive grants. Crediting teachers and undergraduate ‘learning assistants’ for what they know about teaching and learning.
9:38 ⏯ How Mac discovered the study of the teaching of physics. The importance of teachers and students being comfortable with not knowing the answer. Teachers can also model misconceptions. The importance of listening to learners. The U Washington Physics by Inquiry approach. Alan Schoenfeld’s research on metacognition in math education: when the instructor never makes mistakes, students get a false sense of how problem-solving works
18:24 ⏯ Supporting students in the experience of struggle and making incorrect predictions as part of learning. Eliciting student errors without making the students feel incapable. How Doug handles this on the first day of the term. Doug’s high school math teacher proves 1=2.
22:32 ⏯ What is physics in K-12? Balancing, ramps, and tracks. Using the Physics by Inquiry approach. How to show and experience the concept vs. merely knowing the equations. Students who teach learn more–because they need to go beyond ‘this is the equation.’ Sometimes teachers only know the takeaway concepts, not the underlying proof. Using analogies to teach and reason. Good teaching is good listening.
33:40 ⏯ Models: closing the gap between the current state and the goal state. But you need to know where the learner is. Leveraging what the students already think. Good teachers have good models of cognition and meta-cognition. Interpreting students’ responses can actually be tricky.
40:13 ⏯ Aligning the faculty development experience with the methods being taught. Seeing everyone as bringing something unique. Tailoring instruction in small groups. And allowing people to bring their own experiences to problem-solving.
46:24 ⏯ Reverting to didactic methods–because you’re less comfortable with the material. So teaching also involves tolerating one’s own discomfort–hence getting closer to the student’s experience. That often gets lost.
51:30 ⏯ A short answer to a large question. Things you must include in a faculty development program. Get good learning materials. Create a climate where failing is okay. Go in depth: don’t just skim through it.
55:40 ⏯ Mac’s teaching mistake. Discovering by accident that students didn’t know a key concept in electronics. Doug’s mistake with a multiple choice clicker question.