To read part 1, click here.
It was hard to get a decent photo, but this is actually my classroom transformed into a maze on the annual 'prank day', of which I was totally unaware (thanks, teachers!). Nah, my room was pretty sweet and easy to clean up compared to others', which had anything from a million pictures of kilts to live fish.
I'm not entirely sure how much the students processed from the PBS doc I showed about Kahlo (which I tried very hard to censor, out of respect for the school), but I certainly enjoyed it. It reignited my passion for and interest in surrealist art, the 1920s, and authentic self-expression in all of its glory and profanity.
Not this past Thursday, but the one before that, I administered an open-book multiple choice test to my ELA 20-2 class only to discover that the class average was about 7/20. Yikes. In hindsight, the questions may have been too difficult. All of them were really at the 'analyse' and 'evaluate' level, which was why I made it open book. I wasn't sure what to do, but my teacher mentor suggested I have the students do a rewrite.
So, this Thursday (after I had tracked everyone down and made sure they had completed the test the first time), I let them see their mark as a bit of incentive to come on Friday. Then, on Friday, they were given their completed scantron and were allowed to work in pairs to write the test again. The second time around, the majority of students got 20/20 and no one scored lower than 17. I made an effort to go around to pairs and ask them to walk me through their thinking in selecting the new answer, so that I knew they understood why the answer was right and weren't relying primarily on the process of elimination. I'm not entirely confident that the new results accurately indicate their understanding (that's a huge jump), but the conversations I had with students showed they had learned to reason through things more thoroughly. I decided to 'count' both tests, but the first was weighted only half as much as the second. Do you think this is fair and accurate? I love this idea for communicating both test and report card results. They do tell us something, but certainly not everything (and not the most important things) about a person.
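As a quick sanity check on that weighting, here's a minimal sketch of the arithmetic (the function name and example scores are mine, purely illustrative): the first attempt counts once and the rewrite counts twice.

```python
def weighted_score(first: float, second: float) -> float:
    """Combine two test scores, weighting the first attempt
    half as much as the second: (1*first + 2*second) / 3."""
    return (first + 2 * second) / 3

# Example: a student who scored 7/20 on the first attempt
# and 19/20 on the paired rewrite.
print(weighted_score(7, 19))  # 15.0 out of 20
```

So even a student who bombed the first attempt isn't sunk, but the rewrite alone can't carry them to full marks either.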
I had originally planned on making this presentation into a Brainshark video, but my free account only allows me 15 minutes of recording time, so I was unable to do that, and it's considerably more complicated to turn a timed PowerPoint presentation into a YouTube video. This attempt showed me that I have more to learn about "flipping" my classroom through direct instruction via technology (I think I'll eventually purchase Camtasia and a good mic).
Regardless, I wanted to make the information available to anyone who might benefit from it, so here is the PowerPoint with the "meat" of the presentation located in the "notes" section on each slide. For quick reference, I also want to include in this post the steps I plan to take to use social media in my classroom (references are in the PowerPoint).
Today’s class was heavy. We were challenged to consider whether students drop out or are pushed out. Jane modeled quite vividly the process of “autobiographical suicide”: “in order to succeed in school, a student is required to wipe out their original (cultural/social) identity and become a different person.” I never experienced this tension or temptation in school because I had the right “cultural capital,” but I did recall eventually “dropping out” of a Christian club in university after a number of years of service and leadership. I remember gradually recognizing that I no longer shared the same values as the leaders of the organization and feeling more and more uncomfortable with the evangelistic tactics they endorsed and expected. I remember often feeling pressure to speak a certain way and adopt certain beliefs, and eventually came to resent feeling trapped, unable to be myself or to relate to God the way I felt I wanted to. Recalling this experience helped me to connect my own feelings and understanding to the student whose values and culture don’t fit with the school’s. In my own journey with various church communities, I’ve been learning to value differences, to be okay with disagreements, and to be more comfortable living in tension. I’ve also been blessed to be a part of a church community where I genuinely feel free of judgment and free to be myself. This has enabled me to love myself more and to extend that love and grace to those whose values are very different than mine and who might themselves judge others and put others down.
I want to read the rest of this book! McLaren chose such raw, heart-wrenching, and heart-warming moments for his readers. As he notes in his Afterword, public education masks itself as the "great equalizer" when, because of its dual purpose of teaching the overt and the hidden curriculum, it tends simply to perpetuate social inequalities; that continues to anger me. It also makes me upset that "the poor are often ostracized to states of unworthiness and inferiority" (p. 177). Jesus called them blessed and the salt of the earth.
McLaren begins one paragraph by saying, "Democracies like ours exhort equal opportunity but often ignore ways in which our schools operate unconsciously and unknowingly to guarantee that there will be no real equality" (p. 177). I think part of the problem is our emphasis on equality rather than justice. We are not all equal, but we are all needy, and we all have something to offer. Maybe we need to focus on meeting each other's needs, as well as receiving others' help, and not making a big fuss over either. This is challenging in a society undergirded by competition. I agree with McLaren: "the school system is mostly geared to the interests, skills, and attitudes of the middle-class child" (p. 178). Being one myself, I constantly need to check myself for my biases and assumptions, asking, "Whose voice and concerns am I leaving out?", "How can I honour my students as they are while inspiring them to be all they can be?", and "What do I need to unlearn in order to move forward more ethically?"

A few days ago, I got an email newsletter update from Edutopia with the subject line: "Games? In Class? Yes, Please!" Gamification of learning has intrigued me for a while now, and I gamified my first social studies lesson to help students learn the different First Nations groups that live(d) in Alberta. The students loved it and asked a number of times to play it again. I was encouraged, but I want to proceed with caution. While I love the idea of kids "creating" (cf. Bloom's Taxonomy) by designing and building video games, and I agree with borrowing some useful video game mechanics after which to model one's classroom, I'm definitely opposed to using badge/level/leaderboard mechanics.
Maybe I've drunk too much of the Alfie Kohn Kool-Aid, but he provides compelling, research-based reasons for upholding such principles as:

"9. If students are rewarded or praised for doing something (e.g., reading, solving problems, being kind), they'll likely lose interest in whatever they had to do to get the reward.

"10. The more that students are led to focus on how well they're doing in school, the less engaged they'll tend to be with what they're doing in school.

"11. All learning can be assessed, but the most important kinds of learning are very difficult to measure -- and the quality of that learning may diminish if we try to reduce it to numbers."

These principles stand in direct contrast to the reward/punishment, competition-based design of video games, marketing, business management, and our general socio-economic infrastructure, largely because we tend to accept and promote short-term thinking, planning, and gains. Do badges, levels, rewards, grades, bonuses, subsidies, awards, and 'winning' or 'beating' motivate us? Certainly! But for how long, for what, and unto what?

By ensuring that we have a sufficient variety and quantity of learning evidence from students, we can produce a fairer and more accurate assessment. Davies notes that by observing the principle of triangulation, the evidence becomes more valid (p. 46). This makes sense, because the variety ensures that students have the chance to demonstrate their learning in at least one way (showing, telling, writing), and prevents bias toward students who are more proficient at one method. This made me think about the philosophy and intent of my ED3503 class. By incorporating all six language arts into instruction and assessment for any subject, I should have accomplished triangulation as well. Davies also states that we must gather evidence over time to detect trends and patterns (p. 51), which prevents us from drawing conclusions about learning (positive or negative) based on only one assessment. Our lesson plan templates greatly help me remember to plan at least three assessments per lesson using UbD.
Another significant piece of the puzzle is ensuring that assessments have "clear criteria that define quality" (Davies, p. 52). Clear criteria enable students to self-assess and improve their own work over time, and enable teachers to evaluate the evidence objectively. I thought that the four-step process (Gregory et al. [2011] ctd. in Davies, pp. 35, 56) would be an excellent and simple way to ensure that students understand quality and take ownership of assessment, especially by being able to revise the criteria based on further experience. The continual revision/improvement process also enables me to be responsive, both in highlighting common errors to prevent recurrence and in pointing out the additional qualities that make an exceptional product, to guide students toward further growth. I would not rely on this alone, however. I would prefer to prepare my own criteria ahead of time in collaboration with other teachers and, where it makes sense, experts from the community, and then use this as a basis to guide and refine what the students brainstorm. I feel this would be more accurate and helpful.

I loved the idea of ongoing portfolio development to foster student metacognition, accountability, and sense of progress and achievement, and to manage evidence of learning. I recall putting so much time and effort into my assignments and always feeling disappointed a) that only my teacher saw them, and b) that the product had no life after being graded. Portfolios are a great way to enable students to share their work with a broader audience, to improve products after feedback from anyone, and to reflect on what they've learned and how they've improved. They make each assignment more meaningful and valuable. I prefer and am used to managing information with digital online systems, so I decided to ask Paul Bohnert for his ideas on what's out there, as I have seen only Moodle, Blackboard, and D2L, and am not confident that these are the best solutions.
At minimum, I would like to empower my students to store their evidence in digital format (video, audio, photo, converting hard-copy or physically awkward projects into photo images) so that it is accessible to both them and me anywhere (a secured cloud solution). I'd like them to be able to track version history on items (for edits, comments, etc.), file items in a way that makes sense to them (e.g., by date or subject), and then tag items for different portfolios. Similarly, the system should allow me to provide feedback and to link student evidence with assessments in lesson plans and with learning outcomes (this also fulfills fix #7 from A Repair Kit for Grading: 15 Fixes for Broken Grades).

I found "The Case Against Grades" thought-provoking and well-argued. Kohn highlights that grades are another form of extrinsic motivation that erodes intrinsic motivation (p. 30). I'm learning in educational psychology, however, that, while it's desirable that students be intrinsically motivated, extrinsic motivation may not entirely destroy intrinsic motivation, often exists alongside it, and could be a stepping stone toward building it. Similarly, if a grade is based on a well-designed rubric, it's possible that working for a grade simultaneously involves meaningful learning. On the other hand, grades seem to serve best (only?) those students who are already high-achievers. I like the idea of not grading, but am not yet sure how I would answer to people who require numbers to measure. This quote from Jim Lloyd (Chappuis et al., p. 7) strongly resonated with me: "Classroom assessment for learning ... is a way of being. It is a type of pedagogy that when used as a matter of practice makes a profound impact on the way the teacher engineers her learning environment and how the students work within it."
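Coming back to the portfolio wish list above: no existing platform I know of matches it exactly, but the data model I have in mind could be sketched roughly like this (every name here is hypothetical, not any real system's API).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceItem:
    """One piece of learning evidence: filed by subject/date,
    with a version history and tags linking it to portfolios."""
    title: str
    subject: str
    created: date
    media_type: str  # e.g. "video", "audio", "photo"
    tags: set[str] = field(default_factory=set)        # portfolios this item belongs to
    versions: list[str] = field(default_factory=list)  # one entry per revision/comment

    def add_version(self, note: str) -> None:
        """Record a new revision (student edit, teacher feedback, etc.)."""
        self.versions.append(note)

# Example: a photo of a physically awkward project, filed by subject
# and tagged for two different portfolios.
item = EvidenceItem("Diorama", "Social Studies", date(2014, 10, 1), "photo")
item.tags.update({"term1-showcase", "parent-conference"})
item.add_version("initial photo upload")
item.add_version("teacher feedback added")
```

The point of the sketch is that tagging (rather than filing items into a single folder) is what lets one piece of evidence live in several portfolios at once, and an append-only version list is the simplest form of the history tracking described above.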
The key ideas wrapped up in this quote (and the rest of the readings) are: 1) effective teaching integrates assessment into lesson planning and execution systematically, and 2) it is possible and desirable to engage students in the assessment process.
By integrating assessment into lesson planning, I will develop much stronger lesson plans and be able to support, guide, verify, and report on student learning much more effectively and clearly. By making assessment overt and inviting students to participate in assessment activities, I'm confident that the students will take more ownership of their learning and pride in themselves as learners, and that they will develop skills that will enable them to be lifelong learners in academic and non-academic areas.

One notion that stood out to me was that attendance, effort, and behaviour should not be counted in the grading system; the focus should be on what students know and can do. In my schooling experience, it was common to get some nominal marks for participation. On the surface, attendance, effort, and behaviour seem important, but I realized that all three already influence student achievement, so measuring achievement alone captures their effect. Measuring achievement would also make things simpler (because you aren't trying to consider those factors separately) and more accurate (because the criteria for achievement are tied to the learning objectives set out by the Program of Study). I also recalled that both formative and summative assessment criteria were hidden from me throughout my education in non-core subjects, including physical education, art, music, and dance. I tended to do well in them, but I had no idea what I was doing to earn my good grade, nor what I could do to improve it when it wasn't as high as my grades in other subjects. That was frustrating, and, while I liked these subjects and the activities we did, I didn't feel I was learning anything; skill improvement just seemed to come with practice. When I put myself in my teachers' shoes now, I envision structuring these classes much differently. I find that the Keys to Quality Classroom Assessment figure (Chappuis et al., p.
5) outlines a logical progression in designing lessons that assess, further, verify, and communicate students' learning progress, as well as engage students in their own learning. I feel confident that, as I gain knowledge and practice of each 'key', I will be able to, say, take the learning objectives for grade 10 art, design assessment methods that match the targets and provide quality feedback, and empower students to assess themselves and set goals. Having this logic and structure in place is reassuring, because it also allows me to systematically check what might have gone wrong if things don't turn out as I predicted, and to know what I need to tweak, ditch, or improve.

On a related note, I learned that descriptive feedback helps students identify specifically what to continue, improve, and avoid, based on comparison to an exemplar, sample, description, or criteria. Evaluative feedback helps students understand whether they need to improve, but not how. Therefore, as a rule of thumb, teachers should increase their descriptive feedback and decrease their evaluative feedback. In fact, research suggests that evaluative feedback may actually interfere with learning (Davies, p. 18). This made me wonder whether the time spent designing and implementing evaluative measures is really worth it if the goal is student learning. Evaluative feedback seems to be used primarily to summarize (often average) students' progress (individual or aggregated) for people outside the classroom, who likely don't understand all of the intricacies associated with generating that report and therefore cannot properly analyse and act upon it. Even if I had a clear understanding of how stakeholders will use the assessment information and identified what level of detail is required, I'm not confident that I could generate reports that explain things thoroughly yet briefly enough for certain audiences to use them properly.
I experienced this very problem often in my job as an occupational health and safety analyst. Perhaps evaluative feedback is useful in identifying whether certain teaching strategies are more or less effective, but then it seems that one would have to analyse reports of student progress alongside teachers' lesson plans, classroom management plans, assessment plans, and other tools used in teaching. Alternatively, if descriptive feedback supplements evaluative feedback, it may become more effective. I think this is what the 'balanced assessment system' table (Davies, p. 21) attempts to promote. Looking at this table makes me wish that school and community leaders would ask more questions related to formative assessment, or at least understand that the questions they currently ask indirectly push teachers to evaluate students in ways that actually interfere with their learning.