I love this idea for communicating both test and report card results. They do tell us something, but certainly not everything (and not the most important things) about a person.
The following two videos were useful for devising and marking tests digitally. Tests created this way can be used formatively (since the results can be emailed to the student) or summatively. It would also be fun to have students create their own tests and test each other this way. I'm excited that the CBE uses Google Apps for Ed, and I want to learn all I can. From the Turley & Gallaher article, the main ideas that struck me were: 1) a rubric should not be transferred from a broader administrative level to the classroom, because the context and purpose are completely different; 2) co-constructed rubrics are useful for creating learning communities that share a common vocabulary for discussing and evaluating work; and 3) effective rubrics elicit richer responses and more effective feedback.
I want to co-create rubrics with detailed criteria so that students can use them as guides for completing, assessing, and learning from their own work as well as each other's. This is where collecting, analysing, and using samples (Davies chp 4; Chappuis chp 7) comes in: the class can look at samples (in small groups or as a whole class), generate and sort criteria for quality, and together we can make a rubric that can be revised as needed. I think this works best for general categories of tasks that students will perform throughout the course, such as science reports, essays, and multimedia presentations. Criteria specific to a particular assignment could be added when that assignment is introduced.

While I liked the idea of collaborating with colleagues to collect and analyse samples across grade levels (Davies chp 4), the approach discussed in Davies chp 9 is better: students should assume responsibility for collecting and analysing work (e.g., through portfolios) so that any assessment is used for, of, and as learning. Students should be able to explain, reflect upon, and evaluate the work they produce: it helps them develop metacognition for learning, ensures that they can safely make mistakes, and sets them on the path of continuous improvement. In reference to selecting rubrics (Chappuis et al. chp 7), a best-case scenario would involve all students keeping work samples to which they and other classes could refer, as well as having access to 'real world' samples and 'real world' rubrics (e.g., codes, regulations, standards). This way, students and teachers could determine what next steps for improvement might look like across grade levels and beyond school. In addition to receiving peer, teacher, and parent feedback, inviting a professional to give descriptive feedback on student work might be particularly meaningful to students.

I agreed with the approach of involving students as much as possible in generating formal report cards (Davies chp 10).
Not only does this increase the validity of the evaluation and empower students to explain the report to their parents, but it also maintains a positive, open, and honest relationship among the teacher, students, and families. There are no surprises, misunderstandings, disagreements, complaints, or slander. When formal reports have to comply with coding imposed by administration, the teacher should include a note explaining how classroom rubrics translate into the administrative rubrics, to help students and parents understand.

I liked the four keys to success for goal-setting conferences (Chappuis et al. chp 12), and felt it vital to empower students to practice this regularly and independently because 1) it is impossible to goal-set with each student for each learning outcome, and 2) it seems difficult to structure learning outcomes such that they build off one another or are reviewed regularly (I've noticed that units within subjects are somewhat isolated from each other). I hope to include time at the end of each lesson for students to quickly record notes for steps 2-4 for themselves to stay on track with their learning target. I would demonstrate first and provide regular assistance, but also encourage students to share with and help each other.

By ensuring that we have a sufficient variety and quantity of learning evidence from students, we are able to produce a fairer and more accurate assessment. Davies notes that by observing the principle of triangulation, the evidence becomes more valid (p. 46). This makes sense, because the variety ensures that students have the chance to demonstrate their learning in at least one way (showing, telling, writing), and prevents bias toward students who are more proficient at one method. This made me think about the philosophy and intent of my ED3503 class. By incorporating all six language arts into instruction and assessment for any subject, I should have accomplished triangulation as well.
Davies also states that we must gather evidence over time to detect trends and patterns (p. 51), which prevents us from drawing conclusions about learning (positive or negative) based on only one assessment. Our lesson plan templates greatly help me remember to plan at least three assessments per lesson using UbD.
Another significant piece of the puzzle is ensuring that assessments have "clear criteria that define quality" (Davies p. 52). Clear criteria enable students to self-assess and improve their own work over time, and enable teachers to evaluate the evidence objectively. I thought that the four-step process (Gregory et al. [2011], cited in Davies pp. 35, 56) would be an excellent and simple way to ensure that students understand quality and take ownership of assessment, especially by being able to revise the criteria based on further experience. The continual revision/improvement process also enables me to be responsive in highlighting both common errors (to prevent recurrence) and additional qualities that make an exceptional product (to guide students toward further growth). I would not rely on this alone, however. I would prefer to prepare my own criteria ahead of time in collaboration with other teachers and, where it makes sense, experts from the community, and then use this as a basis to guide and refine what the students brainstorm. I feel that this would be more accurate and helpful.

I loved the idea of ongoing portfolio development to foster student metacognition, accountability, and sense of progress and achievement, and to manage evidence of learning. I recall putting so much time and effort into my assignments and always feeling disappointed a) that only my teacher saw them, and b) that the product had no life after being graded. Portfolios are a great way to enable students to share their work with a broader audience, to improve products after feedback from anyone, and to reflect on what they've learned and how they've improved. This makes each assignment more meaningful and valuable. I prefer and am used to managing information with digital online systems, so I decided to ask Paul Bohnert for his ideas on what's out there: I have seen only Moodle, Blackboard, and D2L, and am not confident that these are the best solutions.
At minimum, I would like to empower my students to store their evidence in digital format (video, audio, photo, converting hard-copy or physically awkward projects into photo images) so that it is accessible to both them and me anywhere (a secured cloud solution). I'd like them to be able to track version history on items (for edits, comments, etc.), file items in a way that makes sense to them (e.g., by date or subject), and then tag items for different portfolios. Similarly, the system should allow me to provide feedback and to link student evidence with the assessments in my lesson plans and with learning outcomes (this also fulfills fix #7 from A Repair Kit for Grading: 15 Fixes for Broken Grades).

I found "The Case Against Grades" thought-provoking and well argued. Kohn highlights that grades are another form of extrinsic motivation that erodes intrinsic motivation (p. 30). I'm learning in educational psychology, however, that while it's desirable for students to be intrinsically motivated, extrinsic motivation may not entirely destroy intrinsic motivation; it often exists alongside it and could be a stepping stone toward building it. Similarly, if a grade is based on a well-designed rubric, it's possible that working for a grade simultaneously involves meaningful learning. On the other hand, grades seem to best (only?) serve students who are already high achievers. I like the idea of not grading, but am not yet sure how I would answer to people who require numbers to measure. This quote from Jim Lloyd (Chappuis et al., p. 7) strongly resonated with me: "Classroom assessment for learning ... is a way of being. It is a type of pedagogy that when used as a matter of practice makes a profound impact on the way the teacher engineers her learning environment and how the students work within it."
The key ideas wrapped up in this quote (and the rest of the readings) are: 1) effective teaching integrates assessment into lesson planning and execution systematically, and 2) it is possible and desirable to engage students in the assessment process.
By integrating assessment into lesson planning, I will develop much stronger lesson plans and be able to support, guide, verify, and report on student learning much more effectively and clearly. By making assessment overt and inviting students to participate in assessment activities, I'm confident that students will take more ownership of their learning and pride in themselves as learners, and that they will develop skills that will enable them to be lifelong learners in academic and non-academic areas.

One notion that stood out to me was that attendance, effort, and behaviour should not be counted in the grading system: the focus should be on what students know and can achieve. In my schooling experience, it was common to get some nominal marks for participation. On the surface, attendance, effort, and behaviour seem important, but I realized that all three already affect student achievement. Measuring achievement alone would make things simpler (because you aren't trying to consider those factors separately) and more accurate (because the criteria for achievement are tied to the learning objectives set out by the Program of Study). I also recalled that both formative and summative assessment criteria were hidden from me throughout my education in non-core subjects, including physical education, art, music, and dance. I tended to do well in them, but I had no idea what I was doing to earn my good grade, nor what I could do to improve it when it wasn't as high as my grades in other subjects. That was frustrating, and, while I liked these subjects and the activities we did, I didn't feel I was learning anything; skill improvement just seemed to come with practice. When I put myself in my teachers' shoes now, I envision structuring these classes much differently. I find that the Keys to Quality Classroom Assessment figure (Chappuis et al., p.
5) outlines a logical progression in designing lessons that assess, further, verify, and communicate students' learning progress, as well as engage students in their own learning. I feel confident that, as I gain knowledge and practice of each 'key', I will be able to, say, take the learning objectives for grade 10 art, design assessment methods that match the targets and provide quality feedback, and empower students to assess themselves and set goals. Having this logic and structure in place is reassuring, because it also allows me to systematically check what might have gone wrong if things don't turn out as I predicted, and to know what I need to tweak, ditch, or improve.

On a related note, I learned that descriptive feedback helps students identify specifically what to continue, improve, and avoid based on comparison to an exemplar, sample, description, or criteria, whereas evaluative feedback helps students understand whether they need to improve, but not how. Indeed, research suggests that evaluative feedback may actually interfere with learning (Davies, p. 18), so, as a rule of thumb, teachers should increase their descriptive feedback and decrease their evaluative feedback. This made me wonder whether the time spent designing and implementing evaluative measures is really worth it if the goal is student learning. Evaluative feedback seems to be used primarily to summarize (often average) students' progress (individual or aggregated) for people outside the classroom, who likely don't understand all of the intricacies associated with generating that report and therefore cannot properly analyse and act upon it. Even if I had a clear understanding of how stakeholders will use the assessment information and identified what level of detail is required, I'm not confident that I could generate reports that explain things thoroughly yet briefly enough for certain audiences to use them properly.
I experienced this very problem often in my job as an occupational health and safety analyst. Perhaps evaluative feedback is useful in identifying whether certain teaching strategies are more or less effective, but then it seems that one would have to analyse reports of student progress alongside teachers' lesson plans, classroom management plans, assessment plans, and other tools used in teaching. Alternatively, if descriptive feedback supplements evaluative feedback, it may become more effective. I think this is what the 'balanced assessment system' table (Davies, p. 21) attempts to promote. Looking at this table makes me wish that school and community leaders would ask more questions related to formative assessment, or at least understand that the questions they currently ask indirectly push teachers to evaluate students in ways that actually interfere with their learning.
August 2017