Annotations ‘How To’
- 1. Cite the source in MLA
- 2. Write 2-3 sentences that give a broad overview of the argument, aims, and/or scope of the article or chapter: what are the main claims/goals, what are the key terms, what’s the context?
- 3. Write 2-3 sentences explaining the main mode of inquiry and/or evidence the author uses to support her main claim/goal.
- 4. Write 1-2 sentences assessing the validity of the source. Did the author accomplish the goal he set out for himself? You've already stated the main claim/goal and the evidence/methods the author uses to achieve that goal, so now assess the article's success.
- 5. Write 1-2 sentences: How will you use this article/chapter in your own video?
Ball, Cheryl E. "Assessing Scholarly Multimedia: A Rhetorical Genre Studies Approach." Technical Communication Quarterly, vol. 21, no. 1, 2012, pp. 61-77.
Ball opens by defining webtexts as "Scholarly multimedia…article- or book-length, digital pieces of scholarship designed using multimodal elements to enact authors' arguments. They incorporate interactivity, digital media, and different argumentation strategies…" (62). A webtext cannot be translated to hard copy without significant loss of meaning. In her courses, Ball asks students to compose texts that aim to be "scholarly webtexts."

To support students in their composition, Ball asks them to articulate the criteria they use in assessing digital texts. To her students' criteria she adds Warner's (2007) criteria for evaluating webtexts; the criteria Kairos editors use to review submissions; and criteria developed by Kuhn et al. in "The Components of Scholarly Multimedia" and "Speaking with Students: Profiles in Digital Pedagogy." Kuhn evaluates her students' work (and her own) via conceptual core, research component, form and content, and creative realization. Ball's students synthesize the available criteria and add their own, and Ball guides them through this criteria-generating process each time she assigns a webtext. She has published a sample of student-generated assessment criteria to model the process of criteria generation, not the product.

Like White et al., Ball strongly cautions against borrowing rubrics or criteria and applying them to a new writing situation, arguing instead that "…each piece must be evaluated on its own terms in relation to that moment and to technology and media and genre, in time" (68). Her article is successful because she shows how criteria work best when "created fresh" in collaboration with students at the local level to fit the specifics of the writing task. I will use this article in my presentation to show how to crowdsource assessment criteria.