Moscow Pedagogical Journal

FOREIGN LANGUAGE WRITING ASSESSMENT: MODELS OF THE SUBJECT’S ACTIVITIES ASSESSMENT

https://doi.org/10.18384/2310-7219-2018-3-99-107

Abstract

The article studies the behavior of the subject of assessment in the process of evaluating foreign language writing. Assessment as a cognitive process is considered in the context of decision-making activity, in which the subject resorts to various strategies of behavior. The analysis of the subject’s activity made it possible to identify the following factors: direct and indirect, internal and external, controlled and uncontrolled. These factors affect both the subject and the assessment process itself. Based on the results of the study, the author concludes that modeling the subject’s activity makes it possible to predict the subject’s behavior and manage the assessment situation. Clarifying the factors that influence the subject’s behavior contributes to the effective organization of the subject’s activities and to the validity of the assessment.

About the Author

Marina A. Bodony
Kuban State University
Russian Federation




Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2949-4990 (Print)
ISSN 2949-4974 (Online)