A Contrastive Study Between Intelligent Automated Essay Scoring and Teacher Scoring in the English Writing of University Students Based on iWrite (Thesis Proposal)
1. Research Purpose and Significance (Literature Review)
English writing assessment by teachers has long been considered a time-consuming and expensive activity, and its objectivity cannot be guaranteed during the grading process. By resorting to automated essay scoring tools, however, consistency in the assessment of articles can be achieved. Various automated assessment tools exist at present, namely the Project Essay Grade (PEG) system developed by Page et al., the Intelligent Essay Assessor (IEA) system developed in the late 1990s, the e-rater system, and so on. From automated grading tools for objective tests such as true-false and multiple-choice items to essay grading tools, the benefits seem conspicuous, yet the actual effectiveness remains unclear, especially when compared with teacher assessment. This thesis therefore aims to study the accuracy and precision of automated essay scoring tools by comparing teacher feedback and machine feedback on forty articles written by Chinese students.
As for studies abroad, while some take articles written by native speakers as the object of research on computer-based writing assessment, a certain share of the literature focuses on articles written by learners who study English as a second language (Li & Liu, 2017). Moreover, studies of various automated scoring tools, such as IEA and PEG, have reported relatively high correlations between automated assessment tools and human raters (Coniam, 2009; Ranalli, Link & Chukharev-Hudilainen, 2016). Coniam (2009) concluded that while computer rating programs have their detractors in terms of transparency, they produce results that compare favourably with those of human raters. Apart from these supportive studies, a major criticism is that the computer rating process is essentially a "black box", since the rating criteria are not explicit (Weigle, 2002). Beyond studies of the degree to which automated writing scoring tools can match human accuracy, there are also studies that incorporate essay-writing systems, namely SCIgen, GhostWrite and Gatherer, and investigate questions such as whether automated essay-writing systems can generate intelligent and coherent essays that fool university markers into assigning them good grades, embodying the disparity between technology and research (Williams & Nash, 2009).
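The human-machine correlations reported in these studies are, at their simplest, score correlations computed over a shared set of essays. As a minimal, hypothetical sketch (the scores below are invented for illustration and are not taken from any cited study or from iWrite), such a comparison can be expressed in a few lines of Python:

```python
# Hypothetical sketch: correlating machine scores with teacher scores
# for the same set of essays. All numbers are placeholders.
from statistics import mean

teacher_scores = [72, 85, 64, 90, 78, 69, 81, 75]   # human rater scores
machine_scores = [70, 88, 60, 92, 80, 65, 79, 77]   # automated scores for the same essays

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"human-machine score correlation: {pearson(teacher_scores, machine_scores):.2f}")
```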
2. Basic Research Content and Design
iWrite, a newly founded yet quickly growing English writing assessment platform established by the Foreign Language Teaching and Research Press (FLTRP), has its own features in English essay scoring based on the system behind it. Considering that abundant resources and studies already target the Pigai website, pertinent studies of iWrite are rather scarce. This thesis therefore chooses forty anonymous articles assessed on the platform as its objects. Errors are classified according to linguistic features. The thesis aims to study the accuracy and precision of the current automated writing assessment tool iWrite, which will be achieved through the analysis and comparison of teacher feedback and automated scoring feedback on iWrite for the forty articles, following the linguistic categorization.
First, the forty articles will be analyzed by teachers according to the categorization above, covering syntactic errors, lexical errors, collocation errors and technical errors. Each broad category has its own subdivisions, such as the number of words, average sentence length, number of verbs, content features such as specific words and phrases, and other characteristics including the order in which concepts appear and the occurrence of certain noun-verb pairs. A record of the comparison between each subdivision of the same article will be made, on the basis of which statistics will be collected concerning the inconsistency between teacher and machine feedback as well as the errors the machine has made. Related literature will be consulted simultaneously, and the thesis will ultimately be written on this basis; a rough sketch of how this comparison could be tabulated follows below.
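As a rough illustration of how the per-category comparison could be recorded and summarized, the sketch below assumes each article receives a teacher error count and a machine error count for every linguistic category; the category names and counts are placeholders, not data from the study:

```python
# Hypothetical sketch of the teacher-vs-machine comparison by error category.
# Counts are placeholders; real data would come from the forty annotated essays.
from collections import defaultdict

# (article_id, category) -> (teacher_error_count, machine_error_count)
records = {
    (1, "syntactic"):   (3, 2),
    (1, "lexical"):     (1, 1),
    (1, "collocation"): (2, 0),
    (2, "syntactic"):   (0, 1),
    (2, "technical"):   (4, 3),
}

# Aggregate the disagreement (absolute difference in error counts) per category.
disagreement = defaultdict(list)
for (_, category), (teacher, machine) in records.items():
    disagreement[category].append(abs(teacher - machine))

for category, diffs in sorted(disagreement.items()):
    print(f"{category:12s} mean |teacher - machine| = {sum(diffs) / len(diffs):.2f}")
```

A per-category summary of this kind would make it straightforward to see where the machine feedback diverges most from the teacher feedback.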
3. Research Plan and Schedule
Before 20th January: finalization of the title
Before 20th March: submission of the outline
Before 25th April: submission of the first draft
4. References (12 or more)
[1] Li, X. & Liu, J. Automatic Essay Scoring Based on Coh-Metrix Feature Selection for Chinese English Learners. In: Wu, T. T., Gennari, R., Huang, Y. M., Xie, 2017.
[2] Ranalli, J., Link, S. & Chukharev-Hudilainen, E. Automated Writing Evaluation for Formative Assessment of Second Language Writing: Investigating the Accuracy and Usefulness of Feedback as Part of Argument-Based Validation. Educational Psychology, 2016.
[3] Ha, M. & Nehm, R. H. The Impact of Misspelled Words on Automated Computer Scoring: A Case Study of Scientific Explanations. Journal of Science Education and Technology, 2016.