
SWAI for Scoring is a narrative-focused automated scoring solution built on a specialized language model (sLLM). Optimized for essay and descriptive questions, it comprehensively assesses sentence structure, logical development, and expressiveness. A standardized rubric-based scoring method ensures objectivity and consistency, and OCR technology accurately recognizes handwritten or image-based answers. Designed to be used alongside or in place of teacher grading, the AI targets a reliability of Kappa 0.6 or higher. It runs independently, without integration with external learning platforms, and provides quantitative assessments focused on each student's critical thinking and expressive ability; it does not assess plagiarism.
Criteria are set for each item, covering topic clarity, logical development, use of examples, and grammatical expression, and each is scored on a five-level scale.
Detects six types of rater errors, including central-tendency and logic errors, and supports statistical reliability analysis such as Kappa and Pearson correlation.
Improves accuracy by measuring the agreement between AI and teacher scoring results; cross-grading and a scoring simulator are also supported.
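The agreement measures mentioned above can be made concrete with a small sketch. This is not SWAI's implementation — it is a minimal, self-contained illustration of Cohen's Kappa, the chance-corrected agreement statistic behind the "Kappa 0.6 or higher" target; the score lists are invented sample data.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters (e.g. AI vs. teacher)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of answers where both raters gave the same level.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical five-level scores for ten answers.
ai      = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
teacher = [3, 4, 3, 5, 3, 4, 2, 3, 4, 2]
kappa = cohens_kappa(ai, teacher)
print(round(kappa, 3))  # → 0.73, above the 0.6 reliability target
```

Values above 0.6 are conventionally read as substantial agreement, which is presumably why the product uses that threshold.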
Automatically applies rubric and scoring criteria by question type; suitable for completion, descriptive, and oral questions.
Calculates scores for each item based on per-item achievement standards and evaluation factors, producing quantitative evaluations of whether each criterion is met.
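One common way to turn per-criterion levels into an item score is a weighted sum. The sketch below assumes a hypothetical rubric using the four criteria named earlier (topic clarity, logical development, use of examples, grammatical expression) with invented weights and the five-level scale; SWAI's actual weighting is not published.

```python
# Hypothetical rubric: criterion -> weight (weights sum to 10).
RUBRIC = {
    "topic_clarity": 3,
    "logical_development": 3,
    "use_of_examples": 2,
    "grammatical_expression": 2,
}

def rubric_score(levels):
    """Weighted total on a 100-point scale from per-criterion levels (1-5)."""
    for name, level in levels.items():
        if not 1 <= level <= 5:
            raise ValueError(f"{name}: level must be 1-5, got {level}")
    # Level 5 on every criterion maps to exactly 100 points.
    return sum(RUBRIC[c] * levels[c] for c in RUBRIC) * 100 / (5 * sum(RUBRIC.values()))

score = rubric_score({"topic_clarity": 4, "logical_development": 5,
                      "use_of_examples": 3, "grammatical_expression": 4})
print(score)  # → 82.0
```

Keeping weights as integers makes the arithmetic exact, which matters when scores feed into pass/fail cutoffs.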
Answers submitted as images or scans are also recognized and analyzed: they are converted to digital text and then fed into AI grading.
In addition to rubric scores, item-specific evaluation comments are provided to encourage learner self-assessment and improvement.
Provides analysis and feedback on over- and under-scoring tendencies between raters, which can inform rater training and calibration criteria.
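A simple way to quantify the over- and under-scoring tendency just described is the mean signed difference between a rater's scores and a reference. This is a generic sketch with invented numbers, not SWAI's analysis method.

```python
from statistics import mean

def leniency_bias(rater_scores, reference_scores):
    """Mean signed difference: positive means the rater over-scores
    relative to the reference; negative means under-scoring."""
    return mean(r - ref for r, ref in zip(rater_scores, reference_scores))

# Hypothetical scores for six answers: a teacher vs. a reference (e.g. AI consensus).
teacher_scores   = [4, 3, 5, 4, 2, 4]
reference_scores = [3, 3, 4, 4, 2, 3]
bias = leniency_bias(teacher_scores, reference_scores)
print(bias)  # → 0.5, i.e. this rater scores half a level high on average
```

A calibration step might flag raters whose bias exceeds some threshold for retraining.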
Eases the burden on teachers while expanding descriptive assessment and establishing a fair, reliable evaluation system.
Simplifies repetitive grading tasks at private academies and college-prep institutions, and supports skill improvement through feedback.
Automatically grades each learner's individual descriptive responses, enabling rubric-based self-directed learning.
Large-scale evaluations can be conducted through integration with educational administration systems (e.g., NEIS). Management dashboards and analysis functions are provided.
Reduces grading time by over 80% on average, with instant scoring in large-scale testing environments.
Evaluations follow the same criteria for every response, minimizing inter-rater variation and ensuring reliability.
Feedback-based analysis of learning outcomes contributes to improved writing skills and motivation.
Ask anything.
You can also contact us by phone (031-972-0409) or email (swempire@swempire.co.kr).