Education professor develops Web-based system to help children learn to read

A sample image of a Maze test. Credit: Penn State / Creative Commons

UNIVERSITY PARK, Pa. — Simon Hooper, Penn State professor of education in the Learning, Design and Technology program, has played a key role in the ongoing design of a Web-based learning analytics system that will help improve children’s literacy. The system is designed to track performance over time and use the data to personalize instruction.

The system holds the potential to radically change the grading experience for teachers: it could highlight possible errors, change the mechanics of the grading process and provide access to data visualizations that enhance decision-making.

A well-designed scoring system could increase reliability, reduce grading time, improve teachers’ ability to identify writing problems and raise overall satisfaction with the assessment process.

The software includes separate interfaces for students and teachers. The teacher interface serves three functions: managing students, grading assessments and viewing performance charts. The student interface provides a space for typing short essays: it presents random prompts, allots five minutes to type a response and displays remaining-time reminders.
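In rough terms, the student side of a session could be sketched as follows. The Python fragment below picks a random prompt, runs a five-minute countdown and announces remaining-time reminders; the prompt list, reminder schedule and function names are illustrative assumptions made for this sketch, not the project’s actual code.

    import random
    import time

    # Illustrative prompts; the real system would draw from its own prompt bank.
    PROMPTS = [
        "Write about your favorite animal.",
        "Describe what you did last weekend.",
    ]

    def run_session(duration=300, reminders=(120, 60, 30)):
        """Present a random prompt, count down `duration` seconds,
        and print each remaining-time reminder as it is reached."""
        print(random.choice(PROMPTS))
        start = time.time()
        pending = sorted(reminders, reverse=True)  # e.g. [120, 60, 30]
        while (left := duration - (time.time() - start)) > 0:
            if pending and left <= pending[0]:
                print(f"{pending.pop(0)} seconds remaining")
            time.sleep(1)
        print("Time is up. Your response has been submitted.")

Calling run_session() blocks for the full five minutes, which is the point of the exercise: the child types while the countdown runs.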

In a chapter recently published in the “Handbook of Design in Educational Technology,” Hooper and his co-authors, University of Minnesota professors Charles Miller and Susan Rose, outline the theory behind the software, the development process of the program and implementation research over time.

The chapter chronicles four design cycles that date back to 2005. Initially, the group hypothesized that a computer-based grading system would allow for automatic data aggregation, eliminating the need for counting and recording students’ scores manually. The design challenges have evolved along with the system.

“The design of the system is in a constant state of change. As soon as we take care of one set of issues, another seems to emerge,” Hooper said. “For example, we designed the system for use on regular computers, but now many schools are changing to iPads, and we cannot simply translate from one system to the other. Also, we are constantly generating new ideas for how the system could be expanded.”

The project began with the goal of making essay grading more efficient for elementary school special education teachers.

Initial research inspired the design of a more ambitious software system that the researchers hoped would produce reliable and valid academic growth data for monitoring students’ literacy performance. It was named Avenue DHH (Audio-Visual Educational Environments for Deaf or Hard of Hearing).

As the project continued, the researchers replicated paper-and-pencil versions of both Maze and Slash tests and added auto-grading components.

A Maze test is a modified Cloze test, in which every seventh word is removed from a text passage and students fill each blank with the word they think was deleted. The Maze version lists three words under each blank space: the missing word and two distractors.
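As a concrete illustration of how such a Maze passage might be generated and auto-graded, the Python sketch below deletes every seventh word and pairs it with two distractors drawn from elsewhere in the passage; the function names and the distractor-selection strategy are assumptions made for this sketch, not the project’s published implementation.

    import random

    def build_maze(passage, step=7, seed=0):
        """Blank out every `step`-th word and attach three choices:
        the deleted word plus two distractors sampled from the rest
        of the passage (assumes the passage has enough distinct words)."""
        rng = random.Random(seed)
        words = passage.split()
        items = []
        for i in range(step - 1, len(words), step):
            answer = words[i]
            pool = [w for w in words if w not in (answer, "_____")]
            choices = [answer] + rng.sample(pool, 2)
            rng.shuffle(choices)
            items.append({"position": i, "choices": choices, "answer": answer})
            words[i] = "_____"
        return " ".join(words), items

    def score_maze(items, responses):
        """Auto-grading is then a matter of counting exact matches."""
        return sum(resp == item["answer"] for item, resp in zip(items, responses))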

Slash tests are text passages, displayed in upper case, in which the spaces between words are removed to create a continuous run of characters. On paper, students draw vertical lines where they believe word breaks should fall; on the computer, they click between characters to insert (or remove) word breaks.
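A Slash test lends itself to a similar sketch: the passage is uppercased and its spaces removed, and the character positions a student clicks are compared against the true word boundaries. The scoring rule below (correct breaks minus false breaks, floored at zero) is an assumption for illustration, not the system’s documented rule.

    def build_slash(passage):
        """Uppercase the passage, strip its spaces, and record the
        character indices in the squashed string where breaks belong."""
        words = passage.upper().split()
        squashed = "".join(words)
        boundaries, pos = set(), 0
        for w in words[:-1]:
            pos += len(w)
            boundaries.add(pos)
        return squashed, boundaries

    def score_slash(boundaries, clicks):
        """Reward correct breaks, penalize false ones, floor at zero."""
        clicks = set(clicks)
        return max(len(clicks & boundaries) - len(clicks - boundaries), 0)

    text, breaks = build_slash("the cat sat on the mat")
    # text == "THECATSATONTHEMAT"; breaks == {3, 6, 9, 11, 14}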

Then the researchers took the project in a different direction.

“At this point, an important shift occurred in our design; we introduced gaming features into the design. We were trying to be innovative and to move away from traditional ways of doing things,” Hooper said.

The researchers were also able to conduct classroom feasibility analyses with DHH teachers and students. The primary focus was on establishing system-wide software stability: the initial system was designed for use with a handful of users, while the upgraded system would have to function effectively for hundreds, perhaps thousands, of concurrent users.

With funding now available for nationwide implementation, the researchers are ready to conduct usability testing and implementation research. They will carry out a technological overhaul of the system, examine opportunities for data visualization for both teachers and students, and conduct a longitudinal study examining the validity, reliability and impact of the e-assessment system in K-8 classrooms.

“We have started testing out the system. We are conducting tests to examine the efficiency and effectiveness of the WordMark part of the software. Next semester we will be comparing data from the use of Slash and Maze with standardized achievement data to determine the validity of our system,” Hooper said.

The team is starting to see a glimmer of light at the end of its research tunnel.

“I think we are getting close now, but an important question concerns how this system can be generalized to a mainstream population. Also, the big question will be how access to a system that automates and simplifies assessment administration, grading, data recording and data representation will help teachers in their decision-making processes,” Hooper said.

Pictured above is a sample of a Slash test, in which students put slashes where they think spaces should fall between words. Credit: Penn State / Creative Commons

Last Updated November 25, 2013