From Our Print Archives

Language Sample Analysis

Vol. 19 • Issue 27 • Page 12

Language sample analysis (LSA) is considered one of the best methods of evaluating language production in children. Most of what we know about the development of language production in typical children is the product of LSA.

In his early work, Roger Brown, PhD, gathered extensive language samples each month from three children to document syntax acquisition.1

Each sample included 700-plus utterances. His research group traced each child's progress in mastering grammar. The children progressed through the same stages but at different ages. Each grammatical stage was defined by mean length of utterance (MLU), which became a convenient method to define progress.
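In practice, MLU is simply the total number of morphemes in a sample divided by the number of utterances. The sketch below illustrates that arithmetic; it is not SALT's implementation, and it assumes the transcript has already been segmented so that bound morphemes are marked with a slash, a simplified stand-in for real transcription conventions.

```python
# Illustration of the MLU computation: total morphemes / number of utterances.
# Not SALT's implementation. Assumes utterances are already transcribed with
# bound morphemes split off by a slash (e.g., "dog/s", "play/ed"), a
# simplification of real transcription conventions.

def count_morphemes(utterance: str) -> int:
    """Each word counts as one morpheme, plus one per bound morpheme marked with '/'."""
    return sum(1 + word.count("/") for word in utterance.split())

def mean_length_of_utterance(utterances: list[str]) -> float:
    """MLU in morphemes = total morphemes / total utterances."""
    if not utterances:
        return 0.0
    return sum(count_morphemes(u) for u in utterances) / len(utterances)

sample = [
    "the dog/s are play/ing",   # 6 morphemes
    "I want more juice",        # 4 morphemes
    "he go/ed outside",         # 4 morphemes
]
print(round(mean_length_of_utterance(sample), 2))  # 4.67
```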

Researcher Walter Loban, PhD, added to this work, collecting language samples from children in grades K-12 and documenting progress on a number of different measures, including MLU, number of different words, and fluency (mazes).2

His research laid the groundwork for the clinical use of language samples. Initially, this required collecting a representative sample of oral language production; transcribing the sample using standard conventions for defining utterances, words, morphemes, mazes and errors; analyzing the sample for each measure; and interpreting the results relative to the research data on language development.3

Fortunately, the LSA process has improved considerably. The advent of personal computers removed much of the work from the process by improving the transcription tools and providing automatic analysis of measures at every level of language production. Several computer-assisted LSA projects offer a standard transcription format and automated analyses. The Systematic Analysis of Language Transcripts (SALT), by SALT Software LLC, also offers an automated comparison of individual speakers with samples from typical speakers collected under the same conditions. This provides clinicians with critical comparison data from typical speakers of the same ages or grades.4

SALT researchers collaborated with the Wisconsin Department of Public Instruction to create databases that could be accessed to interpret individual performance.5

The initial databases included conversational and narrative samples from children ages 3-13. In the conversational samples, the child and an examiner talked about fixed topics, such as school, family and holiday events. For the narrative samples, children talked about a favorite movie or book.

The data from these samples showed that most measures continued to advance with age, and many correlated significantly with age. Children talked more and used more complex syntax in narration than in conversation. Measures of verbal fluency, such as repetitions and revisions, reflected the difficulty of the task. These measures were relatively stable across age and were more frequent in narration than in conversation. Measures of speaking rate correlated significantly with age, indicating that producing more language per unit of time is an important sign of advancing language skill.

The SALT Group from the Madison Metropolitan School District proposed creating story retell databases for students in early elementary grades to help evaluate state standards for oral language. Students would retell the same story so the content would be consistent across the dataset. This would allow direct assessment of story vocabulary and narrative structure.6

Story retelling is part of the literacy curriculum in school, providing direct assessment of the oral language skills required for school success. SALT researchers took part in projects with the Madison Metropolitan School District, San Diego Unified School District and Cajon Valley School District to create story retell databases for students in grades K-4.7

SALT researchers also participated in a national effort to document reading and school achievement in bilingual children. This project used a story retell paradigm to assess oral language in English and Spanish. The results revealed that story retells of 35 to 65 utterances predicted reading outcomes of English language learners better than any other measure, including standardized tests. Oral language skill in either language supported reading achievement. This project led to the creation of bilingual databases and a bilingual version of SALT Software.

Most recently, SALT researchers collaborated with the Madison Metropolitan School District and several Milwaukee area school districts to create an expository database to provide oral language assessments for students in grades 7-9.8,9

Exposition is part of the oral language standards for secondary school students.

SALT databases contain more than 6,000 language samples from typical speakers, ages 3-15, in a variety of sampling conditions. To use them, clinicians record a language sample, such as a story retell; transcribe the sample using SALT conventions, a task that can be completed by a speech-language pathology assistant or other trained support staff; and analyze the sample by selecting the appropriate database, configuring the comparison set, and running an analysis that generates reports on the target speaker and a comparison set of typical peers.

These reports provide the raw values for the target speaker and standard deviation differences from the comparison set. Values of more than one standard deviation above or below the mean should be reviewed in detail.
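Put another way, each measure is reported as a z-score: the target speaker's raw value expressed in standard deviations from the mean of the comparison samples. The sketch below illustrates that arithmetic with hypothetical numbers; it is not SALT's code or report format.

```python
# Sketch of the comparison logic described above (not SALT's code): express the
# target speaker's score on a measure as standard deviations from the mean of
# the comparison samples, and flag anything more than 1 SD away for review.
from statistics import mean, stdev

def sd_difference(target_value: float, comparison_values: list[float]) -> float:
    """Standard deviation difference (z-score) of the target relative to the comparison set."""
    return (target_value - mean(comparison_values)) / stdev(comparison_values)

# Hypothetical numbers for illustration only.
comparison_mlu = [6.8, 7.4, 7.1, 6.5, 7.9, 7.2, 6.9]  # e.g., MLU of age-matched peers
target_mlu = 5.6

z = sd_difference(target_mlu, comparison_mlu)
print(f"MLU is {z:+.2f} SD from the comparison mean")
if abs(z) > 1:
    print("More than 1 SD from the mean -- review in detail")
```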

The results provide a profile of performance across 24 measures of syntax, semantics, discourse, speaking rate, verbal fluency, omissions and errors.

The LSA process takes about an hour, and language samples can be collected frequently to monitor therapy progress. Language sample analysis is a time-tested method that can provide the data necessary to document clinical practice.

References

1. Brown, R. (1973). A First Language: The Early Stages. Cambridge, MA: Harvard University Press.

2. Loban, W. (1976). Language Development: Kindergarten Through Grade Twelve. Urbana, IL: National Council of Teachers of English.

3. Miller, J. (1981). Assessing Language Production in Children: Experimental Procedures (2nd ed.). New York: Allyn and Bacon.

4. Miller, J., Iglesias, A. (2008). Systematic Analysis of Language Transcripts (SALT), Version 2008. Muscoda, WI: SALT Software LLC.

5. Leadholm, B., Miller, J. (1992). Language Sample Analysis: A Wisconsin Guide. Wisconsin Department of Public Instruction.

6. Miller, J., Heilmann, J. (2009). New tool assesses narrative structure. ADVANCE, 19 (21): 10-11.

7. Miller, J. (2009). Story retells provide powerful insight. ADVANCE, 19 (20): 10-11.

8. Miller, J. (2009). New database helps identify, monitor older students. ADVANCE, 19 (8): 4-5, 30.

9. Malone, T., Miller, J., Andriacchi, K., et al. (2008). Let me explain: Teenage expository language samples. Presented at the American Speech-Language-Hearing Association Convention, Nov. 20-22.

Jon Miller, PhD, is professor emeritus of the University of Wisconsin-Madison and a partner in SALT Software LLC (www.SALTSoftware.com).


 

Reader comment: Practical and explicit practices.

Vani Mahadevan, speech and hearing consultant, Centre for Special Education, Muscat, Oman; May 15, 2010




     
