
[ ORIGINAL ARTICLE ]
Journal of Speech-Language & Hearing Disorders - Vol. 29, No. 3, pp. 27-34
Abbreviation: JSLHD
ISSN: 1226-587X (Print) 2671-7158 (Online)
Print publication date 31 Jul 2020
Received 28 Jun 2020; Revised 11 Jul 2020; Accepted 28 Jul 2020
DOI: https://doi.org/10.15724/jslhd.2020.29.3.027

The Effect of Augmented Reality Contents on Verb Naming in People With Aphasia
Hee June Park1 ; Cho Rong Oh2 ; Soon Bok Kwon3, *
1Dept. of Speech and Hearing Therapy, Catholic University of Pusan, Professor
2Sch. of Rehabilitation and Communication Sciences, Ohio University, Professor
3Dept. of Humanities, Language and Information, Pusan National University, Professor

Correspondence to: Soon Bok Kwon, PhD, E-mail: sbkwon@pusan.ac.kr


Copyright 2020 ⓒ Korean Speech-Language & Hearing Association.
This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract
Purpose:

People with aphasia (PWA) have been reported to experience more difficulty in understanding verbs than nouns. Various stimuli such as pictures, animation, virtual reality, and augmented reality are used to help PWA re-learn verbs. However, it remains unclear whether dynamic image stimuli elicit better responses in PWA than static picture stimuli. This study was designed to examine the effects of augmented reality content on verb production in PWA.

Methods:

Twenty healthy adults and twelve people with expressive aphasia participated in this study. Forty pictorial verbs were presented via an augmented reality program. The verbs were classified into four types (10 unaccusative verbs, 10 unergative verbs, 10 non-instrumental verbs, and 10 instrumental verbs). In order to verify the effect of augmented reality content, picture cards and 3D augmented reality cards describing the target verbs were created. The correct response scores on the augmented reality task were compared to those on the picture card task.

Results:

Correct response scores of PWA were higher when the augmented reality stimuli were provided than when the picture stimuli were provided. For unergative, non-instrumental, and instrumental verbs, scores were significantly higher with the dynamic stimuli than with the picture stimuli, whereas unaccusative verbs showed no such difference.

Conclusions:

Augmented reality content is more effective than static images for prompting verb naming. In addition, when applying augmented reality content to a person with aphasia, the type of verb should be considered in advance.

Abstract (Korean)
Purpose:

People with aphasia have been reported to have poorer comprehension of verbs than nouns. Various stimuli, such as pictures, animation, virtual reality, and augmented reality, are used to teach verbs to people with aphasia. However, there is limited evidence that dynamic augmented reality stimuli have a more positive effect on verb production than static picture stimuli. This study therefore examined the effect of augmented reality content on verb production in people with aphasia.

Methods:

Twenty typical adults and 12 people with expressive aphasia participated in the study. Forty static images and dynamic augmented reality stimuli were presented using a program. The verbs were classified into four types (10 unaccusative, 10 unergative, 10 instrumental, and 10 non-instrumental). To verify the effect of the augmented reality content, verb cards were created with pictures and with 3D augmented reality content, and the correct response scores for verb production were compared.

Results:

Correct response scores for the augmented reality stimuli were higher than those for the picture stimuli. When examined by the four verb types, unergative, instrumental, and non-instrumental verbs showed significantly higher correct response scores with the augmented reality stimuli than with the picture stimuli, whereas no significant difference was found for unaccusative verbs.

Conclusions:

Augmented reality content is more effective than static images for eliciting verb naming. In addition, when applying augmented reality content to people with aphasia, it should be applied appropriately according to the verb type.


Keywords: Augmented reality, aphasia, verb production

Ⅰ. Introduction

A common feature of all languages is that nouns and verbs are classified as different parts of speech (Corina et al., 2005). During language acquisition, verbs are learned later than nouns, and it is generally believed that verbs are more strongly affected than nouns in language disorders that are more prevalent in the elderly, such as aphasia and dementia. Most researchers agree that the processing of verbs is more demanding than that of nouns. Moreover, a dissociation between verb and noun processing has been reported in people with aphasia (PWA), suggesting that verbs are processed differently from nouns (Mätzig et al., 2009).

Whether PWA show more deficits for nouns or for verbs remains controversial. Because verbs and nouns occupy different parts of the vocabulary system, the semantic representation process for verbs is more difficult than that for nouns (Bae, 2019; Mätzig et al., 2009; Park, 2018). Verbs and nouns belong to different grammatical categories (Caramazza & Hillis, 1991). Unlike nouns, verbs require various morphemes and are subject to the syntactic influence of the argument structure. In addition, verbs and nouns differ in imageability (Bird et al., 2003). Thus, it is not meaningful to ask which is more difficult on the assumption that nouns and verbs are processed in the same way; rather, the cognitive processing of the two grammatical categories and the independence of the neurological networks involved need to be acknowledged (Kwag et al., 2014).

It is unclear whether PWA perform similarly on noun and verb tasks. Most language testing tools currently used in clinical practice have been designed mainly around nouns; in Korea in particular, testing tools that focus on verbs are limited. Verb processing is required in sentence comprehension assessment and in tasks that elicit sentence production, but in such cases it is difficult to examine verb processing in isolation because of the contextual variables involved. If the verb processing skill of PWA differs from their noun processing skill, and if it depends on the type and severity of aphasia, accurate diagnosis and analysis of verb skill is required. Performance on noun and verb tasks should be evaluated, and the direction of targeted treatment should then be determined based on the overall language evaluation results. Because language evaluation is largely divided into assessments of comprehension and expression skills, both verb comprehension and verb production skills must be evaluated. The naming test, which assesses basic language production, is the most fundamental test in language evaluation and can be used in both screening and comprehensive assessments. Therefore, this study focused on verb production skill using a naming test.

Most verb and noun production tasks are designed using black-and-white static images. Recognizing motion in a static picture is more difficult than recognizing objects, given that it requires greater inference; because of this, greater effort is needed for verb production (Honincthun & Pillon, 2005; Tae et al., 2017). The limitations of static picture material for eliciting verb production have already been reported in several studies. Briefly, impaired verb production in PWA can be attributed to the unnaturalness and complexity of the picture task rather than to the language pathology itself. Because of this methodological issue, verb production skills may be underestimated, which can reduce the accuracy of language evaluation and the efficiency of speech therapy. Given the specificity of verb processing, the literature describes more natural stimulus presentation methods that facilitate visual recognition of the presented stimuli and the conceptualization of motion. In other words, the more dynamic the stimulus presentation, the closer it is to the real-life action, and several studies suggest that this can provide stronger neurological stimulation to induce verbal output (den Ouden et al., 2009). Verb production thus appears to be better for dynamic stimuli than for static stimuli. In addition, understanding which types of verbs (unaccusative, unergative, non-instrumental, and instrumental) benefit most from dynamic stimulation is important and may help elucidate the mechanism of the verb production process.

Augmented reality is a computer graphics technique that synthesizes virtual objects or information into a real environment so that they appear to be part of the original scene. Recently, augmented reality has been used in various fields, including broadcast media, medicine, and education, and has demonstrated effectiveness (Antkowiak et al., 2016). In aphasia research, unlike still images that show only a single view, augmented reality software can be programmed to display objects from various angles as a predetermined marker on the picture card is moved and rotated. This evaluation method is expected to be useful for enhancing verb production because it can contribute to the creation of new synaptic pathways for memory retrieval (An et al., 2017; Kim & Kwon, 2018).

Therefore, it may be most appropriate to compare dynamic augmented reality stimuli with static image stimuli. In this study, we investigated the effectiveness of verb production using self-developed augmented reality content that dynamically renders only the core motions. The purpose of this study is therefore twofold: (1) to compare the response scores of people with and without aphasia when static picture stimuli and dynamic augmented reality stimuli are presented, and (2) to examine the effects of augmented reality on verb production.


Ⅱ. Methods
1. Participants

Twenty healthy adults (male: 20, mean age: 62.5±4.8 years) and 12 people with expressive aphasia (male: 12, mean age: 61.7±4.0 years) participated in this study. Healthy participants were right-handed, native Korean speakers; had a minimum of twelve years of education; had no psychiatric or neurological diagnosis (screened with the Korean Mini-Mental State Examination; K-MMSE, Oh et al., 2010); had no history of current drug abuse, including alcohol; had no uncorrected vision or hearing impairments; and were under 75 years of age. Individuals in the aphasia group qualified to participate if they had been diagnosed with aphasia based on the Korean version of the Western Aphasia Battery (K-WAB, Kim & Na, 2001), had no hearing or visual abnormalities, had no previous history of mental illness, and did not show left neglect.

Table 1. 
Descriptive information of people with aphasia
Case Age Edu Type of CVA (aphasia type) AQ MPO
 1 66 16 Lt. MCA infarction (Broca) 87.2 16
 2 63 12 Lt. MCA infarction (transcortical motor) 82.4 14
 3 56 16 Lt. MCA infarction (Broca) 61.8 13
 4 61 12 Lt. MCA infarction (Broca) 54.4 15
 5 64 12 Lt. ACA infarction (transcortical motor) 51.3 12
 6 58 16 ICH (transcortical motor) 68.6 16
 7 67 16 Lt. MCA infarction (Broca) 70.4 14
 8 58 16 Lt. MCA infarction (Broca) 72.3 15
 9 68 12 Lt. MCA infarction (Broca) 54.2 16
10 62 16 Lt. MCA infarction (Broca) 66.6 12
11 57 12 Lt. MCA infarction (Broca) 67.8 14
12 60 12 Lt. ACA infarction (transcortical motor) 82.1 13
Note. Edu=education (years); CVA=cerebrovascular accident; AQ=aphasia quotient; MPO=months post-onset; MCA=middle cerebral artery; ACA=anterior cerebral artery; ICH=intracerebral hemorrhage.

2. Experimental Design
1) Verb selection

Forty verbs composed of 2~3 high-frequency syllables were used in the current investigation. The verbs included 10 unaccusative verbs, 10 unergative verbs, 10 non-instrumental verbs, and 10 instrumental verbs (Appendix 1). An unaccusative verb is an intransitive verb whose grammatical subject is not a semantic agent, whereas an unergative verb is an intransitive verb that takes an agent argument (Sung & Kwag, 2012). A non-instrumental verb describes a movement performed without a tool, and an instrumental verb describes a movement that involves the use of a tool (Shin et al., 2017).
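For illustration only, this four-way classification can be represented as a simple lookup. The Python sketch below lists a few of the Appendix 1 stimuli per category with English glosses; the dictionary layout and function name are our own and are not part of the study materials.

```python
# Minimal sketch of the four-way verb classification (a few Appendix 1 items only).
# The dictionary layout and function name are illustrative, not the study's materials.
VERB_TYPES = {
    "unaccusative":     ["녹다 (melt)", "피다 (bloom)", "얼다 (freeze)"],
    "unergative":       ["짖다 (bark)", "울다 (cry)", "걷다 (walk)"],
    "non_instrumental": ["던지다 (throw)", "타다 (ride)", "줍다 (pick up)"],
    "instrumental":     ["쓰다 (write)", "자르다 (cut)", "묶다 (tie)"],
}

def verb_type(verb: str) -> str:
    """Return the category of a stimulus verb, or 'unknown' if it is not listed."""
    for category, items in VERB_TYPES.items():
        if any(item.startswith(verb) for item in items):
            return category
    return "unknown"

print(verb_type("짖다"))  # -> unergative
```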

2) Augmented reality content production

A professional designer rendered the selected verbs as 3D motion animations. Three speech-language pathologists, each with over 10 years of clinical experience, reached a consensus on the key moments of each motion.

3) Augmented reality content operation program

An augmented reality-based content operation program was developed for this investigation (Figure 1).


Figure 1. 
Structural chart of the augmented reality program (Bae et al., 2014)

A marker was inserted into each 2D image to create the 3D augmented reality content. Using this content, we prepared teaching tools in the form of picture cards and a cube. A camera then recognized the picture cards. Finally, the picture cards and cube were converted into objects identifiable by the augmented reality program and presented on a monitor (Figure 2).


Figure 2. 
Augmented reality system (Ahn et al., 2018)
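The operation program itself is not published. As a rough sketch of a comparable marker-based pipeline, the following Python example (assuming OpenCV ≥ 4.7 with the cv2.aruco module) scans webcam frames for a printed marker and draws a placeholder label where the 3D verb animation would be rendered; the marker-to-verb mapping is hypothetical.

```python
# Rough sketch of a marker-based AR loop (not the authors' program).
# Assumes OpenCV >= 4.7 with the cv2.aruco module; the marker-id-to-verb
# mapping below is hypothetical.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

MARKER_TO_VERB = {0: "jjitda (bark)", 1: "nokda (melt)"}  # hypothetical mapping

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_id, quad in zip(ids.flatten(), corners):
            verb = MARKER_TO_VERB.get(int(marker_id), "unknown")
            x, y = map(int, quad[0][0])  # top-left corner of the detected marker
            # The real program renders a 3D verb animation here; a label is drawn instead.
            cv2.putText(frame, verb, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("AR verb stimulus (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```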

4) Evaluating appropriateness of picture and augmented reality contents

Five speech-language pathologists volunteered to evaluate the appropriateness of the selected picture stimuli and dynamic stimuli. We excluded items that received a score of 1-3 points on a 5-point scale (1 point=very inappropriate, 2 points=inappropriate, 3 points=average, 4 points=appropriate, and 5 points=very appropriate). The mean appropriateness of the picture stimuli was 4.89 (SD=.41) and the mean appropriateness of the dynamic stimuli was 4.48 (SD=.52).
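The short sketch below applies one possible reading of this exclusion rule (dropping items whose mean rating falls below 4) to illustrative ratings and reports the mean and SD of the retained items; the ratings shown are not the study's data.

```python
# Short sketch of the item-screening step with illustrative (not actual) ratings.
# An item is excluded if its mean 5-point rating is below 4, which is one reading
# of the 1-3 point exclusion rule described above.
from statistics import mean, stdev

ratings = {            # hypothetical: item -> scores from the five raters
    "throw": [5, 5, 4, 5, 5],
    "melt":  [4, 5, 5, 4, 5],
    "wilt":  [3, 3, 4, 2, 3],   # mean < 4, so it would be excluded
}

kept = {item: scores for item, scores in ratings.items() if mean(scores) >= 4}
item_means = [mean(scores) for scores in kept.values()]
print(f"retained {len(kept)}/{len(ratings)} items, "
      f"mean appropriateness = {mean(item_means):.2f} (SD = {stdev(item_means):.2f})")
```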

3. Experimental Procedures

To ensure that the participants understood the study tasks thoroughly, they were first asked to answer preliminary questions. The task presented picture stimuli and augmented reality stimuli, and participants were asked to say the verb corresponding to each picture or AR content. Participants looked at the picture cards or the augmented reality stimuli on the computer screen and stated what the target verb was. During the investigation, the picture and dynamic stimuli were presented in random order by the program, and all utterances describing the stimuli were automatically stored when the response button was pressed.
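A minimal sketch of this presentation and response-logging logic (not the authors' program) is given below, assuming each of the 40 verbs appears once as a picture and once as an augmented reality stimulus, with trials shuffled and a 1/0 score recorded per trial.

```python
# Minimal sketch of the presentation/response-logging logic (not the authors' code).
# Assumes each of the 40 target verbs is shown once as a picture and once as an AR
# stimulus, in random order, and the examiner enters 1 (correct) or 0 (incorrect).
import csv
import random

verbs = [f"verb_{i:02d}" for i in range(1, 41)]   # placeholders for the 40 targets
trials = [(v, cond) for v in verbs for cond in ("picture", "augmented_reality")]
random.shuffle(trials)

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["verb", "condition", "correct"])
    for verb, condition in trials:
        # In the real program the stimulus is displayed here; the score is stored
        # when the response button is pressed.
        answer = input(f"{verb} [{condition}] correct? (1/0): ").strip()
        writer.writerow([verb, condition, 1 if answer == "1" else 0])
```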

4. Data Analysis

The recorded data were analyzed to classify each response as correct or incorrect according to whether the presented target word was produced. Each participant was given 1 point for producing the target verb correctly and 0 points otherwise. When a participant produced a word similar to the target, three researchers jointly decided whether the response was acceptable. To assess reliability across researchers, an intraclass correlation coefficient (ICC) was computed; inter-rater reliability was 98%. To examine differences by stimulus dynamicity and verb type, a mixed ANOVA was performed.
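The analysis software used in the study is not reported. As a rough illustration, the reliability check and the mixed ANOVA could be run in Python with the pingouin package; the file layouts and column names below are assumptions, and because pingouin's mixed_anova takes a single within-subject factor, only the group × dynamicity model is sketched.

```python
# Rough sketch of the reported analyses (the study's software is not named).
# Assumes long-format CSV files with the column names shown; requires pandas and pingouin.
import pandas as pd
import pingouin as pg

# Scoring reliability: one row per item x rater with the assigned score.
ratings = pd.read_csv("ratings_long.csv")        # columns: item, rater, rating
icc = pg.intraclass_corr(data=ratings, targets="item", raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])

# Mixed ANOVA: group (between-subjects) x presence of dynamicity (within-subjects).
# pingouin's mixed_anova handles one within factor, so the full design with verb type
# (as in Table 2) would need statsmodels or R; only the simpler model is shown.
scores = pd.read_csv("scores_long.csv")          # columns: subject, group, dynamicity, score
aov = pg.mixed_anova(data=scores, dv="score", within="dynamicity",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```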


Ⅲ. Result
1. Scores on Response to Dynamic Stimuli

The mixed ANOVA revealed a significant difference between the picture and augmented reality stimuli. The results were analyzed with 10 verbs in each of the four types (unaccusative, unergative, non-instrumental, instrumental). Significant main effects were found for group and for presence of dynamicity (Table 2).

Table 2. 
Analysis of variance for correct response score across the groups and across the conditions (presence of dynamicity and verb type)
Source F p-value
Group(2) 117.586 .001***
Presence of dynamicity(2) 9.914 .003**
Verb type(4) 130.783 .001***
Group(2) × Presence of dynamicity(2) .910 .340
Group(2) × Verb type(4) 15.448 .001***
Presence of dynamicity(2) × Verb type(4) 2.841 .040*
Group(2) × Presence of dynamicity(2) × Verb type(4) .226 .860
*p<.05, **p<.01, ***p<.001

2. Scores on Response to Verb Type

When the correct response scores were examined by verb type for the picture and augmented reality stimuli, the unergative, non-instrumental, and instrumental verbs showed significantly higher scores with the dynamic stimuli than with the still picture stimuli, whereas the remaining verb type (unaccusative) showed no significant difference between the two conditions (Table 3, Figure 3).

Table 3. 
Mean score on response to picture and dynamic stimuli across verb types
Verb type Picture Dynamic stimulus t df p-value
Unaccusative 7.633 (1.813) 7.583 (1.907) .264 59 .79
Unergative 5.317 (2.34) 5.767 (2.403) 2.941 59 .01**
Non-instrumental 8.20 (2.097) 8.65 (1.706) 3.683 59 .001***
Instrumental 8.75 (1.536) 9.24 (1.235) 2.078 59 .04*
Note. Values are presented as mean (SD).
*p<.05, **p<.01, ***p<.001
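The per-verb-type comparisons in Table 3 correspond to paired t-tests of picture versus dynamic scores; a minimal sketch of that computation is given below, with the file and column names being assumptions rather than the study's actual data files.

```python
# Minimal sketch of the per-verb-type comparisons in Table 3 (file and column
# names are assumptions): picture vs. dynamic (AR) scores compared with paired t-tests.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("scores_by_verbtype.csv")   # columns: subject, verb_type, picture, dynamic

for verb_type, sub in df.groupby("verb_type"):
    t, p = ttest_rel(sub["picture"], sub["dynamic"])
    print(f"{verb_type:16s} t = {t:6.3f}  p = {p:.3f}  "
          f"picture M = {sub['picture'].mean():.2f}, dynamic M = {sub['dynamic'].mean():.2f}")
```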


Figure 3. 
Scores on response to verb type


Ⅳ. Discussion

The purposes of this study were (1) to compare the response scores of people with and without aphasia when static and dynamic augmented reality stimuli were presented, and (2) to examine the effects of augmented reality on verb production. Participants performed better on the naming test when the augmented reality stimuli were presented than when the static picture stimuli were provided. In addition, participants’ responses across the two conditions were significantly different. The concept of a verb is thought to be formed primarily through the process of storing a visual image in the visual cortex, and verb production likewise occurs through a conceptualization process that reactivates the stored image (Bedny et al., 2012).

The finding that people with aphasia showed more difficulty producing unaccusative verbs than unergative verbs is consistent with previous studies (Kegl, 1995; Lee & Thompson, 2004). In a previous study that examined the verb production of eight patients with agrammatism through a verb naming task, patients with agrammatism had more difficulty producing unaccusative verbs than unergative verbs (Lee & Thompson, 2004). The findings of this study imply that the ability to produce verbs from dynamic stimuli may be better preserved than the ability to do so from static stimuli. Previous investigations have shown disadvantages of picture stimuli. For example, the ability to name motion words was significantly impaired when picture stimuli were provided, but performance improved when the stimuli were presented as direct gestures (Druks & Shallice, 2000). With visual stimuli, the verb production of people with aphasia decreased significantly (Honincthun & Pillon, 2005). In this study, the augmented reality content effectively elicited responses from the participants with aphasia without the therapist having to provide the gesture.

When the participants’ responses to the pictured and animated verbs were examined, the unergative, non-instrumental, and instrumental verbs were associated with significantly higher correct scores for the animated stimuli. This finding is in line with previous studies of typically developing children that reported improved performance for transitive and intransitive verbs in a video stimulus condition compared with a picture stimulus condition (An et al., 2017; Davidoff & Masterson, 1995).

The findings of this study suggest that augmented reality may help PWA improve verb production by reactivating the images necessary for verb production and the reasoning process essential for recognizing actions, in contrast with the unnaturalness and recognition complexity of static pictures.

This study is a preliminary evaluation of the effects of augmented reality content on verb production in PWA. Future studies addressing the following limitations are therefore warranted. First, research should include people with various types of language disorder and a larger number of participants. Second, when applying augmented reality to treatment, its long-term effect should be taken into account, not just short-term interest. Third, comparative studies on various presentation methods (picture, photo, video, animation, augmented reality) are necessary, given the nature of tasks that express the action of a verb; it would be efficient to select appropriate tasks (static or dynamic stimuli) according to the severity of aphasia. Finally, how the content is presented through augmented reality needs to be considered. In this study, the camera identified the marker on the picture card and sent the output to the monitor, which may yield inaccurate results because the person has to watch the picture card and the monitor at the same time.

Augmented reality can be an effective tool to address the needs of PWA in the clinic. This research demonstrates that augmented reality can be used to improve verb expression through a simple marker triggering a 3D virtual model. More detailed and focused video demonstrations of verbs could be overlaid onto a storybook to develop an understanding of verbs. With technology that integrates augmented reality into one’s surroundings, offers live information during one’s activities, projects images at high resolution, and even allows one to manipulate 3D objects with ease, it is only a matter of time before smart glasses become part of an individual’s daily life.


Acknowledgments

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (No. NRF-2018S1A5B6075173).

이 연구는 2018년 대한민국 교육부와 한국연구재단의 지원을 받아 수행된 연구임(No. NRF-2018S1A5B6075173).


References
1. Ahn, B. K., Bae, I. H., Park, H. J., & Kwon, S. B. (2018). The efficacy of augmented reality based speech language therapy program on verbal expression vocabulary improvement in children with intellectual disabilities. Journal of Speech-Language & Hearing Disorders, 27(2), 111-124.
2. An, S. W., Kim, G. H., Park, H. J., & Kwon, S. B. (2017). Effects of the speech therapy program based augmented reality on improving naming and functional communication ability of the patients with expressive aphasia. The Journal of Transdisciplinary Studies, 1(1), 57-67.
3. Antkowiak, D., Kohlschein, C., Krooß, R., Speicher, M., Meisen, T., Jeschke, S., & Werner, C. J. (2016). Language therapy of aphasia supported by augmented reality applications. Proceedings of 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), 1-6.
4. Bae, I. H., Park, H. J., Kim, G. H., & Kwon, S. B. (2014). Educational application of speech therapy program based on augmented reality. Journal of Speech-Language & Hearing Disorders, 23(2), 139-152.
5. Bae, J. A. (2019). Effects of semantic cues on naming of moderate patients with Broca's aphasia. Journal of Speech-Language & Hearing Disorders, 28(2), 29-37.
6. Bedny, M., Caramazza, A., Pascual-Leone, A., & Saxe, R. (2012). Typical neural representations of action verbs develop without vision. Cerebral Cortex, 22(2), 286-293.
7. Bird, H., Howard, D., & Franklin, S. (2003). Verbs and nouns: The importance of being imageable. Journal of Neurolinguistics, 16(2-3), 113-149.
8. Caramazza, A., & Hillis, A. E. (1991). Lexical organization of nouns and verbs in the brain. Nature, 349, 788-790.
9. Corina, D. P., Gibson, E. K., Martin, R., Poliakov, A., Brinkley, J., & Ojemann, G. A. (2005). Dissociation of action and object naming: Evidence from cortical stimulation mapping. Human Brain Mapping, 24(1), 1-10.
10. Davidoff, J., & Masterson, J. (1995). The development of picture naming: Differences between verbs and nouns. Neurolinguistics, 9(2), 69-8.
11. den Ouden, D. B., Fix, S., Parrish, T. B., & Thompson, C. K. (2009). Argument structure effects in action verb naming in static and dynamic conditions. Journal of Neurolinguistics, 22(2), 196-215.
12. Druks, J., & Shallice, T. (2000). Selective preservation of naming from description and the “restricted preverbal message”. Brain and Language, 72(2), 100-128.
13. Honincthun, P., & Pillon, A. (2005). Why verbs could be more demanding of executive resources than nouns: Insight from a case study of a fv-FTD patient. Brain and Language, 95(1), 36-37.
14. Kegl, J. (1995). Levels of representation and units of access relevant to agrammatism. Brain and Language, 50(2), 151-200.
15. Kim, H. H., & Na, D. R. (2001). Paradise Korean-Western Aphasia Test. Seoul: Paradise Welfare Foundation.
16. Kim, H. J., & Kwon, S. B. (2018). The effect of augmented reality-based language therapy program on the vocabulary strength improvement in children with language developmental delay. Journal of Speech-Language & Hearing Disorders, 27(3), 87-96.
17. Kwag, E. J., Sung, J. E., Kim, Y. H., & Cheon, H. J. (2014). Effects of verb network strengthening treatment on retrieval of verbs and nouns in persons with aphasia. Communication Sciences & Disorders, 19(1), 89-98. uci:G704-000725.2014.19.1.009
18. Lee, M., & Thompson, C. K. (2004). Agrammatic aphasic production and comprehension of unaccusative verbs in sentence contexts. Journal of Neurolinguistics, 17(4), 315-330.
19. Mätzig, S., Druks, J., Masterson, J., & Vigliocco, G. (2009). Noun and verb differences in picture naming: Past studies and new evidence. Cortex, 45(6), 738-758.
20. Oh, E. A., Kang, Y. W., Shin, J. H., & Yeon, B. K. (2010). A validity study of K-MMSE as a screening test for dementia: Comparison against a comprehensive neuropsychological evaluation. Dementia and Neurocognitive Disorders, 9(1), 8-12.
21. Park, J. I. (2018). A Korean vowel merger caused by the combined effects of phonetic obscurity and semantic-functional similarities. Journal of Language Sciences, 25(1), 39-53.
22. Shin, S. E., Kwon, M. S., Lee, J. H., & Sim, H. S. (2017). Verb naming and comprehension in patients with Alzheimer’s disease: Focusing on instrumentality of action verbs. Communication Sciences & Disorders, 22(2), 190-204.
23. Sung, J. E., & Kwag, E. J. (2012). Age-related verb naming abilities depending on the argument structures. Korean Journal of Communication Disorders, 17(4), 550-564.
24. Tae, J. I., Lee, Y. H., & Kwon, Y. A. (2017). The role of phonological information on visual word recognition through auditory-visual cross modal priming. Journal of Language Sciences, 24(1), 175-189.


Appendix 1. 
The Korean verb-selected vocabulary list
List Unergative Unaccusative Non-instrumental Instrumental
 1 (강아지가) 짖다 (얼음이) 녹다 (음식을) 넣다 (연필로) 쓰다
 2 (아이가) 울다 (꽃이) 피다 (자전거를) 타다 (붓으로) 그리다
 3 (엄마가) 웃다 (유리가) 깨지다 (줄넘기를) 넘다 (풀로) 붙이다
 4 (아이가) 자다 (나무가) 자라다 (수레를) 끌다 (가위로) 자르다
 5 (남자가) 뛰다 (음식이) 썩다 (공을) 던지다 (분무개로) 뿌리다
 6 (여자가) 걷다 (열매가) 맺히다 (짐을) 싣다 (먼지떨이로) 털다
 7 (동생이) 돌다 (물이) 끓다 (쓰레기를) 줍다 (끈으로) 묶다
 8 (누나가) 일어나다 (꽃이) 시들다 (빨래를) 짜다 (망치로) 박다
 9 (고양이가) 구르다 (물이) 얼다 (종이를) 찢다 (빨대로) 빨다
10 (새가) 날다 (옷이) 젖다 (초인종을) 누르다 (자로) 재다