The effect of multimodal learning in an Artificial Grammar Learning task

Katarzyna Rączy,

Zuzanna Skóra,

Maciej J. Szul


The aim of the present study was to answer the question of whether multimodal grammar learning would improve classification accuracy compared with unimodal learning. To test this hypothesis, an experimental procedure was constructed based on the research conducted by Conway and Christiansen [2006], whose study examined a modality-specific Artificial Grammar Learning (AGL) task. The grammatical sequences used in the present study were generated by an algorithm with a finite number of outcomes. Two additional sets of ungrammatical sequences were generated at random: one was used in the learning phase in the control group, while the other was used in the classification phase in both the control and experimental groups. The results showed that participants performed the classification task above chance level. These findings support the hypothesis that grammar learning would occur [Conway and Christiansen 2006; Reber 1989]. We did not observe the hypothesized accuracy enhancement in the multimodal learning condition.
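As an illustration only: AGL stimuli of the kind described above are typically produced by a random walk through a finite-state grammar, with ungrammatical foils drawn at random over the same alphabet. The transition table, alphabet, and sequence lengths below are hypothetical and are not those used in the study; this is a minimal sketch of the general technique, not the authors' procedure.

```python
import random

# Hypothetical Reber-style finite-state grammar (NOT the grammar used in the
# study). Each state maps to (symbol, next_state) transitions; next_state of
# None marks an accepting exit from the grammar.
GRAMMAR = {
    0: [("T", 1), ("V", 2)],
    1: [("P", 1), ("T", 3)],
    2: [("X", 2), ("V", 3)],
    3: [("X", 4), ("S", None)],
    4: [("P", 2), ("S", None)],
}
ALPHABET = sorted({sym for trans in GRAMMAR.values() for sym, _ in trans})


def generate_grammatical(rng: random.Random) -> str:
    """Random walk through the grammar from state 0 until an exit is taken."""
    state, out = 0, []
    while state is not None:
        sym, state = rng.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)


def is_grammatical(seq: str) -> bool:
    """Depth-first search: can the grammar produce exactly this sequence?"""
    def step(state, i):
        if state is None:          # exited the grammar...
            return i == len(seq)   # ...legal only if all symbols are consumed
        if i == len(seq):
            return False           # symbols exhausted before a legal exit
        return any(step(nxt, i + 1)
                   for sym, nxt in GRAMMAR[state] if sym == seq[i])
    return step(0, 0)


def generate_ungrammatical(rng: random.Random, length: int) -> str:
    """Random string over the same alphabet, rejected if it happens to parse."""
    while True:
        seq = "".join(rng.choice(ALPHABET) for _ in range(length))
        if not is_grammatical(seq):
            return seq
```

Because random strings over the alphabet rarely satisfy the grammar, the rejection loop in `generate_ungrammatical` terminates quickly while guaranteeing that foils differ from grammatical items in structure, not in surface vocabulary.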

Keywords: implicit learning, artificial grammar learning, multimodal learning
Alais D., Burr D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. „Cognitive Brain Research”, 19 (2), pp. 185–194.
Baddeley A. (2003). Working memory: Looking back and looking forward. „Nature Reviews Neuroscience”, 4, pp. 829–839.
Conway C.M., Christiansen M.H. (2006). Statistical learning within and between modalities: Pitting abstract against stimulus-specific representations. „Psychological Science”, 17 (10), pp. 905–912.
Conway C.M., Christiansen M.H. (2009). Seeing and hearing in space and time: Effects of modality and presentation rate on implicit statistical learning. „European Journal of Cognitive Psychology”, 21, pp. 561–580.
Johansson T. (2009). Strengthening the case for stimulus-specificity in Artificial Grammar Learning: No evidence for abstract representations with extended exposure. „Experimental Psychology” (formerly „Zeitschrift für Experimentelle Psychologie”), 56 (3), pp. 188–197.
Meyer G.F., Wuerger S.M., Röhrbein F., Zetzsche C. (2005). Low-level integration of auditory and visual motion signals requires spatial co-localisation. „Experimental Brain Research”, 166 (3–4), pp. 538–547.
Nahorna O., Berthommier F., Schwartz J.-L. (2012). Binding and unbinding the auditory and visual streams in the McGurk effect. „The Journal of the Acoustical Society of America”, 132, p. 1061.
Okada K., Venezia J.H., Matchin W., Saberi K., Hickok G. (2013). An fMRI study of audiovisual speech perception reveals multisensory interactions in auditory cortex. „PLoS ONE”, 8 (6), p. e68959.
Perruchet P., Pacton S. (2006). Implicit learning and statistical learning: One phenomenon, two approaches. „Trends in Cognitive Sciences”, 10 (5), pp. 233–238.
Pothos E.M., Bailey T.M. (2000). The role of similarity in artificial grammar learning. „Journal of Experimental Psychology: Learning, Memory, and Cognition”, 26 (4), p. 847.
Reber A.S. (1989). Implicit learning and tacit knowledge. „Journal of Experimental Psychology: General”, 118 (3), p. 219.
Werry C. (2007). Reflections on language: Chomsky, linguistic discourse and the value of rhetorical self-consciousness. „Language Sciences”, 29 (1), pp. 66–87.