Multimodal Functional-Notional Concordancing

COCCETTA, Francesca

Abstract

Spoken texts provide a large amount of information that extends beyond language: they include semiotic resources such as gesture, posture, gaze and facial expressions which, like language, contribute to the overall meaning-making of the texts (Kress and van Leeuwen 2006: 41). However, until recently their investigation has relied entirely on 'basic' orthographic transcriptions (Leech 2000), partly due to the lack of adequate concordancing software tools. This has limited the potential that spoken texts bring to language teaching and learning. Drawing on the theoretical and technical innovations which have taken place in the field of multimodal corpus linguistics (Baldry and Thibault 2001, 2006b, forthcoming), especially within the MCA project (Baldry in press, Baldry and Thibault in press), this study presents a pedagogical application of spoken corpora to promote communicative language competence in language learners at various levels of proficiency. In particular, it illustrates how MCA, a multimodal concordancer (Baldry 2005, Baldry and Beltrami 2005), can be used to create, annotate and concordance spoken corpora in terms of functions and notions (van Ek and Trim 1998, 2001). It then shows the kind of information the concordance lines and their associated film clips provide in terms of: a) the linguistic forms realizing a specific language function, and b) the ways in which language interacts with its multimodal co-text (Baldry in press). In so doing, the paper introduces a new concordancing technique, multimodal functional-notional concordancing (Coccetta in press b), and presents two multimodal data-driven learning (DDL) activities which show how this new approach to the analysis of spoken texts can enhance language learning.
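
By way of illustration, the sketch below (in Python, not part of the original paper) mimics the general idea behind functional-notional concordancing: segments of a time-aligned transcript are tagged with the language function they realize, and a query retrieves every segment for a chosen function so that its linguistic forms can be compared and the corresponding film clip located. The segment data, labels and helper names are invented for illustration; MCA's own annotation scheme and query interface are not reproduced here.

```python
# Illustrative sketch only: MCA's actual annotation scheme and query interface
# are not shown here. This toy example mimics the idea of functional-notional
# concordancing over a time-aligned transcript: each segment of a film clip is
# tagged with the language function it realizes, and a "concordance" retrieves
# every segment for a chosen function. Segment data and labels are invented.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float        # clip start time in seconds
    end: float          # clip end time in seconds
    transcript: str     # orthographic transcription of the utterance
    function: str       # language function (a van Ek and Trim style label)

corpus = [
    Segment(12.0, 15.5, "Why don't we meet outside the cinema?", "making a suggestion"),
    Segment(48.2, 50.0, "Could you pass me the salt?", "making a request"),
    Segment(73.4, 76.1, "How about going for a walk instead?", "making a suggestion"),
]

def concordance(segments, target_function):
    """Return all segments that realize the given language function."""
    return [s for s in segments if s.function == target_function]

for hit in concordance(corpus, "making a suggestion"):
    # Each line pairs the linguistic form with the time codes of its clip,
    # so the multimodal co-text (gesture, gaze, posture) can be inspected.
    print(f"[{hit.start:>6.1f}-{hit.end:>6.1f}] {hit.transcript}")
```

Run on the toy data, the query lists the two segments tagged "making a suggestion", juxtaposing the different linguistic forms ("Why don't we...?", "How about...?") together with the time codes needed to view the associated clips.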
2008
Proceedings of the 8th Teaching and Language Corpora Conference

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/28283