Visual narratives as a window into language and cognition (TINTIN)

This project "Visual narratives as a window into language and cognition" (nicknamed "TINTIN") is funded by an ERC Starting Grant. We will build tools for analyzing visual and multimodal information, and then incorporate these data into a database. All of these tools and data will be made publicly accessible for the general public and other researchers to explore the properties of comics around the world. Our specific project will study whether there are cross-cultural patterns in the visual languages used in comics of the world, and whether those patterns connect to the spoken languages of their authors.

This project is a follow-up to and expansion of my previous corpus work in the Visual Language Research Corpus, which capped out at around 300 comics (plus 4,000 Calvin and Hobbes strips). We're finishing writing up these data, which have already appeared in papers about cross-cultural page layouts and about American page layouts and storytelling over time. The TINTIN project, however, will launch a new, more sophisticated coding scheme and methods.

As part of the TINTIN project, we will be releasing downloadable open-source annotation software: our Multimodal Annotation Software Tool (MAST). The corpus that we build with this tool will also be open for other researchers to use in their own work and to contribute to.
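To make the idea of annotation a bit more concrete, here is a minimal sketch in Python of what a single panel-level record in such a corpus might look like. The field names and category labels are purely illustrative assumptions; they are not the actual MAST data format or the TINTIN coding scheme.

    # A minimal, hypothetical sketch of a single panel-level annotation record
    # as it might be exported from an annotation tool. The field names and
    # category labels are illustrative assumptions, not the actual MAST schema
    # or TINTIN coding scheme.
    from dataclasses import dataclass, field, asdict
    import json


    @dataclass
    class PanelAnnotation:
        comic_id: str                  # which comic the panel comes from
        page: int                      # page number within the comic
        panel_index: int               # reading-order position on the page
        framing: str                   # e.g. "macro", "mono", "micro" (illustrative labels)
        text_layers: list[str] = field(default_factory=list)  # speech, captions, sound effects


    # One example record, serialized to JSON so annotations could be pooled
    # into a shared, contributable corpus.
    example = PanelAnnotation(
        comic_id="example-comic-001",
        page=3,
        panel_index=2,
        framing="mono",
        text_layers=["speech balloon"],
    )
    print(json.dumps(asdict(example), indent=2))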

Here's the official description of the TINTIN project:

"Drawn sequences of images are a fundamental aspect of human communication, appearing from instruction manuals and educational material to comics. Despite this, only recently have scholars begun to examine these visual narratives, making this an untapped resource to study the cognition of sequential meaning-making. The emerging field analysing this work has implicated similarities between sequential images and language, which raises the question: Just how similar is the structure and processing of visual narratives and language?

I propose to explore this query by drawing on interdisciplinary methods from the psychological and linguistic sciences. First, in order to examine the structural properties of visual narratives, we need a large-scale corpus of the type that has benefited language research. Yet, no such databases exist for visual narrative systems. I will thus create innovative visual annotation tools to build a corpus of 1,500 annotated comics from around the world (Stage 1). With such a corpus, I will then ask, do visual narratives differ in their properties around the world, and does such variance influence their comprehension (Stage 2)? Next, we might ask why such variation appears, particularly: might differences between visual narratives be motivated by patterns in spoken languages, thereby implicating cognitive processes across modalities (Stage 3)?

Thus, this proposal aims to investigate the domain-specific (Stage 2) and domain-general (Stage 3) properties of visual narratives, particularly in relation to language, by analysing both production (corpus analyses) and comprehension (experimentation). This research will be ground-breaking by challenging our knowledge about the relations between drawing, sequential images, and language. The goal is not simply to create tools to explore a limited set of questions, but to provide resources to jumpstart a budding research field for visual and multimodal communication in the linguistic and cognitive sciences."



Team Members


Our current research team consists of several core staff members and various collaborators. Dr. Bruno Cardoso is a postdoctoral fellow who is programming our Multimodal Annotation Software Tool (MAST). Our two PhD students, Bien Klomberg and Irmak Hacımusaoğlu, will begin in September.

We have also been planning collaborations with colleagues around the world who will help analyze comics for our corpus and conduct experiments. We welcome additional collaborations, so if you are interested in working with us on this project, please contact Neil Cohn for details.


This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 850975).