Translanguaging


at the International Symposium of Electronic Arts 2020
Montreal, QC, Canada


Translanguaging is an interactive installation that explores language through sound, image, and gesture. The term “translanguaging” describes how multilingual speakers employ their multiple languages in the act of communicating. Language use and language learning for multi-linguals become a negotiation amongst the different languages in one’s repertoire, all of which co-exist symbiotically.

Credits


Aaron Liu-Rosenbaum, Professor (Laval University) – Creative in Residence
Ronald Boersen, PhD Student (SIAT) – Creative collaborator and software adaptation
Kıvanç Tatar, PhD & Philippe Pasquier, Professor (SIAT) – Original software development team (MASOM)




The act of “languaging” can here be interpreted in line with the act of “musicking”, a term coined by musician and educator Christopher Small in 1998. The word “musicking” challenges the notion of music as a thing and instead invites people to think of music as an activity. The act of musicking is inclusive of all aspects and actors involved in a musical performance, comprising not only the performing but also the listening, rehearsing, practicing, and composing. By breaking linguistic barriers and making “music” into a verb, Small opened up new conceptual possibilities for how we think about music and the meaning that emerges from the relations between the various acts and actors involved in its creation.

It is in this spirit that Translanguaging invites visitors to think of language not as a “thing”, but rather as an activity of “languaging”, to be explored in the two-part interactive experience described below.

The project was led by Prof. Aaron Liu-Rosenbaum (Creative-in-Residence at the Faculty of Education) in collaboration with Ronald Boersen and was presented in a weeklong exhibit at the opening of the faculty’s new Research Hub.



Gesture-wording


In this part of the installation, visitors were invited to “gesture” new words. The system was built on a Self-Organizing Map (adapted from MASOM, a software architecture originally developed by Kıvanç Tatar and Philippe Pasquier) in which short, syllabic vocal sounds were mapped to a 2-dimensional latent space. The gestures made by visitors were captured by a camera and projected onto this 2-dimensional map, which then played back the associated vocal sounds. Through the act of gesturing, visitors could “perform” new instances of words and phrases while exploring the latent space of vocal sounds.
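The mapping described above can be sketched in miniature. The code below is an illustrative stand-in for the MASOM-based system, not its actual implementation: a tiny self-organizing map arranges audio feature vectors (e.g. 13 MFCC-like coefficients) on a 2-D grid, and a helper converts a normalized camera-tracked gesture position into a grid cell whose associated sound would be played back. All class and parameter names are assumptions for the sketch.

```python
import numpy as np

class SoundSOM:
    """Minimal self-organizing map: audio feature vectors -> nodes on a 2-D grid."""

    def __init__(self, rows=8, cols=8, dim=13, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.weights = rng.random((rows, cols, dim))  # one prototype vector per node

    def best_matching_unit(self, f):
        """Grid cell whose prototype vector is closest to feature vector f."""
        d = np.linalg.norm(self.weights - f, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, features, epochs=20, lr0=0.5, radius0=3.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
            radius = max(radius0 * (1 - epoch / epochs), 0.5)
            for f in features:
                r, c = self.best_matching_unit(f)
                for i in range(self.rows):
                    for j in range(self.cols):
                        d2 = (i - r) ** 2 + (j - c) ** 2
                        h = np.exp(-d2 / (2 * radius ** 2))  # neighbourhood falloff
                        self.weights[i, j] += lr * h * (f - self.weights[i, j])

def gesture_to_cell(x, y, rows=8, cols=8):
    """Map a normalized gesture position (0..1, 0..1) to a grid cell index."""
    return min(int(y * rows), rows - 1), min(int(x * cols), cols - 1)
```

In a full installation, each grid cell would hold a pointer to a syllabic sound sample, so nearby gestures trigger acoustically similar sounds because the SOM places similar feature vectors on neighbouring nodes.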

Projected on the wall were spectral images of the respective frequencies contained in these “new words”, visualizing the actions performed. In this way, gesture is de-materialized into sound and sound is re-materialized into image in a multi-modal experience of “languaging”.
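A spectral image of the kind projected on the wall can be computed with a short-time Fourier transform. The function below is a minimal sketch, assuming a mono signal and NumPy only; frame and hop sizes are illustrative choices, not the installation's actual settings.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram of a mono signal via a short-time Fourier transform."""
    window = np.hanning(frame)  # taper each frame to reduce spectral leakage
    frames = np.array([signal[i:i + frame] * window
                       for i in range(0, len(signal) - frame + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, time_frames)
```

For example, one second of a 440 Hz tone sampled at 8 kHz yields a spectrogram whose energy concentrates near bin 14 (each bin spans 8000/256 = 31.25 Hz).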

Chatterbox


In this part of the installation, visitors were invited to join a gibberish conversation with four loudspeaker-agents. Each loudspeaker presented a different “languaging” personality that spoke a gibberish language reconstructed from an actual language. The loudspeaker-agents could only hear one another; they could not see whether another agent was about to speak. Through the sound of their voices alone they had to negotiate whose turn it was to speak, at times politely awaiting their turn and at times rudely interrupting in anticipation.

The gibberish languages were constructed by recombining vocal sounds, not according to any semantic measures, but by focusing on paralinguistic features, such as intonation and rhythm. While this eliminated the traditional meaning of language, the gibberish language allowed the visitor to explore the meaning that remained in the act of “paralanguaging”. This was accomplished through another adaptation of the MASOM software.
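Recombination by paralinguistic features can be illustrated with a toy example. The sketch below is not the MASOM adaptation itself; it assumes a small hand-made syllable inventory annotated with opening and closing pitch, and chains syllables so that each one roughly continues the previous pitch contour, producing gibberish that carries intonation but no semantics.

```python
import random

# Toy syllable inventory: (label, opening pitch in Hz, closing pitch in Hz).
# Labels and pitch values are illustrative placeholders, not the actual corpus.
SYLLABLES = [
    ("ba", 120, 140), ("du", 140, 110), ("ki", 200, 180),
    ("mo", 110, 130), ("ta", 180, 210), ("ne", 210, 150),
]

def babble(syllables, length=8, seed=0):
    """Chain syllables so each opening pitch roughly continues the previous
    closing pitch: meaning from contour and rhythm, not from semantics."""
    rng = random.Random(seed)
    current = rng.choice(syllables)
    out = [current[0]]
    for _ in range(length - 1):
        # Score candidates by pitch continuity, with jitter to avoid fixed loops.
        current = min(syllables,
                      key=lambda s: abs(s[1] - current[2]) + 20 * rng.random())
        out.append(current[0])
    return out
```

A real system would additionally match rhythmic features such as syllable duration, but the principle is the same: selection is driven by paralinguistic similarity rather than by any dictionary.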

Subsequently, the loudspeaker-agents had to negotiate their turns through vocal sounds alone, akin to a conference call without video. By defining a number of simple rules, such as “start speaking when the other stops” and “start speaking when you anticipate the other will stop”, the loudspeaker-agents would interact and display rich and complex behaviour. They would interrupt each other in anticipation or negotiate whose turn is next through short utterances of speech. The inspiration for governing this behaviour came from the subsumption architecture, originally developed by roboticist Rodney Brooks for creating intelligent and complex behaviour through the parallel layering of simple rules.
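The layered-rule idea can be sketched as a discrete-time simulation. This is a hypothetical reconstruction, not the installation's software: each agent checks a higher-priority rule (interrupt when another speaker seems near the end of a turn) before a lower-priority one (speak after enough silence), echoing how subsumption layers higher behaviours over lower ones. All names and parameters are illustrative.

```python
import random

class TalkerAgent:
    """Turn-taking agent driven by two layered rules."""

    def __init__(self, name, patience, boldness, rng):
        self.name = name
        self.patience = patience   # ticks of silence tolerated before speaking
        self.boldness = boldness   # probability of interrupting near a turn's end
        self.remaining = 0         # ticks left in the current utterance
        self.waited = 0
        self.rng = rng

    def speaking(self):
        return self.remaining > 0

    def step(self, others_speaking, other_near_end):
        if self.speaking():
            self.remaining -= 1
            return None
        # Layer 2 (higher priority): anticipate the end of a turn and cut in.
        if other_near_end and self.rng.random() < self.boldness:
            return self.start()
        # Layer 1: start speaking once the room has been silent long enough.
        if not others_speaking:
            self.waited += 1
            if self.waited >= self.patience:
                return self.start()
        else:
            self.waited = 0
        return None

    def start(self):
        self.remaining = self.rng.randint(3, 8)  # utterance length in ticks
        self.waited = 0
        return self.name

def simulate(ticks=60, seed=1):
    rng = random.Random(seed)
    agents = [TalkerAgent(f"agent{i}", patience=2 + i, boldness=0.1 * i, rng=rng)
              for i in range(4)]
    log = []  # (tick, name) whenever an agent starts to speak
    for t in range(ticks):
        speaking = [a for a in agents if a.speaking()]
        near_end = any(0 < a.remaining <= 2 for a in speaking)
        for a in agents:
            started = a.step(bool(speaking), near_end)
            if started:
                log.append((t, started))
    return log
```

Even with just these two rules per agent, the simulated conversation shows overlapping turns, interruptions, and varying gaps, which is the kind of emergent richness the installation relied on.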



The above video presents a short demonstration of the software developed for the gibberish-languaging agents in a translanguaging debate.

Acknowledgements


The project was commissioned and supported by the Faculty of Education.

Susan O’Neil – Professor and Associate Dean, Academic and Research, Faculty of Education

Inspired by the work of:
    Dr. Diane Dagenais – Professor, Faculty of Education
    Dr. Angel Lin – Professor, Faculty of Education, Canada Research Chair in Plurilingual & Intercultural Education
    Dr. Danièle Moore – University Professor, Faculty of Education
Copyright
Kıvanç Tatar
©2018-2022
