Exposing the Bias in Artificial Intelligence:
The Machine Lexicon

a live audiovisual performance series by auto_impulse
(2025-2026)


Teaser coming soon...
Exposing the Bias in Artificial Intelligence: The Machine Lexicon continues the series by turning toward the demographic and linguistic stereotypes through which AI models encode human identity. This edition draws its background categories from public statistics of the German state (the first iteration was prepared for a festival in Germany), where population groups are broadly organized by continents of citizenship. These continental backgrounds are used as the demographic dimension of the prompts fed to an image generator, serving as simplified demographic signifiers that reveal how generative AI models interpret large-scale demographic concepts.

The work unfolds in two stages. First, a vocabulary of masculine and feminine adjectives is gathered by prompting six large language models: ChatGPT, Claude, DeepSeek, Grok, Llama, and Mistral. Their aggregated contributions form a shared lexicon that reflects machine-derived stereotypes and tendencies. Second, the continental backgrounds and the identity lexicon constitute a matrix for generating profile images with Stable Diffusion: a dataset of human portraits is produced through prompts such as "a [adjective] person from [continent], full face, looking at the camera". This structure avoids the speculative, futuristic framing investigated in the first iteration (The Cyber Future). Instead, this second edition situates the images firmly within present-day algorithmic imaginaries, allowing the model to reveal the representational tendencies encoded within commonly used components of generative AI (such as CLIP and word2vec-style embedding modules) rather than the aesthetic cues of invented futures from the first edition.
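The prompt matrix described above can be sketched as a simple cross-product of the two dimensions. This is only an illustrative sketch: the adjective and continent lists below are placeholders (the actual lexicon is aggregated from the six language models, and the actual continent categories come from the German state statistics), and the template follows the wording quoted in the text.

```python
from itertools import product

# Placeholder excerpts; the real lexicon is machine-generated by six LLMs,
# and the real background categories come from German public statistics.
adjectives = ["strong", "gentle", "ambitious"]
continents = ["Africa", "Asia", "Europe"]

TEMPLATE = "a {adjective} person from {continent}, full face, looking at the camera"

def build_prompt_matrix(adjectives, continents):
    """Cross every adjective with every continent to form the full prompt set."""
    return [
        TEMPLATE.format(adjective=a, continent=c)
        for a, c in product(adjectives, continents)
    ]

prompts = build_prompt_matrix(adjectives, continents)
print(len(prompts))   # 3 adjectives x 3 continents = 9 prompts
print(prompts[0])     # a strong person from Africa, full face, looking at the camera
```

Each resulting string would then be passed to the image generator, one portrait (or batch of portraits) per cell of the matrix.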

While the profiles themselves remain grounded in visual conventions, the audiovisual performance transforms these materials into an exploration of abstracted latent spaces. The work shapes these latent structures into an aestheticized three-dimensional environment. This spatial world slowly reveals monotonous matrices of identities and the machine-generated, feature-based relations between the images and their descriptive language, exposing how the model organizes difference. The emerging human profiles and their clusters evoke the AI systems’ internal representations, through which identities are clustered, separated, or drawn together.

This edition invites the audience into a reflective engagement with the systems that encode people. It invites viewers to consider how demographic simplifications and machine-generated adjectives shape the resulting portraits. The intention is not to prescribe a fixed interpretation of those portraits, but rather to open a transparent space where audiences can observe how foundation AI models encode, represent, and generate visual identity.

Through its interplay of portraiture and machine-reduced concept spaces, the performance offers a space in which algorithmic structures can be examined with critical attention. The viewer’s perception becomes part of the inquiry, shaped by their own cultural understandings, positionality, histories, and heritage. Whether the images appear fitting, reassuring, strange, disturbing, or pleasing remains open, guided by the viewer’s encounter rather than any pre-imposed meaning.

Exposing the Bias in Artificial Intelligence: The Machine Lexicon positions itself as an act of transparency in how contemporary AI models materialize identity through reduced demographic categories and machine-generated descriptors. It refrains from offering a definitive reading, instead creating a setting where the workings of these systems can be questioned, interpreted, and reflected upon as part of an ongoing democratic dialogue about representation in automated culture.

Acknowledgement


The work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program—Humanity and Society (WASP-HS), funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.


Copyright
Kıvanç Tatar
©2018-2022
