Exposing the Bias in Artificial Intelligence: Digital Assemblage


What is human?
    A collection of limbs, a torso, and a head—joined by a voice?
    A speaking face framed by a camera?
    A body reduced to text, descriptors, and categories; to be parsed, sorted, and retrieved?



Exposing the Bias in Artificial Intelligence: Assemblage continues the series by shifting attention from persons towards digital assemblages in which artificial intelligence models construct, fragment, and recombine the human. Rather than treating identity as a unified entity, this edition foregrounds the digital assemblage: the association of demographic categories with visual and auditory qualities, through latent linguistic relations, by which AI systems operationalize the idea of a person.

Building on the previous edition, The Machine Lexicon, this work again draws upon public demographic categorizations—specifically those used in German state statistics, where populations are broadly grouped by continents of citizenship. These coarse classifications function here not as representations of lived identity, but as structural components within prompts. When paired with machine-generated adjectives, they become simplified demographic signifiers through which image generation models interpolate and render the digital human.

The linguistic material emerges from a process of prompting six large language models—ChatGPT, Claude, DeepSeek, Grok, Llama, and Mistral—to produce adjectives associated with masculinity and femininity. Aggregated and combined, these outputs form a shared machine-derived repertoire of traits. This repertoire is not treated as meaning in itself, but as raw material: fragments of language that expose tendencies, biases, and sedimented correlations learned from large-scale data.

Together, the demographic categories and the adjective assemblage establish a structured prompt space for generating human portraits using Stable Diffusion. Images are produced through formulations such as:
“a [adjective] person from [continent], full face, looking at the camera.”
By maintaining a neutral, present-day framing—free from speculative or futuristic aesthetics explored in the first iteration, The Cyber Future—the work directs attention away from imagined futures and toward contemporary algorithmic imaginaries. In doing so, it allows the representational logic embedded within widely used generative architectures (including word-vector-driven models such as CLIP and CLAP) to surface more clearly.
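The prompt space described above can be sketched as a simple Cartesian product of the two vocabularies. This is an illustrative sketch only: the template is quoted from the text, while the adjective and continent lists are hypothetical stand-ins, not the project's actual machine-derived repertoire or statistical categories.

```python
# Sketch of the structured prompt space, assuming placeholder vocabularies.
from itertools import product

# Hypothetical stand-ins for the machine-derived adjective repertoire
adjectives = ["confident", "gentle"]
# Hypothetical stand-ins for the continent-level statistical categories
continents = ["Africa", "Asia", "Europe"]

def build_prompt(adjective: str, continent: str) -> str:
    """Fill the template quoted in the text with one adjective-continent pair."""
    return f"a {adjective} person from {continent}, full face, looking at the camera"

# Every pairing yields one Stable Diffusion prompt
prompt_space = [build_prompt(a, c) for a, c in product(adjectives, continents)]
```

Each resulting string would then be passed to the image generation model, so the portrait series enumerates the full grid of category-adjective combinations rather than any single formulation.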

While the generated portraits adhere to recognizable conventions of photographic representation, the audiovisual performance disassembles and reconfigures these materials into a spatial exploration of latent relations. Human figures, descriptors, and statistical proximities are transformed into abstracted three-dimensional environments, where identities appear as associations and clusters. This spatialization depicts relations: how features are grouped, distances are calculated, and differences are organized within the model’s internal structures.

The resulting assemblage reveals how AI systems do not merely portray people, but actively compose them—through the aggregation of categorical attributes, learned correlations, and visual norms. Faces become surfaces where linguistic bias, demographic reduction, and probabilistic inference converge.

Rather than prescribing an interpretation, Exposing the Bias in Artificial Intelligence: Assemblage offers a transparent framework for observation. The work invites viewers to attend to their own perceptual and cultural positioning as they encounter these machine-constructed figures. Whether the portraits appear convincing, unsettling, familiar, or inadequate remains unresolved, shaped by the viewer’s own histories, expectations, and embodied knowledge.

By staging identity as an assemblage rather than a unified essence, this edition positions itself as an opening for critical examination. It proposes a space where the mechanics through which contemporary AI models encode, generate, and circulate images of people can be questioned—without closure, and without reduction—within an ongoing democratic conversation about representation in automated culture.
Copyright
Kıvanç Tatar
©2018-2022

Art Collectives
-> A.I.D (Istanbul)
-> Gold Saucer (Vancouver)

NFTs
-> Foundation
+ upcoming...