Exposing the Bias in Artificial Intelligence: The Cyber Future

a live audiovisual performance series by auto_impulse
(2024-2025)




Exposing the Bias in Artificial Intelligence: The Cyber Future is an audiovisual performance that investigates how AI models visually encode and reproduce societal biases. Working with human profile images generated through Stable Diffusion, the project probes the subtle and overt stereotyping embedded in machine-produced representations. Each image emerges from prompts that combine gendered adjectives (positive and negative, masculine and feminine) with a range of ethnic descriptors. Through this systematic exploration, the work reveals the latent tendencies and value-laden patterns within the search space of generative AI systems.
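The combinatorial prompting described above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline used in the performance; the adjective and descriptor lists here are hypothetical placeholders standing in for the project's real vocabularies.

```python
# Sketch of the prompt-combination scheme: cross gendered adjectives
# with ethnic descriptors to enumerate a grid of profile-image prompts.
# The word lists below are illustrative placeholders only.
from itertools import product

gendered_adjectives = ["gentle", "aggressive", "nurturing", "dominant"]
ethnic_descriptors = ["Asian", "Black", "Hispanic", "Middle Eastern", "White"]

def build_prompts(adjectives, descriptors):
    """Pair every adjective with every descriptor into a portrait prompt."""
    return [
        f"profile portrait of a {adj} {desc} person"
        for adj, desc in product(adjectives, descriptors)
    ]

prompts = build_prompts(gendered_adjectives, ethnic_descriptors)
print(len(prompts))  # 4 adjectives x 5 descriptors = 20 prompts
# Each prompt would then be fed to a Stable Diffusion pipeline
# (e.g. via the diffusers library) to generate one profile image.
```

Systematically walking such a grid, rather than sampling prompts ad hoc, is what makes the biased tendencies of the model's search space comparable across identity categories.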

Approached through speculative and futuring methods in human–machine interaction and grounded in feminist science and technology studies, the performance embraces a cyberpunk aesthetic with fragmented identities, glitch-tinged surfaces, neon-lit contours, and machinic atmospheres. These AI-generated figures are placed inside a techno-social imaginary where the synthetic imagination of a future collides with today’s biases. This aesthetic serves as an expressive layer that amplifies questions about futuristic identity, power, and representation in algorithmic culture.
 
The work refrains from proposing an ideal of neutrality or an expectation of an unbiased AI model. Instead, it cultivates a critical and situated encounter with the biases that emerge. The generative search space becomes a more transparent terrain where tendencies, distortions, and normative assumptions become visible and open to interpretation. The intention is not to direct how AI should be used, but rather to empower audiences with awareness that can guide their own creative and ethical choices when interacting with these technologies.
Through this lens, the performance becomes an act of exposure that renders the socio-technical norms behind algorithmic processes more legible. Biases and aesthetic values differ across cultures and geographies; therefore, the experience of these images, whether humorous, unsettling, beautiful, or unacceptable, remains intentionally open. The viewer’s engagement becomes part of the work’s unfolding inquiry.


Exposing the Bias in Artificial Intelligence: The Cyber Future positions itself as a speculative and visually charged exploration of AI's representational politics. It refrains from claiming authority over what is best, instead offering a space where we collectively consider what these algorithmic visions disclose about our present conditions and the futures we are shaping.








Acknowledgement


The work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program—Humanity and Society (WASP-HS), funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation. I would like to thank Priscilla Tissot for recommending the US job statistics dataset used for the taxonomy of ethnicities, and the list of adjectives of aesthetic appraisal proposed in the work by Arvidsson (2009).


References
    Arvidsson, Sofia. "A Gender Based Adjectival Study of Women's and Men's Magazines." 2009.

Copyright
Kıvanç Tatar
©2018-2022
