<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>Kıvanç Tatar</title>
	<link>https://kivanctatar.com</link>
	<description>Kıvanç Tatar</description>
	<pubDate>Mon, 13 Apr 2026 05:53:39 +0000</pubDate>
	<generator>https://kivanctatar.com</generator>
	<language>en</language>
	
		
	<item>
		<title>News</title>
				
		<link>https://kivanctatar.com/News</link>

		<pubDate>Wed, 27 Oct 2021 13:14:55 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/News</guid>

		<description>News

Kickstarted a new research group at Chalmers: AI in Computational Arts, Music, and Games. Slowly moving my academic research there, while keeping this page for artworks.
</description>
		
	</item>
		
		
	<item>
		<title>Exposing the Bias in AI-Digital-Assemblage</title>
				
		<link>https://kivanctatar.com/Exposing-the-Bias-in-AI-Digital-Assemblage</link>

		<pubDate>Mon, 13 Apr 2026 05:53:39 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Exposing-the-Bias-in-AI-Digital-Assemblage</guid>

		<description>Exposing the Bias in Artificial Intelligence: Digital Assemblage


What is human?
A collection of limbs, a torso, and a head—joined by a voice?
A speaking face framed by a camera?
A body reduced to text, descriptors, and categories; to be parsed, sorted, and retrieved?


&#60;img width="2560" height="1440" width_o="2560" height_o="1440" data-src="https://freight.cargo.site/t/original/i/e369571da0bc4f99329a15d09fae0970d12bc080fc600c7b88a55d32db6797df/TDMovieOut.1.png" data-mid="247129889" border="0" data-scale="53" src="https://freight.cargo.site/w/1000/i/e369571da0bc4f99329a15d09fae0970d12bc080fc600c7b88a55d32db6797df/TDMovieOut.1.png" /&#62;
Exposing the Bias in Artificial Intelligence: Assemblage continues the series by shifting attention from persons towards digital assemblages in which artificial intelligence models construct, fragment, and recombine the human. Rather than treating identity as a unified entity, this edition foregrounds the digital assemblage: the association of demographic categories with visual and auditory qualities through linguistic latent relations, in which AI systems operationalize the idea of a person.


Building on the previous edition, The Machine Lexicon, this work again draws upon public demographic categorizations—specifically those used in German state statistics, where populations are broadly grouped by continents of citizenship. These coarse classifications function here not as representations of lived identity, but as structural components within prompts. When paired with machine-generated adjectives, they become simplified demographic signifiers through which image generation models interpolate and render the digital human.


The linguistic material emerges from a process of prompting six large language models—ChatGPT, Claude, DeepSeek, Grok, Llama, and Mistral—to produce adjectives associated with masculinity and femininity. Aggregated and combined, these outputs form a shared machine-derived repertoire of traits. This repertoire is not treated as meaning in itself, but as raw material: fragments of language that expose tendencies, biases, and sedimented correlations learned from large-scale data.
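To make the aggregation step concrete, here is a minimal sketch of how such a shared repertoire could be assembled once the six models' responses have been collected; the adjectives and their counts are illustrative placeholders, not the work's actual outputs.

from collections import Counter

# Adjective lists as they might come back from the six models; the words here
# are illustrative placeholders, not the work's actual outputs.
responses = {
    "ChatGPT":  {"masculine": ["stoic", "rugged"],    "feminine": ["graceful", "gentle"]},
    "Claude":   {"masculine": ["assertive", "stoic"], "feminine": ["nurturing", "graceful"]},
    "DeepSeek": {"masculine": ["bold", "rugged"],     "feminine": ["delicate", "gentle"]},
    "Grok":     {"masculine": ["dominant", "bold"],   "feminine": ["elegant", "delicate"]},
    "Llama":    {"masculine": ["strong", "assertive"],"feminine": ["soft", "elegant"]},
    "Mistral":  {"masculine": ["rugged", "strong"],   "feminine": ["gentle", "soft"]},
}

# Aggregate into a shared repertoire, keeping how often each adjective recurs
# across models; recurrence is what marks a correlation as sedimented.
lexicon = {"masculine": Counter(), "feminine": Counter()}
for lists in responses.values():
    for category, adjectives in lists.items():
        lexicon[category].update(adjectives)

for category, counts in lexicon.items():
    print(category, counts.most_common())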


Together, the demographic categories and the adjective assemblage establish a structured prompt space for generating human portraits using Stable Diffusion. Images are produced through formulations such as: "a [adjective] person from [continent], full face, looking at the camera." By maintaining a neutral, present-day framing—free from the speculative or futuristic aesthetics explored in the first iteration, The Cyber Future—the work directs attention away from imagined futures and toward contemporary algorithmic imaginaries. In doing so, it allows the representational logic embedded within widely used generative architectures (including word-vector-driven models such as CLIP and CLAP) to surface more clearly.
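As a rough sketch of how such a prompt space can be swept, the following Python fragment pairs each adjective with each continent and renders one portrait per combination with the diffusers library. The checkpoint name and the two input lists are assumptions; the text specifies only that Stable Diffusion and this prompt formulation are used.

import itertools

import torch
from diffusers import StableDiffusionPipeline

# Illustrative inputs; the actual lexicon is machine-generated and larger.
adjectives = ["gentle", "assertive"]
continents = ["Africa", "Asia", "Europe"]

# The checkpoint is an assumption; the text only says "Stable Diffusion".
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for adjective, continent in itertools.product(adjectives, continents):
    prompt = f"a {adjective} person from {continent}, full face, looking at the camera"
    image = pipe(prompt).images[0]   # one portrait per cell of the prompt matrix
    image.save(f"{adjective}_{continent}.png")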


While the generated portraits adhere to recognizable conventions of photographic representation, the audiovisual performance disassembles and reconfigures these materials into a spatial exploration of latent relations. Human figures, descriptors, and statistical proximities are transformed into abstracted three-dimensional environments, where identities appear as associations and clusters. This spatialization depicts relations: how features are grouped, distances are calculated, and differences are organized within the model’s internal structures.
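One plausible way to compute such a spatialization, sketched below, is to embed the generated portraits with a CLIP image encoder and let dimensionality reduction turn embedding distances into 3D positions. The file names, the choice of CLIP checkpoint, and the use of PCA are illustrative assumptions, not the artist's documented pipeline.

import torch
from PIL import Image
from sklearn.decomposition import PCA
from transformers import CLIPModel, CLIPProcessor

# Hypothetical file names following the adjective_continent pattern above.
paths = ["gentle_Africa.png", "gentle_Asia.png",
         "assertive_Africa.png", "assertive_Asia.png"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.open(p) for p in paths]
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    features = model.get_image_features(**inputs)   # one embedding per portrait
features = features / features.norm(dim=-1, keepdim=True)

# Reduce the embeddings to three dimensions so that proximity in the rendered
# environment mirrors proximity in the model's feature space.
positions = PCA(n_components=3).fit_transform(features.numpy())
print(positions)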


The resulting assemblage reveals how AI systems do not merely portray people, but actively compose them—through the aggregation of categorical attributes, learned correlations, and visual norms. Faces become surfaces where linguistic bias, demographic reduction, and probabilistic inference converge.


Rather than prescribing an interpretation, Exposing the Bias in Artificial Intelligence: Assemblage offers a transparent framework for observation. The work invites viewers to attend to their own perceptual and cultural positioning as they encounter these machine-constructed figures. Whether the portraits appear convincing, unsettling, familiar, or inadequate remains unresolved, shaped by the viewer’s own histories, expectations, and embodied knowledge.


By staging identity as an assemblage rather than a unified essence, this edition positions itself as an opening for critical examination. It proposes a space where the mechanics through which contemporary AI models encode, generate, and circulate images of people can be questioned—without closure, and without reduction—within an ongoing democratic conversation about representation in automated culture.</description>
		
	</item>
		
		
	<item>
		<title>Exposing the Bias in AI: Machine Lexicon</title>
				
		<link>https://kivanctatar.com/Exposing-the-Bias-in-AI-Machine-Lexicon</link>

		<pubDate>Sun, 08 Mar 2026 14:52:11 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Exposing-the-Bias-in-AI-Machine-Lexicon</guid>

		<description>Exposing the Bias in Artificial Intelligence: 
The Machine Lexicon



a live audiovisual performance series by auto_impulse
(2025-2026)

Teaser coming soon...


Exposing the Bias in Artificial Intelligence: The Machine Lexicon continues the series by turning toward the demographic and linguistic stereotypes that showcase how AI models encode and model human identity. This edition draws its background categories from public statistics of the German state (the first iteration was prepared for a festival in Germany), where population groups are broadly organized by continents of citizenship. These continental backgrounds are used as the demographic dimension of prompts for an image generator, serving as simplified demographic signifiers that reveal how generative AI models interpret large-scale demographic concepts.


First, a vocabulary of masculine and feminine adjectives is gathered by prompting six large language models: ChatGPT, Claude, DeepSeek, Grok, Llama, and Mistral. Their aggregated contributions form a shared lexicon that reflects machine-derived stereotypes and tendencies. Second, the continental backgrounds and the identity lexicon constitute a matrix for generating profile images using Stable Diffusion. A dataset of human portraits is generated through prompts such as "a [adjective] person from [continent], full face, looking at the camera". This structure avoids the speculative or futuristic framing investigated in the first iteration (The Cyber Future). This second edition situates the images firmly within present-day algorithmic imaginaries, allowing the model to reveal the representational tendencies encoded within the commonly used modules of generative AI (such as CLIP and word-to-vector modules) rather than aesthetic cues from the invented futures of the first edition.
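Those latent linguistic relations can be probed directly. The sketch below measures how a CLIP text encoder positions a few such prompts relative to one another; the prompts are illustrative, and cosine similarity is an assumed choice of proximity measure.

import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative prompts built from the work's template.
prompts = [
    "a gentle person from Europe, full face, looking at the camera",
    "an assertive person from Europe, full face, looking at the camera",
    "a gentle person from Africa, full face, looking at the camera",
]

inputs = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    text = model.get_text_features(**inputs)
text = text / text.norm(dim=-1, keepdim=True)

# Pairwise cosine similarities: the distances the encoder imposes on these
# phrasings are the representational tendencies the performance surfaces.
print(text @ text.T)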


While the profiles themselves remain grounded in visual conventions, the audiovisual performance transforms these materials into an exploration of abstracted latent spaces. The work shapes these latent structures into an aestheticized three-dimensional environment. This spatial world slowly reveals monotonous matrices of identities and the machine-generated, feature-based relations between generated images and their descriptive language, revealing how the model organizes difference. The emerging human profiles and their clusters evoke the AI systems’ internal representations through which identities are clustered, separated, or drawn together.


This edition invites the audience to a reflective engagement with the systems that encode people. It invites viewers to consider how demographic simplifications and machine-generated adjectives shape the resulting portraits. The intention is not to prescribe a fixed interpretation of those portraits, but rather to open a transparent space where audiences can observe how foundation AI approaches encode, represent, and generate visual identity.


Through its interplay of portraiture and machine-reduced concept spaces, the performance presents an opportunity to examine algorithmic structures with critical attention. The viewer’s perception becomes part of the inquiry, shaped by their own cultural understandings, positionality, histories, and heritage. Whether the images appear fitting, reassuring, strange, disturbing, or pleasing remains open, guided by the viewer’s encounter rather than pre-imposed meaning.


Exposing the Bias in Artificial Intelligence: The Machine Lexicon positions itself as an act towards transparency in how contemporary AI models materialize identity through reduced demographic categories and machine-generated descriptors. It refrains from offering a definitive reading, instead creating a setting where the workings of these systems can be questioned, interpreted, and reflected upon as part of an ongoing democratic dialogue about representation in automated culture.
Acknowledgement


The work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS), funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.



</description>
		
	</item>
		
		
	<item>
		<title>Exposing the Bias in AI: The Cyber Future</title>
				
		<link>https://kivanctatar.com/Exposing-the-Bias-in-AI-The-Cyber-Future</link>

		<pubDate>Mon, 19 Aug 2024 19:21:16 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Exposing-the-Bias-in-AI-The-Cyber-Future</guid>

		<description>Exposing the Bias in Artificial Intelligence: The Cyber Future

a live audiovisual performance series by auto_impulse
(2024-2025)







Exposing the Bias in Artificial Intelligence: The Cyber Future is an audiovisual performance that investigates how AI models visually encode and reproduce societal biases. Working with human profile images generated through Stable Diffusion, the project probes the subtle and overt stereotyping embedded in machine-produced representations. Each image emerges from prompts combining gendered adjectives, positive and negative, masculine and feminine, with a range of ethnic descriptors. Through this systematic exploration, the work reveals the latent tendencies and value-laden patterns within the search space of generative AI systems.
Approached through speculative and futuring methods in human–machine interaction and grounded in feminist science and technology studies, the performance embraces a cyberpunk aesthetic with fragmented identities, glitch-tinged surfaces, neon-lit contours, and machinic atmospheres. These AI-generated figures are placed inside a techno-social imaginary where the synthetic imagination of a future collides with today’s biases. This aesthetic serves as an expressive layer that amplifies questions about futuristic identity, power, and representation in algorithmic culture.
The work refrains from proposing an ideal of neutrality or an expectation of an unbiased AI model. It rather cultivates a critical and situated encounter with the biases that emerge. The generative search space becomes a more transparent terrain where tendencies, distortions, and normative assumptions become visible and open to interpretation. The intention is not to direct how AI should be used, rather to empower audiences with awareness that can guide their own creative and ethical choices when interacting with these technologies.
Through this lens, the performance becomes an act of exposure that renders the socio-technical norms behind algorithmic processes more legible. Biases and aesthetic values differ across cultures and geographies; therefore, the experience of these images, whether humorous, unsettling, beautiful, or unacceptable, remains intentionally open. The viewer’s engagement becomes part of the work’s unfolding inquiry.


Exposing the Bias in Artificial Intelligence: The Cyber Future positions itself as a speculative and visually charged exploration of AI’s representational politics. It refrains from claiming authority over what is best, rather offering a space where we collectively consider what these algorithmic visions disclose about our present conditions and the futures we are shaping.




&#60;img width="1080" height="1080" width_o="1080" height_o="1080" data-src="https://freight.cargo.site/t/original/i/52ff396c8e5b041cc93306bc2266f8504c92e4e42764cf2c7d96c5afc3c36efb/Exposing-the-Bias.png" data-mid="221711322" border="0"  src="https://freight.cargo.site/w/1000/i/52ff396c8e5b041cc93306bc2266f8504c92e4e42764cf2c7d96c5afc3c36efb/Exposing-the-Bias.png" /&#62;
Acknowledgement


The work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS), funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.

I would like to thank Priscilla Tissot for recommending the USA job statistics dataset for a taxonomy of ethnicities, and a list of adjectives of aesthetic appraisals proposed in the work by Arvidsson (2009).

References
Arvidsson, Sofia. "A Gender Based Adjectival Study of Women's and Men's Magazines." (2009).


</description>
		
	</item>
		
		
	<item>
		<title>Coding the Latent</title>
				
		<link>https://kivanctatar.com/Coding-the-Latent</link>

		<pubDate>Fri, 29 Apr 2022 19:17:50 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Coding-the-Latent</guid>

		<description>Coding the Latent

a live coding performance series by 
Kıvanç Tatar

No.1


...produced and premiered at Center for Art and Media &#124; ZKM Karlsruhe

Coding the Latent No.1 is a live performance that brings together live coding and artificial-intelligence-based audio generation. The performance is an experimental approach to hacking AI models using live coding. The AI model organizes a latent space of audio chunks, and live coding functions as a tool to explore that latent audio space. The approach builds on musical improvisation practice while emphasizing proactivity over reactivity and immediate musical reaction. Live coding becomes a tool for hacking the AI model in a live sonic design setting, while proactively planning compositional layers. This first edition of the Coding the Latent series invites the audience into a glitchy and noisy world of AI-synthesized sounds, spatialized on 47 speakers.
The series was initiated during my artist residency at the Center for Art and Media &#124; ZKM Karlsruhe, and premiered at the Live Coding Hacklab event as a part of the on-the-fly project funded by Creative Europe.
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.</description>
		
	</item>
		
		
	<item>
		<title>Plastic Biosphere</title>
				
		<link>https://kivanctatar.com/Plastic-Biosphere</link>

		<pubDate>Wed, 27 Oct 2021 13:14:59 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Plastic-Biosphere</guid>

		<description>Plastic Biosphere

an audiovisual artwork series by 
Kıvanç Tatar

Plastic Biosphere is a series of artworks that highlights the imperative coexistence of humankind and the material, plastics. In the Anthropocene era, humankind produces over 300 million tons of plastic every year, and this production has become a geophysical force of nature. The ubiquitous material extends beyond our reach and control, creating its own life-cycles and existing as an unignorable element of the biosphere.

I initiated this series during an artist residency at UNATC CINETic as a part of the Ars Electronica 2020 Kepler’s Gardens series. The first edition was an interactive artwork. The second edition was an audiovisual artwork for which I collaborated with dancer Tamar Tabori. The last edition was a music album recorded through the second edition.


No.3
    Plastic Biosphere No.3 by Kıvanç Tatar
The music album of the Plastic Biosphere series is the third and final edition of this series. It was super fun to explore the aesthetics of plastic material on urbanscapes using AI technologies in this series. The music album was made during the production of the live audiovisual performance. You can find the video of the AV premiere below.
Some musical layers of this album were produced using a model of MASOM trained on a dataset of synth recordings. As an example of those musical layers, the last track is made of a single musical layer generated by MASOM that is passed through a reverb and some post-mix effects; I am only changing a filter here and there on that track. Hence, the AI-generated elements in this album are pad-like, ambient, synth-ish musical layers. The rhythmic elements are my own making or playing. I enjoyed the production process in this work, where the pad-like layers generated by the AI software were the starting point of each track.
The album was mixed and mastered by my dear friend and colleague Uğur Akagündüz.

No.2


premiered online on August 16th, 2021

performed @interplay on September 23rd and 24th, 2021

...an online stream of a live audio-visual performance with contemporary dance, produced and performed using artificial intelligence technologies.
    
    
    

A silhouette of a human body pixelates an Anthropocene landscape stylized by AI technologies that copy the look, color schemes, and stroke styles of images of plastic. The body is in a playful, real-time interaction with moving images of an Anthropocene landscape that is transformed into the artwork through the aesthetics of plastics in its natural life-cycle. A Kinect camera tracks the body to create real-time interactivity with the assistance of AI solutions. The real-time interaction between the machine and the human body evolves as the performance unfolds.



	

The Plastic Biosphere series uses two different applications of AI technologies: moving images and music. For moving images, I focus on the style imitation applications of Deep Learning algorithms. The style imitation algorithm is combined with real-time shaders to create interactivity. On the music side, I perform the music in this piece in real time with a musical AI system titled MASOM. Hence, Plastic Biosphere explores how the body can start a dialogue with the machinery of AI through music and moving images. The interactivity between the human body and the machine emphasizes the far-reaching effects of humankind's embodied existence on nature.


	

Credits
Kıvanç Tatar - Artistic Director / Creative AI Development

Tamar Tabori - Contemporary Dance

Dan O'Shea - Technical Director

Remy Siu - Technical Consultant






	This stream is hosted on the traditional, ancestral, unceded territory of the Coast Salish Peoples including the territories of the xʷməθkwəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh) First Nations.



This work has been funded by the Canada Council for the Arts and supported by Vancouver New Music.


	&#60;img width="1201" height="220" width_o="1201" height_o="220" data-src="https://freight.cargo.site/t/original/i/6ddece45167a192cc0744d69a3991ccc0aab5823648fc71a75578c92a4c83cf7/CCA_RGB_black_e.png" data-mid="122825727" border="0"  src="https://freight.cargo.site/w/1000/i/6ddece45167a192cc0744d69a3991ccc0aab5823648fc71a75578c92a4c83cf7/CCA_RGB_black_e.png" /&#62;
&#60;img width="1024" height="538" width_o="1024" height_o="538" data-src="https://freight.cargo.site/t/original/i/81d42e839bdff1d8ba6b8d0f2887d38973fa8371d3b4afceae91666f6bec5ae2/vnm-2.jpg" data-mid="122825725" border="0" data-scale="46" src="https://freight.cargo.site/w/1000/i/81d42e839bdff1d8ba6b8d0f2887d38973fa8371d3b4afceae91666f6bec5ae2/vnm-2.jpg" /&#62;


No.1
Kıvanç Tatar, 2020

@UNATC Garden Bucharest, Romania
ARS ELECTRONICA FESTIVAL 2020









&#60;img width="1859" height="1237" width_o="1859" height_o="1237" data-src="https://freight.cargo.site/t/original/i/b67b3afc227965fbbbdfb9804c5686e76aad4bee43c422c3d331af271bf0b4e2/image-2.jpg" data-mid="122825719" border="0"  src="https://freight.cargo.site/w/1000/i/b67b3afc227965fbbbdfb9804c5686e76aad4bee43c422c3d331af271bf0b4e2/image-2.jpg" /&#62;
&#60;img width="1800" height="1224" width_o="1800" height_o="1224" data-src="https://freight.cargo.site/t/original/i/6532987ddc7f87b86f56e04ceaacce040942ef44fcf49df375fba381d1d91d7b/image-1.jpg" data-mid="122825717" border="0"  src="https://freight.cargo.site/w/1000/i/6532987ddc7f87b86f56e04ceaacce040942ef44fcf49df375fba381d1d91d7b/image-1.jpg" /&#62;
&#60;img width="1855" height="1237" width_o="1855" height_o="1237" data-src="https://freight.cargo.site/t/original/i/faf199f952395b97531bb0187145b814e58855a7e616a9bbfa0215b1ab8539d7/image-3.jpg" data-mid="122825722" border="0"  src="https://freight.cargo.site/w/1000/i/faf199f952395b97531bb0187145b814e58855a7e616a9bbfa0215b1ab8539d7/image-3.jpg" /&#62;


	

No.1 is an interactive installation where a silhouette of the audience’s body pixelates an Anthropocene landscape that is stylized by artificial intelligence using images of plastic, and music is generated in real time by an AI system called MASOM.


	


	

UNATC Garden presents the results of the online collaboration between five international artists active in interactive art, virtually gathered in CINETic Residencies, and CINETic Centre researchers and artists.


	

UNATC Garden Bucharest YouTube channel: https://www.youtube.com/channel/UC_H8vS0mUUeKsLv7Tkw4AbA
Ars Electronica page: https://ars.electronica.art/keplersgardens/en/cinetic-residencies-talk/
UNATC page: https://cinetic.arts.ro/plastic-biosphere/









Credits










	




Kıvanç Tatar - Artist, Creative AI Developer

Assoc. Prof. Alexandru Berceanu - CINETic Director

Grigore Burloiu - Technical Director







	





	





Marius Hodea - 3D Modelling

Iuliana Gherghescu - Set Design

Andrei Gindac - Admin







Panel Talk @Ars Electronica 2020
UNATC Distant Art
CINETic Residencies


</description>
		
	</item>
		
		
	<item>
		<title>A Conversation with Artificial Intelligence</title>
				
		<link>https://kivanctatar.com/A-Conversation-with-Artificial-Intelligence</link>

		<pubDate>Wed, 27 Oct 2021 13:15:01 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/A-Conversation-with-Artificial-Intelligence</guid>

		<description>A Conversation with Artificial Intelligence
2016

Vancouver, BC, Canada



This is the first public performance where I improvise with the Interactive Music System MASOM. The human performer is on the left channel and the musical AI agent is on the right channel. For this performance, MASOM was trained on a 30-minute experimental music recording, available here: https://kivanctatar.bandcamp.com/track/self-distruption.
The video is courtesy of Ash Tanasiychuk from Vandocument.


</description>
		
	</item>
		
		
	<item>
		<title>Digital Ripples</title>
				
		<link>https://kivanctatar.com/Digital-Ripples</link>

		<pubDate>Tue, 05 Apr 2022 20:21:36 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Digital-Ripples</guid>

		<description>Digital Ripples

NFT Series


&#60;img width="1184" height="1184" width_o="1184" height_o="1184" data-src="https://freight.cargo.site/t/original/i/9cf3880b1e42e9baf35a77f4d7e4abf74a81e727e9627b973a54acf3738d868a/step_14_cover.png" data-mid="138775696" border="0" alt="01 - Ocean &#38;amp; Music" data-caption="01 - Ocean &#38;amp; Music" src="https://freight.cargo.site/w/1000/i/9cf3880b1e42e9baf35a77f4d7e4abf74a81e727e9627b973a54acf3738d868a/step_14_cover.png" /&#62;
&#60;img width="1184" height="1184" width_o="1184" height_o="1184" data-src="https://freight.cargo.site/t/original/i/c6b699a7b375bd6d84d77f2a26098e5bae3a55ea7797cdfc1e268863bda4678a/step_16_cover.png" data-mid="141212968" border="0" alt="02 - Music Ripples" data-caption="02 - Music Ripples" src="https://freight.cargo.site/w/1000/i/c6b699a7b375bd6d84d77f2a26098e5bae3a55ea7797cdfc1e268863bda4678a/step_16_cover.png" /&#62;
&#60;img width="1184" height="1184" width_o="1184" height_o="1184" data-src="https://freight.cargo.site/t/original/i/9ede66c36d01e2823f799db4fe1dc8472ddaaf4f5227772b8d4bb5a0924c6294/step_17_cover.png" data-mid="141212762" border="0" alt="03 - Horizon&#38;rsquo;s music" data-caption="03 - Horizon’s music" src="https://freight.cargo.site/w/1000/i/9ede66c36d01e2823f799db4fe1dc8472ddaaf4f5227772b8d4bb5a0924c6294/step_17_cover.png" /&#62;




The series focuses on the uniqueness of the dissemination of digital artworks in today’s networked society: how digital artworks are shared across the internet, resembling the ripples generated by a water droplet on a lake. In this context, Digital Ripples, as a work consisting of different layers, tells an abstract aesthetic story in each layer. The work builds on natural behaviors, such as multimodal flow, and the connection between visceral and auditory sensations. Digital Ripples looks for the memory in these waves and for the source of the information in that memory, at an unrefined intersection of nature, daily life, and music. The memory of the city, the change of the city, the flow of nature, the sharing of information, and the fluctuation of information are expressed verbally and visually in different parts of the work.

 
An iterative design process was used in the work, in which the composition consists of visual reflections on words taught to an artificial intelligence. The changing values depend on the texture and on the connection between humans and computers in the visual world. Continuing with the text and the computer's processing of the text, a bridge was built between the human and the computer. This link matches the visuals to the words in the machine's memory, and it also establishes a link with the music through words. It also offers a cartography for texture manipulation and texture imagination in Digital Ripples.

Credits
Kivanc Tatar - Video and Artistic Director
Paul Paroczai - Sound Design
Esra Ozkan - Curator 
Nancy Lee - Curator 
produced with support from the Canada Council for the Arts

</description>
		
	</item>
		
		
	<item>
		<title>Raw Music from Free Movements</title>
				
		<link>https://kivanctatar.com/Raw-Music-from-Free-Movements</link>

		<pubDate>Tue, 02 Nov 2021 07:30:56 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Raw-Music-from-Free-Movements</guid>

		<description>Raw Music From Free Movements
by Daniel Bisig and Kıvanç Tatar




Best Paper Award at AIMC 2021

&#60;img width="1965" height="595" src="https://freight.cargo.site/w/1000/i/85bb8befe79cdb2997282d380a6d91fcfd8e96902e829b98d1fa8be4e112e09b/movement_audio.png" /&#62;





Abstract


Raw Music from Free Movements is a deep learning architecture that translates pose sequences into audio waveforms. The architecture combines a sequence-to-sequence model generating audio encodings and an adversarial autoencoder that generates raw audio from audio encodings. Experiments have been conducted with two datasets: a dancer improvising freely to a given music, and music created through simple movement sonification. The paper presents preliminary results. These will hopefully lead closer towards a model which can learn from the creative decisions a dancer makes when translating music into movement, and then follow these decisions in reverse for the purpose of generating music from movement.
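A shape-level PyTorch sketch of the two-stage idea in the abstract follows: a sequence-to-sequence model maps pose sequences to audio-encoding frames, and a decoder (the generator half of the adversarial autoencoder) maps encodings to raw audio windows. Only the 29-joint quaternion pose format comes from the paper; all layer sizes and the zero-input unrolling are assumptions for illustration.

import torch
import torch.nn as nn

# Shape-level sketch, not the authors' exact architecture: a seq2seq model maps
# pose sequences to audio encodings; a decoder maps encodings to raw audio.
POSE_DIM = 29 * 4              # 29 joints, one quaternion each (from the paper)
ENC_DIM, AUDIO_WIN = 64, 2048  # latent and audio window sizes are assumptions

class Pose2AudioEncoding(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(POSE_DIM, 256, batch_first=True)
        self.decoder = nn.GRU(ENC_DIM, 256, batch_first=True)
        self.proj = nn.Linear(256, ENC_DIM)

    def forward(self, poses, steps):
        _, h = self.encoder(poses)                      # summarize the pose sequence
        z = torch.zeros(poses.size(0), steps, ENC_DIM)  # simplistic zero-input unroll
        out, _ = self.decoder(z, h)
        return self.proj(out)                           # audio-encoding frames

# Decoder half of the adversarial autoencoder: encoding frame to raw audio window.
audio_decoder = nn.Sequential(
    nn.Linear(ENC_DIM, 512), nn.ReLU(),
    nn.Linear(512, AUDIO_WIN), nn.Tanh(),
)

poses = torch.randn(1, 300, POSE_DIM)             # ten seconds of poses at 30 fps
encodings = Pose2AudioEncoding()(poses, steps=160)
audio = audio_decoder(encodings)                  # (1, 160, 2048) audio windows
print(audio.shape)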




Implementation

&#60;img width="3861" height="1222" width_o="3861" height_o="1222" data-src="https://freight.cargo.site/t/original/i/32b7f02c5341ccd661c1889725b899721b62e95f206e72cd1a350c066444357e/seq2seq_autoencoder.png" data-mid="123348292" border="0"  src="https://freight.cargo.site/w/1000/i/32b7f02c5341ccd661c1889725b899721b62e95f206e72cd1a350c066444357e/seq2seq_autoencoder.png" /&#62;





Examples


Two different datasets were employed for training, named the improvisation dataset and the sonification dataset. The improvisation dataset consists of pose sequences and audio that were recorded while a dancer was freely improvising to a given music. The dancer is an expert with a specialisation in contemporary dance and improvisation. The music consists of short excerpts of royalty-free music, including experimental electronic music, free jazz, and contemporary classical. The pose sequences were acquired using a markerless motion capture system (The Captury) in the iLab at MotionBank, University of Applied Sciences Mainz. The recording is 10 minutes in length, which corresponds to a sequence of 30000 poses. Each pose consists of 29 joints whose relative orientations are represented by quaternions.



The sonification dataset contains the same pose sequences as the improvisation dataset. The audio of this dataset was created afterwards, through sonification, employing a very simple sound synthesis consisting of two sine oscillators controlled by the dancer’s hands. The frequency and amplitude of each oscillator are proportional to the height and velocity of the corresponding hand, respectively. The authors created this dataset to verify the performance of RAMFEM.
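Because the sonification rule is fully specified (frequency follows hand height, amplitude follows hand velocity), it can be re-created in a few lines; in the sketch below the sample rate, frame rate, frequency range, and the placeholder hand trajectories are all assumptions.

import numpy as np

# Re-creation of the described sonification under assumed constants: each hand
# drives one sine oscillator, frequency from hand height, amplitude from speed.
SR, FPS = 48000, 30
heights = np.random.rand(300, 2)   # placeholder trajectories: two hands, 10 s
velocities = np.abs(np.diff(heights, axis=0, prepend=heights[:1])) * FPS

samples_per_frame = SR // FPS
audio = np.zeros(heights.shape[0] * samples_per_frame)
for hand in range(2):
    freq = 100.0 + 900.0 * np.repeat(heights[:, hand], samples_per_frame)
    amp = np.clip(np.repeat(velocities[:, hand], samples_per_frame), 0.0, 1.0)
    phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency into phase
    audio += amp * np.sin(phase)

audio /= max(1e-9, np.abs(audio).max())        # normalize before writing out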






Resources
Code: Bitbucket
Paper: AIMC 2021
Supplementary Materials (Datasets + Code in Paper + Examples): Zenodo



Acknowledgements
The authors’ thanks go to the dancers who have contributed countless hours of their spare time to the motion capture recordings. Further thanks go to MotionBank for providing their infrastructure and assisting in the recordings. This research is conducted in the context of a Marie Curie Fellowship and is funded by the European Union. The collaboration of the second author has been supported by the Canada Council for the Arts.





&#60;img width="308" height="164" width_o="308" height_o="164" data-src="https://freight.cargo.site/t/original/i/d7e78e2225e8440f2dfb0a6c928c22e3299ccf271db272d249e0dd454d6e316e/marie-curie.jpg" data-mid="182533670" border="0"  src="https://freight.cargo.site/w/308/i/d7e78e2225e8440f2dfb0a6c928c22e3299ccf271db272d249e0dd454d6e316e/marie-curie.jpg" /&#62;&#38;nbsp; &#38;nbsp;&#60;img width="800" height="200" width_o="800" height_o="200" data-src="https://freight.cargo.site/t/original/i/1ec8a26de92d49f5e60807e9d36629145f77d4d0e9345518398d2b249f3294ba/CCFA_BW_black_96_e.jpg" data-mid="182533427" border="0" data-scale="33" src="https://freight.cargo.site/w/800/i/1ec8a26de92d49f5e60807e9d36629145f77d4d0e9345518398d2b249f3294ba/CCFA_BW_black_96_e.jpg" /&#62;&#38;nbsp;</description>
		
	</item>
		
		
	<item>
		<title>Latent Timbre Synthesis</title>
				
		<link>https://kivanctatar.com/Latent-Timbre-Synthesis</link>

		<pubDate>Wed, 27 Oct 2021 13:15:00 +0000</pubDate>

		<dc:creator>Kıvanç Tatar</dc:creator>

		<guid isPermaLink="true">https://kivanctatar.com/Latent-Timbre-Synthesis</guid>

		<description>Latent Timbre Synthesis

Fast Deep Learning tools for experimental electronic music
by Kıvanç Tatar, Daniel Bisig, and Philippe Pasquier


Latent Timbre Synthesis is a new audio synthesis method using deep learning. The synthesis method allows composers and sound designers to interpolate and extrapolate between the timbre of multiple sounds using the latent space of audio frames. The implementation includes a fully working application with a graphical user interface, called interpolate_two, which enables practitioners to generate timbres between two audio excerpts of their selection using interpolation and extrapolation in the latent space of audio frames. Our implementation is open source, and we aim to improve the accessibility of this technology by providing a guide for users with any technical background. Our study includes a qualitative analysis where nine composers evaluated the Latent Timbre Synthesis and the interpolate_two application within their practices.
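At its core, the method is a linear blend between the latent sequences of two sounds, with blend values outside the unit interval giving extrapolation. The sketch below keeps that logic runnable by standing in for the trained VAE with fixed random linear maps; the latent size, the 84 CQT bins, and the Griffin-Lim remark are assumptions consistent with the "icqt+gL" naming used in the examples further down.

import numpy as np

# Stand-ins for the trained VAE keep the blending logic runnable; in the real
# system, encode/decode map CQT magnitude frames to and from latent vectors.
rng = np.random.RandomState(0)
W_enc = rng.randn(84, 16)   # 84 CQT bins and a 16-dim latent are assumptions
W_dec = rng.randn(16, 84)

def encode(frames):
    return frames @ W_enc    # stand-in for the VAE encoder

def decode(latents):
    return latents @ W_dec   # stand-in for the VAE decoder

frames_1 = np.abs(rng.randn(100, 84))   # magnitude frames of sound 1
frames_2 = np.abs(rng.randn(100, 84))   # magnitude frames of sound 2
z1, z2 = encode(frames_1), encode(frames_2)

# alpha = 0.0 reconstructs sound 1 and 1.0 reconstructs sound 2; values outside
# the unit interval extrapolate past either timbre, as in the example file names.
for alpha in (0.0, 0.1, 1.0, 1.1, -0.1):
    z = (1.0 - alpha) * z1 + alpha * z2    # frame-wise linear blend
    magnitudes = np.abs(decode(z))
    # phase would then be recovered with a reconstruction technique such as
    # Griffin-Lim before resynthesis back to audio
    print(alpha, magnitudes.shape)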



The visualizations below illustrate a latent space generated by a Variational Autoencoder. On the right, the green and red dots represent the latent vectors of audio frames from two audio examples in the dataset, while the remaining dots are the latent vectors of a synthesized sound generated by interpolating the latent vectors of the two original audio files.
	&#60;img width="1800" height="1200" width_o="1800" height_o="1200" data-src="https://freight.cargo.site/t/original/i/b3175ccac87bbb409b6b42cd366fea05e24e7857d829f7be1ede3ba239ce30ae/full_latent_space_scatter.png" data-mid="122825759" border="0"  src="https://freight.cargo.site/w/1000/i/b3175ccac87bbb409b6b42cd366fea05e24e7857d829f7be1ede3ba239ce30ae/full_latent_space_scatter.png" /&#62;
	&#60;img width="1800" height="1200" width_o="1800" height_o="1200" data-src="https://freight.cargo.site/t/original/i/a7e03b4f0027d4f9f8c9580077d0f1e90112dd4f8299e5e188d7648141f7ef68/inter_latent_space_scatter.png" data-mid="122825763" border="0"  src="https://freight.cargo.site/w/1000/i/a7e03b4f0027d4f9f8c9580077d0f1e90112dd4f8299e5e188d7648141f7ef68/inter_latent_space_scatter.png" /&#62;





Publications 


-&#38;gt; Tatar, K., Bisig, D., &#38;amp; Pasquier, P. Latent Timbre Synthesis: Audio-based Variational Auto-Encoders for Music Composition Applications. The Special Issue of Neural Computing and Applications: “Networks in Art, Sound and Design.” https://doi.org/10.1007/s00521-020-05424-2

-&#38;gt; Tatar, K., Bisig, D., &#38;amp; Pasquier, P. (2020). Introducing Latent Timbre Synthesis. https://arxiv.org/abs/2006.00408





Compilation Album

💿️ Nine composers joined our qualitative study and composed short pieces using Latent Timbre Synthesis.

Source code



Examples
A set of examples is available here.


The naming convention of the example audio files is as follows. Original 1 and Original 2 are the excerpts of the original samples. The 00-original-icqt+gL_1 and 00-original-icqt+gL_2 tracks are generated from the original magnitude spectrums, with phase added afterwards using a reconstruction technique. Likewise, our Deep Learning model generates only the magnitude spectrum, and phase is added later using a reconstruction technique. Hence, original-icqt+gL_1 and original-icqt+gL_2 are the ideal reconstructions that the Deep Learning model aims to achieve during training.

Reconstructions -&#38;gt; 00-x_interpolations 0.0 and 00-x_interpolations 1.0 are reconstructions of the original audio files using the Deep Learning model, original 1 and 2 respectively. Ideally, these reconstructions should be as close as possible to the original magnitude responses combined with phase estimations, which are the original-icqt+gL_1 and original-icqt+gL_2 files, respectively.

Timbre Interpolations -&#38;gt; 00-x_interpolations 0.1 means that this sample is generated using 90% of the timbre of original_1 and 10% of the timbre of original_2. Think of 0.1 almost like a slider value from audio_1 to audio_2.

Timbre Extrapolations -&#38;gt; x_interpolations-1.1 means that we draw an abstract line between timbre examples 1 and 2 and, following the direction of that line, move 10% further away from timbre 2. X Extrapolations -0.1 means that we draw a line from timbre 2 to timbre 1 and move 10% further away from timbre 1 in that direction.
An example video of the interpolate_two app is on the way! We are also finalizing a set of visualizations as well as a qualitative study. I will keep this page updated as we progress.


Acknowledgements

This work has been supported by the Swiss National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Social Sciences and Humanities Research Council of Canada.

Ce travail est supporté par le Fonds national suisse de la recherche scientifique, le Conseil national des sciences et de l’ingénierie du Canada, et le Conseil national des sciences humaines et sociales du Canada.
&#60;img width="600" height="96" width_o="600" height_o="96" data-src="https://freight.cargo.site/t/original/i/16e8b804c0e8672cd8725cdb8eed12bcac8eaeac85955889cf74a5aeeca010e4/SNF_BW_E_POS.png" data-mid="122825741" border="0" data-scale="16" src="https://freight.cargo.site/w/600/i/16e8b804c0e8672cd8725cdb8eed12bcac8eaeac85955889cf74a5aeeca010e4/SNF_BW_E_POS.png" /&#62; &#60;img width="800" height="282" width_o="800" height_o="282" data-src="https://freight.cargo.site/t/original/i/f7d8fe8a1e1a748fe82f6a1607cce58dc6eb4d26273e9b8b5e7e83f0d2da2821/logozhdk_alpha_gray.png" data-mid="122825747" border="0" data-scale="11" src="https://freight.cargo.site/w/800/i/f7d8fe8a1e1a748fe82f6a1607cce58dc6eb4d26273e9b8b5e7e83f0d2da2821/logozhdk_alpha_gray.png" /&#62; &#60;img width="397" height="60" width_o="397" height_o="60" data-src="https://freight.cargo.site/t/original/i/6e2c62d64dd9a2515b843eb665c2b96306ddbbbfd411e8a366ba805b1921c266/ICST_logo_neu_grey50_sm.png" data-mid="122825739" border="0" data-scale="18" src="https://freight.cargo.site/w/397/i/6e2c62d64dd9a2515b843eb665c2b96306ddbbbfd411e8a366ba805b1921c266/ICST_logo_neu_grey50_sm.png" /&#62;
&#60;img width="182" height="51" width_o="182" height_o="51" data-src="https://freight.cargo.site/t/original/i/197a5847b1dbc0d6c2427b9fca87eef3528ddee32689068dcae337b0cd6ebe7a/sshrc-crsh.png" data-mid="122825737" border="0"  src="https://freight.cargo.site/w/182/i/197a5847b1dbc0d6c2427b9fca87eef3528ddee32689068dcae337b0cd6ebe7a/sshrc-crsh.png" /&#62;&#60;img width="99" height="45" width_o="99" height_o="45" data-src="https://freight.cargo.site/t/original/i/54c9f6ab2f7b8ba321ee15401fdbee6870ccaecf9092f8697e3a816e0f0d8c5c/sfu_logo.png" data-mid="122825745" border="0" data-scale="100" src="https://freight.cargo.site/w/99/i/54c9f6ab2f7b8ba321ee15401fdbee6870ccaecf9092f8697e3a816e0f0d8c5c/sfu_logo.png" /&#62;</description>
		
	</item>
		
	</channel>
</rss>