Diary / Behind the Scenes

Seeing Sounds: The Geometry of Silence

My journey through generative art, TouchDesigner, drone music, and my Final Project.

ENTRY LOG: RIO DE JANEIRO // 05:30 AM

When I first started university five years ago, my understanding of design was entirely confined to grids, typography, and brand manuals. I never imagined that my graduation project would involve writing code to generate living geometry from ambient noise.

The inspiration came from a very specific atmosphere: the quiet, low hum that hangs over Guanabara Bay in the early hours of the morning, just before Rio wakes up. That distinct silence became the baseline for my Final Project in generative art. Instead of drawing a static composition, I decided to use creative coding to build a series of audio-reactive visuals that materialize sound frequencies into physical space on the screen.

Audio Engineering

Designing the Void: Project Gemini

Most people think of design as purely visual, but in generative art, I started with my ears. I returned to an ambient drone album I composed back in 2019 called Gemini. Inspired by the constellation, the tracks were originally designed to document a sonic journey through deep space.

Working with drone music is a unique process because it relies on sustained tones and gradual frequency shifts rather than traditional beats or melodies. It demands patience. In a digital landscape built to fracture our attention into fifteen-second scrolling loops, drone music forces stillness.

It physically occupies the room. By using these tracks as my foundation, my goal was to build a digital sanctuary where the listener is forced to sit, observe, and absorb the slow progression of the sound.

Visual Theory

The Ghost of Kandinsky

The immediate challenge was figuring out how to give a physical body to a sound that lacks a clear rhythm. To solve this, I turned to Wassily Kandinsky and the concept of synesthesia in art as my theoretical baseline.

Kandinsky didn’t just paint; he experienced synesthesia, mapping colors to temperatures and sounds to specific shapes—a yellow C-sharp, a deep blue cello note. My project is an attempt to translate his theories into modern creative code. I started asking practical questions: what is the geometric equivalent of a 440Hz sine wave? How does a distorted bass frequency alter the visual temperature of the canvas? By feeding audio data directly into the system, the underlying math dictates the visual output, turning sound into an architect of generative geometry.
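At its core, this kind of synesthetic mapping is just a lookup from frequency bands to visual attributes. Here is a minimal Python sketch of the idea; the band boundaries and the shape/color pairings are my own illustrative placeholders, not Kandinsky's actual system or my final project mapping:

```python
# Hypothetical synesthetic mapping: frequency band -> (shape, color).
# The boundaries and pairings are illustrative, not canonical.
MAPPING = [
    (120.0,  ("circle",   "deep blue")),   # low drones
    (500.0,  ("square",   "red")),         # mid tones
    (2000.0, ("triangle", "yellow")),      # bright highs
]

def map_frequency(freq_hz: float) -> tuple[str, str]:
    """Return the (shape, color) pair for a frequency in Hz."""
    for upper_bound, pair in MAPPING:
        if freq_hz < upper_bound:
            return pair
    return ("point", "white")  # anything above the last band

print(map_frequency(440.0))  # -> ('square', 'red')
```

The interesting design work is not the lookup itself but choosing thresholds that feel perceptually honest, which is exactly the validation question raised in the Doubts below.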

[ SYNESTHESIA_MAPPER.EXE ] — interactive demo: select a sound frequency to trigger its geometric and thermal equivalent in real time. (Audio enabled; keep your volume low.)
The Algorithm

Wrestling with the Nodes

To build these interactive systems, I spend most of my days inside TouchDesigner. A leading tool for creative coding and real-time generative art, it is a node-based programming environment where data flows through interconnected operators; the resulting tangle of wires is affectionately called 'noodle soup'.

My workflow involves extracting the peak and RMS data from the audio tracks using CHOPs (Channel Operators). I then route those specific values into TOPs (Texture Operators) and SOPs (Surface Operators). For instance, a low-frequency drone might trigger noise displacement on a 3D sphere, making the geometry physically vibrate in sync with the track. It is a delicate balancing act—one wrong connection crashes the software, but the right one perfectly aligns the visual output with the audio source.
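The peak and RMS extraction that the CHOPs perform can be sketched outside TouchDesigner in plain Python. This is a standalone illustration, not my actual network; the scale factor stands in for the Math CHOP multiplier, which in practice is tuned by ear:

```python
import math

def analyze(samples: list[float]) -> tuple[float, float]:
    """Return (peak, rms) for one buffer of audio samples in [-1, 1]."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, rms

def displacement(rms: float, scale: float = 2.0) -> float:
    """Map RMS loudness to a noise-displacement amount on the sphere.
    The scale constant is a placeholder for a Math CHOP multiplier."""
    return rms * scale

# One buffer of a 55 Hz sine drone at 44.1 kHz.
buffer = [math.sin(2 * math.pi * 55 * n / 44100) for n in range(1024)]
peak, rms = analyze(buffer)
print(f"peak={peak:.3f} rms={rms:.3f} disp={displacement(rms):.3f}")
```

In the real patch the same numbers arrive per-frame from an Audio File In CHOP, so the displacement value breathes with the track instead of being computed once.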

Network editor snapshot: AudioFileIn (CHOP) → Math (CHOP) → Noise (SOP) → Render (TOP)
Methodology

Mapping the Unknown: The CSD Matrix

Before opening TouchDesigner and connecting audio channels to geometry, I needed to organize the project scope. In design research, we use a CSD Matrix (Certainties, Suppositions, and Doubts) to separate technical requirements from aesthetic assumptions. I applied this framework to my generative system.

[ CERTAINTIES ]

  • > The final output must maintain a stable 60fps frame rate during real-time rendering.
  • > The system must mathematically map specific audio frequencies to distinct geometric behaviors.
  • > The project bridges pure generative art and visual communication design.

[ SUPPOSITIONS ]

  • > The TouchDesigner network can be scaled and adapted for different spatial formats (projections, screens).
  • > Generative workflows offer a competitive advantage over manual keyframing in creating immersive experiences.
  • > The target audience will find the slow, non-linear ambient visuals engaging.

[ DOUBTS ]

  • > How to properly validate the synesthetic mapping with users without relying on subjective bias?
  • > What is the optimal balance between manual control (VJing) and full algorithmic autonomy?
  • > Will there be latency issues when converting the audio buffer into mesh displacements in complex scenes?
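The latency doubt is at least partly answerable with arithmetic: the audio buffer length sets a floor on how stale the data driving the mesh can be. A quick back-of-the-envelope check, using a typical sample rate and buffer size rather than measured values from my setup:

```python
# How does audio-buffer latency compare to the frame budget at 60 fps?
SAMPLE_RATE = 44100   # Hz, a common default
BUFFER_SIZE = 512     # samples per buffer, a typical setting

buffer_latency_ms = BUFFER_SIZE / SAMPLE_RATE * 1000
frame_budget_ms = 1000 / 60

print(f"audio buffer: {buffer_latency_ms:.1f} ms")  # ~11.6 ms
print(f"frame budget: {frame_budget_ms:.1f} ms")    # ~16.7 ms
# The buffer refreshes faster than a frame is drawn, so buffer latency
# alone should not cause visible lag; heavy SOP cooking in complex
# scenes is the more likely bottleneck.
```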

Beyond the Screen: Graduation

Graduating from university is a strange space. You are halfway out the door, navigating the shift from student to professional, while still tied to academic deadlines. This project became my way of making sense of that transition.

I wanted this Final Project to be more than just a graduation requirement. I wanted it to reflect the kind of designer I am becoming—someone who writes code to create art and interactive systems, rather than just delivering static visual interfaces.

The process is not always smooth. There are days when the node networks break and the audio output is just a mess of noise. But then, a shape on the screen finally aligns with a chord I wrote for Gemini. The math translates into something you can feel, and the system works.

What about you?

Synesthesia changes how we process stimuli. Leave a comment if you map sounds to specific colors in your daily life.

Send me a message