ChladniSonify: A Visual-Acoustic Mapping Method for Chladni Patterns in New Media Art Creation

2026-05-11

Tags: Sound, Artificial Intelligence
AI summary

The authors created a system called ChladniSonify that connects patterns formed by vibrating plates (Chladni patterns) to sounds in real time. They used physics simulations and a convolutional neural network to recognize these patterns quickly and accurately. Their system maps each identified pattern to a specific sound frequency instantly, allowing interactive audio-visual art creation. The authors tested the system and found it to be both fast and precise, making it useful for artists working with sound and visuals.

Keywords: Chladni patterns, Kirchhoff-Love plate theory, finite element simulation, convolutional neural network (CNN), CBAM (Convolutional Block Attention Module), sonification, real-time interaction, audio-visual mapping, Max/MSP, numerical programming
Authors
Yakun Liu, Hai Luan, Dong Liu, Zhiyu Jin
Abstract
In new media art creation, the mapping between vision and hearing is often subjective. As a classic carrier of sound visualization, Chladni patterns have great potential for building audio-visual mapping mechanisms. However, existing tools have pain points: high technical barriers to simulation, offline computation that precludes real-time interaction, and uncontrollable mapping rules in general-purpose sonification tools. To address these, this paper proposes ChladniSonify, a real-time visual-acoustic mapping method for Chladni patterns. Based on Kirchhoff-Love plate theory, we build a paired dataset via numerical programming and calibrate it using ANSYS finite element simulation. To handle the slender nodal lines of Chladni patterns, we adopt a lightweight CNN with CBAM to achieve high-precision, low-latency pattern classification. Finally, we build an end-to-end system in Python and Max/MSP, mapping recognized patterns to corresponding sine wave frequencies. Results show the system has excellent usability: the classification module achieves 99.33% accuracy on the test set with 7.03 ms inference latency; the mapped frequency matches the theoretical value with zero deviation; and the average end-to-end latency is under 50 ms, meeting real-time interactive needs. This work provides a reproducible engineering prototype for Chladni audio-visual art creation.
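To illustrate the kind of pattern-to-frequency mapping the abstract describes, the sketch below computes a Kirchhoff-Love plate eigenfrequency for a recognized mode and synthesizes the matching sine wave. This is a minimal illustration, not the paper's implementation: it assumes a simply supported rectangular steel plate (the one boundary condition with a closed-form solution), whereas the paper's free-edge Chladni plates require numerical programming calibrated against ANSYS FEM; all plate dimensions and material constants here are placeholder values.

```python
import numpy as np

def plate_eigenfrequency(m, n, a=0.24, b=0.24, h=0.001,
                         E=200e9, rho=7850.0, nu=0.3):
    """Eigenfrequency (Hz) of mode (m, n) for a simply supported
    rectangular Kirchhoff-Love plate.

    Closed-form case only; free-edge plates (as in actual Chladni
    experiments) have no closed form and need FEM calibration.
    Defaults sketch a 24 cm x 24 cm x 1 mm steel plate (assumed values).
    """
    D = E * h**3 / (12.0 * (1.0 - nu**2))  # flexural rigidity
    # omega_mn = pi^2 * [(m/a)^2 + (n/b)^2] * sqrt(D / (rho * h))
    omega = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
    return omega / (2.0 * np.pi)  # rad/s -> Hz

def sine_for_mode(m, n, duration=0.5, sr=44100):
    """Map a recognized mode label to a sine wave at its eigenfrequency."""
    f = plate_eigenfrequency(m, n)
    t = np.arange(int(duration * sr)) / sr
    return f, 0.5 * np.sin(2.0 * np.pi * f * t)

f, wave = sine_for_mode(2, 3)
print(f"mode (2,3) -> {f:.1f} Hz, {wave.shape[0]} samples")
```

In the full system described by the abstract, the classified mode would instead index a calibrated frequency table, and the resulting frequency would be sent from Python to Max/MSP (e.g. over a local network message) to drive the oscillator, rather than being synthesized in NumPy.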