Direction: Earthbound
(2024.12)
“
Jianhao Zheng's multimedia performance harnesses live-streamed surveillance feeds and artificial intelligence to craft an evolving audiovisual landscape. As the perspective shifts outward, the work becomes increasingly fractured and distorted—a powerful metaphor for the paradoxical nature of digital connectivity, where increased reach often yields diminishing returns in genuine human connection.
”
written by , Tussle Magazine
We humans are physically trapped in our own bodies, and consequently in our personal perspectives and perceptions of the world. Digital technology today connects distant places, yet the very convenience of receiving remote information dampens our empathy and imagination. Seeing and hearing do not encompass the entirety of a perspective: we perceive the world not only through rational information but also through stories and emotions.
In this performance, I become the operator of the “Earthbound Express”, an imaginary vehicle of perspective, and the ticket-holding audience become its passengers. The vehicle runs on a track of distance: it departs from the platform at the performance venue and travels farther and farther away. Every passenger has a window seat, seeing, hearing, and feeling these distant perspectives.
At its final destination, the vehicle reaches outer space, the perspective furthest from our everyday life. All the different places and times, all the sunrises and sunsets, happen simultaneously on this tiny blue sphere wandering in the vast, empty cosmos. Distance on Earth becomes an arbitrary number, and beyond our differences in opinions and stances, we share the same feelings toward the Earth.
The journey takes passengers flying outward, imagining lives they may never experience, and finally looking back at themselves from 400 km above the ground. The vehicle never lands back on Earth; the round trip ends with a one-way view, and the missing descent from orbit happens within the passengers’ minds, drawing the essential connection between the individual and the perspective we all universally share.
In this custom-built TouchDesigner system, multiple YouTube live streams serve as the video source. An embedded script triggers searches that return the distance to each streaming location, and the video feeds are sorted in that order. A physical knob controls this distance parameter and serves as the main device for navigating the performance.
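As a rough illustration of the distance ordering, here is a minimal Python sketch, not the actual TouchDesigner network: it assumes each stream has already been tagged with the coordinates of its streaming location and sorts the feeds by great-circle distance from the venue. All names, URLs, and coordinates below are hypothetical placeholders.

```python
# Minimal sketch of the distance ordering, outside TouchDesigner.
# Assumes each stream is already tagged with its streaming location;
# the real system resolves these distances via an embedded search script.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Stream:
    name: str   # hypothetical label
    url: str    # placeholder YouTube URL
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

VENUE = (52.37, 4.90)  # placeholder venue coordinates

streams = [
    Stream("harbour cam", "https://youtube.com/live/PLACEHOLDER1", 40.71, -74.01),
    Stream("street cam",  "https://youtube.com/live/PLACEHOLDER2", 35.68, 139.65),
    Stream("summit cam",  "https://youtube.com/live/PLACEHOLDER3", 46.56, 7.98),
]

# Sort feeds from nearest to farthest; the knob then indexes into this order
# to "travel" outward from the venue.
streams.sort(key=lambda s: haversine_km(*VENUE, s.lat, s.lon))

def stream_for_knob(knob: float) -> Stream:
    """Map a normalised knob position (0..1) to a stream in distance order."""
    idx = min(int(knob * len(streams)), len(streams) - 1)
    return streams[idx]
```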
Using MediaPipe in TouchDesigner, people and objects are recognized in the video feeds, and that data is used to generate geometric shapes, colors, and patterns. The data is also translated into sound, which grows louder and more distorted as the distance increases.
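The sketch below shows one way such detection data could feed the visuals and the distance-driven loudness and distortion. It assumes the MediaPipe detections have already been extracted as normalised bounding boxes; the label-to-hue table and the mapping curves are hypothetical stand-ins, not the piece's actual values.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Normalised bounding box (0..1) for one person or object from MediaPipe."""
    label: str
    cx: float
    cy: float
    w: float
    h: float

def detection_to_visual(det: Detection):
    """Turn one detection into a shape: position, size, and a hue per label.
    The label-to-hue table is a hypothetical stand-in for the piece's palette."""
    hue = {"person": 0.0, "car": 0.33, "boat": 0.66}.get(det.label, 0.8)
    return {"pos": (det.cx, det.cy), "size": det.w * det.h, "hue": hue}

def distance_to_audio(distance_km: float, max_km: float = 20000.0):
    """Louder and more distorted as the feed gets farther from the venue."""
    x = min(distance_km / max_km, 1.0)
    return {"gain": 0.2 + 0.8 * x, "drive": x ** 2}  # illustrative curves
```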
The sound is generated in TouchDesigner in a few different ways. The coordinates and scale of the geometric shapes directly drive a group of oscillators, generating waves that follow, and audibly indicate, the changes in the data.
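A minimal sketch of that mapping, written in plain Python rather than TouchDesigner CHOPs: each shape's horizontal position sets an oscillator's pitch and its scale sets the loudness. The frequency range, duration, and example shape values are illustrative assumptions.

```python
import numpy as np

SR = 44100  # sample rate

def shape_to_tone(x_norm, scale, duration=0.5):
    """Map a shape's horizontal position to pitch and its scale to loudness.
    The 110-880 Hz range is an illustrative choice, not the piece's setting."""
    freq = 110.0 * 2 ** (x_norm * 3)   # sweep three octaves across the frame
    amp = min(scale, 1.0)              # larger shapes play louder
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    return amp * np.sin(2 * np.pi * freq * t)

# A group of shapes becomes a group of oscillators, summed into one signal.
shapes = [(0.2, 0.3), (0.5, 0.8), (0.9, 0.1)]  # (x position, scale), hypothetical
mix = sum(shape_to_tone(x, s) for x, s in shapes) / len(shapes)
```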
The more dynamic, evolving audio component comprises two sets of ratio-based oscillators, referencing Just Intonation and the perfect fifth of the Pythagorean scale respectively. Adjusting the ratios and the base frequencies changes the audience's emotional response to the sound dramatically, and this is used carefully and creatively in the work as an abstract device for storytelling.
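For reference, a small sketch of the two ratio sets as oscillator banks: one built from 5-limit Just Intonation ratios, the other from stacked 3:2 Pythagorean fifths folded back into a single octave. The specific ratio lists, base frequencies, and durations here are assumptions for illustration, not the values used in the performance.

```python
import numpy as np

SR = 44100

# 5-limit Just Intonation ratios for a major scale (standard values).
JUST_INTONATION = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]

def pythagorean_fifths(n):
    """Stack perfect fifths (3:2) and fold them back into one octave."""
    ratios, r = [], 1.0
    for _ in range(n):
        ratios.append(r)
        r *= 3 / 2
        while r >= 2.0:
            r /= 2.0
    return sorted(ratios)

def oscillator_bank(base_freq, ratios, duration=2.0):
    """Sum one sine oscillator per ratio above a common base frequency."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    return sum(np.sin(2 * np.pi * base_freq * r * t) for r in ratios) / len(ratios)

# Shifting the base frequency and swapping ratio sets changes the harmonic
# colour of the drone; these values are illustrative, not the piece's settings.
near = oscillator_bank(110.0, JUST_INTONATION)
far  = oscillator_bank(55.0, pythagorean_fifths(7))
```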