The Beauty of Data

Using Cinema 4D and cutting-edge machine learning to generate unique 3D surfaces.

Studio Above&Below is a London-based art and technology practice. Founded by Royal College of Art alumni Daria Jelonek and Perry-James Sugden, the studio worked with the Ministry of Culture and Science of the State of North Rhine-Westphalia to showcase the area's diverse ecosystem.

Recently, they used Cinema 4D and Redshift to create “Aquateque,” an original short film navigating Germany’s Ruhr River. They also used a Generative Adversarial Network (GAN) to develop unique 3D surfaces. A type of machine learning model, a GAN pits two competing neural networks against each other: a generator produces new data while a discriminator judges it against the original training set, pushing the generator toward ever more convincing outputs. Systems like this are used to create everything from contemporary art to video game texture upscaling and even deepfake videos.
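To make that adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is illustrative only, not the studio's code: the networks are tiny, and random noise stands in for real training images.

```python
# Minimal GAN sketch: a generator and a discriminator compete, and the
# generator learns to produce data resembling the training set.
# Illustrative only -- tiny networks, random tensors as stand-in "images".
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32  # latent vector size, flattened 32x32 image

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),       # fake "images" in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                    # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.rand(16, IMG) * 2 - 1    # placeholder for real images
    fake = generator(torch.randn(16, LATENT))

    # Discriminator: label real images 1, generated images 0.
    d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```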

We chatted with Studio Above&Below to learn more about the development and creation of “Aquateque.” Here’s what they had to say.

Our London-based art and technology studio creates data-driven films and immersive installations. We work with museums, scientists, and brands to push the boundaries of digital media. Our studio has created artwork with cutting-edge technologies, using AR, XR, and AI with data to make the invisible visible.

“Aquateque” is an eight-minute short film that navigates the Ruhr, a roughly 220 km river in Germany. The film follows the river from source to mouth, based on thousands of images and sounds we captured. We created our own material dataset library and processed it through two competing artificial neural networks, which formed the hybrid, augmented landscapes and materials that may emerge in the area one day. In addition to Cinema 4D and Redshift, we used visual AI tools along with Houdini and Octane.

The whole project was a five-month commission. Using thousands of images of the water surface, rock formations, and green landscapes, we trained a popular GAN architecture until we found surfaces we were happy with. These became the augmented layer in the film. We also ran the training on carbon-neutral cloud computing.
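As a side note on how such a dataset is typically prepared: GAN architectures generally expect images at a fixed, square resolution, so field photos are usually cropped and resized first. Here is a minimal sketch with Pillow; the folder names and target size are placeholder assumptions, not the studio's actual pipeline.

```python
# Center-crop and resize raw photos to the square resolution a GAN expects.
# Folder names and target size are placeholders.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("raw_photos"), Path("dataset"), 512
DST.mkdir(exist_ok=True)

for i, path in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    side = min(img.size)                  # largest centered square crop
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((SIZE, SIZE), Image.LANCZOS)
    img.save(DST / f"{i:05d}.png")
```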

The GAN outputs, or AI art, were then transferred into 3D CG space. We used Cinema 4D, X-Particles, and Houdini to generate 3D forms, then brought all the generated models and material libraries into Cinema 4D. We rendered the objects and animated PBR materials using a mix of Octane and Redshift.
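As one illustration of how a generated image can be wired into a material, here is a minimal sketch for Cinema 4D's Python Script Manager. The texture file name is a placeholder, and it builds a standard material rather than the studio's Octane or Redshift setups.

```python
# Cinema 4D Python sketch: load a GAN-generated image as the color texture
# of a new standard material. Run from C4D's Script Manager, where `doc`
# (the active document) is predefined. The file path is a placeholder.
import c4d

def main():
    mat = c4d.BaseMaterial(c4d.Mmaterial)        # new standard material
    mat.SetName("GAN Surface")

    shader = c4d.BaseShader(c4d.Xbitmap)         # bitmap shader
    shader[c4d.BITMAPSHADER_FILENAME] = "gan_output_0001.png"
    mat.InsertShader(shader)
    mat[c4d.MATERIAL_COLOR_SHADER] = shader      # wire into the Color channel

    doc.InsertMaterial(mat)                      # add to the active document
    c4d.EventAdd()                               # refresh the UI

if __name__ == '__main__':
    main()
```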

Those Cinema 4D renders were composited over the original footage our datasets came from, and we did post-processing with Cinema 4D's object tracking, After Effects, and DaVinci Resolve.

We could also transfer PBR materials and surfaces into phone-based AR experiences using Unity or Unreal Engine.

The film has an underlying storyline that follows the steps of creating these AI outputs. We scanned natural elements like trees and rocks during our travels along the river; then, in the film, you can see machines scanning those same surfaces. We really liked the resulting watery, fantastical rock formations.

We worked with 3D artist Axel Schoterman on some of the scenes in “Aquateque.” One of our favorites is a scene of overwhelming beauty and color, shot in the middle of winter at the river landscape, where the augmented objects melded naturally with the original riverscape. The GAN inputs and final materials feel otherworldly, sitting at the intersection of the natural elements, the human-made additions, and the machine-made overlay.

We want the audience to think about how we will collectively work with machines and ecology in the future within the art, design, and technology fields. Our aim is to inspire and push how digital material ecologies and 3D objects could be created and designed using artificial neural networks.

We would also like the audience to take away the idea that machine-human-nature collaborations can be very enriching, and that unexpected outcomes from software are as valuable as curated content. In the end, it is the designer, artist, and human who makes the final decisions.

We have also tried our best to use carbon-neutral computing throughout the entire process. We want makers to be aware of their footprint, even though each of us is just a small fragment of the global computational impact.

Credits:
Concept & Development: Einar Fehrholz, Studio Above&Below | Daria Jelonek, Perry-James Sugden
Directors of Photography: Leon Schirdewahn, Ravi Sejk
CGI: Axel Schoterman, Studio Above&Below | Daria Jelonek, Perry-James Sugden
Cut & VFX: Studio Above&Below | Daria Jelonek
Sound Design: Einar Fehrholz
Programming Development: Studio Above&Below | Perry-James Sugden
Exhibition Design: Einar Fehrholz
Special support: Daniel Traebing, Jan Ehlen, Suzanne Fehrholz, David Janzen, Makroscope e.V., Verein für aktuelle Kunst / Ruhrgebiet e.V., Jana Stolzer, Philip Steffens, Matthias Schliewe, Claudia Weber
Funded by: European Centre For Creative Economy, Ministry of Culture and Science of the State of North Rhine-Westphalia


Author

Michael Maher, Filmmaker/Writer – Denton, Texas