(Video: stress testing a new laptop.)
TouchDesigner is a node-based visual programming language for real-time interactive multimedia content, developed by the Toronto-based company Derivative. It has been used by artists, programmers, creative coders, software designers, and performers to create performances, installations, and fixed-media works (description adapted from the Derivative website).
After a few years working with Magic Music Visualizer (also a great video animation and processing application) I wanted something a bit more robust. The COVID lockdown, followed by the drab winter of 2020-21 left me with time to focus on learning a new skill.
Most of my learning came from November 2020 through February 2021 via YouTube tutorials from the very supportive TouchDesigner community. I treated this like a college course, with daily lessons, building along with the instructors rather than just watching passively. At roughly 30 hours a week for four months, I logged at least 480 hours of focused education, probably more. And of course, I am still always learning and exploring.
Using TouchDesigner for live performance
One of my primary motivations for learning TouchDesigner (TD) was to create live audio-reactive visuals, both for my own performances and for others. I soon found TD was much more than just a visualizer. I also found I could build things that would not be possible in other tools.
So far I’ve used TD in a live context several times, feeding its output into OBS (Open Broadcaster Software) via NDI and Syphon for live broadcast.
To further explore TD, I made some videos for my wife’s 4th grade class.
To get more practice, and to take a break from my own creative work, I offered to make some videos for other artists. This also served as a helpful use case while developing my JDRenderEngine, a pet project that leverages TD to overcome some of my frustrations with traditional video editing platforms.
There’s a lot more that I’d like to share on this. I’ve created many experiments – with and without music – that I think people would enjoy. Just as my music has been exploring chaos, noise, and probability, TD has allowed me to explore those concepts visually. However, the work is time-consuming. With the COVID lockdown lifting, and with some higher-priority projects picking up, I’ve had to step back from TD a bit. The weather is also much nicer now, so I’ve been trying to get outside more – away from screens. I still create something new weekly, so perhaps I’ll post more of that content soon.
At the moment, the big weakness in my setup is the GPU on my mid-2015 MacBook Pro. Things run pretty hot, and some features in TD (like the Line MAT) don’t even work properly on macOS. I am now trying to decide how to justify the purchase of a more powerful machine to continue development and learning. For now, TouchDesigner remains a fascinating tool in my toolbox for creating things I never would have thought possible even a few years ago.
I came to TouchDesigner from timeline-based video editing in programs like iMovie, Adobe Premiere, and DaVinci Resolve. For a few years I also built patches in Magic Music Visualizer. I was always frustrated by the UI of iMovie and similar programs; coming from the audio world, I never found their UX very good. I also disliked how they managed project data, creating a whole new project with every iteration of a piece.
I liked the scene-creation features of Magic Music Visualizer, though these too were quirky. As primarily an audio artist, I wanted a quick way to render the content I was creating in TouchDesigner. Out of the box, TouchDesigner didn’t have anything that checked those boxes for me, so I decided to build something.
As I dove deeper into TouchDesigner during my intensive study under lockdown, I realized that it could mostly replace iMovie and similar tools. I probably wouldn’t go back to Magic either (though I still think it’s a brilliant program, and I still like its scene-creation and logic features).
Of course, all of these tools could still be used in various combinations depending on the creative need, but I was looking for a “one-stop shop”.
Thus, in January 2021, I began work on what I would call the “Render Engine”, built specifically to solve problems I was encountering in content development. By March 2021 I had a working proof of concept and opened up testing with a fellow audio artist (Breakfast). Working on visual content for him allowed me to refine the tool further.
For full details about the tool, visit my GitHub page, where you will also find demo videos.