By Yiming Li

This project is an expansion and extension of my Design Domain project. Over the two semesters of Design Domain I worked on topics such as object form and the break-up and recombination of particles, looking for ways to present these ideas in TouchDesigner. I have covered some of that research in previous posts, so in this post I will focus on how the new project is presented and on its technical side.
My ideas for improving the previous project can be summarised in two points: enhancing the visual presentation and adding more interactivity. In this project I therefore focused on two things: redesigning the particle effects in TouchDesigner and building a new set of interaction logic.

Coincidentally, I have been spending a lot of my own time lately learning to arrange music with a MIDI keyboard, and I realised it would be really cool to link MIDI and TD together and use the keyboard to control the visuals of the whole performance.

First, the MIDI setup. I chose Ableton Live as the host software and let it provide the sound source as well as the signals for TD. Here I need to introduce a new component: OSC (Open Sound Control), a protocol for transferring data between multiple programs connected over a local network. Compared with TD's MIDI Device In, the OSC In operator provides more data channels and more control options. OSC In has its drawbacks, though: it depends heavily on the local network connection, and the data can arrive with a slight delay. In short, each approach has its own advantages and disadvantages, but on balance I chose OSC.
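To make the Live-to-TD link concrete, here is a minimal sketch of what an OSC message looks like on the wire, using only the Python standard library. The address `/vel` and port `7000` are my own placeholders, not values from the project; in practice a library such as python-osc would do this for you, and in my setup Live's OSC output handles it.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-pad to a multiple of 4 bytes, as the OSC spec requires
    (strings always get at least one terminating null)."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address string,
    a padded ',f' type-tag string, then a big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

def send_osc(address: str, value: float, ip: str = "127.0.0.1", port: int = 7000) -> None:
    """Fire the message at TD's OSC In over UDP; the port must match
    whatever you set on the OSC In operator."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (ip, port))
```

Because OSC rides on UDP, this also shows where the drawbacks come from: delivery depends entirely on the network, and nothing guarantees timing.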


After setting up the OSC components in both Live and TD and making sure they share the same output port and IP, I just needed to pick out the data channels I wanted. I focused on three Velocity channels, which report how hard each monitored key is pressed; the Note channels, which identify which specific key was pressed, I did not select. The interaction in my project therefore centres on three gestures: pressing or releasing one, two or three keys at the same time.
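The three-gesture logic above can be sketched as a tiny classifier: count how many of the monitored velocity channels are currently above zero and map that count to an interaction tier. The function name and the zero threshold are my own illustration, not part of the actual TD network.

```python
def classify_interaction(velocities, threshold=0.0):
    """Map the monitored velocity channels to an interaction tier:
    0 = idle, 1/2/3 = one, two, or three keys held at once."""
    held = sum(1 for v in velocities if v > threshold)
    return min(held, 3)
```

In TD the same idea can live in a Math CHOP summing the channels, or a few lines of Python in a CHOP Execute DAT.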


The next step was creating the visuals. I didn't use many new techniques for this part; the process of turning objects into particles and the classic Sphere + Noise + Transform trio were basically the same as before, though I did pick up a few new tricks. If you look closely at the final piece, you'll see it is actually two layers: although both are particle effects, only the particles on the upper layer really interact with the viewer.
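For readers unfamiliar with the Sphere + Noise + Transform pattern, here is a rough stand-in in plain Python: scatter points on a sphere, then push each point along its own direction by a pseudo-noise value. The sinusoidal "noise" below is only a cheap substitute for TD's Noise operator, and all the names and parameters are my own illustration.

```python
import math
import random

def sphere_points(n, radius=1.0, seed=0):
    """Scatter n points on a sphere by normalising random Gaussian vectors."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v)) or 1.0
        pts.append([radius * c / norm for c in v])
    return pts

def displace(points, amplitude=0.2, frequency=3.0):
    """Push each point in or out along its own direction by a sinusoidal
    pseudo-noise value, mimicking a Noise-driven displacement in TD."""
    out = []
    for x, y, z in points:
        n = math.sin(frequency * x) * math.cos(frequency * y) + math.sin(frequency * z)
        s = 1.0 + amplitude * n
        out.append([x * s, y * s, z * s])
    return out
```

A Transform on top of this (rotation, scale, animated offsets) is what turns the static cloud into a moving form.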

This was actually a concession to my computer's performance. I had to increase the particle count to get a better visual effect, and my laptop clearly couldn't handle rendering that many 3D and 2D models at the same time; by the end of a run, even with the Real Time button on, the whole project was only hitting 10 fps. So I split the project into two layers. The particles on the bottom layer are more like a 'prefab': I used some quick particle-generator widgets I found on GitHub, and even though those particles can't be manipulated individually or rendered in a complex way, they definitely reduce the rendering pressure on the computer. The bottom layer is really a looping 'background video' rather than truly interactive particles.

Going back to the upper layer, this is where I focused my effort. I wanted every parameter of the particle effect to be controllable. Before connecting the OSC data, I also needed to make it 'smoother' so that none of the motion looked too abrupt. This part of the design draws heavily on what I learnt in Design Domain part 2, from the overall look down to manipulating each component and its data in TD, which for me is the advantage of visual programming.
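The 'smoothing' step above is essentially a low-pass filter, which in TD is usually a Lag or Filter CHOP. As a sketch, here is the same idea as a one-pole filter in plain Python; the class name and the 0.15 default are my own placeholders, not values from the project.

```python
class Smoother:
    """One-pole low-pass filter (like TD's Lag CHOP) that eases
    abrupt jumps in incoming OSC values."""
    def __init__(self, factor=0.15):
        self.factor = factor  # 0..1; higher = faster response, less smoothing
        self.value = None

    def feed(self, new_value):
        """Blend the new sample toward the running value and return it."""
        if self.value is None:
            self.value = new_value
        else:
            self.value += self.factor * (new_value - self.value)
        return self.value
```

Feeding each raw OSC channel through a filter like this is what keeps a sudden key press from making the particles snap instead of flow.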

In the end I chose a projector rather than a monitor for the presentation setup. Thanks to my previous experience with Kantan Mapper, this was not a difficult task, and there is no doubt the projector gave me a better visual effect.

I thought a lot about 'how to present' in this project, and the idea of improving the Design Domain work was not just a spur-of-the-moment decision. In previous projects, because TD was new to me, learning it took a long time, and the final product was not always as good as I would have liked. I'm glad I had this opportunity to keep improving on my earlier work; consolidating what I've learnt and putting it to use at the same time has been good for me.