It took me a while to get back into writing this up, but here I am doing it at last!
I’ve now finished the MSc in Pervasive Parallelism that I was working on over the past year. During this time I’ve met some good (and some crazy) people and had the opportunity to work with and learn about some cool technologies and projects.
For my thesis I developed a motion retargeting system that took advantage of the GPU via OpenCL. The thesis was titled “Parallel relationship descriptors for real-time motion adaptation of crowds”.
For this system I extended the concept of Relationship Descriptors, developed by my colleague Rami Al-Ashqar. These descriptors encode a relationship between the joints of an animation and the environment it moves through, by mapping descriptor points onto the surface of that environment. If the terrain deforms, the positions of the descriptor points are updated, and from those we can adjust the joints of the animation to match. One of the major downsides of the original system, however, is its computational cost, especially when scaling up to many characters: it could only retarget a few characters at interactive frame rates, and higher numbers of descriptors drastically affect performance as well.
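To make the idea a bit more concrete, here is a minimal sketch of what a relationship descriptor might look like. This is purely illustrative; the names and layout are my assumptions for this post, not the actual thesis data structures.

```c
/* Illustrative sketch of a relationship descriptor; field names and
 * layout are assumptions, not the thesis' actual structures. */
typedef struct {
    float pos[3];     /* sample point on the environment surface      */
    float offset[3];  /* joint position relative to that point in the
                         source animation                             */
    float weight;     /* influence of this descriptor on the joint    */
    int   joint;      /* index of the joint it constrains             */
} Descriptor;
```

In a scheme like this, when the terrain deforms only `pos` needs updating; a weighted combination of `pos + offset` over a joint's descriptors then gives that joint's adapted target position.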
For my system I evaluated all the joints across all characters simultaneously. Each character can have as many as 30 joints, so a crowd of 512 characters means 15,360 joints to retarget. On top of this, the number of descriptors (dependent on the sampling resolution and method) can be anywhere from 50 to 200, leading to over a million relationships to evaluate. Most of the steps in the process can be evaluated in parallel, and given the data-parallel nature of the task the GPU is a natural fit. By simply evaluating all the joints simultaneously and performing the IK and joint-constraining steps at the end, we achieved a 42x speed-up over a sequential version of the same system, allowing us to retarget and render crowds of over 500 characters in real time.
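As a rough illustration of that data-parallel structure, here is a sketch of an OpenCL kernel that launches one work-item per joint and accumulates that joint's descriptor contributions. It mirrors the struct sketched above in simplified form; everything here (names, data layout, the weighted-average combine) is an assumption for illustration, not the actual thesis kernel.

```c
/* Sketch only: one work-item per joint (512 characters x 30 joints
 * = 15,360 work-items), each looping over its own descriptors.
 * Names and layout are hypothetical. */
typedef struct {
    float4 pos;      /* deformed surface sample point           */
    float4 offset;   /* joint offset stored from the source     */
    float  weight;   /* influence on the joint                  */
} Descriptor;

__kernel void retarget_joints(__global const Descriptor *descs,
                              __global const int *first,   /* per-joint start index      */
                              __global const int *count,   /* per-joint descriptor count */
                              __global float4 *targets)
{
    int j = get_global_id(0);            /* one work-item per joint */
    float4 sum = (float4)(0.0f);
    float w = 0.0f;

    for (int i = 0; i < count[j]; ++i) {
        Descriptor d = descs[first[j] + i];
        sum += d.weight * (d.pos + d.offset);  /* this descriptor's prediction */
        w   += d.weight;
    }

    /* weighted average of the descriptor predictions; the IK and
     * joint-constraint steps run in later passes */
    targets[j] = sum / w;
}
```

On the host side, a kernel like this would presumably be enqueued with a global work size of 15,360 (one work-item per joint), with the IK and constraint passes following as separate steps.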
Overall we achieved the goal of the project, which was to apply motion retargeting to crowds in real time. The work I did for this project will be presented as part of a paper at the ACM SIGGRAPH Conference on Motion in Games 2015, titled “Carpet Unrolling for Character Control on Uneven Terrain”. The video for this can be seen above.
For the next stage of this project, we need to combine this system with other crowd technologies and collision avoidance to make a fully featured crowd system. There are also several parts of the system that could use further optimisation to increase performance, such as evaluating on a per-descriptor basis instead of per joint, and smarter selection of descriptors so we don’t have to filter through them all.