I mentioned the people involved in my first post, so I won't repeat that here. As for the timetable... Stephanie, who modeled the robot, spent a good bit of time iterating over 2D concepts of the bot's design, and worked with Roman directly to get what he was looking for. That process took a few days. We were then handed 64 shots that needed the bot, and the animators went to work. At this point the schedule tightened up a bit.

They passed off the texturing of the bot to me, and I think I burned through that in less than a day using Mari (5-6 hours maybe? In the words of Marvin from Hitchhiker's... "You can tell, can't you?"). The rush was picking up at that point.

As for the lighting, the scenes were very simple: usually it was just the bot, some area lights, and an HDRI (they hadn't shot HDRIs on set, so I had to find something in our library that would work). In several shots, I added rough geometry onto which I could project the footage. This geo wouldn't render, of course, but contributed to reflections and bounce lighting. The shot where the boy is touching the bot's heart is one such shot: I projected keyed footage of the boy's arm onto a deformed grid and disabled Primary Rays on it. That let me get the reflection of the boy's arm in the bot's 'mouth-visor-thingey'. But other than that, the scenes were very simple. We didn't build any full enviros or anything.
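The Primary Rays trick can be sketched as a toy example. This is illustrative Python, not Redshift's actual API (in Redshift you'd just untick the object's Primary Ray visibility in its object parameters); the names here are made up to show the idea: the projection card is skipped by camera rays but still seen by reflection and GI rays.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    primary_visible: bool = True  # analogous to Redshift's "Primary Rays" toggle

def visible_to(objects, ray_type):
    """Return the names of objects a ray of the given type can hit.

    Primary (camera) rays skip objects flagged primary_visible=False,
    but secondary rays (reflection, GI) still see them, so the object
    contributes to reflections and bounce light without rendering directly.
    """
    if ray_type == "primary":
        return [o.name for o in objects if o.primary_visible]
    return [o.name for o in objects]  # reflection / GI rays see everything

# The projected-footage card: invisible to camera, visible in reflections.
scene = [
    SceneObject("robot"),
    SceneObject("arm_projection_card", primary_visible=False),
]

print(visible_to(scene, "primary"))     # the card never renders directly
print(visible_to(scene, "reflection"))  # but the bot's visor still reflects it
```

The same pattern covers the bounce-lighting contribution: GI rays are secondary rays, so the card feeds indirect light into the scene as well.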
As for render time, it varied depending on the graphics card (we have a mix of 580s and 770s here). Generally, frames were done within a minute or two, using BF + IPC (Brute Force + Irradiance Point Cloud GI), at 1920 x 1080. The few boxes we have with RS licenses were finishing most shots within 10-15 min.

The textures for the bot were made up of 3 channels, with 3 patches each: two at 8K and one at 4K. Redshift didn't seem to care. I was even asked to render some shots at 150%-200% of HD, and one I think I rendered at 250% (4800 x 2700). I thought RS might have a problem with that, but it didn't. I think the simplicity of the scenes helped here.

Also, we knew we could do DoF and motion blur in camera, but Roman wanted to control that in post, so I rendered out Redshift depth buffers for all the shots. We got all 64 shots lit and rendered in less than 4 days, and not a single one had to be re-rendered for technical reasons. In fact, I think only 2-3 had to be re-rendered at all, and those were for artistic changes. That's very unusual for me actually, because I've grown accustomed to having to render shots 2-3 times for various reasons.
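For anyone curious how a depth buffer drives post DoF: each pixel's depth value feeds the thin-lens circle-of-confusion formula, which tells the compositor how big a blur kernel that pixel needs. This is a generic optics sketch, not the formula any particular AE plugin uses; the function and its parameter names are my own.

```python
def circle_of_confusion(depth, focus_dist, focal_len, f_stop):
    """Thin-lens circle of confusion (all distances in the same units, e.g. mm).

    depth      - camera-to-point distance, read from the depth buffer
    focus_dist - distance at which the lens is focused
    focal_len  - lens focal length
    f_stop     - f-number; aperture diameter = focal_len / f_stop
    """
    aperture = focal_len / f_stop
    return (aperture * abs(depth - focus_dist) / depth
            * focal_len / (focus_dist - focal_len))

# A point exactly on the focal plane is perfectly sharp (CoC of zero)...
print(circle_of_confusion(2000.0, 2000.0, 50.0, 2.8))
# ...while points behind (or in front of) it get progressively larger blur.
print(circle_of_confusion(4000.0, 2000.0, 50.0, 2.8))
```

Rendering the depth pass instead of baking DoF into the beauty is exactly what made Roman's "control it in post" request cheap: the focus distance and f-stop become comp-time sliders rather than re-render decisions.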
Roman White handled the compositing himself, in After Effects. By this point the time crunch was very real, but Roman is a whiz with AE and was able to tie it all up in the end.
It was a really fun project. I have to admit that I don't fully understand the video, but then again there's a line in the song that says 'if you don't get it then you don't get it' so I didn't lose too much sleep over my thick-skulled interpretation abilities.