Creating slow-motion footage is all about capturing a large number of frames per second. If you don’t record enough, then as soon as you slow down your video it becomes choppy and unwatchable. Unless, that is, you use artificial intelligence to imagine the extra frames.
New research from chip designer Nvidia does exactly that, using deep learning to turn 30-frames-per-second video into gorgeous 240-frames-per-second slow motion. Essentially, the AI system looks at two adjacent frames and then creates intermediary footage by tracking the movement of objects from one frame to the next. It’s not the same as actually imagining footage the way a human brain does, but it produces accurate (though not perfect) results.
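Nvidia’s system does this with a trained neural network, and its full pipeline is considerably more involved, but the core idea, estimating per-pixel motion between two frames and warping along it, can be sketched with classical computer-vision tools. The snippet below is a minimal illustration using OpenCV’s Farneback optical flow, not Nvidia’s method; the function name and the simple halfway-point warp are assumptions made for the example.

```python
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Roughly synthesize the frame halfway between frame_a and frame_b
    by warping frame_a along estimated optical flow. Illustrative only;
    Nvidia's system uses a learned network, not Farneback flow."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel motion from frame_a to frame_b
    # (args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Backward warp: each pixel of the mid-frame samples frame_a at the
    # point half a motion vector "upstream" (t = 0.5).
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```

Repeating this between every pair of recorded frames, and at finer time steps than t = 0.5, is what stretches 30fps footage toward 240fps. It also hints at why the results aren’t perfect: occlusions and flow-estimation errors produce visible artifacts, which is exactly what the learned approach is designed to suppress.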
The process will need...