Apply for community grant: Academic project (gpu)

#1
by tedlasai - opened

Generating the Past, Present and Future from a Motion-Blurred Image (SIGGRAPH Asia 2025, Journal)

We seek to answer the question: what can a motion-blurred image reveal about a scene's past, present, and future? Although motion blur obscures image details and degrades visual quality, it also encodes information about scene and camera motion during an exposure. Previous techniques leverage this information to estimate a sharp image from an input blurry one, or to predict a sequence of video frames showing what might have occurred at the moment of image capture. However, they rely on handcrafted priors or network architectures to resolve ambiguities in this inverse problem, and do not incorporate image and video priors on large-scale datasets. As such, existing methods struggle to reproduce complex scene dynamics and do not attempt to recover what occurred before or after an image was taken. Here, we introduce a new technique that repurposes a pre-trained video diffusion model trained on internet-scale datasets to recover videos revealing complex scene dynamics during the moment of capture and what might have occurred immediately into the past or future. Our approach is robust and versatile; it outperforms previous methods for this task, generalizes to challenging in-the-wild images, and supports downstream tasks such as recovering camera trajectories, object motion, and dynamic 3D scene structure. Code and data are available at blur2vid.github.io.

Hi @tedlasai , it seems that you already have access to ZeroGPU via your Pro subscription, but I've assigned ZeroGPU to your Space as a grant.

@hysts Hi, thank you for your response! Does this mean everyone can use an unlimited amount of ZeroGPU time?

No, the ZeroGPU grant doesn't change each user's ZeroGPU quota. It just means your Space can stay on ZeroGPU even if you cancel your Pro plan. The ZeroGPU quota is per user, not per Space, so users who want higher limits will need their own Pro plan.
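(For context: in a ZeroGPU Space, GPU time is drawn from whichever user triggers the call, via the `spaces` decorator. A minimal sketch with a stand-in model and an assumed `duration`; the real demo's pipeline would go where the placeholder is:)

```python
# Minimal sketch of a ZeroGPU Space. The model and duration below are
# placeholders, not this demo's actual values.
import spaces
import torch

model = torch.nn.Linear(4, 4)  # placeholder for the blur2vid pipeline

@spaces.GPU(duration=120)  # assumed limit; each call draws from the *caller's* quota
def infer(x: torch.Tensor) -> torch.Tensor:
    # Inside the decorated function, a CUDA device is attached.
    return model.to("cuda")(x.to("cuda")).cpu()
```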

Is there any way to set up the demo so that we pay for all usage of it? We're noticing that free users have limited ZeroGPU time.

Unfortunately, ZeroGPU quota is per user, so you can't extend the quota for other users. If you want people to use the demo without worrying about limits, the best option is to switch the Space from ZeroGPU to paid (dedicated) hardware.
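For reference, the hardware can be switched from the Space's Settings page, or programmatically with `huggingface_hub`. A minimal sketch, assuming a placeholder Space id and an example paid GPU tier:

```python
# Sketch: moving a Space from ZeroGPU to paid dedicated hardware.
# The repo_id is a placeholder; pick whichever tier fits the demo.
from huggingface_hub import HfApi, SpaceHardware

api = HfApi()  # assumes a token with write access to the Space
api.request_space_hardware(
    repo_id="tedlasai/blur2vid",        # placeholder Space id
    hardware=SpaceHardware.A10G_SMALL,  # example paid GPU tier
    sleep_time=3600,                    # suspend after 1 h idle to cap costs
)
```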

Got it. Would it make sense to switch the demo to use an inference endpoint for our use case? Or would we run into the same problem?

@hysts @nielsr Happy new year :) Could you let me know about the question above? Thanks in advance.

Happy new year! Sorry I missed this earlier.
For this use case, I think using an inference endpoint would be essentially the same as allocating paid hardware to the Space.
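To illustrate the comparison, an Inference Endpoint can be created with `huggingface_hub`; the repo, task, and instance values below are illustrative placeholders:

```python
# Sketch: serving the model from a dedicated Inference Endpoint instead
# of Space hardware. Names and instance values are placeholders.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "blur2vid-demo",                 # placeholder endpoint name
    repository="tedlasai/blur2vid",  # placeholder model repo
    framework="pytorch",
    task="image-to-video",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",              # example GPU instance
    instance_type="nvidia-a10g",
)
endpoint.wait()     # block until the endpoint is running
print(endpoint.url)
```

Either way, cost tracks instance uptime rather than per-user quota, so visitors wouldn't hit ZeroGPU limits.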
