Real-time Video Researcher

About the Role

At Pika, we are pioneering next-generation creative infrastructure built around real-time video generation and intelligent, agentic platforms. We are seeking an accomplished Real-time Video Researcher to drive forward our mission to make agentic real-time video technology accessible, dynamic, and transformative for millions of creators.

As a core member of our research team, you will be integral to designing and building foundational technologies, developing novel approaches for real-time video synthesis, and orchestrating intelligent agentic systems that power scalable, interactive multimedia experiences. You will collaborate closely with engineering and product teams, shaping the future of real-time creative platforms.

What You’ll Do

- Lead and contribute to research efforts focused on real-time video generation, streaming, editing, and orchestration of agentic platform infrastructure
- Design and prototype novel algorithms and architectures for real-time, high-fidelity video synthesis and interactive experiences
- Focus heavily on the real-time aspects of video generation and synthesis
- Work on diffusion model distillation and develop diffusion-based world models for video applications
- Train and fine-tune autoregressive and diffusion models with a focus on real-time performance
- Curate specialized datasets, especially for camera motion and human motion data
- Collaborate with cross-functional teams to bring research advancements into production-ready technologies
- Publish work in top-tier conferences and journals, and communicate results internally and externally
- Stay at the cutting edge of the field, monitoring new developments in real-time video, generative AI, multimodal systems, and agentic orchestration

What We’re Looking For

- 5+ years of relevant experience, including research during graduate studies, in real-time video generation, deep learning, or related fields such as image/audio generation, with deep experience in multimodal systems