Perplexity is excited to announce its Internship Program for exceptional Master's or PhD students studying Computer Science or Engineering in the UK, enrolled in the 2025-2026 academic year. This intensive program places you directly on our AI Inference team and offers a unique opportunity to gain valuable experience at a rapidly growing AI startup. Outstanding performers may be offered a full-time position at the end of the program.

Our AI Inference team runs the models behind Perplexity products. It maintains the inference engine, large GPU clusters, and the deployments behind models ranging from single-node embedding models to distributed sparse Mixture-of-Experts models. With a keen focus on latency and throughput, the team owns the entire serving stack, from GPU kernels to networking and monitoring infrastructure.

Responsibilities
- Work with the Inference team to improve serving latency and throughput
- Bring up support for new models and state-of-the-art inference optimizations or quantization schemes
- Optimize inference across the entire stack, from GPU kernels to serving endpoints

Qualifications
- Strong engineering track record with proven knowledge of fundamentals and programming languages (multi-threaded programming, networking, compilation, systems programming, etc.)
- Pursuing a Master's or PhD in Computer Science with a focus on performance-related subjects (HPC, compilers, distributed systems)
- Experience with ML frameworks (Torch, JAX)
- Experience with GPU programming (CUDA, Triton)
- Experience with High-Performance Computing (OpenMPI)

Schedule
Internship program: 13 weeks, full-time or part-time, in person at our London office (hybrid schedule: 3 days in the office, 2 days WFH)

Interview Process
1. Fill out the application on the Perplexity website.
2. If selected, you will complete a People Ops screen and technical interviews.
3. Offer. We're impressed! We'd love to welcome you to our Internship program.