Nvidia is hiring a
Senior Deep Learning Software Engineer, LLM Inference
We are now looking for a Deep Learning Software Engineer focused on LLM and DL inference! NVIDIA is rapidly growing its research and development for Deep Learning Inference and is seeking excellent Software Engineers at all levels of expertise to join our team. Companies around the world are using NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas like LLMs, ChatGPT, and Generative AI that have brought DL to its “iPhone moment.” Join the team that builds the software enabling the performant deployment and serving of these solutions. We specialize in developing GPU-accelerated deep learning software such as TensorRT, DL benchmarking software, and performant solutions for deploying and serving these models.
Collaborate with the deep learning community to implement the latest algorithms for public release in TensorRT and DL benchmarks. Identify performance opportunities and optimize popular and important LLM models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs. Implement optimizations using TensorRT, its open-source tools such as Polygraphy, TensorRT plugins, Triton, and CUDA kernels. Collaborate with a diverse set of teams spanning performance modeling, performance analysis, kernel development, and inference software development.
What you'll be doing:
Optimize, analyze, and tune the performance of LLM and DL models.
Scale performance of DL models across different types of accelerators.
Contribute features and code to NVIDIA’s inference benchmarking frameworks, TensorRT, Triton and LLM solutions.
Work with cross-functional teams across generative AI, automotive, image understanding, and speech understanding to develop innovative solutions.
What we need to see:
Master's or PhD degree, or equivalent experience, in a relevant field (Computer Engineering, Computer Science, EECS, AI).
At least 3 years of relevant software development experience.
You'll need excellent C/C++ programming and software design skills. Agile software development experience is helpful, and Python experience is a plus.
Prior experience with training, deploying or optimizing the inference of LLMs in production is a plus.
Prior experience with performance modeling, profiling, debugging, and code optimization, or architectural knowledge of CPUs and GPUs, is a plus.
GPU programming experience (CUDA or OpenCL) is a plus.
GPU deep learning has provided the foundation for machines to learn, perceive, reason, and solve problems posed in human language. The GPU started out as the engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA's GPUs run deep learning algorithms, simulating human intelligence, and act as the brains of computers, robots, and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture: two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and why NVIDIA is increasingly known as “the AI computing company.” Come join our DL Architecture team, where you can help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.

The base salary range is $144,000 - $270,250. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits.