Posted Aug 2

Nvidia is hiring a
Lead Software Engineer, TensorRT Inference Workflows

US, CA, Santa Clara • 3 Locations
Full time

We are looking for a Lead Software Engineer for TensorRT Inference Workflows! Would you like to make a big impact in Deep Learning by helping build a state-of-the-art inference framework for NVIDIA GPUs? We are seeking a technical lead for the TensorRT Inference Workflows software team.

What you’ll be doing:

  • Develop components of TensorRT, NVIDIA’s SDK for high-performance deep learning inference.

  • Use C++, Python and CUDA to build graph parsers, optimizers, compute kernels and tools for effective deployment of trained deep learning models.

  • Collaborate with deep learning experts, GPU architects, and DevOps engineers across diverse teams.

What we need to see:

  • BS, MS, PhD, or equivalent experience in Computer Science or Computer Engineering.

  • 10+ years of software development experience.

  • Proficiency in C++.

  • Strong grasp of Machine Learning concepts.

  • Excellent communication skills, and an aptitude for collaboration and teamwork.

Ways to stand out from the crowd:

  • Familiarity with advanced C++11/C++14 language features.

  • Experience developing System Software.

  • Experience in shipping complex software packages.

  • Proficiency in Python.

  • Experience in GPU kernel programming using CUDA or OpenCL.

  • Background in software performance benchmarking, profiling, and optimizations.

  • Experience in compiler development.

  • Background in working with TensorRT, PyTorch, TensorFlow, ONNX Runtime or other ML frameworks.

Intelligent machines powered by Artificial Intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. GPU Deep Learning has provided the foundation for machines to learn, perceive, reason, and solve problems. NVIDIA's GPUs run AI algorithms, simulating human intelligence, and act as the brains of computers, robots, and self-driving cars that can perceive and understand the world. Increasingly known as "the AI computing company", NVIDIA wants you! Come join our TensorRT Inference Architecture team, where you can help build real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field.

The base salary range is $216,000 - $414,000. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
