Posted Aug 16

NVIDIA is hiring a
Senior Deep Learning Scientist, Large Language Models

US, CA, Santa Clara • 5 Locations
Full time

Widely considered to be one of the technology world’s most desirable employers, NVIDIA is an industry leader with groundbreaking developments in High-Performance Computing, Artificial Intelligence, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, autonomous cars, and conversational AI systems that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company and build our teams with the hardest-working people in the world. Join us at the forefront of technological advancement!

NVIDIA is looking for Senior Data Scientists to develop our high-impact, high-visibility large language model product, the NeMo LLM Cloud Service, and improve the experience of millions of customers. If you're creative and passionate about solving real-world conversational AI problems, come join our NeMo LLM MLOps engineering team! For more details on the NeMo LLM Service, check out: https://www.nvidia.com/en-us/gpu-cloud/nemo-llm-service/

What you’ll be doing:

  • Develop, train, fine-tune, and deploy large language models for text completion and chat across applications including coding, NLU, NLG, IRQA, machine translation, and dialog, reasoning, and tool systems

  • Apply instruction tuning, reinforcement learning from human feedback (RLHF), and parameter-efficient fine-tuning methods such as p-tuning, adapters, and LoRA to improve LLMs for different use cases (a brief LoRA sketch follows this list)

  • Measure and benchmark model and application performance

  • Analyze model accuracy and bias, and recommend the next course of action and improvements.

  • Maintain model evaluation systems

  • Drive the gathering, building, and annotation of domain-specific datasets to train LLMs for different tasks and applications.

  • Build know-how about datasets for LLM training and evaluation.

  • Characterize performance and quality metrics across platforms for various AI and system components

  • Collaborate with various teams on new product features and improvements of existing products.

  • Participate in developing and reviewing code, design documents, use case reviews, and test plan reviews.

  • Help innovate, identify problems, recommend solutions, and perform triage in a collaborative team environment.
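
To illustrate the parameter-efficient fine-tuning called out above, here is a minimal sketch of LoRA adaptation in Python. It assumes the open-source Hugging Face transformers and peft libraries and uses a small placeholder base model (gpt2); none of these choices are prescribed by the role and are stated here only for illustration.

  # Minimal LoRA sketch (assumed stack: transformers + peft; gpt2 is a placeholder model)
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  model_name = "gpt2"  # hypothetical base model for illustration
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

  # Wrap the base model so only the low-rank adapter weights are trained.
  lora_config = LoraConfig(
      r=8,                        # rank of the LoRA update matrices
      lora_alpha=16,              # scaling factor applied to the adapter output
      target_modules=["c_attn"],  # attention projection layers in GPT-2
      lora_dropout=0.05,
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora_config)
  model.print_trainable_parameters()  # only a small fraction of weights are trainable

  # One illustrative gradient step on a toy instruction/response pair.
  batch = tokenizer("Instruction: greet the user.\nResponse: Hello!", return_tensors="pt")
  outputs = model(**batch, labels=batch["input_ids"])
  outputs.loss.backward()

In practice the adapter weights would be trained with an optimizer over an instruction-tuning dataset and then merged into, or served alongside, the frozen base model.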

What we need to see:

  • Master’s degree (or equivalent experience) or PhD in Computer Science, Electrical Engineering, Artificial Intelligence, or Applied Math

  • 5+ years of experience

  • Strong C++ programming skills

  • Excellent programming skills in Python with strong fundamentals in programming, optimizations and software design

  • Solid understanding of ML/DL techniques, algorithms, and tools, with exposure to CNNs, RNNs (LSTMs), and Transformers (BERT, BART, GPT/T5, Megatron, LLMs)

  • Hands-on experience with conversational AI technologies such as natural language understanding, natural language generation, dialog systems (including system integration, state tracking, and action prediction), information retrieval and question answering, machine translation, etc.

  • Experience training BERT, GPT, and Megatron models for different NLP and dialog system tasks using the PyTorch deep learning framework, and performing NLP data wrangling and tokenization (a brief tokenization sketch follows this list)

  • Understanding of the MLOps life cycle and experience with MLOps workflows, including traceability and versioning of datasets, along with know-how in database management and queries (SQL, MongoDB, etc.)

  • Experience using end-to-end MLOps platforms such as Kubeflow, MLflow, or Airflow

  • Strong collaborative and interpersonal skills, specifically a proven track record of effectively guiding and influencing teams within a dynamic matrixed environment
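
As a concrete illustration of the data wrangling and tokenization mentioned above, here is a minimal sketch in Python. It assumes the Hugging Face transformers and datasets libraries; the tokenizer choice (gpt2), the toy in-memory corpus, and the sequence length are hypothetical placeholders, not requirements of the role.

  # Minimal tokenization sketch (assumed stack: transformers + datasets)
  from transformers import AutoTokenizer
  from datasets import Dataset

  tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer
  tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default

  raw = Dataset.from_dict({"text": ["NVIDIA builds GPUs.", "LLMs generate text."]})

  def tokenize(batch):
      # Truncate and pad to a fixed length so examples batch cleanly for training.
      return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

  tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
  print(tokenized[0]["input_ids"][:8])

The resulting input_ids and attention_mask columns can then be fed into a PyTorch DataLoader for causal-LM fine-tuning.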

Ways to stand out from the crowd:

  • Native or near-native fluency in a non-English language: Spanish / Mandarin / German / Japanese / Russian / French / UK English / Arabic / Korean / Italian / Portuguese

  • Familiarity with GPU-based technologies such as CUDA, cuDNN, and TensorRT

  • Background in deploying machine learning models on data center, cloud, and embedded systems, and experience with Docker and Kubernetes

  • Experience applying LLMs to coding, machine translation, question answering, and dialog system modeling, including multi-modality and graceful error handling

  • Experience developing knowledge discovery and reasoning capabilities for dialog systems, including but not limited to disambiguation, clarification, and anticipation, as well as experience integrating dialog systems with various NLP and backend fulfillment systems

  • Background in adapting LLMs to different domains such as automotive, healthcare, and finance

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative, passionate and self-motivated, we want to hear from you!

The base salary range is $144,000 - $270,250. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Please mention that you found the job on ARVR OK. Thanks.