LLM Training Frameworks and Optimization Engineer
Company: Tbwa Chiat/Day Inc
Location: San Francisco
Posted on: February 1, 2025
Job Description:
About Us

At Together.ai, we are building cutting-edge infrastructure to enable
efficient and scalable training of large language models (LLMs). We
focus on optimizing training frameworks, algorithms, and
infrastructure to push the boundaries of AI performance,
scalability, and cost-efficiency. We are seeking an LLM Training
Frameworks and Optimization Engineer to drive innovations in the
development and optimization of distributed training frameworks. In
this role, you will ensure that our LLM training pipelines are
robust, efficient, and capable of handling the complexities of
large-scale distributed systems.

Responsibilities
- Framework Development and Optimization:
  - Design, implement, and optimize distributed training frameworks tailored for large language models.
  - Develop custom modules, plugins, and features to enhance framework scalability and performance.
- Algorithmic and Systems Optimization:
  - Optimize communication patterns (e.g., gradient synchronization, all-reduce) in distributed training.
  - Implement techniques like mixed precision, tensor parallelism, pipeline parallelism, and sharded training (see the sketch after this list).
- Performance Tuning:
  - Conduct in-depth profiling and debugging of training jobs to identify and resolve bottlenecks.
  - Collaborate with hardware teams to optimize performance for GPUs, TPUs, and other accelerators.
- Scalability and Resilience:
  - Ensure training systems scale efficiently to thousands of nodes and petabytes of data.
  - Develop resilience mechanisms for fault-tolerant and checkpointed training pipelines.
- Collaboration and Support:
  - Work closely with researchers, data engineers, and platform teams to ensure training frameworks meet model and workload requirements.
  - Provide guidance and tools to improve the overall efficiency of the LLM development lifecycle.
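To ground the distributed-training items above, here is a minimal sketch of two of the named techniques: DDP-style gradient synchronization (bucketed all-reduce overlapped with the backward pass) and mixed-precision training. It uses standard PyTorch APIs on a toy model with random data; the file name ddp_amp_sketch.py and the model/optimizer choices are illustrative assumptions, not a description of Together.ai's actual stack.

```python
# Hedged sketch: mixed-precision training under DistributedDataParallel.
# Toy model and random data; launch with, e.g.:
#   torchrun --nproc_per_node=2 ddp_amp_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    use_cuda = torch.cuda.is_available()
    # torchrun sets RANK/WORLD_SIZE/MASTER_ADDR/etc. in the environment.
    dist.init_process_group(backend="nccl" if use_cuda else "gloo")
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    device = torch.device(f"cuda:{local_rank}" if use_cuda else "cpu")
    if use_cuda:
        torch.cuda.set_device(device)

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).to(device)
    # DDP buckets gradients and overlaps all-reduce with the backward
    # pass, hiding communication behind computation.
    model = DDP(model, device_ids=[local_rank] if use_cuda else None)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # GradScaler guards fp16 gradients against underflow; it is a
    # pass-through when disabled (the CPU fallback here).
    scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

    for _ in range(3):
        x = torch.randn(8, 1024, device=device)
        with torch.autocast(device_type=device.type, enabled=use_cuda):
            loss = model(x).pow(2).mean()
        scaler.scale(loss).backward()
        scaler.step(opt)
        scaler.update()
        opt.zero_grad(set_to_none=True)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched under torchrun, each process trains on its own data while DDP keeps gradients synchronized across ranks.
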
Qualifications

Must-Have:
- Experience:
  - 5+ years of experience in deep learning frameworks, distributed systems, or machine learning infrastructure.
- Technical Skills:
  - Expertise in distributed training frameworks (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, TensorFlow XLA).
  - Strong understanding of parallelism techniques (e.g., data, tensor, pipeline, and ZeRO-based parallelism).
  - Familiarity with GPU/TPU hardware and deep learning performance optimizations.
- Programming:
  - Proficiency in Python and C++ or CUDA for high-performance computing.
  - Experience with memory optimization techniques (e.g., activation checkpointing, gradient sharding); see the sketch after this list.
  - Knowledge of training dynamics for large-scale LLMs, including hyperparameter tuning and optimization.
- Soft Skills:
  - Analytical problem-solving skills and a focus on performance improvement.
  - Strong collaboration and communication skills across teams.
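As a concrete illustration of the memory-optimization item above, the sketch below applies activation checkpointing via torch.utils.checkpoint: each wrapped block saves only its inputs and recomputes its activations during the backward pass, trading extra compute for a much smaller activation footprint. The toy block stack and sizes are invented for the example.

```python
# Hedged sketch: activation checkpointing with torch.utils.checkpoint.
import torch
from torch.utils.checkpoint import checkpoint

blocks = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU())
    for _ in range(8)
)

def forward(x):
    for block in blocks:
        # Each checkpointed block discards intermediate activations in
        # the forward pass; use_reentrant=False is the modern code path.
        x = checkpoint(block, x, use_reentrant=False)
    return x

x = torch.randn(4, 512, requires_grad=True)
loss = forward(x).sum()
loss.backward()  # activations inside each block are recomputed here
print(x.grad.shape)
```
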
Nice-to-Have:
- Familiarity with graph optimization and compiler-level performance tuning.
- Contributions to open-source deep learning or distributed training projects.
- Experience with low-level hardware optimizations (e.g., kernel fusion, custom CUDA kernels); a small fusion sketch follows this list.
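For flavor on the kernel-fusion item, here is a small sketch that leans on torch.compile rather than a hand-written CUDA kernel: the compiler can fuse the chain of elementwise ops in a tanh-approximate GELU into fewer kernels, cutting memory round-trips. The bias-GELU function is an invented example, not project code.

```python
# Hedged sketch: letting a compiler fuse elementwise kernels. Not a
# hand-written CUDA kernel, but it pursues the same goal: fewer, larger
# kernels and fewer trips through device memory.
import torch

def gelu_bias(x, bias):
    # Eagerly, each op below launches its own kernel and re-reads memory;
    # torch.compile's Inductor backend can fuse the chain into one kernel.
    y = x + bias
    return 0.5 * y * (1.0 + torch.tanh(0.7978845608 * (y + 0.044715 * y**3)))

fused = torch.compile(gelu_bias)

x = torch.randn(1024, 1024)
bias = torch.randn(1024)
print(fused(x, bias).shape)
```
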
About Together AI

Together AI is a research-driven artificial intelligence company. We believe open
and transparent AI systems will drive innovation and create the
best outcomes for society, and together we are on a mission to
significantly lower the cost of modern AI systems by co-designing
software, hardware, algorithms, and models. We have contributed to
leading open-source research, models, and datasets to advance the
frontier of AI.

Compensation

We offer competitive compensation,
startup equity, health insurance, and other competitive benefits.
The US base salary range for this full-time position is: $160,000 -
$230,000 + equity + benefits.

Together AI is an Equal Opportunity
Employer and is proud to offer equal employment opportunity to
everyone regardless of race, color, ancestry, religion, sex,
national origin, sexual orientation, age, citizenship, marital
status, disability, gender identity, veteran status, and more.