Senior Software Engineer, Model Inference Job at Apple Inc., San Francisco, CA


Job Description

Senior Software Engineer, Model Inference


San Francisco Bay Area, California, United States · Software and Services

Join Apple Maps to help build the best map in the world. In this role on ML Platform, you will help bring advanced deep learning and large language models into high-volume, low-latency, highly available production serving, improving search quality and powering experiences across Maps. You will partner closely with research and product teams, take end-to-end ownership, and deliver measurable results at global scale.

Description


As a Software Engineer on the Apple Maps team, you will lead the design and implementation of large-scale, high-performance inference services that support a wide range of models used across Maps, including deep learning and large language models. You will collaborate closely with research and product partners to bring models into production, with a strong focus on efficiency, reliability, and scalability. Your responsibilities span the full server stack, including onboarding new use cases, optimizing inference across heterogeneous accelerated compute hardware, deploying services on Kubernetes, building and integrating inference engines and control-plane components, and ensuring seamless integration with Maps infrastructure.

Responsibilities



  • Own the technical architecture of large-scale ML inference platforms, defining long-term design direction for serving deep learning and large language models across Apple Maps.

  • Lead system-level optimization efforts across the inference stack, balancing latency, throughput, accuracy, and cost through advanced techniques such as quantization, kernel fusion, speculative decoding, and efficient runtime scheduling.

  • Design and evolve control-plane services responsible for model lifecycle management, including deployment orchestration, versioning, traffic routing, rollout strategies, capacity planning, and failure handling in production environments.

  • Drive adoption of platform abstractions and standards that enable partner teams to onboard, deploy, and operate models reliably and efficiently at scale.

  • Partner closely with research, product, and infrastructure teams to translate model requirements into production-ready systems, providing technical guidance and feedback to influence upstream model design.

  • Optimize inference execution across heterogeneous compute environments, including GPUs and specialized accelerators, collaborating with runtime, compiler, and kernel teams to maximize hardware utilization.

  • Establish robust observability and performance diagnostics, defining metrics, dashboards, and profiling workflows to proactively identify bottlenecks and guide optimization decisions.

  • Provide technical leadership and mentorship, reviewing designs, setting engineering best practices, and raising the quality bar across teams contributing to the inference ecosystem.

  • Continuously evaluate emerging research and industry trends in LLM inference, distributed systems, and ML infrastructure, driving the transition of high-impact ideas into production systems.

Minimum Qualifications



  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

  • 5+ years in software engineering focused on ML inference, GPU acceleration, and large-scale systems.

  • Expertise in deploying and optimizing LLMs for high-performance, production-scale inference.

  • Proficiency in Python, Java, or C++.

  • Experience with deep learning frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers.

  • Experience with model serving tools (e.g., NVIDIA Triton, TensorFlow Serving, vLLM).

  • Experience with optimization techniques such as attention fusion, quantization, and speculative decoding.

  • Skilled in GPU optimization (e.g., CUDA, TensorRT-LLM, cuDNN) to accelerate inference tasks.

  • Skilled in cloud technologies such as Kubernetes, Ingress, and HAProxy for scalable deployment.

Preferred Qualifications



  • Master’s or PhD in Computer Science, Machine Learning, or a related field.

  • Understanding of ML Ops practices, continuous integration, and deployment pipelines for machine learning models.

  • Familiarity with model distillation, low-rank approximations, and other model compression techniques for reducing memory footprint and improving inference speed.

  • Strong understanding of distributed systems, multi-GPU/multi-node parallelism, and system-level optimization for large-scale inference.

Compensation and Benefits


At Apple, base pay is one part of our total compensation package and is determined within a range. The base pay range for this role is between $181,100 and $318,400, and your base pay will depend on your skills, qualifications, experience, and location.

Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. Additional benefits include comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and reimbursement for certain educational expenses—including tuition. This role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.

Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.

Apple accepts applications to this posting on an ongoing basis.


