Research Engineer

2 days ago


San Jose, Philippines ByteDance Full time

Research Engineer / Scientist - Storage for LLM
Location: San Jose
Team: Infrastructure
Employment Type: Regular
Job Code: A

Responsibilities
• Design and implement a distributed KV cache system to store and retrieve intermediate states (e.g., attention keys/values) for transformer-based LLMs across GPUs or nodes.
• Optimize low-latency access and eviction policies for caching long-context LLM inputs, token streams, and reused embeddings.
• Collaborate with inference and serving teams to integrate the cache with token streaming pipelines, batched decoding, and model parallelism.
• Develop cache consistency and synchronization protocols for multi-tenant, multi-request environments.
• Implement memory-aware sharding, eviction (e.g., windowed LRU, TTL), and replication strategies across GPUs or distributed memory backends (a minimal eviction sketch follows this posting).
• Monitor system performance and iterate on caching algorithms to reduce compute costs and response time for inference workloads.
• Evaluate and, where needed, extend open-source KV stores or build custom GPU-aware caching layers (e.g., CUDA, Triton, shared memory, RDMA).

Qualifications

Minimum Qualifications
• PhD in Computer Science, Applied Mathematics, Electrical Engineering, or a related technical field.
• Strong understanding of transformer-based model internals and how KV caching affects autoregressive decoding.
• Experience with distributed systems, memory management, and low-latency serving (RPC, gRPC, CUDA-aware networking).
• Familiarity with high-performance compute environments (NVIDIA GPUs, TensorRT, Triton Inference Server).
• Proficiency in languages such as C++, Rust, Go, or CUDA for systems-level development.

Preferred Qualifications
• Prior experience building inference-serving systems for LLMs (e.g., vLLM, SGLang, FasterTransformer, DeepSpeed, Hugging Face Text Generation Inference).
• Experience with memory hierarchy optimization (HBM, NUMA, NVLink) and GPU-to-GPU communication (NCCL, GDR, GDS, InfiniBand).
• Exposure to cache-aware scheduling, batching, and prefetching strategies in model serving.

Job Information
The base salary range for this position in the selected city is $136,800 - $359,720 annually. Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies, experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and restricted stock units.

Benefits
Employees have day-one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, and wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year, and 17 days of Paid Personal Time (prorated upon hire, with accruals increasing by tenure). The Company reserves the right to modify or change these benefits programs at any time, with or without notice.

EEO Statement
For Los Angeles County (unincorporated) Candidates: Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
Our company believes that criminal history may have a direct, adverse and negative relationship on the following job duties, potentially resulting in the withdrawal of the conditional offer of employment: Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues; Appropriately handling and managing confidential information including proprietary and trade secret information and access to information technology systems; Exercising sound judgment.
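To make the eviction and sharding responsibilities above more concrete, here is a minimal, illustrative Python sketch of a host-side index for a block-level KV cache with TTL plus windowed-LRU eviction. It is not ByteDance's implementation: the names (KVBlockCache, BlockKey, capacity_blocks, ttl_seconds, window) are hypothetical, and "windowed" is interpreted here as protecting the most recent blocks of each request from LRU eviction, since the posting does not specify the exact policy.

```python
# Minimal, illustrative sketch (not a production implementation): a host-side
# index for a block-level KV cache with TTL + windowed-LRU eviction.
# All names here (KVBlockCache, BlockKey, ...) are hypothetical.
import time
from collections import OrderedDict
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class BlockKey:
    request_id: str   # which request / sequence the block belongs to
    layer: int        # transformer layer index
    block_idx: int    # position of the token block within the sequence


class KVBlockCache:
    """Tracks cached KV blocks; evicts expired blocks first (TTL), then
    least-recently-used blocks outside each request's recent window."""

    def __init__(self, capacity_blocks: int, ttl_seconds: float, window: int):
        self.capacity = capacity_blocks   # max number of resident blocks
        self.ttl = ttl_seconds            # staleness bound for cached blocks
        self.window = window              # recent blocks per request protected from LRU
        # OrderedDict keeps recency order: front = LRU, back = MRU.
        self._entries: "OrderedDict[BlockKey, tuple[object, float]]" = OrderedDict()

    def get(self, key: BlockKey) -> Optional[object]:
        entry = self._entries.get(key)
        if entry is None:
            return None
        block, inserted_at = entry
        if time.monotonic() - inserted_at > self.ttl:
            del self._entries[key]        # expired: treat as a miss
            return None
        self._entries.move_to_end(key)    # refresh recency on hit
        return block

    def put(self, key: BlockKey, block: object) -> None:
        self._entries[key] = (block, time.monotonic())
        self._entries.move_to_end(key)
        self._evict_if_needed()

    def _evict_if_needed(self) -> None:
        now = time.monotonic()
        # Pass 1 (TTL): drop expired blocks regardless of recency.
        for k in [k for k, (_, t) in self._entries.items() if now - t > self.ttl]:
            del self._entries[k]
        if len(self._entries) <= self.capacity:
            return
        # Pass 2 (windowed LRU): walk entries from least to most recently used,
        # evicting blocks that are NOT within the most recent `window` token
        # blocks of their request (those are likely still needed for decoding).
        newest = {}  # request_id -> highest cached block_idx
        for k in self._entries:
            newest[k.request_id] = max(newest.get(k.request_id, -1), k.block_idx)
        for k in list(self._entries):
            if len(self._entries) <= self.capacity:
                break
            if k.block_idx > newest[k.request_id] - self.window:
                continue                  # inside the protected window: keep
            del self._entries[k]
```

In a system of the kind described above, the actual key/value tensors would live in GPU HBM or RDMA-accessible memory rather than Python objects, keys would be sharded across GPUs or nodes, and eviction would be coordinated with the scheduler so that blocks still needed by in-flight decodes are never dropped; the TTL pass mainly bounds staleness for multi-tenant reuse.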



  • San Jose, Philippines ByteDance Full time

    Location: San Jose Team: Algorithm Employment Type: Regular Job Code: A A Responsibilities We are seeking a highly skilled and motivated AI Infrastructure Engineer to join our dynamic team. In this role, you will be responsible for designing, building, deploying, and maintaining the robust and scalable infrastructure that powers our...


  • San Jose, Philippines ByteDance Full time

    Research Scientist in Generative AI for Science Location: San Jose Team: Technology Employment Type: Regular Job Code: A96227 Responsibilities Design and develop generative AI models for natural sciences, including protein structure prediction, molecular conformation analysis, and computational protein design. Reproduce and evaluate...


  • San Jose, Philippines ByteDance Full time

    Research Scientist Graduate (AI-Native Database Systems) - 2026 Start (PhD) Location: San Jose Team: Infrastructure Employment Type: Regular Job Code: A14072 Responsibilities We are looking for talented individuals to join us in 2026. As a graduate, you will get unparalleled opportunities to kickstart your career, pursue bold ideas and explore...


  • San Jose, Philippines ByteDance Full time

    AI Vision Research Engineer - Graduate - Pico 2026 Start (PhD) Location: San Jose Team: Technology Employment Type: Regular Job Code: A A Overview About Team: PICO is a leading VR/AR platform with independent innovation and R&D capabilities, focusing on VR all-in-one technology. PICO is committed to offering immersive and interactive VR experiences to our...

  • Research Scientist

    4 days ago


    San Jose, Philippines ByteDance Full time

    Research Scientist - Machine Learning System Location: San Jose Team: Technology Employment Type: Regular Job Code: A Responsibilities AML-MLsys combines system engineering and the art of machine learning to develop and maintain massively distributed ML training and inference system/services around the world, providing high-performance, highly reliable,...


  • San Jose, Philippines ByteDance Full time

    Research Scientist in Large Language Model, Doubao - PhD Graduates - 2025 Start Location: San Jose Team: Technology Employment Type: Regular Job Code: A24286A Overview Founded in 2023, ByteDance Doubao Team is dedicated to crafting the industry's most advanced LLMs. We aim to lead global research and foster both technological and social progress. With a...


  • San Jose, Philippines Lenovo Full time

    Lenovo is a global technology powerhouse generating US$69 billion in revenue and serving millions of customers worldwide. We are focused on delivering “Smarter Technology for All” through hybrid AI, innovation, and a full‑stack portfolio of devices, infrastructure, software, solutions, and services. About LATC Lenovo’s AI Technology Center (LATC)...

  • Student Researcher

    4 days ago


    San Jose, Philippines ByteDance Full time

    Student Researcher (Seed Vision – AI Platform) – 2026 Start (PhD) Location: San Jose Team: Technology Employment Type: Intern Job Code: A B Responsibilities The Seed Vision AI Platform team builds infrastructure and tooling to support large-scale training, evaluation, and deployment of vision foundation models. Our mission is to accelerate research and...

  • Student Researcher

    4 days ago


    San Jose, Philippines ByteDance Full time

    Student Researcher (Seed LLM - Code Generation) – 2026 Start (PhD) Location: San Jose Team: Technology Employment Type: Intern Responsibilities Develop methods for code generation and editing using large language models, including improving performance on tasks such as synthesis, repair, documentation, and test generation. Conduct research on...

  • Student Researcher

    2 days ago


    San Jose, Philippines ByteDance Full time

    Student Researcher (Doubao (Seed) - Foundation Model - Vision Generative AI) - 2025 Start (PhD) Location: San Jose Team: Technology Employment Type: Intern Job Code: A Responsibilities Conduct cutting‑edge research and development in computer vision and machine learning, especially in generative AI and data‑centric research. Solve unique, large‑scale,...