Notion Slashes AI Embedding Costs 80% After Ditching Spark for Ray

WikiBit 2026-04-11 04:00

James Ding Apr 09, 2026 16:48 Notion migrated from Spark on EMR to Ray, cutting embedding costs 80% and improving query latency 10x.

Notion has slashed its AI embedding pipeline costs by more than 80% after migrating from Apache Spark to Ray, the distributed computing framework backed by Anyscale. The productivity software company also achieved 10x improvements in query latency while consolidating three separate jobs per region into one.

The migration details emerged at Ray Day Seattle on April 9, 2026, where ML engineers from Notion, Uber, Salesforce, and Apple shared hard-won lessons about scaling AI infrastructure.

What Notion Actually Changed

Mickey Liu, a software engineer on Notion's search platform team, walked through the overhaul. Their original setup used a three-step Spark pipeline running on Amazon EMR: data chunking, third-party API calls for embedding generation, and writes to a vector store.

The pain points were predictable but severe. Double compute costs. Third-party API rate limits throttling throughput. Debugging nightmares when failures occurred across tools—driver and executor logs weren't even persisted in YARN.

The new architecture streams Kafka data directly into a Ray cluster handling CPU chunking, GPU embedding generation, and vector store writes in a single pipeline. No intermediate S3 handoffs. What started as the backend for a Q&A feature in 2023 now powers all of Notion AI and custom agents.
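The single-pipeline shape described above can be sketched in plain Python. This is an illustrative stand-in, not Notion's actual code: the `chunk`, `embed`, and `write_vectors` names are invented here, and in production each stage would run as a distributed transformation inside the Ray cluster over a Kafka stream rather than in one process.

```python
# Illustrative single-pass pipeline: chunk -> embed -> write,
# with no intermediate storage handoff between stages.

def chunk(text, size=20):
    """Split a document into fixed-size character chunks
    (stand-in for the CPU-side chunking stage)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunks):
    """Stand-in for the GPU embedding stage; returns one vector
    per chunk. A real pipeline would call a model here."""
    return [[float(len(c))] for c in chunks]

def write_vectors(store, doc_id, vectors):
    """Stand-in for the vector-store write stage."""
    store[doc_id] = vectors

store = {}
for doc_id, text in [("page-1", "Notion pages become embeddings for search.")]:
    vectors = embed(chunk(text))           # stages run back to back,
    write_vectors(store, doc_id, vectors)  # no S3 handoff in between
```

The point of the shape is that each document flows through all three stages in one pass, which is what removes the intermediate S3 writes and the three-jobs-per-region duplication.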

Uber and Salesforce Report Similar Gains

Uber's Peng Zhang detailed how their Michelangelo ML platform evolved from TensorFlow/Horovod to Ray with PyTorch. The standout move: separating CPU data-loading nodes from GPU training nodes in a heterogeneous cluster design. Result? GPU utilization jumped 20%, and training time dropped roughly 50% in select pipelines.
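The heterogeneous-cluster idea reduces to a producer/consumer pattern: dedicated data-loading workers keep a bounded buffer of ready batches so the accelerator never idles waiting on preprocessing. A minimal sketch with threads standing in for CPU and GPU nodes (names and sizes are illustrative, not Uber's implementation):

```python
import queue
import threading

batches = queue.Queue(maxsize=4)  # bounded buffer between loader and trainer
SENTINEL = None

def cpu_loader(num_batches):
    # Stand-in for the CPU data-loading/preprocessing nodes.
    for i in range(num_batches):
        batches.put([i] * 8)  # a "preprocessed batch"
    batches.put(SENTINEL)     # signal end of data

def gpu_trainer(results):
    # Stand-in for the GPU training node consuming ready batches.
    while True:
        batch = batches.get()
        if batch is SENTINEL:
            break
        results.append(sum(batch))  # stand-in for one training step

results = []
loader = threading.Thread(target=cpu_loader, args=(5,))
trainer = threading.Thread(target=gpu_trainer, args=(results,))
loader.start(); trainer.start()
loader.join(); trainer.join()
```

Because loading and training overlap instead of alternating, the "GPU" side spends its time on training steps, which is the mechanism behind the utilization gain.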

Salesforce tackled a different beast—summarizing documents up to 200,000 tokens long (roughly a short novel) with P95 latency under 15 seconds. Their team used Ray to chunk documents and run parallel inference across a distributed actor pool with vLLM, then merge results. They landed on 1-2 GPU data parallelism as the sweet spot after running scaling experiments directly on Ray.
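The chunk, fan-out, and merge pattern can be sketched with a thread pool standing in for Salesforce's distributed Ray actor pool; `summarize_chunk` is a toy in place of a vLLM inference call, and all names here are illustrative assumptions rather than their actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def split_tokens(tokens, chunk_size):
    """Split a token list into chunks small enough for one model call."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

def summarize_chunk(chunk):
    """Stand-in for one inference call on a single chunk."""
    return f"{len(chunk)}-token summary"

def summarize_document(tokens, chunk_size=1000, workers=4):
    chunks = split_tokens(tokens, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(summarize_chunk, chunks))  # parallel fan-out
    return " | ".join(partials)  # merge step; real pipelines re-summarize

doc = list(range(2500))  # a toy "2,500-token" document
summary = summarize_document(doc)
```

Running per-chunk inference in parallel is what keeps tail latency bounded on 200,000-token inputs: the wall-clock cost is roughly one chunk's inference time plus the merge, not the sum over all chunks.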

Why This Matters Beyond These Companies

Robert Nishihara, Rays co-creator and Anyscale co-founder, opened the event by framing the core problem: AI infrastructure keeps getting harder. Multimodal data processing, reinforcement learning workloads, and multi-node LLM inference are pushing existing tools past their limits.

Every speaker landed on the same conclusion from different angles—their previous tooling ran out of road.

Apple engineers Charlie Chen and Haocheng Bian highlighted foundation model training challenges: massive unstructured data, billion-plus parameters, and sparse architectures like Mixture of Experts. Traditional engines fail because data pipelines and training frameworks run in separate environments with no shared context.

What's Next

Ray Day Seattle kicked off Anyscale's 2026 “Ray on the Road” tour—eight cities across three countries. The company is also running invite-only customer roundtables at each stop to preview their product roadmap.

