Keywords: parallel jobs; scheduling; large-scale data; analytics frameworks
Abstract: Large-scale data analytics frameworks are shifting towards shorter task durations and larger degrees of parallelism to provide low latency. However, scheduling highly parallel jobs that complete in hundreds of milliseconds poses a major challenge for cluster schedulers, which will need to place millions of tasks per second on appropriate nodes while offering millisecond-level latency and high availability. We demonstrate that a decentralized, randomized sampling approach provides near-optimal performance while avoiding the throughput and availability limitations of a centralized design. We implement and deploy our scheduler, Sparrow, on a real cluster and demonstrate that Sparrow performs within 14% of an ideal scheduler.
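The core of the randomized sampling approach can be illustrated with a minimal power-of-two-choices sketch. This is a hypothetical illustration, not Sparrow's actual implementation (which probes worker queues over RPC); all names here are invented for the example:

```python
import random

def sample_and_place(task, worker_queues, d=2):
    """Place a task on the least-loaded of d randomly sampled workers.

    Illustrative power-of-two-choices sampling: probing a small random
    subset of workers and picking the shortest queue avoids the need
    for a centralized scheduler with a global view of the cluster.
    """
    probes = random.sample(range(len(worker_queues)), d)
    chosen = min(probes, key=lambda w: len(worker_queues[w]))
    worker_queues[chosen].append(task)
    return chosen

# Usage: place 100 tasks across 10 workers, each with a pending-task queue.
queues = [[] for _ in range(10)]
for t in range(100):
    sample_and_place(f"task-{t}", queues)
```

Sampling d = 2 workers per task (rather than one) is the key design choice: it exponentially reduces the expected maximum queue length compared to purely random placement, at the cost of only one extra probe.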