Distributed Machine Learning with Python: Accelerating Model Training and Serving with Distributed Systems


Full description

Bibliographic details
Main author: Wang, Guanhua
Format: Licensed eBooks
Language: English
Published: Birmingham : Packt Publishing, Limited, 2022.
Online access: https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=3242106
Description
Summary: Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter server -- Defining the worker -- Passing data between the parameter server and worker -- Issues with the parameter server -- The parameter server architecture introduces a high coding complexity for practitioners -- All-Reduce architecture -- Reduce -- All-Reduce -- Ring All-Reduce.
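The chapter outline above ends with Ring All-Reduce. As a rough illustration of that technique (this sketch is not taken from the book; it is a generic single-process simulation of the sum variant, with plain Python lists standing in for per-worker gradient tensors), each of the N workers splits its vector into N chunks, then runs N-1 scatter-reduce steps followed by N-1 all-gather steps around the ring:

```python
def ring_all_reduce(tensors):
    """Simulate Ring All-Reduce (sum) over N equal-length vectors.

    Hypothetical single-process sketch: bufs[r] plays the role of
    worker r's local buffer; "sending" a chunk is modeled as a direct
    read from the neighbor's list.
    """
    n = len(tensors)                       # number of workers in the ring
    bufs = [list(t) for t in tensors]      # each worker's local copy
    size = len(bufs[0])
    assert size % n == 0, "vector length must be divisible by worker count"
    c = size // n                          # elements per chunk

    def bounds(idx):
        s = (idx % n) * c                  # Python's % keeps this non-negative
        return s, s + c

    # Phase 1 -- scatter-reduce: at each step, worker r sends chunk
    # (r - step) to its right neighbor, which accumulates it. After
    # n - 1 steps, worker r holds the fully summed chunk (r + 1) mod n.
    for step in range(n - 1):
        for r in range(n):
            s, e = bounds(r - step)
            dst = (r + 1) % n
            for i in range(s, e):
                bufs[dst][i] += bufs[r][i]

    # Phase 2 -- all-gather: circulate the completed chunks around the
    # ring so every worker ends up with the full reduced vector.
    for step in range(n - 1):
        for r in range(n):
            s, e = bounds(r + 1 - step)
            dst = (r + 1) % n
            bufs[dst][s:e] = bufs[r][s:e]

    return bufs
```

Each worker sends and receives only `size / n` elements per step, which is why the ring variant avoids the single-node bandwidth bottleneck that the chapter attributes to the parameter server architecture.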
Item description: Pros and cons of pipeline parallelism.
Physical description: 1 online resource (284 pages) : color illustrations
ISBN: 1801817219
9781801817219
9781801815697