Distributed machine learning with Python : accelerating model training and serving with distributed systems /
Main Author: | |
---|---|
Format: | Licensed eBooks |
Language: | English |
Published: | Birmingham : Packt Publishing, Limited, 2022. |
Online Access: | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=3242106 |
Summary: | Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter server -- Defining the worker -- Passing data between the parameter server and worker -- Issues with the parameter server -- The parameter server architecture introduces a high coding complexity for practitioners -- All-Reduce architecture -- Reduce -- All-Reduce -- Ring All-Reduce. |
Item Description: | Pros and cons of pipeline parallelism. |
Physical Description: | 1 online resource (284 pages) : color illustrations |
ISBN: | 1801817219 9781801817219 9781801815697 |
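The chapter summary above lists "Sharding the model among parameter servers." A minimal sketch of that idea, not taken from the book: each parameter is routed by name to one of several server shards, so no single server stores or serves the whole model. All class and function names here are illustrative assumptions.

```python
import zlib


class ParameterServerShard:
    """Illustrative shard: holds one slice of the model's parameters
    and applies plain-SGD updates pushed by workers."""

    def __init__(self):
        self.params = {}

    def push(self, name, grad, lr=0.1):
        # Worker pushes a gradient; the shard updates its copy of the
        # parameter with a simple SGD step (assumed update rule).
        self.params[name] = self.params.get(name, 0.0) - lr * grad

    def pull(self, name):
        # Worker pulls the current value of a parameter it needs.
        return self.params.get(name, 0.0)


def shard_for(name, num_shards):
    # Route each parameter to a shard by a deterministic hash of its
    # name, spreading communication load across servers.
    return zlib.crc32(name.encode()) % num_shards


# Usage: two shards; a worker pushes a gradient for one parameter.
shards = [ParameterServerShard() for _ in range(2)]
target = shards[shard_for("layer1.weight", len(shards))]
target.push("layer1.weight", grad=1.0)
```

In a real deployment the `push`/`pull` calls would be RPCs to separate server processes; the hashing scheme is one common way to balance parameters across shards.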
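The summary also ends with "Ring All-Reduce." As a rough illustration (not the book's implementation), the algorithm splits each worker's gradient vector into one chunk per worker, then runs n-1 scatter-reduce steps followed by n-1 all-gather steps around a ring, so every worker ends with the full sum while each link carries only one chunk per step. This single-process simulation assumes the vector length divides evenly by the worker count.

```python
def ring_all_reduce(worker_grads):
    """Simulate Ring All-Reduce: sum equal-length gradient vectors
    across workers arranged in a logical ring."""
    n = len(worker_grads)
    length = len(worker_grads[0])
    assert length % n == 0, "vector must split evenly into n chunks"
    chunk = length // n
    # Copy inputs so callers' buffers are left untouched.
    bufs = [list(g) for g in worker_grads]

    def idx(c):
        # Index range of chunk c (mod n) within a flat vector.
        c %= n
        return range(c * chunk, (c + 1) * chunk)

    # Scatter-reduce: at step s, worker i sends chunk (i - s) to its
    # ring neighbor, which adds it in. After n-1 steps, worker i holds
    # the fully summed chunk (i + 1) mod n.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            for j in idx(i - s):
                bufs[dst][j] += bufs[i][j]

    # All-gather: circulate the fully reduced chunks around the ring
    # (overwriting, not adding) until every worker has the whole sum.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            for j in idx(i + 1 - s):
                bufs[dst][j] = bufs[i][j]

    return bufs
```

Production systems hand this pattern to a collective-communication library (e.g. NCCL) rather than coding it by hand; the key property is that per-worker bandwidth stays roughly constant as workers are added.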