Distributed machine learning with Python : accelerating model training and serving with distributed systems /
Main Author:
Format: Licensed eBooks
Language: English
Published: Birmingham : Packt Publishing, Limited, 2022.
Online Access: https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=3242106
Summary: Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter server -- Defining the worker -- Passing data between the parameter server and worker -- Issues with the parameter server -- The parameter server architecture introduces a high coding complexity for practitioners -- All-Reduce architecture -- Reduce -- All-Reduce -- Ring All-Reduce.
Item Description: Pros and cons of pipeline parallelism.
Physical Description: 1 online resource (284 pages) : color illustrations
ISBN: 1801817219; 9781801817219; 9781801815697