TY - GEN
T1 - Distributed machine learning with Python : accelerating model training and serving with distributed systems
A1 - Wang, Guanhua
LA - English
PP - Birmingham
PB - Packt Publishing, Limited
YR - 2022
UL - https://ebooks.jgu.edu.in/Record/ebsco_acadsubs_on1312162521
AB - Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter server -- Defining the worker -- Passing data between the parameter server and worker -- Issues with the parameter server -- The parameter server architecture introduces a high coding complexity for practitioners -- All-Reduce architecture -- Reduce -- All-Reduce -- Ring All-Reduce.
OP - 284
NO - Pros and cons of pipeline parallelism.
CN - Q325.5
SN - 1801817219
SN - 9781801817219
SN - 9781801815697
KW - Machine learning.
KW - Python (Computer program language)
KW - Apprentissage automatique.
KW - Python (Langage de programmation)
KW - Machine learning
ER -