Distributed Machine Learning with Python: Accelerating Model Training and Serving with Distributed Systems

Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter...

Full description

Bibliographic details
Main author: Wang, Guanhua
Format: Licensed eBooks
Language: English
Published: Birmingham : Packt Publishing, Limited, 2022.
Electronic access: https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=3242106