Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

Code: 04834856

Authors: Stephen Boyd, Neal Parikh, Eric Chu

Price: 2396 Kč


In stock at the supplier
Dispatched within 14-18 days

Book annotation

Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets and the accompanying distributed solution methods are either necessary or at least highly desirable. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers argues that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, it discusses applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. It also discusses general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
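
For context, a minimal statement of the iteration the annotation refers to (the standard scaled-form ADMM updates; this sketch is not part of the publisher's text): for the problem

\begin{aligned}
& \text{minimize}   && f(x) + g(z) \\
& \text{subject to} && Ax + Bz = c,
\end{aligned}

ADMM alternates the updates

\begin{aligned}
x^{k+1} &:= \operatorname*{argmin}_{x} \Big( f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2 \Big), \\
z^{k+1} &:= \operatorname*{argmin}_{z} \Big( g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2 \Big), \\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}

where \rho > 0 is a penalty parameter and u^{k} is the scaled dual variable. For the lasso, for example, the x-update reduces to a regularized least-squares solve and the z-update to elementwise soft thresholding.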

Book details

Classification: Books in English > Computing & information technology > Computer science > Artificial intelligence
