Models, Inference and Algorithms
Broad Institute of MIT and Harvard
September 26, 2018
MIA Meeting: [ Link ]
Yaron Singer
Harvard CS
Maximizing submodular functions exponentially faster
Abstract: In this talk I’ll describe a novel approach called adaptive sampling that yields algorithms whose parallel running time is exponentially faster than that of previous algorithms for a broad range of machine learning applications. The algorithms are designed for submodular function maximization, which is the algorithmic engine behind applications such as clustering, network analysis, feature selection, Bayesian inference, ranking, speech and document summarization, recommendation systems, hyperparameter tuning, and many others. Since applications of submodular functions are ubiquitous across machine learning and data sets keep growing, there is consistent demand for faster submodular maximization algorithms. In this talk I’ll introduce the adaptive sampling framework we recently developed and present experimental results from computational biology as well as other application domains.
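As a rough illustration of what low adaptivity buys, here is a minimal Python sketch under toy assumptions. It is a simplified illustration of the parallel flavor of the approach, not the algorithm from the talk: in each round, every marginal-gain query depends only on the solution fixed at the start of that round, so the queries can be batched and evaluated in parallel, and a decaying threshold filters out low-value elements between rounds. The function adaptive_filter_sketch, its parameters, and the toy coverage objective are all hypothetical names introduced here for illustration.

import random
from concurrent.futures import ThreadPoolExecutor

# Simplified sketch (not the talk's algorithm): per round, every
# marginal-gain query depends only on the solution fixed at the start
# of the round, so all queries can run in parallel. Sequential
# dependence exists only between rounds, so the parallel depth is the
# small number of rounds rather than the solution size k.
def adaptive_filter_sketch(f, ground, k, rounds=10, seed=0):
    rng = random.Random(seed)
    candidates = list(ground)
    solution = []
    with ThreadPoolExecutor() as pool:
        # Start the filtering threshold at the best singleton value.
        threshold = max(pool.map(lambda x: f([x]), candidates))
        for _ in range(rounds):
            if len(solution) >= k or not candidates:
                break
            base = f(solution)
            # One parallel batch of independent marginal-gain queries.
            gains = list(pool.map(lambda x: f(solution + [x]) - base,
                                  candidates))
            keep = [x for x, g in zip(candidates, gains) if g >= threshold]
            if keep:
                # Add a random block of surviving elements at once.
                block = rng.sample(keep, min(len(keep), k - len(solution)))
                solution.extend(block)
                candidates = [x for x in keep if x not in block]
            else:
                threshold /= 2  # nothing qualified this round: relax the bar
    return solution

if __name__ == "__main__":
    random.seed(0)
    sets = [set(random.sample(range(100), 15)) for _ in range(30)]
    f = lambda S: len(set().union(*(sets[i] for i in S)))  # toy coverage objective
    print(adaptive_filter_sketch(f, range(30), k=5))

The design point this sketch tries to convey: the k dependent iterations of the classical greedy algorithm are replaced by a handful of rounds whose queries can all be issued at once, which is what makes an exponential improvement in parallel running time possible.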
Adam Breuer
Harvard Gov
Primer: Submodular maximization and machine learning
Abstract: Submodular functions are ubiquitous in machine learning, with applications ranging from item recommendation systems to feature selection, clustering, network analysis, document summarization, and Bayesian optimization. These functions capture a key property that is common to many problems: we experience diminishing returns as we select additional items (or features, clusters, nodes, keyphrases, etc.). In this talk, we will survey submodular functions in a variety of salient applications and show why maximizing such functions is challenging. We will then describe simple approximation algorithms with provably optimal guarantees and offer a glimpse of cutting-edge research.
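To make the diminishing-returns property and the baseline algorithm concrete, here is a self-contained Python sketch assuming a toy coverage objective (the names coverage and greedy_max are illustrative, not from the talk). It implements the classic greedy algorithm of Nemhauser, Wolsey, and Fisher, which achieves a (1 - 1/e) approximation for monotone submodular maximization under a cardinality constraint, the kind of provably optimal guarantee the abstract alludes to.

import random

# Coverage is a canonical monotone submodular function: f(S) is the
# number of universe elements covered by the union of the sets in S.
# Diminishing returns: a new set covers fewer *new* elements when it
# is added to a larger collection.
def coverage(sets, chosen):
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

# Classic greedy: repeatedly add the item with the largest marginal
# gain. For monotone submodular f under a cardinality constraint k,
# this achieves a (1 - 1/e) approximation to the optimum.
def greedy_max(sets, k):
    chosen = []
    for _ in range(k):
        base = coverage(sets, chosen)
        best, best_gain = None, -1
        for i in range(len(sets)):
            if i not in chosen:
                gain = coverage(sets, chosen + [i]) - base
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
    return chosen

if __name__ == "__main__":
    random.seed(0)
    sets = [set(random.sample(range(100), 15)) for _ in range(30)]
    picked = greedy_max(sets, k=5)
    print("picked:", picked, "coverage:", coverage(sets, picked))

Note that greedy is inherently sequential: each of its k iterations depends on the one before it. That sequential bottleneck is exactly what the adaptive sampling approach in the main talk is designed to remove.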
For more information on the Broad Institute and MIA visit:
[ Link ]
Copyright Broad Institute, 2018. All rights reserved.