I will introduce a declarative framework for machine learning in which hypotheses are defined by formulas of a logic over some background structure. Within this framework, I will discuss positive as well as negative learnability results (in the "probably approximately correct" learning sense) for hypothesis classes defined in first-order logic and monadic second-order logic over strings, trees, and graphs of bounded degree. While purely theoretical at this point, the hope is that our framework may serve as a foundation for declarative approaches to ML in logic-affine areas such as database systems or automated verification.
(Joint work with Christof Löding and Martin Ritzert.)
The workshop was held on January 11th and 12th.
Over the last decades, logic has proved a powerful tool for understanding complex systems. It is instrumental in the development of formal methods, which are mathematically based techniques providing hard guarantees. Learning is a pervasive paradigm which has seen tremendous success recently: the use of statistical approaches yields practical solutions to problems which only yesterday seemed out of reach. These two mindsets should not be kept apart, and many efforts have been made recently to combine the formal reasoning offered by logic with the power of learning.
The goal of this workshop is to bring together expertise from various areas to try and understand the opportunities offered by combining logic and learning.
There are 12 invited speakers and a light programme (less than 5 hours per day) so as to leave ample time for discussions.