Numerical analysis presents a pristine mathematical theory of optimal numerical algorithms, under assumptions that do not necessarily hold in practice. For example, in theory a good ODE approximation converges quickly as the step size approaches zero; in practice, a good differential equation solver takes steps as large as possible. How must one change one's mathematical reasoning to match the software world? This talk describes the nitty-gritty reasoning that comes into play when building mathematical software. We will show how one of the most important aspects of optimizing a Runge-Kutta method turns out to be alternative floating point power approximations, and how "suboptimal" methods can be more optimal in practice by exploiting certain assembly instructions on modern hardware. This demonstrates how the creation of mathematical software is a discipline unto itself.
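To make the step-size point concrete, here is a minimal Python sketch of embedded-pair adaptive stepping, the mechanism behind solvers like Tsit5 and Dormand-Prince. It uses a simple Euler/Heun pair for illustration (not the talk's actual methods), and the `tol` and safety factor values are illustrative assumptions. Note the `(tol / err) ** (1 / 2)` update: for tiny ODE systems this floating point power is evaluated on every step, which is why fast power approximations matter.

```python
import math

def heun_euler_step(f, t, y, dt):
    """One step of an embedded order-1/order-2 pair sharing stages."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    y_low = y + dt * k1                # Euler (order 1)
    y_high = y + dt * 0.5 * (k1 + k2)  # Heun (order 2)
    err = abs(y_high - y_low)          # local error estimate
    return y_high, err

def solve_adaptive(f, t0, y0, t_end, tol=1e-6, dt=0.1):
    """Integrate y' = f(t, y) from t0 to t_end with adaptive steps."""
    t, y = t0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_new, err = heun_euler_step(f, t, y, dt)
        if err <= tol:
            t, y = t + dt, y_new  # accept the step
        # Step-size controller: the floating point power here runs
        # every step and dominates overhead for small systems.
        dt *= 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 2.0)
    return y

# Usage: y' = -y, y(0) = 1, so y(1) should be close to exp(-1).
y_final = solve_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The controller grows `dt` whenever the local error estimate is comfortably below tolerance, which is exactly the "take steps as large as possible" behavior described above.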
Contents
00:00 Welcome and introduction
01:40 First problem: Small ODEs in pharmacometrics
02:49 Euler’s method and Runge-Kutta methods
07:28 Why not just use arbitrarily high order methods?
11:06 Dormand-Prince as default solver (e.g., ode45)
13:59 Can we drop the Dormand-Prince simplifying assumption?
14:56 Yes – this is why Julia defaults to Tsit5
16:35 Origins of Vern solvers
22:08 Building in adaptivity for solvers
25:20 Going beyond explicit Runge-Kutta methods
26:14 When to choose a non-BDF approach for stiff ODE solvers
29:54 Final comments and questions