Jérémy Barbay was visiting me this week from Universidad de Chile. Although we overlapped at Waterloo by a few months, we had never talked in depth about research before. His visit was great timing to scoop me out of some research doldrums after a stressful winter quarter. He gave a great talk on adaptive analysis.
As we all know, algorithms often outperform their worst-case analysis. There are a few theoretical tools for explaining this behaviour: think O(n log n) average case vs. O(n^2) worst case for quicksort, and polynomial-time smoothed analysis vs. exponential-time worst case for the simplex algorithm. In adaptive analysis, we analyze an algorithm in terms of some (problem-dependent) hardness parameter h. As an introductory example, consider insertion sort. If the input array is already sorted, insertion sort takes O(n) time. If the input array is in reverse order, it takes O(n^2) time. More generally, if there are h pairs of elements that are out of order relative to each other (that is, h inversions), insertion sort takes O(n + h) time: the running time depends on how hard the particular input instance is to sort, as the sketch below illustrates.
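To make the O(n + h) claim concrete, here is a small Python sketch (my own illustration, not something from Jérémy's talk) of insertion sort instrumented to count element shifts. Each shift in the inner loop removes exactly one inversion, so the shift count equals h and the total work is proportional to n + h.

```python
def insertion_sort(a):
    """Sort the list in place; return the number of element shifts performed.

    The shift count equals the number of inversions h in the input, so the
    total running time is O(n + h): linear on sorted input, quadratic on
    reversed input.
    """
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right; each shift
        # resolves exactly one inverted pair involving `key`.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts


if __name__ == "__main__":
    print(insertion_sort([1, 2, 3, 4, 5]))   # 0 shifts: already sorted
    print(insertion_sort([5, 4, 3, 2, 1]))   # 10 shifts: all C(5,2) pairs inverted
    print(insertion_sort([2, 1, 4, 3, 5]))   # 2 shifts: two inverted pairs
```

On the three example inputs the shift counts are 0, 10, and 2 respectively, matching the number of inversions in each.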
Adaptive analysis has appeared several times in the past, though not always under the name adaptive; Jérémy would be a better person to provide a list. The most common examples seem to involve analyzing existing algorithms through this lens. I would be most interested in adaptivity informing new algorithmic ideas, particularly algorithms that outperform existing ones, at least on interesting or useful inputs. Is there a collection of examples of such results? I know Jérémy mentioned one or two, but I’ve since forgotten them in the whirlwind of whiteboard whimsy.