I have spent the last several months doing applied math, culminating in the submission of a paper to a robotics conference (although "culminating" might be the wrong word, since I'm still working on the project).

Unfortunately the review process is double-blind, so I can't talk about that specifically, but I'm more interested in going over the math I ended up using (not expositing on it, just making a list, more or less). This is meant as a moderate amount of empirical evidence for which pieces of math are actually useful and which aren't (of course, absence from this list doesn't imply uselessness, but it should be taken as Bayesian evidence against usefulness).

I'll start with the stuff that I actually used in the paper, then the stuff that helped me formulate the ideas in the paper, then the stuff that I've used in other work that hasn't yet come to fruition. These are labelled I, II, and III below. Let me know if you think something should be in III that isn't [in other words, if you think there's a piece of math that is useful but not listed here, preferably with the application you have in mind], or if you have better links to any of the topics below.

I. Ideas used directly

Optimization: semidefinite optimization, convex optimization, sum-of-squares programming, Schur complements, Lagrange multipliers, KKT conditions
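As a small illustration of how Lagrange multipliers and the KKT conditions show up computationally: for an equality-constrained quadratic program, the KKT conditions are a single linear system. The particular Q, c, A, b below are made up for the sketch.

```python
import numpy as np

# Minimize (1/2) x^T Q x - c^T x  subject to  A x = b.
# Stationarity of the Lagrangian plus feasibility gives the linear system
#   [Q  A^T] [x  ]   [c]
#   [A   0 ] [lam] = [b]
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(kkt, np.concatenate([c, b]))
x, lam = sol[:n], sol[n:]   # primal solution and Lagrange multiplier
```

Here the unconstrained minimizer already satisfies the constraint, so the multiplier comes out zero; perturbing b makes it active.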

Differential equations: Lyapunov functions, linear differential equations, Poincaré return map, exponential stability, Itô calculus
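For the Lyapunov-function side of this, a minimal sketch (with a made-up stable matrix A): a quadratic Lyapunov function V(x) = xᵀPx certifies exponential stability of x' = Ax when P ≻ 0 solves AᵀP + PA = -Q for some Q ≻ 0, and scipy can solve that equation directly.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # eigenvalues -1, -2: stable
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so pass a = A.T and q = -Q to get A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
```

Positive-definiteness of the resulting P is exactly the stability certificate.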

Linear algebra: matrix exponential, trace, determinant, Cholesky decomposition, plus general matrix manipulation and familiarity with eigenvalues and quadratic forms
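A quick sketch tying a few of these together: the identity det(exp(A)) = exp(tr(A)) relates the matrix exponential, trace, and determinant, and any symmetric positive definite matrix admits a Cholesky factorization. The matrices below are arbitrary examples.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 1.0], [0.0, -0.3]])
lhs = np.linalg.det(expm(A))       # det of the matrix exponential
rhs = np.exp(np.trace(A))          # exp of the trace: should match

M = A @ A.T + np.eye(2)            # symmetric positive definite by construction
L = np.linalg.cholesky(M)          # M = L L^T with L lower-triangular
```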

Probability theory: Markov's inequality, linearity of expectation, martingales, multivariate normal distribution, stochastic processes (Wiener process, Poisson process, Markov process, Lévy process, stopped process)
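Markov's inequality is easy to see empirically: for a nonnegative X and a > 0, P(X ≥ a) ≤ E[X]/a. A quick simulation with an exponential variable (parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=100_000)  # nonnegative, mean 1

a = 2.0
empirical = np.mean(samples >= a)   # true value is exp(-2), about 0.135
bound = samples.mean() / a          # Markov bound, about 0.5
```

The bound is loose here (0.5 vs. 0.135), which is typical: Markov's inequality trades tightness for requiring almost nothing about the distribution.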

Multivariable calculus: partial derivative, full derivative, gradient, Hessian, matrix calculus, Taylor expansion
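A sketch of how gradient, Hessian, and Taylor expansion fit together (the function f below is an arbitrary example with closed-form derivatives): the second-order expansion f(x₀ + h) ≈ f(x₀) + ∇f(x₀)ᵀh + ½ hᵀH(x₀)h has error O(‖h‖³), which we can check numerically.

```python
import numpy as np

def f(v):
    x, y = v
    return x**2 * y + y**3

def grad(v):
    x, y = v
    return np.array([2*x*y, x**2 + 3*y**2])

def hess(v):
    x, y = v
    return np.array([[2*y, 2*x],
                     [2*x, 6*y]])

x0 = np.array([1.0, 2.0])
h = np.array([1e-3, -2e-3])
taylor = f(x0) + grad(x0) @ h + 0.5 * h @ hess(x0) @ h
```

With ‖h‖ ~ 1e-3, the approximation error is on the order of 1e-9.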

II. Indirectly helpful ideas

Inequalities: Jensen's inequality, testing critical points

Optimization: (non-convex) function minimization

III. Other useful ideas

Calculus: calculus of variations, extended binomial theorem

Function Approximation: variational approximation, neural networks

Graph Theory: random walks and relation to Markov chains, Perron-Frobenius theorem, combinatorial linear algebra, graphical models (Bayesian networks, Markov random fields, factor graphs)
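The Perron-Frobenius theorem is what makes random walks on graphs mix: an irreducible, aperiodic stochastic matrix has a unique stationary distribution π with πP = π, and iterating the walk converges to it. A sketch with a made-up three-state chain:

```python
import numpy as np

# Row-stochastic transition matrix of an irreducible, aperiodic chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.5, 0.5]])

pi = np.full(3, 1/3)        # arbitrary starting distribution
for _ in range(1000):
    pi = pi @ P             # push the distribution one step forward
```

After enough steps, pi is (numerically) the Perron eigenvector of Pᵀ, normalized to sum to 1.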

Miscellaneous: Kullback-Leibler divergence, Riccati equation, homogeneity / dimensional analysis, AM-GM, induced maps (in a general algebraic sense, not just the homotopy sense; unfortunately I have no good link for this one)
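For the KL divergence, a minimal sketch with arbitrary example distributions: KL(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ) is nonnegative and zero exactly when p = q (a consequence of Jensen's inequality, which appears in section II).

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
```

Note that kl(p, q) != kl(q, p) in general: KL is not a metric.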

Probability: Bayes' rule, Dirichlet process, Beta and Bernoulli processes, detailed balance and Markov chain Monte Carlo
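Bayes' rule in one computation, using the classic base-rate example (numbers chosen for illustration): a 99%-accurate test for a condition with 1% prevalence still yields only a 50% posterior on a positive result.

```python
# Bayes' rule: P(A | B) = P(B | A) P(A) / P(B).
p_cond = 0.01                 # prior: prevalence of the condition
p_pos_given_cond = 0.99       # test sensitivity
p_pos_given_healthy = 0.01    # false positive rate

# Total probability of a positive test.
p_pos = (p_pos_given_cond * p_cond
         + p_pos_given_healthy * (1 - p_cond))

# Posterior probability of the condition given a positive test.
p_cond_given_pos = p_pos_given_cond * p_cond / p_pos
```

The low prior drags the posterior down to 0.5 despite the accurate test, which is the whole point of the example.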

Spectral analysis: Fourier transform, windowing, aliasing, wavelets, Pontryagin duality
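A minimal sketch of the discrete Fourier transform localizing frequency (signal parameters chosen arbitrarily): a 5-cycle sine sampled at 64 points puts all its energy in bin 5 and its mirror bin 59; sample a higher frequency than the Nyquist limit and it would alias back into this range.

```python
import numpy as np

n = 64
t = np.arange(n) / n                  # one period, n samples
x = np.sin(2 * np.pi * 5 * t)         # 5 cycles across the window

spectrum = np.abs(np.fft.fft(x))
peak = int(np.argmax(spectrum[: n // 2]))   # bin of the dominant frequency
```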

Linear algebra: change of basis, Schur complement, adjoints, kernels, injectivity/surjectivity/bijectivity of linear operators, natural transformations / characteristic subspaces

Topology: compactness, open/closed sets, dense sets, continuity, uniform continuity, connectedness, path-connectedness

Analysis: Lipschitz continuity, Lebesgue measure, Haar measure, manifolds, algebraic manifolds

Optimization: quasiconvexity