Recognising whether a particular problem can be viewed as a Machine Learning problem.
Breaking down standard Machine Learning problems into more fundamental problems using tools from Calculus, Linear Algebra, Probability and Optimisation.
Recognising relationships between equation solving, projection onto a subspace, and the supervised learning problem of linear least squares regression.
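The connection between equation solving, projection, and least squares can be sketched as follows. This is a minimal illustration on hypothetical synthetic data (the matrix `X`, weights `w_true`, and noise level are made up for the example): solving the normal equations projects the target vector onto the column space of the design matrix, and agrees with NumPy's built-in least-squares solver.

```python
import numpy as np

# Hypothetical synthetic regression data: y ≈ X w_true plus small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

# Solving the normal equations X^T X w = X^T y is equivalent to
# projecting y onto the column space of X.
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

# The same solution via NumPy's least-squares routine.
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both routes give the same weights; `lstsq` is preferred in practice because it handles rank-deficient `X` gracefully.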
Visualising eigenvalues and eigenvectors as properties of a matrix, and recognising their potential in practical unsupervised learning problems like dimensionality reduction and image compression.
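As a small sketch of how eigenvectors enable dimensionality reduction (the synthetic 2-D data below is a made-up example): the eigenvectors of the data's covariance matrix give the principal directions, and projecting onto the top eigenvector compresses 2-D data to 1-D while keeping most of the variance.

```python
import numpy as np

# Hypothetical 2-D data, stretched strongly along the first axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X - X.mean(axis=0)  # centre the data

# Eigen-decomposition of the covariance matrix; eigh returns
# eigenvalues in ascending order for symmetric matrices.
cov = (X.T @ X) / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)

# Project onto the top eigenvector: 2-D -> 1-D reduction,
# then reconstruct the rank-1 approximation of the data.
top = eigvecs[:, -1]
Z = X @ top
X_reconstructed = np.outer(Z, top)
```

The same idea, applied to the singular values of an image matrix, underlies simple image compression: keep only the directions with the largest eigenvalues.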
Programming, debugging, and using simple gradient descent methods for solving unconstrained optimisation problems, and identifying their failure modes.
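A minimal gradient descent loop, and one of its classic failure modes, can be sketched like this (the objective f(x) = (x - 3)^2 and the step sizes are illustrative choices, not from the source):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimise an unconstrained objective, given its gradient function."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # step downhill along the negative gradient
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3); minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)

# Failure mode: too large a learning rate makes the iterates diverge
# (here any lr >= 1 overshoots by a growing amount each step).
x_div = gradient_descent(lambda x: 2 * (x - 3), x0=0.0, lr=1.5, steps=50)
```

Debugging such a loop usually means monitoring the objective value per step: a monotone decrease suggests a sane step size, while oscillation or blow-up points to the divergence failure mode above.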
Recognising the value of simple models like Gaussian mixture models for data, constructing algorithms for learning the parameters of such models, and interpreting these parameters.
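An algorithm for learning Gaussian mixture parameters can be sketched with expectation-maximisation in one dimension (a minimal sketch; the two-cluster synthetic data and the quantile-based initialisation are assumptions made for the example, not prescribed by the source):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Fit a 1-D Gaussian mixture model with a bare-bones EM loop."""
    # Spread-out, deterministic initial means via quantiles of the data.
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    var = np.full(k, x.var())       # initial variances
    pi = np.full(k, 1.0 / k)        # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and weights.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

# Hypothetical data drawn from two well-separated Gaussians.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 300), rng.normal(5, 1, 300)])
mu, var, pi = em_gmm_1d(x)
```

The learned parameters are directly interpretable: each `mu[j]` is a cluster centre, `var[j]` its spread, and `pi[j]` the fraction of data that cluster explains.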