Have you found that your machine learning code works beautifully on a few dozen examples, but leaves you wondering how to spend the next couple of hours once you start looping through all of your data? Are you only familiar with Python, and wish there were a way to speed things up without subjecting yourself to learning C? Are you confused by all the things you’re reading about vectorized operations, and want an easy way to understand them?
In this talk you'll see some simple tricks from linear algebra that can give you significant performance gains in your Python code, and learn how to implement them in NumPy. We'll start with an inefficient implementation of a machine learning algorithm that relies heavily on loops and lists. Throughout the talk, we'll iteratively replace the bottlenecks in our code with NumPy's vectorized operations. At each stage, you'll learn the linear algebra behind why these operations are more efficient, so that you can apply these concepts in your own code.
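To give a flavor of the kind of rewrite the talk walks through, here is a minimal hypothetical sketch (not taken from the talk itself): computing squared Euclidean distances from one query point to many points, first with Python loops and lists, then with a single broadcasted NumPy expression.

```python
import numpy as np

# Hypothetical example data; the talk uses a machine learning algorithm instead.
rng = np.random.default_rng(0)
points = rng.random((10_000, 3))  # many 3-D points
query = rng.random(3)             # one query point

# Loop-and-list version: one Python-level iteration per point.
dists_loop = [sum((p - q) ** 2 for p, q in zip(point, query))
              for point in points]

# Vectorized version: broadcasting subtracts `query` from every row at once,
# and the sum along axis 1 collapses each row to a single distance.
dists_vec = ((points - query) ** 2).sum(axis=1)

assert np.allclose(dists_loop, dists_vec)
```

The vectorized line pushes the per-element work into NumPy's compiled internals, which is the pattern the talk applies step by step to each bottleneck.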