Reflections on the MSc Exam Period

Three months have passed since my last post, and what a busy three months they have been! Honestly, these have been the most stressful months of my life so far. Now that I have a two-week gap between my last two exams, I allowed myself a much-needed one-week break to relax and recharge my batteries. The first six exams hit like a hurricane, one after the other, with a high dose of the unexpected (exactly what none of us wanted).

Just one more exam now, and then I will embark on my long-awaited adventure in Cambridge this summer, interning at Microsoft Research on the incredible InnerEye project. This has been a dream of mine for ages, and I still can’t believe it will soon become real. At the same time, I will start developing my Master’s thesis, using evenings and weekends to create what I hope will be an amazing project, under the supervision of Dr. Emine Yilmaz. She is an incredible researcher in the field of information retrieval, and I can’t wait to learn more from her.

Now, looking back just a few days after my last tough exam, I can honestly say these past months of “torture” have not been in vain. As our computer vision lecturer told us in the first class, “my goal is to bring you to a level where, if you pick up any computer vision paper, you’ll be able to understand the concepts behind it”. He kept his promise. During my short break, I started reading through some computer vision papers for my upcoming internship. Oh boy, you have no idea how happy I was reading through them. Nothing can match that feeling. It was as if my brain was saying, “We know this, and this, and this!”.

To learn all these complex concepts, I had to do some proper Sherlock Holmes research throughout the exam revision period to find good explanations. I am sharing here my precious list of “gems” that truly helped me make it through some of my toughest modules. They are simply amazing, and I hope they’ll prove helpful to others too!

  • Colah’s Blog – neural networks amazingly explained (the RNN–LSTM article is a true masterpiece, I don’t think there is a better explanation of them, and the CNN posts are great too)
  • Distill – an innovative new journal on neural networks, edited by the above-mentioned Chris Olah together with a team of incredible researchers
  • MIT 6.034 Artificial Intelligence, by Patrick H. Winston – a brilliant lecturer (winner of countless teaching awards) who makes any concept seem incredibly intuitive and easy (he has a great, funny analogy involving Romanians and vampires in the lecture on classification trees)
  • Princeton COS 511: Foundations of Machine Learning, by Rob Schapire – great lectures on online learning
  • An Introduction to Statistical Learning, by Trevor Hastie and Rob Tibshirani (Stanford University) – an amazing 15-hour course that explains tree-based methods and support vector machines incredibly well – the videos are only accessible through the links on this page
  • Import AI newsletter, from Jack Clark (OpenAI) – an amazing weekly newsletter on the latest developments in AI – I am absolutely in love with it and excited to open a new issue every Monday – you can subscribe here
  • Computer Vision: Models, Learning, and Inference, by Simon Prince – this was our machine vision textbook for the module and is an overall amazing resource on almost any theoretical computer vision topic – it combines computer vision with ML, emphasising Bayesian approaches – available online in PDF version here
  • Bayesian Reasoning and Machine Learning, by David Barber, the lecturer of my applied machine learning class – again, an incredible resource on the theory behind machine learning, widely used as a textbook for courses around the world, and also available online in an updated PDF version
  • Network Science, by Albert-László Barabási – probably the best book on network science, in a great interactive format, including a great overview of graph theory and network modelling – he is the co-author of the BA (Barabási–Albert) network model, used to model the extremely prevalent scale-free networks, and chapter 5 of his book is devoted to it
  • Neural Networks and Deep Learning, by Michael Nielsen – his explanation of backpropagation is the best I have ever seen – available online
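Since the Barabási–Albert model came up above, here is a minimal Python sketch of the idea behind it, preferential attachment: each new node adds m edges, choosing targets with probability proportional to their degree. This is a simplified variant for intuition, not the book’s exact formulation (the seed, core setup, and function name are my own choices).

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a simplified Barabási–Albert graph with n nodes.

    Each new node attaches m edges, preferring high-degree
    targets (preferential attachment). Returns the edge list.
    """
    rng = random.Random(seed)
    # Start from a small fully connected core of m nodes.
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
    # 'targets' lists each node once per incident edge, so sampling
    # uniformly from it samples nodes proportionally to their degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:          # m distinct neighbours
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))    # both endpoints gain degree
    return edges

edges = barabasi_albert(n=100, m=2)
```

Because early nodes keep accumulating edges, the resulting degree distribution is heavy-tailed, which is exactly the scale-free behaviour chapter 5 of the book analyses.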

These are my absolute favourite online resources. I might extend the list over time, but I value quality over quantity. Again, these are true gems (for me, at least).

I hope you found them useful! I’ll certainly keep this list as a reference whenever I need to revisit a topic, and I am currently on the lookout for a great machine learning podcast.