This is my first post. This blog will be dedicated to exploring the beautiful and horrifying world of deep learning. Why beautiful? Because the capabilities afforded by modeling computer programs on the mechanism of the biological brain can lead to surprising and incredible results. Why horrifying? Because deep learning, while consisting of simple operations on matrices, is still black-box learning. We still lack interpretability, we still struggle to understand our hidden biases, and, increasingly, we are over-confident in our use of cutting-edge tools without fully understanding the technology.

My approach, then, is to convince people to try more difficult things. To convince you to put yourself in more uncomfortable situations. To vary your experience of the world and its many particular problems. My own background is interdisciplinary, and I want to convince you that this should be the standard approach to learning. During my time studying physics and working as an engineer, I noticed significant shortcomings in understanding the real-world implications of certain research, of deploying models, or of harvesting data. I believe that thinking through the social or political ramifications of the work should be as important to a scientist as calculus is. On the other hand, during my time in the humanities department at Birkbeck, I was frustrated by the lack of rigour in thought. I was disappointed that analytical thinking wasn’t quite scientific enough, that understanding probability, mechanics, and the scientific method was rarely of interest. If anything, science was regularly cast aside as some new modern equivalent of the church.

Both enterprises are important in different ways, and both can learn a great deal from each other by encouraging different forms of critical thinking and problem-solving. Learning both will foster cultural awareness and empathy, and it will lead to creativity and innovation by combining fields and providing new insights and ideas. In a time when the issues we face – climate change, artificial intelligence – are complex and varied in nature, the only way we can respond is by asking more of ourselves: confronting more complexity and expanding the net of how we learn, even if it is difficult.

With this in mind, I want to blog about my adventures in deep learning. I want to work on projects, build neural networks from the ground up, and demystify this seemingly complex realm within artificial intelligence so that we can better engage with problems of safety and risk. The aim here is threefold.

  1. To reinforce my own knowledge of the technically difficult parts of deep learning - mathematics and programming - by practising the ‘Feynman technique’.
  2. To provoke serious questions regarding AI safety, bias, risk, and Our Final Invention.
  3. To embrace the two poles of my learning: science and the humanities.