It took me a long process of reflection and self-correction to move past all that. In this article, I retrace my first five years of experience, which started in crude naivety and ended in a satisfying form of maturity.
1. The first year’s mistake: Trying to absorb as much technology as possible.
My engineering background gave me a fairly solid and well-stocked arsenal of algorithms, mathematics and computer science. I had been trained to model complex problems in whatever domain I was working in. Until then, my only interlocutors had been my professors, and I had learned, almost against my will, to associate problems with complexity. Each time I was confronted with a new problem in a different subject, I added a new layer of complexity, with a view to maximizing my reward in the form of an evaluation grade. The more a problem pushed us to think, and the more our reflections leaned towards abstraction and complex reasoning, the closer we got to the solution and the greater the gratification. In some modules, we were even required to keep track of the number of days spent solving a project. In my perception, rewards were very strongly correlated with the resources consumed in the solving process.

This perception was the backbone of my reasoning when I started my graduate internship. Even before it began, I had filtered the titles and descriptions of many internship offers according to what looked like a fair amount of complexity. On the first day, I installed the first core dependency on my machine: tensorflow. I was hearing about it quite often at the time, and I thought that in order not to flunk out, I had to start by mastering the technology and the mathematical pillars it was built on. My internship supervisors were understanding and rather sympathetic, in that they immediately spotted my obsession. Although the goal of my internship was clear, I could hardly see how I could start with anything other than tensorflow and the other Python libraries (numpy, pandas, various built-in modules). I let myself be carried along by the hype and did what scientific fashion compelled me to do: I convinced myself that everything came down to seeing the problem through the lens of neural networks, and therefore to designing neural networks. As I saw it, I was having fun reproducing different architectures thanks to the flexibility that Keras (a simplifying layer on top of tensorflow) offered its end users. My experiments covered several designs that should normally be matched to the type of problem at hand: merging two CNNs, hybrid neural networks (radial-basis-function networks), deep CNNs, RNNs, LSTMs, … I was already dazzled by the power of the tool, and I put aside the whole context of my internship.
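To give a concrete idea of the kind of experiment I was playing with, here is a minimal sketch (assuming TensorFlow 2.x) of one of the designs mentioned above: merging two small CNN branches with the Keras functional API. The input shape, layer sizes and class count are illustrative placeholders, not the actual models from my internship.

```python
# Illustrative only: two CNN branches over the same input, merged before
# classification. Shapes and hyperparameters are arbitrary examples.
from tensorflow import keras
from tensorflow.keras import layers


def build_two_branch_cnn(input_shape=(28, 28, 1), num_classes=10):
    inputs = keras.Input(shape=input_shape)

    # Branch A: 3x3 convolutions (smaller receptive field)
    a = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    a = layers.MaxPooling2D()(a)

    # Branch B: 5x5 convolutions (larger receptive field)
    b = layers.Conv2D(16, 5, activation="relu", padding="same")(inputs)
    b = layers.MaxPooling2D()(b)

    # Merge the two branches along the channel axis, then classify
    merged = layers.concatenate([a, b])
    x = layers.Flatten()(merged)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)


model = build_two_branch_cnn()
print(model.output_shape)  # (None, 10)
```

It takes only a dozen lines to wire such a model together, which is exactly why it was so easy to keep producing architectures without ever asking whether the problem called for them.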