This is a short blog post
Over the past few weeks, for various reasons, I have been carefully reading many articles and books about the history of artificial intelligence. Overall, I have gained a great deal of insight and learned many interesting stories about the experts in this field, along with their relationships and conflicts. But beyond the cliché that "wherever there are people, there is politics," I came away with another impression.
I have been studying machine learning and deep learning for many years and have run many models myself. When deep learning became a popular direction, I suddenly found models I had never seen before being discussed by everyone: convolutional neural networks, long short-term memory networks, and later mind-blowing ones like generative adversarial networks. I once concluded that I was simply too dull to have come up with such clever ideas.
But after reading the history of artificial intelligence, I realized I was wrong. Whether it is convolutional neural networks, long short-term memory networks, reinforcement learning, or the minimax objective at the core of generative adversarial networks, all of them were proposed at some point during artificial intelligence's seventy-year development, yet for various reasons attracted little attention at the time. Only when computing power and data became sufficient did scientists rediscover these methods and combine them with the latest techniques for building and training models, giving these old "ideas" new life. The origins of reinforcement learning even predate the term "artificial intelligence" itself.
For me, this is a small but meaningful takeaway. For science and technology in our country, however, it shows precisely that simply chasing research hotspots is not the right approach. We need to lay a solid foundation and persist across many directions in order to have any chance of success in the future. Merely chasing hotspots amounts to a superficial understanding, as if a breakthrough were nothing more than a top scientist's moment of inspiration.
When neural networks came under heavy criticism in the United States, research on them shifted to Canada, making the University of Toronto and the University of Montreal the holy lands of deep learning. Hinton and his colleagues persisted through the years when neural network models were widely dismissed, and that persistence led to their brilliant achievements today. Seen in this light, their success is only natural.