In the previous article, we had a chance to explore transfer learning with TensorFlow 2. We used several huge pre-trained models: VGG16, GoogLeNet, and ResNet. These architectures are all trained on the ImageNet dataset, and their trained weights are available for reuse. We specialized them for the "Cats vs Dogs" dataset, a dataset that contains 23,262 images of cats and dogs.

There are many pre-trained models available in the tf.keras.applications module. In essence, there are two ways in which you can use them: as an out-of-the-box solution, or with transfer learning. Since large datasets are usually used for some general solution, you can customize a pre-trained model and specialize it for a certain problem. That is exactly what we did in the previous article. This way you can utilize some of the most famous neural networks without losing too much time and resources on training. Additionally, you can fine-tune these models by modifying the behavior of the chosen layers.

During the experiments in the previous article, we got the best results with the ResNet architecture, so let's try to fine-tune that model and maybe get even better results. The problem with the ImageNet-winning architectures that came before ResNet was that they kept getting deeper and deeper. For example, AlexNet had 5 convolutional layers, while VGG and GoogLeNet had 19 and 22 layers respectively. This means that, because of the vanishing gradient problem, they became hard to train.
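To make the two approaches concrete, here is a minimal sketch of transfer learning followed by fine-tuning using tf.keras.applications. The specific choices here, ResNet50 as the backbone, a 224×224 input, unfreezing the last 10 layers, and the learning rates, are illustrative assumptions rather than the exact setup from the previous article.

```python
import tensorflow as tf

# Load ResNet50 with ImageNet weights, dropping the original classifier head.
base_model = tf.keras.applications.ResNet50(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)

# Transfer learning: freeze the pre-trained layers so only the new head trains.
base_model.trainable = False

# Attach a small classification head for the binary Cats vs Dogs problem.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
# ... train the head on the Cats vs Dogs data here ...

# Fine-tuning: once the head has converged, unfreeze the last few layers of
# the base model and continue training with a much lower learning rate.
base_model.trainable = True
for layer in base_model.layers[:-10]:  # keep all but the last 10 layers frozen
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

The much lower learning rate in the fine-tuning phase matters: large updates at that stage would quickly destroy the ImageNet features that freezing was meant to preserve.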