DenseNet-inspired custom CNN architecture
Aravind B N
Posted on July 7, 2023
Hi, I'm Aravind, a Junior Software Engineer at Luxoft India. In this article, I do my best to give a clear overview of DenseNet, one of my interests in machine learning (ML). The first section covers the fundamental ideas behind CNNs and DenseNet, and the second section focuses on how a custom CNN can be inspired by DenseNet.
Introduction
The 2016 study "DenseNet: Efficient Convolutional Networks" by its authors introduced the neural network architecture known as DenseNet (short for "Dense-Networks"). Modern deep learning models like DenseNet are frequently employed in computer vision jobs that need picture categorization. Dense connections between layers, which enable effective information transmission and improved feature aggregation, are the model's defining feature. A deep neural network that can learn intricate, hierarchical properties from input pictures is created by the architecture, which includes several thick blocks of layers. DenseNet has won the favour of researchers and professionals in the field of computer vision and image classification because to its effective design and potent classification performance.
DenseNet designs come in a variety of forms
The main forms of DenseNet designs, each with its own benefits and drawbacks, include the following (a short Keras sketch after this list shows how to instantiate the variants that ship with Keras):
DenseNet121: The initial DenseNet architecture proposed by Huang et al. It is intended for image classification and has 121 layers.
DenseNet201: A deeper version of DenseNet121 with 201 layers. It is also intended for image classification and has been shown to outperform DenseNet121 on some datasets.
DenseNet40: A scaled-down variant of DenseNet121 with 40 layers. It is optimised for smaller datasets and has been shown to work well on datasets with fewer classes.
DenseNet201x4: An architecture that allows the network to handle larger input images while still keeping the parameter count small.
DenseNet64: A version of DenseNet121 with 64 layers rather than 121. Like DenseNet40, it is optimised for smaller datasets and works well when there are fewer classes.
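Of the variants above, DenseNet121 and DenseNet201 (along with DenseNet169) ship with Keras Applications, so they can be instantiated directly. A minimal sketch, assuming TensorFlow/Keras is installed:

from tensorflow.keras.applications import DenseNet121, DenseNet201

# Either variant can be created with or without the ImageNet classification head.
small = DenseNet121(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
large = DenseNet201(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Compare the sizes of the two backbones.
print(small.count_params(), large.count_params())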
Creating a Custom CNN Inspired by the DenseNet201 Architecture
ResNet (residual network) was developed to address the vanishing-gradient problem encountered when training deep convolutional networks. By adding skip connections that bypass non-linear transformations with an identity mapping, ResNet makes it possible to train networks that are much deeper than those used before. One advantage of ResNet is that the gradient can flow straight from one layer to the next through the identity function. The layout of ResNet is shown schematically in Fig a. To further improve the flow of information between layers, a new connectivity pattern called the densely connected convolutional network (DenseNet) was devised. Rather than drawing representational power from extremely deep or wide architectures, DenseNet exploits feature reuse to get the most out of the network, yielding compact models that are easy to train and highly parameter-efficient. Concatenating the feature maps of all preceding layers increases the variety of inputs to subsequent layers and improves network performance; this is the key distinction between DenseNet and ResNet. The layout of the resulting dense block is shown in Fig b.
Figure. The layout of the ResNet (a) and DenseNet (b).
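The difference between the two connection patterns in Fig a and Fig b can be expressed in a few lines of Keras. This is only an illustrative sketch; the shapes and filter counts are arbitrary:

from tensorflow.keras import layers, Input

x = Input(shape=(32, 32, 64))
y = layers.Conv2D(64, 3, padding='same', activation='relu')(x)

# ResNet-style skip connection: element-wise addition (the shapes must match).
residual_out = layers.Add()([x, y])       # output keeps 64 channels

# DenseNet-style connection: concatenation along the channel axis, so later
# layers see the feature maps of every preceding layer.
dense_out = layers.Concatenate()([x, y])  # output has 64 + 64 = 128 channels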
The network was made up of four types of layer. The first was the input layer, which fed the image patches into the network. The second was the convolutional layer, which convolved the learnt filters with the input images to form a feature map for each filter. The third was the pooling layer; max pooling was used in this project. By replacing each local region with its maximum value, max pooling shrank the feature maps along the spatial dimensions while keeping the features most useful for distinguishing images. The fourth was the fully connected layer, which consisted of a number of input and output neurons, formed a learnt linear combination of all neurons from the previous layer, and passed the result through a non-linearity. A self-learned weighting coefficient was assigned to each layer of the network, allowing the CNN to focus on the more useful features.
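As a rough illustration of those four layer types, here is a minimal Keras sketch (the patch size, filter count, and number of classes are placeholders, not the values used in this project):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                           # input layer: image patches
    layers.Conv2D(32, 3, padding='same', activation='relu'),   # convolutional layer
    layers.MaxPooling2D(pool_size=2),                          # max-pooling layer
    layers.Flatten(),
    layers.Dense(10, activation='softmax')                     # fully connected layer
])
model.summary()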
Here is an example of building a transfer-learning model on top of DenseNet201 in Keras. The original snippet did not define the build_model helper, so the definition below is one plausible version, assuming a binary classification task (as the binary_crossentropy loss implies):
import gc
from tensorflow.keras import backend as K
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def build_model(backbone, lr):
    # One plausible head: global pooling plus a single sigmoid unit for binary classification.
    model = Sequential([backbone, GlobalAveragePooling2D(), Dense(1, activation='sigmoid')])
    model.compile(
        loss='binary_crossentropy',
        optimizer=Adam(learning_rate=lr),
        metrics=['accuracy']
    )
    return model

K.clear_session()  # clear any graphs left over from earlier runs
gc.collect()

densenet = DenseNet201(
    weights='imagenet',         # weights pre-trained on ImageNet
    include_top=False,          # drop the original classification head
    input_shape=(224, 224, 3)   # 224x224 RGB inputs
)
model = build_model(densenet, lr=1e-4)
model.summary()
To understand this example, let's go through it step by step:
A DenseNet201 model is created with weights pre-trained on the ImageNet dataset.
Setting the weights argument to "imagenet" instructs Keras to load DenseNet201 weights that have already been learned on the ImageNet dataset.
The include_top argument is set to False. This means the top classification layers are not attached to the DenseNet201 model, leaving room for a custom head.
The input_shape argument is set to (224, 224, 3). This describes the shape the model expects for its input images: RGB images with a resolution of 224x224 pixels.
After running this code, the resulting model can be used for image classification tasks. To use it, you would load some images, resize and preprocess them into the expected input shape, and then run them through the model.
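A minimal sketch of that inference step, assuming the model built above and a local file named sample.jpg (the filename is only a placeholder):

import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.densenet import preprocess_input

# Load an image, resize it to the expected 224x224 resolution, and add a batch dimension.
img = image.load_img('sample.jpg', target_size=(224, 224))
x = np.expand_dims(image.img_to_array(img), axis=0)
x = preprocess_input(x)  # the same preprocessing used for the ImageNet-trained DenseNet weights

prediction = model.predict(x)  # probability from the sigmoid head defined earlier
print(prediction)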
Conclusion:
DenseNet is a state-of-the-art deep learning model widely used in computer vision for image classification. It stacks several dense blocks of layers to build a deep neural network capable of learning complex, hierarchical features from input images. ResNet was created to address the vanishing-gradient issue that arises when training deep convolutional networks; by adding skip connections that bypass non-linear transformations, it makes it possible to train networks considerably deeper than those used before, and the identity function lets the gradient pass directly from one layer to the next. The custom network described here has four layer types: an input layer, a convolutional layer, a pooling layer, and a fully connected layer. DenseNet is a powerful design for image classification tasks, and it is worth evaluating it on different datasets to identify the best architecture for the task at hand.