VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 2 (2019), ISSN (Online): 2581-7280, Article No. 13, pp. 1-6, www.viva-technology.org/New/IJRI

An Analysis of Various Deep Learning Algorithms for Image Processing

Geeta S. Lagad1, Ankit J. Maurya2, Kunal D. Mestry3, Dnyaneshwar Bhabad4
1,2,3,4 (Computer Engineering Department, VIVA Institute of Technology, India)

Abstract: The wide range of applications of image processing has given it a broad scope in data analysis. Machine learning algorithms provide a powerful environment for training models to identify the various entities in an image and segment them accordingly. Although image classifiers such as Support Vector Machines (SVM) and Random Forests do justice to the task, deep learning algorithms such as Artificial Neural Networks (ANN) and, in particular, the well-known and powerful Convolutional Neural Network (CNN) can add a new dimension to the image processing domain. They offer higher accuracy and computational power for classifying images and for separating their entities into individual components of the working region of the image. The major focus is on the Region-based Convolutional Neural Network (R-CNN) algorithm and on how well pixel-level segmentation is provided by its improved successors, the Fast, Faster, and Mask R-CNN variants.

Keywords: Image processing, data analysis, machine learning, support vector machine, random forest algorithms, deep learning, artificial neural networks, convolution neural networks, region convolution neural networks

1. INTRODUCTION
The model will use preconfigured weight matrices to identify the traffic density in a scene and thus differentiate it precisely. It considers the different features of the input images through convolution. [4] The model will be capable of analyzing and identifying any kind of traffic scene because it works on Region-based Convolutional Neural Networks (R-CNN). [2] Since the features are not supplied to the system as predetermined probabilistic data, it should be able to work with any arbitrary traffic scene, which is the overall motivation behind using deep learning algorithms.
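To make the idea concrete, the following is a minimal Python sketch, not the proposed model itself: it uses an off-the-shelf Faster R-CNN from torchvision, pretrained on COCO, to count vehicles in a single traffic image as a rough proxy for traffic density. The vehicle class IDs, score threshold, and file name are illustrative assumptions.

```python
# Minimal sketch (assumption: torchvision with COCO-pretrained weights available);
# newer torchvision versions use the weights= argument instead of pretrained=True.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

VEHICLE_CLASSES = {3, 4, 6, 8}  # COCO category IDs: car, motorcycle, bus, truck

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def count_vehicles(image_path, score_threshold=0.5):
    """Return the number of vehicles detected above the confidence threshold."""
    image = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = prediction["scores"] >= score_threshold
    labels = prediction["labels"][keep].tolist()
    return sum(1 for label in labels if label in VEHICLE_CLASSES)

# Example usage (hypothetical file name): a higher count suggests a denser scene.
# print(count_vehicles("traffic_scene.jpg"))
```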
2. LITERATURE REVIEW
A. Khan, et al. [1] have proposed a system developed to control and monitor traffic congestion. The main motivation is to detect the presence and absence of vehicles on the road using a statistical approach integrated with conventional image processing techniques. For this purpose, they have developed a "Probability Based Vehicle Detection (PBVD)" algorithm-based Vehicle Detection System (VDS) integrated with post-processing subsystems to form a complete traffic control framework. The system is able to acquire vehicle statistics while controlling traffic. Simulations are performed by building a complete model traffic architecture, and a comparison is made between results obtained from the model framework and from processing a real-time video of a traffic scene. The simulation results demonstrate the effectiveness of the proposed scheme.

Shreyas, et al. [2] have proposed an Automatic Number Plate Recognition (ANPR) system based on image processing technology. The proposed framework can be used primarily to monitor road traffic activities, for example identifying vehicles involved in traffic offences such as over-speeding or lane violations at traffic signals. In this manner, each vehicle can be tracked for traffic rule violations and the information passed to the concerned authority for further action. The proposed system first detects any vehicle that violates a traffic rule and then captures its image. From the captured image, the number plate region is extracted using image segmentation, and Optical Character Recognition (OCR) is used to read the characters on the plate. The system is implemented and simulated using MATLAB.

Z. Shao, et al. [3] have proposed a framework for recognizing car makes and models from a single image captured by a traffic camera. Due to the varying configurations of traffic cameras, a traffic image may be captured from different viewpoints and under different lighting conditions, and the image quality varies in resolution and color depth. In the framework, cars are first detected using a part-based detector, and license plates and headlamps are detected as cardinal anchor points to rectify projective distortion. Car features are extracted, normalized, and classified using an ensemble of neural-network classifiers. In the experiments, the performance of the proposed method is evaluated on a dataset of practical traffic images, and the results prove its effectiveness in vehicle detection and model recognition.

K. Sohn, et al. [4] note that existing approaches to counting vehicles from a road image have depended on hand-crafted feature engineering and rule-based algorithms, which require many predefined thresholds to detect and track vehicles. Their paper provides a supervised learning methodology that requires no such feature engineering. A deep convolutional neural network was devised to count the number of vehicles on a road segment from video images. The method does not treat each vehicle as an object to be detected individually; rather, it counts the number of vehicles holistically, much as a human would. The experimental results show that the proposed approach outperforms existing schemes. In general, it is hard to explain how a CNN counts vehicles accurately; broadly, the filters are expected to extract image features, objects are recognized from those features, and the objects are then counted via the final fully connected layer.
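The holistic counting idea of [4] can be pictured as a regression network. The Keras sketch below is an illustration under stated assumptions (layer sizes, input resolution, and training-data names are not from the paper): a CNN maps a road-segment image directly to a vehicle count and is trained with mean squared error against human-annotated counts.

```python
# Minimal sketch of count regression; the architecture is an assumption.
import tensorflow as tf

def build_count_regressor(input_shape=(224, 224, 3)):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted vehicle count as a real number
    ])
    # MSE against annotated counts; no per-vehicle bounding boxes are needed.
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Hypothetical usage with arrays of images and their annotated counts:
# model = build_count_regressor()
# model.fit(train_images, train_counts, epochs=10, validation_split=0.1)
```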
Deepak, et al. [5] observe that deep learning has recently shown remarkable promise in solving many computer vision tasks such as object recognition, detection, and tracking. Nonetheless, training deep learning architectures requires huge labeled datasets, which are time-consuming and expensive to acquire. In their paper, they sidestep this issue through data augmentation: by appropriately augmenting an existing large general (non-traffic) dataset with a small, low-resolution heterogeneous traffic dataset that they collected, they obtain state-of-the-art vehicle detection performance. To the best of their knowledge, the collected dataset, named IITM-HeTra, is the first publicly available labeled dataset of heterogeneous traffic.
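A related and commonly used way to stretch a small labeled traffic set is ordinary image-level augmentation. The sketch below shows such a pipeline in Keras; it does not reproduce the dataset-mixing procedure of [5], and the specific transforms and parameters are illustrative assumptions.

```python
# Minimal sketch: random image-level augmentation applied on every epoch.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

def make_augmented_dataset(images, labels, batch_size=32):
    """Build a tf.data pipeline that applies fresh random augmentation each epoch."""
    ds = tf.data.Dataset.from_tensor_slices((images, labels))
    ds = ds.shuffle(len(images)).batch(batch_size)
    ds = ds.map(lambda x, y: (augment(x, training=True), y),
                num_parallel_calls=tf.data.AUTOTUNE)
    return ds.prefetch(tf.data.AUTOTUNE)
```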
Gu, et al. [6] have proposed a method for real-time vehicle detection and tracking using deep neural networks, and a complete network architecture is presented. Using this model, vehicle candidates, vehicle probabilities, and their coordinates can be obtained in real time. The proposed model is trained on the PASCAL VOC 2007 and 2012 image sets and tested on the ImageNet dataset. Through careful design, the detection speed of the model is fast enough to process streaming video. Experimental results show that the proposed model is a real-time, accurate vehicle detector, making it well suited to computer vision applications. The network includes 9 convolutional layers, 4 inception modules, one SPP layer, and 2 fully connected layers. A limitation of this architecture is that the system struggles with small and nearby objects in groups.

P. Bajaj, et al. [7] have proposed that vehicle detection and vehicle classification using a neural network (NN) can be achieved by video monitoring systems. Most vehicle detection methods in the literature emphasize only the detection of vehicles in the frames of a given video. However, further analysis is needed to obtain information useful for traffic management, such as real-time traffic density and the number of vehicles of each type passing along a road. Their paper presents an application of neural networks to vehicle detection and classification. In scenes where the traffic density is very high and many vehicles occlude one another, the algorithm can detect multiple vehicles as a single vehicle, affecting the count and causing misclassification. This paper is useful for our system for detecting individual vehicles.

R. Girshick, et al. [8] have proposed a method, called Mask R-CNN, that extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding-box recognition. In principle, Mask R-CNN is an intuitive extension of Faster R-CNN, and it achieves good results even under challenging conditions. Although Mask R-CNN is fast, the proposed configuration is not optimized for speed, and better speed/accuracy trade-offs could still be achieved. Segmentation is a pixel-to-pixel task, and the spatial layout of the masks is exploited by using a fully convolutional network (FCN). The approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance.
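For a sense of what the mask branch adds in practice, the following sketch runs torchvision's publicly available pretrained Mask R-CNN (not the code of [8]) on a single image and returns a pixel-level mask for each confident detection; the threshold and binarization cut-off are assumptions.

```python
# Minimal sketch (assumption: torchvision with COCO-pretrained Mask R-CNN weights);
# newer torchvision versions use the weights= argument instead of pretrained=True.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

def segment_instances(image_path, score_threshold=0.5):
    """Return boxes, labels and binary masks for confident detections."""
    image = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([image])[0]            # 'boxes', 'labels', 'scores', 'masks'
    keep = out["scores"] >= score_threshold
    masks = out["masks"][keep] > 0.5       # (N, 1, H, W) soft masks -> binary
    return out["boxes"][keep], out["labels"][keep], masks
```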
Bishop, et al. [9] have proposed a system that adopts two different algorithms built upon region-based object detection, which achieve state-of-the-art performance on object detection tasks on common datasets such as PASCAL VOC and ImageNet. They applied the two region-based models to the Nvidia AI City dataset, evaluated their performance under different training settings, and achieved state-of-the-art performance on that dataset. The frameworks used in the proposed system, while achieving competitive results, still have much room for improvement.

Goodfellow, et al. [10] have proposed an efficient license plate recognition system that first detects vehicles and then retrieves license plates from the detected vehicles, reducing false positives in plate detection. Convolutional neural networks are then applied to improve the character recognition of blurred and obscured images. The results show superior accuracy and performance compared with traditional license plate recognition systems. The proposed LPRCNN model is composed of two convolutional layers, two max-pooling layers, two fully connected layers, and one output layer.
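The LPRCNN layer counts reported in [10] can be written down as a small per-character classifier. The Keras sketch below follows those counts (two convolutional, two max-pooling, two fully connected layers, and an output layer), but the filter sizes, layer widths, input resolution, and 36-way character alphabet are assumptions rather than values from the paper.

```python
# Minimal sketch of an LPRCNN-style character classifier; sizes are assumptions.
import tensorflow as tf

NUM_CHARACTER_CLASSES = 36  # assumption: digits 0-9 plus letters A-Z

def build_lpr_character_cnn(input_shape=(32, 32, 1)):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CHARACTER_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```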
Y. Lin, et al. [11] have proposed a system that uses a convolutional neural network on Keras with TensorFlow support; the experimental results show the time required to train, test, and create the model on a limited computing system. The system is trained on 60,000 images of 32x32 pixels from the CIFAR-10 database for 25 epochs, with each epoch taking 722 to 760 seconds on a CPU-only TensorFlow setup. At the end of 25 epochs the training accuracy is 96%, and the system can recognize input images using the trained model, outputting the corresponding label of each image. Python and TensorFlow were used for the implementation.

Akhil, et al. [12] have presented a reformative CNN structure that enhances recognition accuracy while greatly reducing the computational cost incurred when running a CNN. By introducing transfer learning into the system, the accuracy rate is significantly improved. Extensive experiments have been performed, yielding promising results. In addition, a reformative fine-tuned CNN structure is presented to enhance recognition accuracy. The CNN was previously trained on the CIFAR-10 dataset, which has 50,000 training images, and the pre-trained CNN is then tuned for vehicles using only 40 on-road vehicle images.
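Tying [11] and [12] together, the sketch below first trains a small CNN on CIFAR-10 with Keras and then freezes its convolutional base and fine-tunes a new head on a hypothetical small set of labeled on-road vehicle images; the architecture and hyperparameters are illustrative assumptions, not those of the cited papers.

```python
# Minimal sketch: CIFAR-10 pre-training followed by transfer learning; all
# architectural choices and the vehicle data are assumptions for illustration.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

base = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
], name="conv_base")

pretrain = tf.keras.Sequential([base, tf.keras.layers.Dense(10, activation="softmax")])
pretrain.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
# 25 epochs mirrors [11]; this is slow on CPU.
pretrain.fit(x_train, y_train, epochs=25, validation_data=(x_test, y_test))

# Transfer learning: freeze the convolutional base and train only a new head on
# a small, hypothetical set of labelled vehicle images.
base.trainable = False
vehicle_model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. vehicle / non-vehicle
])
vehicle_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
# vehicle_model.fit(vehicle_images, vehicle_labels, epochs=10)
```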
3. ANALYSIS TABLE

| Sr. No. | Title of Paper | Technique | Dataset Used | Accuracy / Efficiency |
|---|---|---|---|---|
| 1 | Modeling, Design and Analysis of Intelligent Traffic Control System Based on Integrated Statistical Image Processing Techniques [1] | Probability Based Vehicle Detection (PBVD) algorithm | A. Vehicle Detection System (VDS); B. Vehicle Counting and Classification System (VCCS) | The final results are satisfactory and show that the system can cope with a noisy environment. |
| 2 | Dynamic Traffic Rule Violation Monitoring System Using Automatic Number Plate Recognition with SMS Feedback [2] | Automatic Number Plate Recognition (ANPR) system | - | 95% success rate in number plate detection. |
| 3 | Recognition of Car Makes and Models From a Single Traffic-Camera Image [3] | Naive Bayes, SVM classifier, cascade classifier | Practical traffic images | Recognition precision of car models above 60% accuracy. |
| 4 | Image-Based Learning to Measure Traffic Density Using a Deep Convolutional Neural Network [4] | Convolutional Neural Networks (CNN) | Snapshots from video streaming | Acceptable accuracy. |
| 5 | Training a Deep Learning Architecture for Vehicle Detection Using Limited Heterogeneous Traffic Data [5] | Hybrid computational intelligence techniques and fuzzy neural networks applied to control the traffic signals | IITM-HeTra | High degree of accuracy. |
| 6 | Real-time Vehicle Detection and Tracking Using Deep Neural Networks [6] | Convolutional Neural Networks (CNN) | ImageNet dataset | 80.5% in detecting vehicles. |
| 7 | Vehicle Detection and Neural Network Application for Vehicle Classification [7] | Fuzzy neural networks | - | 90% |
| 8 | Vehicle Classification Using Neural Networks [8] | Radial Basis Function Networks (RBFN) | ImageNet dataset | 71% in detecting vehicles. |
| 9 | Mask R-CNN [9] | - | - | Pixel-to-pixel alignment. |
| 10 | Effective Object Detection From Traffic Camera Videos [10] | Faster R-CNN and Region-based Fully Convolutional Network | ImageNet | 10-40% higher mean average precision (mAP) compared with other solutions. |
| 11 | An Efficient License Plate Recognition System Using Convolution Neural Networks [11] | LPR convolution neural networks | - | 99.2% character recognition accuracy. |
| 12 | Moving Vehicle Detection Using Deep Neural Network [12] | Recurrent Convolutional Neural Networks (RCNN) | CIFAR-10 | 100% with respect to detection accuracy. |

4. CONCLUSION
This paper has mainly discussed deep learning techniques such as R-CNN, RNN, and CNN, which give different results on different datasets with varying accuracy. A large number of datasets were used across the surveyed work. The study of the various papers indicates that deep learning tends to provide better results than other techniques for image-based tasks such as vehicle detection, counting, and license plate recognition. Where classical machine learning approaches were previously used, deep learning models have produced better accuracy.
REFERENCES
[1] S. Lai, L. Xu, K. Liu and J. Zhao, "Recurrent Convolutional Neural Networks for Text Classification", Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[2] P. Ongsulee, "Artificial Intelligence, Machine Learning and Deep Learning", 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE).
[3] W. Yin, K. Kann, M. Yu and H. Schütze, "Comparative Study of CNN and RNN for Natural Language Processing", Feb. 2017.
[4] Z. Shi, M. Shi and C. Li, "The Prediction of Character Based on Recurrent Neural Network Language Model", 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS).
[5] V. Tran, K. Nguyen and D. Bui, "A Vietnamese Language Model Based on Recurrent Neural Network", 2016 Eighth International Conference on Knowledge and Systems Engineering.
[6] K. C. Arnold, K. Z. Gajos and A. T. Kalai, "On Suggesting Phrases vs. Predicting Words for Mobile Text Composition", https://www.microsoft.com/enus/research/wpcontent/uploads/2016/12/arnold16suggesting.pdf.
[7] J. Lee and F. Dernoncourt, "Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks", Conference paper at NAACL 2016.
[8] M. Liang and X. Hu, "Recurrent Convolutional Neural Network for Object Recognition", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] A. Hassan and A. Mahmood, "Deep Learning for Sentence Classification", 2017 IEEE Long Island Systems, Applications and Technology Conference (LISAT).
[10] J. Shin, Y. Kim and S. Yoon, "Contextual CNN: A Novel Architecture Capturing Unified Meaning for Sentence Classification", 2018 IEEE International Conference on Big Data and Smart Computing (BigComp).
[11] W. Yin and H. Schütze, "Multichannel Variable-Size Convolution for Sentence Classification", 19th Conference on Computational Natural Language Learning, Association for Computational Linguistics, 2015.
[12] I. Sutskever, O. Vinyals and Q. V. Le, "Sequence to Sequence Learning with Neural Networks", Dec. 2014.
[13] Y. Zhang and B. Wallace, "A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification", arXiv:1510.03820v4 [cs.CL], 2016.
[14] A. Salem, A. Almarimi and G. Andrejková, "Text Dissimilarities Predictions Using Convolutional Neural Networks and Clustering", World Symposium on Digital Intelligence for Systems and Machines (DISA), 2018.
[15] Y. Lin and J. Wang, "Research on Text Classification Based on SVM-KNN", IEEE 5th International Conference on Software Engineering and Service Science, 2014.
[16] A. Hassan and A. Mahmood, "Convolutional Recurrent Deep Learning Model for Sentence Classification", IEEE Access, 2018.
