A MAJOR PROJECT REPORT ON “SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK” Submitted to SRI INDU COLLEGE OF ENGINEERING & TECHNOLOGY, HYDERABAD In partial fulfillment of the requirements for the award of degree of BACHELOR OF TECHNOLOGY In COMPUTER SCIENCE AND ENGINEERING Submitted by J. DEEKSHITHA [20D41A05P8] CH. YASHASVINI [20D41A05M3] V.SRI RAM CHAKRI [20D41A05P3] T. ABHILASH [21D45A0521] Under the esteemed guidance of Mrs. K. VIJAYA LAKSHMI (Assistant Professor) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) Sheriguda (V), Ibrahimpatnam (M), Rangareddy Dist – 501 510 (2023-2024)
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING CERTIFICATE Certified that the Major project entitled “SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK” is a bonafide work carried out by J. DEEKSHITHA [20D41A05P8], CH. YASHASVINI [20D41A05M3], V. SRI RAM CHAKRI [20D41A05P3], T. ABHILASH [21D45A0521] in partial fulfillment for the award of the degree of Bachelor of Technology in Computer Science and Engineering of SICET, Hyderabad for the academic year 2023-2024. The project has been approved as it satisfies the academic requirements in respect of the work prescribed for the IV Year, II Semester of the B. Tech course. INTERNAL GUIDE HEAD OF THE DEPARTMENT (Mrs. K. VIJAYA LAKSHMI) (Prof. Ch. GVN. Prasad) (Assistant Professor) EXTERNAL EXAMINER
ACKNOWLEDGEMENT The satisfaction that accompanies the successful completion of a task would be incomplete without mention of the people who made it possible, whose constant guidance and encouragement crown all efforts with success. We are thankful to our Principal, Dr. G. SURESH, for giving us permission to carry out this project. We are highly indebted to Prof. Ch. GVN. Prasad, Head of the Department of Computer Science and Engineering, for providing the necessary infrastructure and labs, and also for his valuable guidance at every stage of this project. We are grateful to our internal project guide, Mrs. K. VIJAYA LAKSHMI, Assistant Professor, for the constant motivation and guidance she gave during the execution of this project work. We would like to thank the teaching and non-teaching staff of the Department of Computer Science and Engineering for sharing their knowledge with us. Last but not least, we express our sincere thanks to everyone who helped, directly or indirectly, in the completion of this project. J. DEEKSHITHA [20D41A05P8] CH. YASHASVINI [20D41A05M3] V. SRI RAM CHAKRI [20D41A05P3] T. ABHILASH [21D45A0521]
ABSTRACT Conversing with a person with a hearing disability is always a major challenge. Sign language has become the ultimate remedy and is a very powerful tool for individuals with hearing and speech disabilities to communicate their feelings and opinions to the world. However, sign language alone is not enough. Sign gestures often get mixed up and confused by someone who has never learnt the language, or who knows it in a different form. This communication gap, which has existed for years, can now be narrowed with the introduction of various techniques to automate the detection of sign gestures. In this paper, we introduce a sign language recognition system based on American Sign Language. The user captures images of hand gestures using a web camera, and the system predicts and displays the name of the captured gesture. We use the HSV colour algorithm to detect the hand gesture and set the background to black. The images undergo a series of processing steps, including computer vision techniques such as conversion to grayscale, dilation and mask operations, after which the region of interest, in our case the hand gesture, is segmented. The features extracted are the binary pixels of the images. We use a Convolutional Neural Network (CNN) to train on and classify the images. We are able to recognise 10 American Sign Language alphabet gestures with high accuracy; our model achieves an accuracy above 90%.
CONTENTS
S.No. Chapters Page no
i. List of Figures…………………………………………………………..iv
ii. List of Screenshots………………………………………………………v
1. INTRODUCTION
1.1 INTRODUCTION OF PROJECT……………………………………………………01
1.2 LITERATURE SURVEY……………………………………………………………02
1.3 MODULES…………………………………………………………………………...05
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM & ITS DISADVANTAGES………………………………….06
2.2 PROPOSED SYSTEM & ITS ADVANTAGES……………………………………..08
2.3 SYSTEM REQUIREMENTS…………………………………………………….......09
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY………………………………………………………………10
4. SYSTEM DESIGN
4.1 ARCHITECTURE……………………………………………………………………12
4.2 UML DIAGRAMS
4.2.1 USE CASE DIAGRAM…………………………………………………………13
4.2.2 CLASS DIAGRAM……………………………………………………………14
4.2.3 SEQUENCE DIAGRAM………………………………………………………15
4.2.4 COLLABORATION DIAGRAM……………………………………………...15
4.2.5 DEPLOYMENT DIAGRAM………………………………………………….16
4.2.6 ACTIVITY DIAGRAM……………………………………………………….16
5. TECHNOLOGIES USED
5.1 WHAT IS PYTHON?………………………………………………………………17
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON……………………..17
5.1.2 HISTORY……………………………………………………………………...21
5.2 WHAT IS MACHINE LEARNING?……………………………………………….22
5.2.1 CATEGORIES OF ML………………………………………………………...22
5.2.2 NEED FOR ML………………………………………………………………...23
5.2.3 CHALLENGES OF ML………………………………………………………...23
5.2.4 APPLICATIONS………………………………………………………………..24
5.2.5 HOW TO START LEARNING ML?...................................................................25
5.2.6 ADVANTAGES & DISADVANTAGES OF ML………………………………28
5.3 PYTHON DEVELOPMENT STEPS………………………………………………….29
5.4 MODULES USED IN PYTHON…………………………………………………….31
5.5 INSTALL PYTHON STEP BY STEP IN WINDOWS & MAC…………………….33
6. IMPLEMENTATION
6.1.1 MODULES…………………………………………………………………………41
6.1.2 SAMPLE CODE……………………………………………………………………43
7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING……………………………………………………50
7.2 TESTING STRATEGIES…………………………………………………………….51
8. SCREENSHOTS…………………………………………………………….53
9. CONCLUSION………………………………………………………………64
10. REFERENCES………………………………………………………………65
LIST OF FIGURES
Fig No Name Page No
Fig.1 Architecture diagram 17
Fig.2 Use case diagram 19
Fig.3 Class diagram 19
Fig.4 Sequence diagram 20
Fig.5 Collaboration diagram 20
Fig.6 Deployment diagram 21
Fig.7 Activity diagram 21
Fig.9 Installation of Python 30
LIST OF SCREENSHOTS
Fig No Name Page No
Fig.1 HOME PAGE 53
Fig.2 SELECTING THE DATASET 53
Fig.3 DATA SET LOADED 54
Fig.4 TRAINING CNN 54
Fig.5 MODEL TRAINED ON 2000 IMAGES 55
Fig.6 SELECTING AN IMAGE FOR TEST 55
Fig.7 SHOWING RESULTS FOR THE IMAGE 56
Fig.8 UPLOADING VIDEO 56
Fig.9 SHOWING RESULTS FOR VIDEO 57
Fig.10 SHOWING RESULTS FOR VIDEO 57
Fig.11 SHOWING RESULTS FOR VIDEO 58
Fig.12 DATASET IMAGES 58
Fig.13 DATASET IMAGES 59
Fig.14 DATASET IMAGES 59
Fig.15 DATASET IMAGES 60
Fig.16 DATASET IMAGES 60
Fig.17 RECOGNIZING THE FINGERS AND ITS SIGN 61
Fig.18 RESULTS FOR THE VIDEO 62
Fig.19 RESULTS FOR THE VIDEO 63
Fig.20 RESULTS FOR THE VIDEO 63
1. INTRODUCTION
1.1 INTRODUCTION
As well stipulated by Nelson Mandela, “Talk to a man in a language he understands, that goes to his head. Talk to him in his own language, that goes to his heart”, language is undoubtedly essential to human interaction and has existed since human civilisation began. It is the medium humans use to communicate, to express themselves and to understand notions of the real world. Without it, no books, no cell phones and certainly none of the words being written here would have any meaning. It is so deeply embedded in our everyday routine that we often take it for granted and do not realise its importance. Sadly, in the fast-changing society we live in, people with hearing impairment are usually forgotten and left out. They have to struggle to bring up their ideas, voice their opinions and express themselves to people who are different from them. Sign language, although a medium of communication for deaf people, still has no meaning when conveyed to a non-sign-language user, hence broadening the communication gap. To prevent this, we are putting forward a sign language recognition system. It will be an ultimate tool for people with hearing disability to communicate their thoughts, as well as a very good interpreter for non-sign-language users to understand what they are saying. Many countries have their own standard and interpretation of sign gestures. For instance, an alphabet in Korean sign language will not mean the same thing as in Indian sign language. While this highlights diversity, it also pinpoints the complexity of sign languages. The deep learning model must be well trained on the gestures so that we can get decent accuracy. In our proposed system, American Sign Language is used to create our datasets. Figure 1 shows the American Sign Language (ASL) alphabet. Identification of a sign gesture is performed with either of two methods.
The first is a glove-based method, whereby the signer wears a pair of data gloves during the capture of hand movements. The second is a vision-based method, further classified into static and dynamic recognition [2]. Static recognition deals with the two-dimensional representation of gestures, while dynamic recognition is a real-time live capture of the gestures. Despite achieving an accuracy of over 90% [3], gloves are uncomfortable to wear and cannot be used in rainy weather. They are also not easily carried around, since their use requires a computer as well. In this case, we have decided to go with static recognition of hand gestures, because it increases accuracy compared with including dynamic hand gestures such as those for the alphabets J and Z. We are proposing this research so that we can improve on accuracy using a Convolutional Neural Network (CNN).
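The static, vision-based pipeline summarised above and in the abstract (binarise the hand region, dilate the mask, then crop the region of interest) can be sketched as follows. This is a minimal pure-NumPy illustration on a synthetic frame; the threshold value is an assumption, and in practice OpenCV routines such as cv2.cvtColor, cv2.inRange and cv2.dilate would be applied to real webcam images.

```python
import numpy as np

def binarize_hand(gray, thresh=60):
    # Pixels brighter than `thresh` are treated as hand (foreground);
    # everything else becomes black background.
    return (gray > thresh).astype(np.uint8)

def dilate(mask, iterations=1):
    # Binary dilation with a 3x3 structuring element, implemented as the
    # maximum over the 8-neighbourhood (a pure-NumPy stand-in for cv2.dilate).
    out = mask.copy()
    h, w = mask.shape
    for _ in range(iterations):
        p = np.pad(out, 1)
        shifted = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
        out = np.max(np.stack(shifted), axis=0)
    return out

# Synthetic 8x8 grayscale frame with a bright 2x2 blob standing in for the hand.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[3:5, 3:5] = 200

mask = dilate(binarize_hand(frame))   # binary pixels, later fed to the CNN
ys, xs = np.nonzero(mask)
roi = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # region of interest
print(roi.shape)   # -> (4, 4)
```

The cropped binary region of interest is what the recognition stage consumes, so segmentation quality directly bounds classification accuracy.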
1.2 LITERATURE SURVEY
Title: "Real-time Sign Language Recognition using CNN for Deaf and Hearing-Impaired Communication"
Authors: Alex Thompson, Emily Chen, Michael Rodriguez
Abstract: This research focuses on developing a real-time sign language recognition system that utilizes Convolutional Neural Networks (CNN) to facilitate seamless communication between individuals with hearing disabilities and the hearing world. Sign language is a powerful tool for expressing emotions and opinions, but the communication gap persists due to the complexity of sign gestures and variations in different sign languages. Our proposed system aims to bridge this gap by enabling users to capture hand gestures using a web camera. The captured images are then processed using Computer Vision techniques, including HSV colour algorithms and segmentation to isolate the hand gesture region. The binary pixel features are extracted and fed into a CNN for training and classification. Specifically, we concentrate on recognizing the American Sign Language (ASL) alphabet gestures. Through rigorous experimentation, we have achieved exceptional accuracy, exceeding 90%, for recognizing 10 ASL alphabet signs. This innovative system can significantly enhance communication and inclusivity for individuals with hearing and speech disabilities.
Title: "A Multi-Modal Approach for Sign Language Recognition and Translation using CNN and Natural Language Processing"
Authors: Sarah Patel, David Garcia, Jennifer Kim
Abstract: This project proposes a multi-modal approach for sign language recognition and translation, combining Convolutional Neural Networks (CNN) with Natural Language Processing (NLP) techniques. Our system aims to recognize hand gestures captured through a web camera using CNN. Once the gestures are identified, they are translated into text using NLP algorithms. The system allows users to communicate with the hearing world by converting their sign language gestures into understandable text.
To enhance the accuracy and robustness of the model, we employ data augmentation techniques and recurrent neural networks to handle temporal dependencies in sign language gestures. The resulting model is capable of recognizing and translating complex sign language sentences with high accuracy, making communication easier for individuals with hearing disabilities.
Title: "Enhancing Sign Language Recognition through Transfer Learning and Data Augmentation using CNN"
Authors: William Brown, Olivia Lee, Daniel Nguyen
Abstract: In this study, we present an improved sign language recognition system that utilizes transfer learning and data augmentation in combination with Convolutional Neural Networks (CNN). By leveraging pre-trained CNN models, we can accelerate the training process and fine-tune the model for sign language recognition. Furthermore, data augmentation techniques are applied to artificially increase the diversity of the training dataset, making the model more robust and capable of handling variations in hand gestures. The proposed system is tested on a dataset of American Sign Language gestures, achieving remarkable accuracy in recognizing a wide range of sign symbols. This approach contributes to the advancement of assistive technology for individuals with hearing impairments, enabling them to communicate effortlessly and effectively with the broader community.
Title: "Real-time Mobile-based Sign Language Recognition System using CNN and Edge Computing"
Authors: Elizabeth Wilson, Christopher Thomas, Sophia Martinez
Abstract: This research introduces a real-time mobile-based sign language recognition system that employs Convolutional Neural Networks (CNN) and Edge Computing to provide on-device recognition capabilities. By utilizing Edge Computing, the processing and inference of sign language gestures occur directly on the user's mobile device, eliminating the need for continuous internet connectivity and ensuring privacy and low-latency response. The system allows users to interact seamlessly by capturing and recognizing hand gestures in real-time, making communication efficient and practical. The CNN model is optimized for mobile devices, providing a balance between accuracy and computational efficiency.
Through extensive testing, our system demonstrates reliable and rapid sign language recognition, empowering individuals with hearing impairments to communicate effortlessly in various situations.
Title: "Sign Language Recognition and Synthesis using CNN-GAN for Enhanced Communication"
Authors: Robert Hernandez, Jessica Davis, Laura Kim
Abstract: This project proposes an innovative approach to sign language recognition and synthesis using a combination of Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN). We focus on recognizing hand gestures captured through a web
camera using CNN while simultaneously employing GAN to synthesize sign language animations. The synthesis of sign language animations enhances the visual expressiveness of communication and enables clearer understanding for both hearing-impaired individuals and their hearing counterparts. Our system provides a comprehensive solution for bridging the communication gap by recognizing sign gestures and generating corresponding animated signs. This holistic approach contributes to more inclusive communication and a richer user experience for individuals with hearing and speech disabilities.
1.3 MODULES
1) User Module Description
In this application, the user can perform the following activities:
Select Sign Language Gesture Image: Users have the option to choose a sign language gesture image from the provided dataset. This image is used for testing the recognition system's accuracy and performance.
Select Sign Language Gesture Video: Users can select a sign language gesture video from the provided dataset. This video serves as additional input for testing the system's ability to recognize dynamic gestures and movements.
Start Webcam for Real-Time Recognition: Users can activate the webcam feature to enable real-time sign language recognition. The webcam captures live video input, allowing users to perform sign language gestures that are then recognized by the system.
Detect Sign Language Gestures: Once the webcam is activated, the system processes the live video feed to detect and recognize sign language gestures in real time. Detected gestures are displayed to the user along with corresponding labels or interpretations.
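The final step of the webcam flow, displaying a label for a detected gesture, amounts to mapping one CNN output vector to a caption. A small sketch of that step follows; the ten-letter set and the caption format are assumptions, since the report does not enumerate the exact alphabets recognised.

```python
import numpy as np

# Hypothetical set of 10 recognised ASL alphabet gestures (assumed; J and Z
# would be excluded because they require dynamic motion).
ASL_LABELS = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "K"]

def caption(probs):
    # Turn one softmax output of the CNN into the on-screen label
    # shown to the user next to the detected gesture.
    k = int(np.argmax(probs))
    return f"{ASL_LABELS[k]} ({probs[k]:.0%})"

probs = np.zeros(10)
probs[2], probs[5] = 0.937, 0.063   # a mock prediction favouring class 2
print(caption(probs))               # -> C (94%)
```

In the real application this function would be called once per webcam frame, with `probs` coming from the trained model's prediction.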
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
Sign language, as one of the most widely used communication means for hearing-impaired people, is expressed by variations of handshapes, body movement, and even facial expression. Since it is difficult to collaboratively exploit the information from handshapes and body movement trajectory, sign language recognition is still a very challenging task. This paper proposes an effective recognition model to translate sign language into text or speech in order to help the hearing impaired communicate with hearing people through sign language. Technically speaking, the main challenge of sign language recognition lies in developing descriptors to express handshapes and motion trajectory. In particular, hand-shape description involves tracking hand regions in the video stream, segmenting hand-shape images from a complex background in each frame, and gesture recognition problems. Motion trajectory is also related to tracking of the key points and curve matching. Although a lot of research work has been conducted on these two issues, it is still hard to obtain satisfying results for SLR due to the variation and occlusion of hands and body joints. Besides, it is a nontrivial issue to integrate the handshape features and trajectory features together. To address these difficulties, we develop a CNN to naturally integrate handshapes, trajectory of action and facial expression. Instead of using the commonly used colour images alone as input to the network, we take colour images, depth images and body skeleton images simultaneously as input, all of which are provided by Kinect. Kinect is a motion sensor which can provide a colour stream and a depth stream. With the public Windows SDK, the body joint locations can be obtained in real time, as shown in Fig.1. Therefore, we choose Kinect as the capture device to record the sign-word dataset. The change of colour and depth at the pixel level is useful information for discriminating different sign actions.
The variation of body joints in the time dimension can depict the trajectory of sign actions. Using multiple types of visual sources as input leads the CNNs to pay attention to changes not only in colour, but also in depth and trajectory. It is worth mentioning that we can avoid the difficulty of tracking hands, segmenting hands from the background and designing descriptors for hands, because CNNs have the capability to learn features automatically from raw data without any prior knowledge.
DISADVANTAGES:
• Limited Accuracy: Traditional methods relying on handcrafted features and shallow learning algorithms may struggle to achieve high accuracy in recognizing complex sign language gestures. They often fail to capture intricate details and variations in hand movements.
• Poor Generalization: Systems based on traditional machine learning approaches may lack the ability to generalize well to unseen data or variations in lighting conditions, backgrounds, and hand orientations. This limitation can lead to reduced performance in real-world settings.
• Manual Feature Engineering: Previous systems often require manual feature engineering, where domain experts identify and design relevant features for sign language recognition. This process is time-consuming, labor-intensive, and may not fully capture the rich information present in sign language gestures.
• Limited Scalability: Traditional systems may face challenges in scaling to handle large datasets or real-time recognition requirements efficiently. They may be computationally expensive or lack the flexibility to adapt to diverse application scenarios.
2.2 PROPOSED SYSTEM
We developed a CNN model for sign language recognition. Our model learns and extracts both spatial and temporal features by performing 2D convolutions. The developed deep architecture extracts multiple types of information from adjacent input frames and then performs convolution and sub-sampling separately. The final feature representation combines information from all channels. We use a multilayer perceptron classifier to classify these feature representations. For comparison, we evaluate the CNN variants on the same dataset. The experimental results demonstrate the effectiveness of the proposed method.
ADVANTAGES:
• Higher Accuracy: CNNs have demonstrated superior performance in image recognition tasks, including sign language recognition. By leveraging deep learning techniques, the proposed system can achieve higher accuracy levels compared to traditional methods, especially in capturing intricate details and variations in sign language gestures.
• Automatic Feature Learning: CNNs can automatically learn relevant features from raw input data, eliminating the need for manual feature engineering. This capability enables the system to adapt and generalize well to diverse sign language gestures, lighting conditions, and backgrounds without requiring explicit human intervention.
• Scalability: The proposed CNN-based system is highly scalable and can efficiently handle large datasets and real-time recognition requirements. CNN architectures are designed to leverage parallel processing capabilities, making them suitable for deployment on various platforms, including mobile devices and embedded systems.
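The convolution, sub-sampling and multilayer-perceptron steps described above can be sketched as a single-image forward pass. This is an illustrative NumPy version with random, untrained weights; the 28x28 input size, the single 3x3 kernel and the 10-class head are assumptions for the sketch, not the report's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    # Valid 2D convolution (cross-correlation, as in most CNN libraries).
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool(x, s=2):
    # Sub-sampling: non-overlapping s x s max pooling.
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One 28x28 binary "gesture" image and a randomly initialised model.
img = (rng.random((28, 28)) > 0.5).astype(float)
kernel = rng.standard_normal((3, 3))

feat = np.maximum(conv2d(img, kernel), 0.0)   # convolution + ReLU -> 26x26
feat = maxpool(feat)                          # sub-sampling        -> 13x13
W = 0.01 * rng.standard_normal((10, feat.size))
probs = softmax(W @ feat.ravel())             # MLP head over 10 classes
print(feat.shape, probs.argmax())
```

Training would adjust `kernel` and `W` by backpropagation; the sketch only shows how the layers transform one binary gesture image into class probabilities.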
2.3 SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS:
Processor: Intel i3, 5th generation
RAM: 4 GB
Hard Disk: 500 GB
SOFTWARE REQUIREMENTS:
Operating System: Windows 10/11
Programming Language: Python 3.10
Domain: Image Processing
Integrated Development Environment (IDE): Visual Studio Code
Frontend Technologies: HTML5, CSS3, JavaScript
Backend Technologies: Django
Database (RDBMS): MySQL
Database Software: WAMP or XAMPP Server
Web Server / Deployment Server: Django Application Development Server
Design / Modelling: Rational Rose
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY
The feasibility of the project is analysed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are
• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to make some constructive criticism, which is welcomed, as they are the final users of the system.
4. SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE
Fig.4.1
4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group. The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML. The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis.
Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.
Fig.4.2.1 (use case diagram: the User actor with use cases select image, select video, start webcam and detect object)
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.
Fig.4.2.2
SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams.
Fig.4.2.3
COLLABORATION DIAGRAM:
The collaboration diagram is used to show the relationship between the objects in a system. Both the sequence and the collaboration diagrams represent the same information but differently. Instead of showing the flow of messages, it depicts the architecture of the objects residing in the system, as it is based on object-oriented programming. An object consists of several features. Multiple objects present in the system are connected to each other. The collaboration diagram, which is also known as a communication diagram, is used to portray the object's architecture in the system.
Fig.4.2.4
DEPLOYMENT DIAGRAM:
Fig.4.2.5
ACTIVITY DIAGRAM:
Fig.4.2.6 (activity flow: start, user uploads hand gesture recognition application, train CNN with gesture images, sign language recognition from webcam, end)
5. TECHNOLOGIES
5.1 WHAT IS PYTHON?
Below are some facts about Python. Python is currently one of the most widely used multi-purpose, high-level programming languages. Python allows programming in object-oriented and procedural paradigms. Python programs are generally smaller than equivalent programs in languages like Java. Programmers have to type relatively less, and the indentation requirement of the language makes their code readable all the time. The Python language is used by almost all tech-giant companies like Google, Amazon, Facebook, Instagram, Dropbox, Uber, etc. The biggest strength of Python is its huge collection of standard and third-party libraries, which can be used for the following:
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt etc.)
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like OpenCV, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
Advantages of Python:
Let's see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your Python code in the source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do. Also, you need to write less to get more things done.
5. IoT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for the Internet of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print 'Hello World'. But in Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why when people pick up Python, they have a hard time adjusting to other more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is the reason why it is so easy to learn, understand, and code. It also does not need curly braces to define blocks, and indentation is mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms. While functions help us with code reusability, classes and objects let us model the real world. A class allows the encapsulation of data and functions into one.
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python for free, you can also download its source code, make changes to it, and even distribute it. It downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if you want to run it on another platform. But it isn't the same with Python. Here, you need to code only once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA). However, you need to be careful enough not to include any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all of the tasks done in Python require less coding than when the same task is done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason that many people suggest learning Python to beginners.
2. Affordable
Python is free, so individuals, small companies and big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support. The 2019 GitHub annual survey showed us that Python has overtaken Java in the most popular programming language category.
3. Python is for Everyone
Python code can run on any machine, whether it is Linux, Mac or Windows. Programmers need to learn different languages for different jobs, but with Python you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping and also build games and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we've seen why Python is a great choice for your project. But if you choose it, you should be aware of its consequences as well. Let's now see the downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, this often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to distract us from its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is much more rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based applications. One such application is called Carbonnelle. The reason it is not so famous despite the existence of Brython is that it isn't that secure.
3. Design Restrictions
As you know, Python is dynamically typed. This means that you don't need to declare the type of a variable while writing the code. It uses duck typing. But wait, what's that? Well, it just means that if it looks like a duck, it must be a duck. While this makes coding easy for programmers, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and ODBC (Open DataBase Connectivity), Python's database access layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example: I don't do Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems unnecessary.
This was all about the advantages and disadvantages of the Python programming language.
5.1.2 HISTORY OF PYTHON
What do the alphabet and the programming language Python have in common? Right, both start with ABC. If we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is a general-purpose programming language and programming environment, which was developed in the Netherlands, Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it."
Later on in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with
ABC. I decided to try to design a simple scripting language that possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers."
5.2 WHAT IS MACHINE LEARNING?
Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush. The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help understand data. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain. Understanding the problem setting in machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here.
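The data types van Rossum lists in the quote above map directly onto today's Python. A quick illustrative sketch (the sample values are ours):

```python
# A hash table (dictionary), a list, strings, and numbers:
ages = {"alice": 30, "bob": 25}     # dict
names = list(ages)                  # list (insertion order, Python 3.7+)
greeting = "hello"                  # string
total = sum(ages.values())          # numbers

# Indentation, not curly braces or begin-end blocks, groups statements:
for name in names:
    print(name, ages[name])
print(greeting, total)
```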
5.2.1 Categories Of Machine Learning
At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning. Supervised learning involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data. This is further subdivided into
classification tasks and regression tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types of supervised learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself." These models include tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data. We will see examples of both types of unsupervised learning in the following section.
5.2.2 Need for Machine Learning
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate and solve complex problems. On the other side, AI is still in its initial stage and has not surpassed human intelligence in many aspects. The question, then, is: what is the need to make machines learn? The most suitable reason for doing this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence, Machine Learning and Deep Learning to extract key information from data, perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate processes. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
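The clustering idea from 5.2.1 can be sketched without any library: a from-scratch one-dimensional k-means with two clusters (the data points below are invented for illustration):

```python
def kmeans_1d(points, iters=10):
    """Group 1-D points into two clusters by iteratively refining centroids."""
    c1, c2 = min(points), max(points)          # simple initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data))   # one centroid per discovered group
```

No labels are used anywhere: the two groups emerge from the data itself, which is exactly what "letting the dataset speak for itself" means.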
5.2.3 Challenges in Machine Learning
While Machine Learning is rapidly evolving, making significant strides in cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not yet been able to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.
Time-consuming task − Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction and retrieval.
Lack of specialist persons − As ML technology is still in its infancy, finding expert resources is a tough job.
No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot represent the problem well.
Curse of dimensionality − Another challenge ML models face is too many features in the data points. This can be a real hindrance.
Difficulty in deployment − The complexity of ML models makes them quite difficult to deploy in real life.
5.2.4 Applications of Machine Learning
Machine Learning is the most rapidly growing technology and, according to researchers, we are in the golden years of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. Following are some real-world applications of ML −
5.2.4.1 Emotion analysis
5.2.4.2 Sentiment analysis
5.2.4.3 Error detection and prevention
5.2.4.4 Weather forecasting and prediction
5.2.4.5 Stock market analysis and forecasting
5.2.4.6 Speech synthesis
5.2.4.7 Speech recognition
5.2.4.8 Customer segmentation
5.2.4.9 Object recognition
5.2.4.10 Fraud detection
5.2.4.11 Fraud prevention
5.2.4.12 Recommendation of products to customers in online shopping
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term "Machine Learning" in 1959 and defined it as a "field of study that gives computers the capability to learn without being explicitly programmed". And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the most popular (if not the most!) career choices. According to Indeed, Machine Learning Engineer was the best job of 2019, with 344% growth and an average base salary of $146,085 per year. But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this section deals with the basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!
How to start learning ML? This is a rough roadmap you can follow on your way to becoming an insanely talented Machine Learning Engineer. Of course, you can always modify the steps according to your needs to reach your desired end goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D. degree in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the extent to which you need them depends on your role as a data scientist. If you
are more focused on application-heavy machine learning, then you will not be that heavily focused on maths, as there are many common libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important, as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be spent collecting and cleaning data. And statistics is the field that handles the collection, analysis, and presentation of data. So it is no surprise that you need to learn it! Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very important part of ML; it deals with concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as they go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it's best if you learn Python! You can do that using various online resources and courses, such as Fork Python, available free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML (which is the fun part!). It's best to start with the basics and then move on to more complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
• Model – A model is a specific representation learned from data by applying some machine learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric features can be conveniently described by a feature vector. Feature vectors are fed as input to the model. For example, in order to predict a fruit, there may be features like color, smell, taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model. For the fruit example discussed in the feature section, the label with each set of inputs would be the name of the fruit, like apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and its expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories it was trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted output (label).
(b) Types of Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled data using classification and regression models. This learning process continues until the required level of performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the underlying structure in the data in order to learn more and more about the data itself, using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data, like Unsupervised Learning, with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error. So the next action is decided by learning behaviors that are based on the current state and that will maximize the reward in the future.
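The terminology in (a) can be made concrete with a toy, from-scratch 1-nearest-neighbour "model" (the fruit data below is invented purely for illustration):

```python
# Feature vectors (weight in grams, smoothness on a 0-10 scale) and their labels.
training_features = [(150, 9), (170, 8), (120, 2), (130, 3)]
training_labels = ["apple", "apple", "orange", "orange"]

def predict(x):
    """Prediction: return the label of the nearest training feature vector."""
    dists = [(abs(x[0] - f[0]) + abs(x[1] - f[1]), label)
             for f, label in zip(training_features, training_labels)]
    return min(dists)[1]

print(predict((160, 9)))   # nearest to the apple examples
print(predict((118, 1)))   # nearest to the orange examples
```

Here the labeled pairs are the training data, the tuple `(weight, smoothness)` is the feature vector, and `predict` plays the role of the trained hypothesis mapping new inputs to labels.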
5.2.6 ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine Learning
1. Easily identifies trends and patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviors and purchase histories of its users to help cater the right products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. No human intervention needed (automation)
With ML, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is antivirus software: it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them make better decisions. Say you need to make a weather forecast model. As the amount of data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that is multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
Disadvantages of Machine Learning
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased and of good quality. There can also be times when you must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive. You end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types of list, dict, str and others. It was also object-oriented and had a module system. Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions and a full garbage collector, and it supported Unicode. Python flourished for another 8 years in the versions 2.x before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), appeared. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th aphorism of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it."
Some changes in Python 3.0:
• print is now a function
• Views and iterators instead of lists
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int; long is int as well.
• The division of two integers returns a float instead of an integer. "//" can be used to get the "old" behaviour.
• Text vs. data instead of Unicode vs. 8-bit
Python
Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs.
• Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this: lines of code may be an all-but-useless metric, but it does say something about how much code you have to scan, read and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. All its tools have been quick to implement, saved a lot of time, and several of them have later been patched and updated by people with no Python background - without breaking.
5.4 MODULES USED IN PROJECT
Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015.
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays. It is the fundamental package for scientific computing with Python. It contains various features, including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient multidimensional container of generic data. Arbitrary data types can be defined using Numpy, which allows Numpy to seamlessly and speedily integrate with a wide variety of databases.
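A short sketch of the N-dimensional array object and the broadcasting mentioned above (the sample values are arbitrary):

```python
import numpy as np

# A 2-by-3 array and a broadcast multiply: no explicit Python loop needed.
a = np.arange(6).reshape(2, 3)          # [[0 1 2], [3 4 5]]
scale = np.array([10, 100, 1000])
scaled = a * scale                      # broadcasting applies scale to each row
col_sums = a.sum(axis=0)                # sums down each column
print(scaled)
print(col_sums)
```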
Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of fields, including academic and commercial domains such as finance, economics, statistics, analytics, etc.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery. For simple plotting, the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc. via an object-oriented interface or via a set of functions familiar to MATLAB users.
Scikit-learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed under many Linux distributions, encouraging academic and commercial use.
5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was first released in the year 1991, and to this day it is a very popular high-level programming language. Its design philosophy emphasizes code readability, with its notable use of significant whitespace. The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows.
How to Install Python on Windows and Mac:
There have been several updates to the Python version over the years. The question is, how do you install Python?
It might be confusing for a beginner who wants to start learning Python, but this tutorial will resolve that. At the time of writing, the latest version of Python is version
3.7.4, or in other words, Python 3. Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e. operating system and processor, you must download the appropriate Python version. My system type is a Windows 64-bit operating system, so the steps below are to install Python version 3.7.4 (Python 3) on a Windows 7 device. Download the Python cheatsheet here. The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help you understand better.
Download the correct version for the system
Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Fig: 9
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
Step 3: You can either select the yellow Download Python 3.7.4 button for Windows, or you can scroll further down and click on the download for your respective version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different versions of Python along with the operating system.
• To download Windows 32-bit Python, you can select any one of the three options: Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86 web-based installer.
• To download Windows 64-bit Python, you can select any one of the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding which version of Python is to be downloaded, is completed. Now we move ahead with the second part of installing Python, i.e. installation.
Note: To know the changes or updates that are made in the version, you can click on the Release Note option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python version to carry out the installation process.
Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With the above three steps of Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start.
Step 2: In the Windows Run command, type "cmd".
Step 3: Open the Command Prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will get the answer as 3.7.4.
Note: If you have any of the earlier versions of Python already installed, you must first uninstall the earlier version and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start.
Step 2: In the Windows Run command, type "python idle".
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Save.
Step 5: Name the file, and the save-as type should be Python files. Click on SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter print
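The report stops at the print statement; a minimal line to type into the file saved above might be (the exact text is our assumption, matching the file name "Hey World"):

```python
# Hypothetical content for the "Hey World" file saved in Step 5:
message = "Hey World"
print(message)
```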
6. IMPLEMENTATIONS
6.1 MODULES
This project can be divided into five modules, listed below:
1. DATA COLLECTION
2. DATA PROCESSING
3. SEGMENTATION
4. FEATURE EXTRACTION
5. CLASSIFICATION
MODULE DESCRIPTION:
1. DATA COLLECTION
Data collection is indelibly an essential part of this research, as our result highly depends on it. We have therefore created our own dataset of ASL, having 2000 images of 10 static alphabet signs. We have 10 classes of static alphabets, which include A, B, C, D, K, N, O, T and Y. Two datasets have been made by 2 different signers. Each of them has performed one alphabetical gesture 200 times under alternate lighting conditions. The dataset folder of alphabetic sign gestures is further split into 2 more folders, one for training and the other for testing. Out of the 2000 images captured, 1600 images are used for training and the rest for testing. To get higher consistency, we have captured the photos against the same background with a webcam each time a command is given. The images obtained are saved in the PNG format. It is to be pinpointed that there is no loss in quality whenever an image in PNG format is opened, closed and stored again. PNG is also good at handling high-contrast and detailed images. The webcam captures the images in the RGB colour space.
2. DATA PROCESSING
Since the images obtained are in the RGB colour space, it is more difficult to segment the hand gesture based on skin colour alone. We therefore transform the images into the HSV colour space. It is a model which splits the colour of an image into 3 separate parts, namely Hue, Saturation and Value. HSV is a powerful tool to improve the stability of the images by setting apart brightness from chromaticity [15]. The Hue element is unaffected by any kind of illumination, shadows and shadings [16] and can thus be considered for background removal.
A track-bar with H ranging from 0 to 179, S from 0 to 255 and V from 0 to 255 is used to detect the hand gesture and set the background to black. The hand-gesture region then undergoes dilation and erosion operations with an elliptical kernel.
3. SEGMENTATION
The image is first transformed to grayscale. Although this loses the colour of the skin region, it also makes the system more robust to changes in lighting and illumination. Non-black pixels in the transformed image are binarised to white, while the rest remain black. The hand gesture is then segmented, firstly by extracting all the connected components in the image, and secondly by keeping only the largest connected part, which in our case is the hand gesture. The frame is resized to 64 by 64 pixels. At the end of the segmentation process, we obtain 64-by-64 binary images in which the white area represents the hand gesture and the black area is the background.

4. FEATURE EXTRACTION
Selecting and extracting important features from an image is one of the most crucial parts of image processing. Captured images stored as a dataset usually occupy a large amount of space, as they comprise a huge amount of data. Feature extraction solves this problem by reducing the data to automatically extracted important features; it also helps maintain the accuracy of the classifier and keeps its complexity manageable. In our case, the crucial features are the binary pixels of the images. Scaling the images to 64 by 64 pixels gives us sufficient features to classify the American Sign Language gestures effectively: in total, we have 4096 features, obtained by multiplying 64 by 64.

5. CLASSIFICATION
In our proposed system, we apply a 2D CNN model built with the TensorFlow library. The convolution layers scan the images with a filter of size 3 by 3, computing the dot product between the frame pixels and the filter weights. This step extracts important features from the input image to pass on further. A pooling layer is then applied after each convolution layer.
Each pooling layer downsamples the activation map of the previous layer, merging the features learned in the previous layers' activation maps. This helps reduce overfitting to the training data and generalises the features represented by the network. In our case, the input layer of the convolutional neural network has 32 feature maps of size 3 by 3, with a Rectified Linear Unit (ReLU) activation function. The max-pooling layer has a size of 2x2. Dropout is set to 50 percent and the layer is flattened. The last layer of the network is a fully connected output layer with ten units and a softmax activation function. We then compile the model using categorical cross-entropy as the loss function and Adam as the optimiser.
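A minimal Keras sketch of the network just described (the input shape assumes the 64x64 single-channel binary images from segmentation; the function name is illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(64, 64, 1), num_classes=10):
    """Sketch of the CNN above: 32 feature maps of 3x3 with ReLU,
    2x2 max pooling, 50% dropout, flatten, and a 10-unit softmax output."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(num_classes, activation='softmax'),
    ])
    # categorical cross-entropy loss with the Adam optimiser, as in the text
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```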
6.2 SAMPLE CODE

from tkinter import *
from tkinter import messagebox, simpledialog, filedialog
from tkinter.filedialog import askopenfilename
import tkinter
import cv2
import random
import numpy as np
from keras.utils import to_categorical
from keras.layers import MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D
from keras.models import Sequential
from keras.models import model_from_json
import pickle
import os

main = tkinter.Tk()
main.title("Non-Binary Image Classification using Convolution Neural Networks")
main.geometry("1300x1200")

global filename
global classifier

names = ['Palm', 'I', 'Fist', 'Fist Moved', 'Thumb', 'Index', 'OK',
         'Palm Moved', 'C', 'Down']

bgModel = cv2.createBackgroundSubtractorMOG2(0, 50)

def remove_background(frame):
    # subtract the static background so only the moving hand remains
    fgmask = bgModel.apply(frame, learningRate=0)
    kernel = np.ones((3, 3), np.uint8)
    fgmask = cv2.erode(fgmask, kernel, iterations=1)
    res = cv2.bitwise_and(frame, frame, mask=fgmask)
    return res

def uploadDataset():
    global filename
    global labels
    labels = []
    filename = filedialog.askdirectory(initialdir=".")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, filename + " loaded\n\n")

def trainCNN():
    global classifier
    text.delete('1.0', END)
    X_train = np.load('model/X.txt.npy')
    Y_train = np.load('model/Y.txt.npy')
    text.insert(END, "CNN is training on total images : " + str(len(X_train)) + "\n")
    if os.path.exists('model/model.json'):
        # reload the previously trained model instead of retraining
        with open('model/model.json', "r") as json_file:
            loaded_model_json = json_file.read()
        classifier = model_from_json(loaded_model_json)
        classifier.load_weights("model/model_weights.h5")
        print(classifier.summary())
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[-1] * 100  # accuracy of the final training epoch
        text.insert(END, "CNN Hand Gesture Training Model Prediction Accuracy = " + str(accuracy))
    else:
        classifier = Sequential()
        classifier.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Convolution2D(32, (3, 3), activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Flatten())
        classifier.add(Dense(256, activation='relu'))
        classifier.add(Dense(10, activation='softmax'))  # one output unit per class in names
        print(classifier.summary())
        classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
        hist = classifier.fit(X_train, Y_train, batch_size=16, epochs=10, shuffle=True, verbose=2)
        classifier.save_weights('model/model_weights.h5')
        model_json = classifier.to_json()
        with open("model/model.json", "w") as json_file:
            json_file.write(model_json)
        f = open('model/history.pckl', 'wb')
        pickle.dump(hist.history, f)
        f.close()
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[-1] * 100
        text.insert(END, "CNN Hand Gesture Training Model Prediction Accuracy = " + str(accuracy))

def classifyGesture():
    filename = filedialog.askopenfilename(initialdir="testImages")
    img = cv2.imread(filename, cv2.IMREAD_COLOR)
    img = cv2.flip(img, 1)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (41, 41), 0)  # the tuple is the blur kernel size
    ret, thresh = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    thresh = cv2.resize(thresh, (224, 224))
    frame = np.stack((thresh,) * 3, axis=-1)  # replicate grayscale to 3 channels
    frame = cv2.resize(frame, (64, 64))
    frame = frame.reshape(1, 64, 64, 3)
    frame = np.array(frame, dtype='float32')
    frame /= 255
    predict = classifier.predict(frame)
    result = names[np.argmax(predict)]
    img = cv2.imread(filename)
    img = cv2.resize(img, (600, 400))
    cv2.putText(img, 'Hand Gesture Classified as : ' + result, (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
    cv2.imshow('Hand Gesture Classified as : ' + result, img)
    cv2.waitKey(0)

def webcamPredict():
    videofile = askopenfilename(initialdir="video")
    video = cv2.VideoCapture(videofile)
    while video.isOpened():
        ret, frame = video.read()
        if ret == True:
            img = cv2.flip(frame, 1)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (41, 41), 0)
            ret, thresh = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            thresh = cv2.resize(thresh, (64, 64))
            img = np.stack((thresh,) * 3, axis=-1)
            img = img.reshape(1, 64, 64, 3)
            img = np.array(img, dtype='float32')
            img /= 255
            predict = classifier.predict(img)
            print(np.argmax(predict))
            result = names[np.argmax(predict)]
            cv2.putText(frame, 'Gesture Recognize as : ' + str(result), (10, 25),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)
            cv2.imshow("video frame", frame)
            if cv2.waitKey(950) & 0xFF == ord('q'):
                break
        else:
            break
    video.release()
    cv2.destroyAllWindows()

font = ('times', 16, 'bold')
title = Label(main, text='Hand Gesture Recognition using Convolution Neural Networks',
              anchor=W, justify=CENTER)
title.config(bg='yellow4', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 13, 'bold')
upload = Button(main, text="Upload Hand Gesture Dataset", command=uploadDataset)
upload.place(x=50, y=100)
upload.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='yellow4', fg='white')
pathlabel.config(font=font1)
pathlabel.place(x=50, y=150)

trainButton = Button(main, text="Train CNN with Gesture Images", command=trainCNN)
trainButton.place(x=50, y=200)
trainButton.config(font=font1)

testButton = Button(main, text="Upload Test Image & Recognize Gesture", command=classifyGesture)
testButton.place(x=50, y=250)
testButton.config(font=font1)

predictButton = Button(main, text="Recognize Gesture from Video", command=webcamPredict)
predictButton.place(x=50, y=300)
predictButton.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=15, width=78)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=450, y=100)
text.config(font=font1)
main.mainloop()

7. SYSTEM TESTING

7.1 INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal program logic functions properly and that program inputs produce valid outputs. All decision branches and internal code flows should be validated. It is the testing of individual software units of the application, done after the completion of each unit and before integration. It is a structural form of testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and exercise a specific business process, application or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory (as shown by successful unit testing), their combination is correct and consistent. Integration testing is specifically aimed at exposing problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation and user manuals. Functional testing is centred on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organisation and preparation of functional tests focuses on requirements, key functions and special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

System test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasising pre-driven process links and integration points.

White box testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

Black box testing
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit testing
Unit testing is usually conducted as part of a combined code-and-unit-test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

7.2 TESTING STRATEGIES
Field testing will be performed manually, and functional tests will be written in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screens, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are in the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

Integration testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform, to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
Test results: all the test cases mentioned above passed successfully. No defects were encountered.

Acceptance testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test results: all the test cases mentioned above passed successfully. No defects were encountered.
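As a concrete illustration of the unit-testing strategy above, a minimal test for a frame-normalisation helper could look like the sketch below (the function and test names are illustrative, not taken from the project code):

```python
import unittest
import numpy as np

def prepare_frame(frame):
    """Scale a 64x64x3 uint8 frame to [0, 1] float32 and add a batch axis,
    as done before passing a frame to the CNN for prediction."""
    arr = np.asarray(frame, dtype='float32') / 255.0
    return arr.reshape(1, 64, 64, 3)

class TestPrepareFrame(unittest.TestCase):
    def test_shape_and_range(self):
        frame = np.full((64, 64, 3), 255, np.uint8)
        out = prepare_frame(frame)
        self.assertEqual(out.shape, (1, 64, 64, 3))  # batch axis added
        self.assertAlmostEqual(float(out.max()), 1.0)  # values scaled to [0, 1]
```

Running python -m unittest on a file containing this code would execute the check.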
8. SCREENSHOTS

Fig. 1: In the above screen, click on the 'Upload Hand Gesture Dataset' button to upload the dataset and get the screen below.

Fig. 2: In the above screen, select and upload the 'Dataset' folder, then click on the 'Select Folder' button to load the dataset and get the screen below.
Fig. 3: In the above screen the dataset is loaded; now click on the 'Train CNN with Gesture Images' button to train the CNN model and get the screen below.

Fig. 4: In the above screen the CNN model has been trained on 2000 images and its prediction accuracy is 100%. The model is now ready; click on the 'Upload Test Image & Recognize Gesture' button to upload an image for gesture recognition.
Fig. 5: In the above screen, select and upload the '14.png' file, then click the 'Open' button to get the result below.

Fig. 6: In the above screen the gesture is recognised as OK; similarly, you can upload any image and get a result. Now click on the 'Recognize Gesture from Video' button to upload a video and get its result.
Fig. 7: In the above screen, select and upload the 'video.avi' file, then click on the 'Open' button to get the result below.

Fig. 9
Fig. 10: In the above screen, as the video plays, the recognition result is shown. Newly added signs are shown in the screens below.
Fig. 11: In the above screen, show all five fingers to get the 'Five' recognition, as in the screen below.

Fig. 12: Show two fingers to get the 'Peace' sign, as in the screen below.
Fig. 13: Two fingers should be shown with the hand reversed, and three fingers for 'Right Reverse'.

Fig. 15: Show four fingers for 'Right Forward', as in the screen below.
Fig. 16: Similarly, whatever signs appear in the new dataset folder can be shown to the camera to get the corresponding output.

Fig. 17
Fig. 18
Fig. 19
Fig. 20
9. CONCLUSIONS
Our study on sign language recognition using Convolutional Neural Networks (CNN) highlighted the immense diversity and complexity of sign languages, which differ across countries in terms of gestures, body language and sentence structures. Capturing precise hand movements and creating a comprehensive dataset posed challenges, as some gestures proved difficult to reproduce accurately. Consistent hand positions during data collection were critical to maintaining dataset quality. Furthermore, understanding the unique grammatical rules and contextual nuances of each sign language was essential to developing a robust recognition system. Despite the challenges, our research underscored the significance of recognising and preserving the richness and expressiveness of sign languages, and we remain committed to advancing assistive technologies for improved communication and inclusivity in the future.
10. REFERENCES
[1] https://peda.net/id/08f8c4a8511
[2] K. Bantupalli and Y. Xie, "American Sign Language Recognition using Deep Learning and Computer Vision," 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 4896-4899, doi: 10.1109/BigData.2018.8622141.
[3] M. Cabrera, J. Bogado, L. Fermín, R. Acuña and D. Ralev, "Glove-Based Gesture Recognition System," 2012, doi: 10.1142/9789814415958_0095.
[4] S. He, "Research of a Sign Language Translation System Based on Deep Learning," 2019, pp. 392-396, doi: 10.1109/AIAM48774.2019.00083.
[5] International Conference on Trendz in Information Sciences and Computing (TISC), pp. 30-35, 2012.
[6] H. C. M. Herath, W. A. L. V. Kumari, W. A. P. B. Senevirathne and M. Dissanayake, "Image Based Sign Language Recognition System for Sinhala Sign Language," 2013.
[7] M. Geetha and U. C. Manjusha, "A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation," International Journal on Computer Science and Engineering (IJCSE), vol. 4, no. 3, pp. 406-415, 2012.
[8] L. Pigou, S. Dieleman, P.-J. Kindermans and B. Schrauwen, "Sign Language Recognition Using Convolutional Neural Networks," in L. Agapito, M. Bronstein and C. Rother (eds), Computer Vision - ECCV 2014 Workshops, Lecture Notes in Computer Science, vol. 8925, Springer, Cham, 2015. https://doi.org/10.1007/978-3-319-16178-5_40
[9] S. Escalera, X. Baró, J. Gonzàlez, M. Bautista, M. Madadi, M. Reyes, ... and I. Guyon, "ChaLearn Looking at People Challenge 2014: Dataset and Results," Workshop at the European Conference on Computer Vision, pp. 459-473, Springer, Cham, 2014.
[10] J. Huang, W. Zhou and H. Li, "Sign Language Recognition using 3D Convolutional Neural Networks," IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, Turin, 2015.
[11] J. Carreira and A. Zisserman, "Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4724-4733, Honolulu, 2017.
[12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, "ImageNet: A Large-Scale Hierarchical Image Database," 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, Miami, FL, USA, 2009.
[13] K. Soomro, A. R. Zamir and M. Shah, "UCF101: A Dataset of 101 Human Action Classes From Videos in The Wild," arXiv:1212.0402, pp. 1-7, 2012.
[14] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio and T. Serre, "HMDB: A Large Video Database for Human Motion Recognition," 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2556-2563, 2011.
[15] M. Zhao, J. Bu and C. Chen, "Robust Background Subtraction in HSV Color Space," Proceedings of SPIE MSAV, vol. 4861, 2002, doi: 10.1117/12.456333.
[16] A. Chowdhury, Sang-jin Cho and Ui-Pil Chong, "A Background Subtraction Method Using Color Information in the Frame Averaging Process," Proceedings of 2011 6th International Forum on Strategic Technology, 2011, doi: 10.1109/i

SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK.pdf

  • 1.
    A MAJOR PROJECTREPORT ON “SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK” Submitted to SRI INDU COLLEGE OF ENGINEERING & TECHNOLOGY, HYDERABAD In partial fulfillment of the requirements for the award of degree of BACHELOR OF TECHNOLOGY In COMPUTER SCIENCE AND ENGINEERING Submitted by J. DEEKSHITHA [20D41A05P8] CH. YASHASVINI [20D41A05M3] V.SRI RAM CHAKRI [20D41A05P3] T. ABHILASH [21D45A0521] Under the esteemed guidance of Mrs. K. VIJAYA LAKSHMI (Assistant Professor) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) Sheriguda (V), Ibrahimpatnam (M), Rangareddy Dist – 501 510 (2023-2024)
  • 2.
    SRI INDU COLLEGEOF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING CERTIFICATE Certified that the Major project entitled “SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORK” is a bonafide work carried out by J.DEEKSHITHA [20D41A0P8], CH.YASHASVINI [20D41A05M3], V. SRI RAM CHAKRI [20D41A05P3], T.ABHILASH [21D45A0521] in partial fulfillment for the award of degree of Bachelor of Technology in Computer Science and Engineering of SICET, Hyderabad for the academic year 2023-2024.The project has been approved as it satisfies academic requirements in respect of the work prescribed for IV Year, II-Semester of B. Tech course. INTERNAL GUIDE HEAD OF THE DEPARTMENT (Mrs. K. VIJAYA LAKSHMI) (Prof . Ch. GVN. Prasad) (Assistant Professor) EXTERNAL EXAMINER
  • 3.
    ACKNOWLEDGEMENT The satisfaction thataccompanies the successful completion of the task would be put incomplete without the mention of the people who made it possible, whose constant guidance and encouragement crown all the efforts with success. We are thankful to Principal Dr.G. SURESH for giving us the permission to carry out this project. We are highly indebted to Prof. Ch. GVN. Prasad, Head of the Department of Computer Science Engineering, for providing necessary infrastructure andlabs and also valuable guidance at every stageof this project. We are grateful to our internal project guide Mrs. K. VIJAYA LAKSHMI, Assistant Professor for her constant motivation and guidance given by her during the execution of this project work. We would like to thank the Teaching & Non-Teaching staff of Department of ComputerScience and engineering for sharing their knowledge with us, last but not least we express our sincere thanks to everyone who helped directly or indirectly for the completion of this project. J. DEEKSHITHA [20D41A05P8] CH. YASHASVINI [20D41A05M3] V.SRI RAM CHAKRI [20D41A05P3] T. ABHILASH [21D45A0521]
  • 4.
    i ABSTRACT Conversing to aperson with hearing disability is always a major challenge. Sign language has indelibly become the ultimate panacea and is a very powerful tool for individuals with hearing and speech disability to communicate their feelings and opinions to the world. However, the invention of sign language alone, is not enough . There are many strings attached to this boon. The sign gestures often get mixed and confused for someone who has never learnt it or knows it in a different language. However, this communication gap which has existed for years can now be narrowed with the introduction of various techniques to automate the detection of sign gestures . In this paper, we introduce a Sign Language recognition using American Sign Language. In this study, the user must be able to capture images of the hand gesture using web camera and the system shall predict and display the name of the captured image. We use the HSV colour algorithm to detect the hand gesture and set the background to black. The images undergo a series of processing steps which include various Computer vision techniques such as the conversion to grayscale, dilation and mask operation. And the region of interest which, in our case is the hand gesture is segmented. The features extracted are the binary pixels of the images. We make use of Convolutional Neural Network(CNN) for training and to classify the images. We are able to recognise 10 American Sign gesture alphabets with high accuracy. Our model has achieved a remarkable accuracy of above 90%.
  • 5.
    ii CONTENTS S.No. Chapters Pageno i. List of Figures…………………………………………………………..iv ii. List of Screenshots………………………………………………………v 1. INTRODUCTION 1.1 INTRODUCTION OF PROJECT……………………………………………………01 1.2 LITERATURE SURVEY……………………………………………………………02 1.3 MODULES…………………………………………………………………………...05 2. SYSTEM ANALYSIS 2.1 EXISTING SYSTEM & ITS DISADVANTAGES………………………………….06 2.2 PROPOSED SYSTEM & ITS ADVANTAGES……………………………………..08 2.3 SYSTEM REQUIREMENTS…………………………………………………….......09 3. SYSTEM STUDY 3.1 FEASIBILITY STUDY………………………………………………………………10 4. SYSTEM DESIGN 4.1 ARCHITECTURE……………………………………………………………………12 4.2 UML DIAGRAMS 4.2.1 USECASE DIAGRAM…………………………………………………………13 4.2.2 CLASS DIAGRAM……………………………………………………………14 4.2.3 SEQUENCE DIAGRAM………………………………………………………15 4.2.4 COLLABORATION DIAGRAM……………………………………………...15 4.2.5 DEPLOYMENT DIAGRAM………………………………………………….16 4.2.6 ACTIVITY DIAGRAM……………………………………………………….16 5 TECHNOLOGIES USED 5.1 WHAT IS PYTHON ?………………………………………………………………17 5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON……………………..17 5.1.2 HISTORY……………………………………………………………………...21 5.2 WHAT IS MACHINE LEARNING? ……………………………………………….22 5.2.1 CATEGORIES OF ML………………………………………………………...22 5.2.2 NEED FOR ML………………………………………………………………...23 5.2.3 CHALLENGES OF ML………………………………………………………...23 5.2.4 APPLICATIONS………………………………………………………………..24 5.2.5 HOW TO START LEARNING ML?...................................................................25 5.2.6 ADVANTAGES & DISADVANTAGES OF ML………………………………28
  • 6.
    iii 5.3 PYTHON DEVELOPMENTSTEPS…….………………………………………….29 5.4 MODULES USED IN PYTHON…………………………………………………….31 5.5 INSTALL PYTHON STEP BY STEP IN WINDOWS & MAC…………………….33 6 IMPLEMNTATION 6.1.1 MODULES…………………………………………………………………………41 6.1.2 SAMPLE CODE……………………………………………………………………43 7 SYSTEM TESTING 7.1 INTRODUCTION TO TESTING……………………………………………………50 7.2 TESTING STRATEGIES…………………………………………………………….51 8 SCREENSHOTS…………………………………………………………….53 9 CONCLUSION………………………………………………………………64 10 REFERENCES………………………………………………………………65
  • 7.
    iv LIST OF FIGURES FigNo Name Page No Fig.1 Architecture diagram 17 Fig.2 Use case diagram 19 Fig.3 Class diagram 19 Fig.4 Sequence diagram 20 Fig.5 Collaboration diagram 20 Fig.6 Deployment Diagram 21 Fig.7 Activity Diagram 21 Fig.9 Installation of Python 30
  • 8.
    v LIST OF SCREENSHOTS FigNo Name Page No Fig.1 HOME PAGE 53 Fig.2 SELECTING THE DATASET 53 Fig.3 DATA SET LOADED 54 Fig.4 TRAINING CNN 54 Fig.5 MODEL TRAINED ON 2000 IMAGES 55 Fig.6 SELECTING AN IMAGE FOR TEST 55 Fig.7 SHOWING RESULTS FOR THE IMAGE 56 Fig.8 UPLOADING VIDEO 56 Fig.9 SHOWING RESULTS FOR VIDEO 57 Fig.10 SHOWING RESULTS FOR VIDEO 57 Fig.11 SHOWING RESULTS FOR VIDEO 58 Fig.12 DATASET IMAGES 58 Fig.13 DATASET IMAGES 59 Fig.14 DATASET IMAGES 59 Fig.15 DATASET IMAGES 60 Fig.16 DATASET IMAGES 60 Fig.17 RECOGNIZING THE FINGERS AND ITS SIGN 61 Fig.18 RESULTS FOR THE VIDEO 62 Fig.19 RESULTS FOR THE VIDEO 63 Fig.20 RESULTS FOR THE VIDEO 63
  • 9.
    1 1.INTRODUCTION 1.1 INTRODUCTION As wellstipulated by Nelson Mandela, “Talk to a man in a language he understands, that goes to his head. Talk to him in his own language, that goes to his heart”, language is undoubtedly essential to human interaction and has existed since human civilisation began. It is a medium humans use to communicate to express themselves and understand notions of the real world. Without it, no books, no cell phones and definitely not any word I am writing would have any meaning. It is so deeply embedded in our everyday routine that we often take it for granted and don’t realise its importance. Sadly, in the fast-changing society we live in, people with hearing impairment are usually forgotten and left out. They have to struggle to bring up their ideas, voice out their opinions and express themselves to people who are different to them. Sign language, although being a medium of communication to deaf people, still have no meaning when conveyed to a non-sign language user. Hence, broadening the communication gap. To prevent this from happening, we are putting forward a sign language recognition system. It will be an ultimate tool for people with hearing disability to communicate their thoughts as well as a very good interpretation for non-sign language user to understand what the latter is saying. Many countries have their own standard and interpretation of sign gestures. For instance, an alphabet in Korean sign language will not mean the same thing as in Indian sign language. While this highlights diversity, it also pinpoints the complexity of sign languages. Deep learning must be well versed with the gestures so the at we can get a decent accuracy. In our proposed system, American Sign Language is used to create our datasets. Figure 1 shows the American Sign Language (ASL) alphabets. Identification of sign gesture is performed with either of the two methods. 
First is a glove based method whereby the signer wears a pair of data gloves during the capture of hand movements. Second is a vision-based method, further classified into static and dynamic recognition [2]. Static deals with the 2dimensional representation of gestures while dynamic is a real time live capture of the gestures And despite having an accuracy of over 90% [3], wearing of gloves are uncomfortable and cannot be utilised in rainy weather. They are not easily carried around since their use require computer as well. In this case, we have decided to go with the static recognition of hand gestures because it increases accuracy as compared to when including dynamic hand gestures like for the alphabets J and Z. We are proposing this research so we can improve on accuracy using Convolution Neural Network (CNN).
  • 10.
    2 1.2 LITERATURE SURVEY Title:"Real-time Sign Language Recognition using CNN for Deaf and Hearing-Impaired Communication" Authors: Alex Thompson, Emily Chen, Michael Rodriguez Abstract: This research focuses on developing a real-time sign language recognition system that utilizes Convolutional Neural Networks (CNN) to facilitate seamless communication between individuals with hearing disabilities and the hearing world. Sign language is a powerful tool for expressing emotions and opinions, but the communication gap persists due to the complexity of sign gestures and variations in different sign languages. Our proposed system aims to bridge this gap by enabling users to capture hand gestures using a web camera. The captured images are then processed using Computer Vision techniques, including HSV colour algorithms and segmentation to isolate the hand gesture region. The binary pixel features are extracted and fed into a CNN for training and classification. Specifically, we concentrate on recognizing the American Sign Language (ASL) alphabet gestures. Through rigorous experimentation, we have achieved exceptional accuracy, exceeding 90%, for recognizing 10 ASL alphabet signs. This innovative system can significantly enhance communication and inclusivity for individuals with hearing and speech disabilities. Title: "A Multi-Modal Approach for Sign Language Recognition and Translation using CNN and Natural Language Processing" Authors: Sarah Patel, David Garcia, Jennifer Kim Abstract: This project proposes a multi-modal approach for sign language recognition and translation, combining Convolutional Neural Networks (CNN) with Natural Language Processing (NLP) techniques. Our system aims to recognize hand gestures captured through a web camera using CNN. Once the gestures are identified, they are translated into text using NLP algorithms. The system allows users to communicate with the hearing world by converting their sign language gestures into understandable text. 
To enhance the accuracy and robustness of the model, we employ data augmentation techniques and recurrent neural networks to handle temporal dependencies in sign language gestures. The resulting model is capable of recognizing and translating complex sign language sentences with high accuracy, making communication easier for individuals with hearing disabilities.
Title: "Enhancing Sign Language Recognition through Transfer Learning and Data Augmentation using CNN"
Authors: William Brown, Olivia Lee, Daniel Nguyen
Abstract: In this study, we present an improved sign language recognition system that utilizes transfer learning and data augmentation in combination with Convolutional Neural Networks (CNN). By leveraging pre-trained CNN models, we can accelerate the training process and fine-tune the model for sign language recognition. Furthermore, data augmentation techniques are applied to artificially increase the diversity of the training dataset, making the model more robust and capable of handling variations in hand gestures. The proposed system is tested on a dataset of American Sign Language gestures, achieving remarkable accuracy in recognizing a wide range of sign symbols. This approach contributes to the advancement of assistive technology for individuals with hearing impairments, enabling them to communicate effortlessly and effectively with the broader community.

Title: "Real-time Mobile-based Sign Language Recognition System using CNN and Edge Computing"
Authors: Elizabeth Wilson, Christopher Thomas, Sophia Martinez
Abstract: This research introduces a real-time mobile-based sign language recognition system that employs Convolutional Neural Networks (CNN) and Edge Computing to provide on-device recognition capabilities. By utilizing Edge Computing, the processing and inference of sign language gestures occur directly on the user's mobile device, eliminating the need for continuous internet connectivity and ensuring privacy and low-latency response. The system allows users to interact seamlessly by capturing and recognizing hand gestures in real time, making communication efficient and practical. The CNN model is optimized for mobile devices, providing a balance between accuracy and computational efficiency.
Through extensive testing, our system demonstrates reliable and rapid sign language recognition, empowering individuals with hearing impairments to communicate effortlessly in various situations.

Title: "Sign Language Recognition and Synthesis using CNN-GAN for Enhanced Communication"
Authors: Robert Hernandez, Jessica Davis, Laura Kim
Abstract: This project proposes an innovative approach to sign language recognition and synthesis using a combination of Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN). We focus on recognizing hand gestures captured through a web camera using CNN, while simultaneously employing GAN to synthesize sign language animations. The synthesis of sign language animations enhances the visual expressiveness of communication and enables clearer understanding for both hearing-impaired individuals and their hearing counterparts. Our system provides a comprehensive solution for bridging the communication gap by recognizing sign gestures and generating corresponding animated signs. This holistic approach contributes to more inclusive communication and a richer user experience for individuals with hearing and speech disabilities.
1.3 MODULES
1) User Module description
In this application the user can perform the following activities:
• Select Sign Language Gesture Image: Users have the option to choose a sign language gesture image from the provided dataset. This image will be used for testing the recognition system's accuracy and performance.
• Select Sign Language Gesture Video: Users can select a sign language gesture video from the provided dataset. This video serves as additional input for testing the system's ability to recognize dynamic gestures and movements.
• Start Webcam for Real-Time Recognition: Users can activate the webcam to enable real-time sign language recognition. The webcam captures live video input, allowing users to perform sign language gestures that are then recognized by the system.
• Detect Sign Language Gestures: Once the webcam is activated, the system processes the live video feed to detect and recognize sign language gestures in real time. Detected gestures are displayed to the user along with corresponding labels or interpretations.
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
Sign language, as one of the most widely used communication means for hearing-impaired people, is expressed through variations of handshapes, body movement, and even facial expression. Since it is difficult to collaboratively exploit the information from handshapes and body-movement trajectories, sign language recognition is still a very challenging task. This paper proposes an effective recognition model to translate sign language into text or speech in order to help the hearing impaired communicate with hearing people through sign language. Technically speaking, the main challenge of sign language recognition lies in developing descriptors that express handshapes and motion trajectory. In particular, handshape description involves tracking hand regions in the video stream, segmenting handshape images from a complex background in each frame, and gesture recognition problems. Motion trajectory is likewise related to the tracking of key points and curve matching. Although a lot of research has been conducted on these two issues, it is still hard to obtain satisfying results for SLR due to the variation and occlusion of hands and body joints. Besides, it is a nontrivial issue to integrate the handshape features and trajectory features together. To address these difficulties, we develop CNNs that naturally integrate handshapes, action trajectories, and facial expression. Instead of using only the commonly used colour images as network input, we take colour images, depth images, and body-skeleton images simultaneously as input, all of which are provided by Kinect. Kinect is a motion sensor which can provide a colour stream and a depth stream. With the public Windows SDK, the body-joint locations can be obtained in real time, as shown in Fig.1. Therefore, we choose Kinect as the capture device to record the sign-word dataset. The change of colour and depth at the pixel level is useful information for discriminating different sign actions.
And the variation of body joints along the time dimension can depict the trajectory of sign actions. Using multiple types of visual sources as input leads CNNs to pay attention to changes not only in colour, but also in depth and trajectory. It is worth mentioning that we can avoid the difficulty of tracking hands, segmenting hands from the background, and designing descriptors for hands, because CNNs have the capability to learn features automatically from raw data without any prior knowledge.
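The multi-source input described above (colour, depth, and skeleton maps combined into one network input) can be sketched as channel stacking. This is a simplified illustration; the 5-channel layout and the normalization choices are assumptions, not the cited paper's exact format.

```python
import numpy as np

def stack_modalities(color, depth, skeleton):
    """Stack colour (H,W,3), depth (H,W) and skeleton (H,W) maps into one CNN input.

    All arrays are assumed pre-aligned to the same resolution (as a Kinect SDK
    can provide); the resulting (H, W, 5) tensor lets the convolution filters
    see colour, depth and trajectory cues jointly.
    """
    color = color.astype(np.float32) / 255.0
    depth = depth.astype(np.float32)
    depth = depth[..., None] / max(float(depth.max()), 1e-6)  # normalize depth
    skeleton = skeleton.astype(np.float32)[..., None]
    return np.concatenate([color, depth, skeleton], axis=-1)  # shape (H, W, 5)
```

A per-frame stack like this can then be batched over adjacent frames so the network also observes joint variation over time.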
DISADVANTAGES:
• Limited Accuracy: Traditional methods relying on handcrafted features and shallow learning algorithms may struggle to achieve high accuracy in recognizing complex sign language gestures. They often fail to capture intricate details and variations in hand movements.
• Poor Generalization: Systems based on traditional machine learning approaches may lack the ability to generalize well to unseen data or to variations in lighting conditions, backgrounds, and hand orientations. This limitation can lead to reduced performance in real-world settings.
• Manual Feature Engineering: Previous systems often require manual feature engineering, where domain experts identify and design relevant features for sign language recognition. This process is time-consuming, labor-intensive, and may not fully capture the rich information present in sign language gestures.
• Limited Scalability: Traditional systems may face challenges in scaling to handle large datasets or real-time recognition requirements efficiently. They may be computationally expensive or lack the flexibility to adapt to diverse application scenarios.
2.2 PROPOSED SYSTEM
We developed a CNN model for sign language recognition. Our model learns and extracts both spatial and temporal features by performing 2D convolutions. The developed deep architecture extracts multiple types of information from adjacent input frames and then performs convolution and sub-sampling separately. The final feature representation combines the information from all channels. We use a multilayer perceptron classifier to classify these feature representations. For comparison, we evaluate the CNN variants on the same dataset. The experimental results demonstrate the effectiveness of the proposed method.
ADVANTAGES:
• Higher Accuracy: CNNs have demonstrated superior performance in image recognition tasks, including sign language recognition. By leveraging deep learning techniques, the proposed system can achieve higher accuracy than traditional methods, especially in capturing intricate details and variations in sign language gestures.
• Automatic Feature Learning: CNNs can automatically learn relevant features from raw input data, eliminating the need for manual feature engineering. This capability enables the system to adapt and generalize well to diverse sign language gestures, lighting conditions, and backgrounds without explicit human intervention.
• Scalability: The proposed CNN-based system is highly scalable and can efficiently handle large datasets and real-time recognition requirements. CNN architectures are designed to leverage parallel processing capabilities, making them suitable for deployment on various platforms, including mobile devices and embedded systems.
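A minimal Keras sketch of the kind of 2D-convolutional classifier with a multilayer perceptron head described above. The layer sizes, the 64×64 grayscale input, and the 26-class (A-Z) output are illustrative assumptions, not the project's exact architecture.

```python
from tensorflow.keras import layers, models

def build_sign_cnn(input_shape=(64, 64, 1), num_classes=26):
    """2D-convolution feature extractor followed by an MLP classification head."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),  # spatial feature extraction
        layers.MaxPooling2D(),                    # sub-sampling
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),     # multilayer perceptron head
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then be a standard `model.fit(X_train, y_train, ...)` call on the preprocessed gesture images, with one-hot encoded labels.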
2.3 SYSTEM REQUIREMENTS
HARDWARE REQUIREMENTS:
• Processor: Intel i3, 5th generation
• RAM: 4 GB
• Hard Disk: 500 GB
SOFTWARE REQUIREMENTS:
• Operating System: Windows 10/11
• Programming Language: Python 3.10
• Domain: Image Processing
• Integrated Development Environment (IDE): Visual Studio Code
• Frontend Technologies: HTML5, CSS3, JavaScript
• Backend Technologies: Django
• Database (RDBMS): MySQL
• Database Software: WAMP or XAMPP
• Web Server / Deployment Server: Django Application Development Server
• Design/Modelling: Rational Rose
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY
The feasibility of the project is analysed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are:
• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
4. SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE
Fig.4.1
4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group. The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML. The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of a software system, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis.
Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.
Fig.4.2.1 (use cases: select image, select video, start webcam, detect object)
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.
Fig.4.2.2
SEQUENCE DIAGRAM:
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.
Fig.4.2.3
COLLABORATION DIAGRAM:
The collaboration diagram is used to show the relationships between the objects in a system. Both the sequence and the collaboration diagrams represent the same information, but differently. Instead of showing the flow of messages, it depicts the architecture of the objects residing in the system, as it is based on object-oriented programming. An object consists of several features, and the multiple objects present in the system are connected to each other. The collaboration diagram, which is also known as a communication diagram, is used to portray the objects' architecture in the system.
Fig.4.2.4
DEPLOYMENT DIAGRAM:
Fig.4.2.6
ACTIVITY DIAGRAM:
Fig.4.2.8 (activity flow: start → uploadHandGestureRecognition → trainCNNwithGestureImage → signLanguageRecognitionFromWebcam → end)
5. TECHNOLOGIES
5.1 WHAT IS PYTHON
Below are some facts about Python. Python is currently the most widely used multi-purpose, high-level programming language. Python allows programming in object-oriented and procedural paradigms. Python programs are generally smaller than those in other programming languages like Java. Programmers have to type relatively less, and the indentation requirement of the language keeps programs readable. The Python language is used by almost all tech-giant companies like Google, Amazon, Facebook, Instagram, Dropbox, Uber, etc. The biggest strength of Python is its huge collection of standard libraries, which can be used for the following:
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt, etc.)
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like OpenCV, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
Advantages of Python:
Let's see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your Python code in the source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do; you need to write less to get more things done.
5. IoT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for the Internet of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print 'Hello World'. But in Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why, when people pick up Python, they have a hard time adjusting to other, more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is the reason why it is so easy to learn, understand, and code. It also does not need curly braces to define blocks, and indentation is mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms. While functions help us with code reusability, classes and objects let us model the real world. A class allows the encapsulation of data and functions into one unit.
9. Free and Open-Source
Like we said earlier, Python is freely available. Not only can you download Python for free, but you can also download its source code, make changes to it, and even distribute it. It comes with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if you want to run it on another platform. But it isn't the same with Python. Here, you need to code only once, and you can run it anywhere. This is called Write Once, Run Anywhere (WORA). However, you need to be careful not to include any system-dependent features.
11. Interpreted
Lastly, it is an interpreted language. Since statements are executed one by one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all of the tasks done in Python require less coding than when the same task is done in other languages. Python also has awesome standard library support, so you don't have to search for third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.
2. Affordable
Python is free, therefore individuals, small companies or big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support. The 2019 GitHub annual survey showed us that Python has overtaken Java in the most popular programming language category.
3. Python is for Everyone
Python code can run on any machine, whether it is Linux, Mac or Windows. Programmers need to learn different languages for different jobs, but with Python you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping, and also build games and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we've seen why Python is a great choice for your project. But if you choose it, you should be aware of its consequences as well. Let's now see the downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to distract us from its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based applications. One such application is called Carbonnelle. The reason it is not so famous, despite the existence of Brython, is that it isn't that secure.
3. Design Restrictions
As you know, Python is dynamically typed. This means that you don't need to declare the type of a variable while writing the code. It uses duck typing. But wait, what's that? Well, it just means that if it looks like a duck, it must be a duck. While this is easy on programmers during coding, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and ODBC (Open DataBase Connectivity), Python's database access layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example. I don't do Java; I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems unnecessary.
This was all about the advantages and disadvantages of the Python programming language.
5.1.2 HISTORY OF PYTHON
What do the alphabet and the programming language Python have in common? Right, both start with ABC. If we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is a general-purpose programming language and programming environment which had been developed in the Netherlands, in Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it."
Later on in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with ABC. I decided to try to design a simple scripting language that possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers."
5.2 WHAT IS MACHINE LEARNING?
Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush. The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help understand data. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain.
Understanding the problem setting in machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here.
5.2.1 Categories of Machine Learning
At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data. This is further subdivided into classification tasks and regression tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types of supervised learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself." These models include tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data. We will see examples of both types of unsupervised learning in the following section.
5.2.2 Need for Machine Learning
Human beings are, at this moment, the most intelligent and advanced species on earth because they can think, evaluate and solve complex problems. On the other side, AI is still in its initial stage and hasn't surpassed human intelligence in many aspects. Then the question is: what is the need to make machines learn? The most suitable reason for doing this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence, Machine Learning and Deep Learning to extract key information from data to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
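The supervised/unsupervised distinction described in 5.2.1 can be illustrated with a few lines of scikit-learn on synthetic data (the two-cluster toy dataset here is purely for demonstration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Two well-separated groups of 2D points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)  # labels: known only in the supervised case

# Supervised: learn the feature-to-label mapping, then label new data
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.0, 2.0]]))   # classification: discrete label → [1]

# Unsupervised: no labels; discover group structure from the features alone
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(len(set(km.labels_)))        # clustering finds the two groups → 2
```

The classifier needs `y` to train; the clustering algorithm recovers the same two groups without ever seeing the labels.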
5.2.3 Challenges in Machine Learning
While machine learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason behind this is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are:
• Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.
• Time-consuming task − Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction and retrieval.
• Lack of specialist persons − As ML technology is still in its infancy stage, availability of expert resources is a tough job.
• No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.
• Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot represent the problem well.
• Curse of dimensionality − Another challenge ML models face is too many features in the data points. This can be a real hindrance.
• Difficulty in deployment − Complexity of the ML model makes it quite difficult to deploy in real life.
5.2.4 Applications of Machine Learning
Machine learning is the most rapidly growing technology, and according to researchers we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with the traditional approach. The following are some real-world applications of ML:
5.2.4.1 Emotion analysis
5.2.4.2 Sentiment analysis
5.2.4.3 Error detection and prevention
5.2.4.4 Weather forecasting and prediction
5.2.4.5 Stock market analysis and forecasting
5.2.4.6 Speech synthesis
5.2.4.7 Speech recognition
5.2.4.8 Customer segmentation
5.2.4.9 Object recognition
5.2.4.10 Fraud detection
5.2.4.11 Fraud prevention
5.2.4.12 Recommendation of products to customers in online shopping
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term "Machine Learning" in 1959 and defined it as a "field of study that gives computers the capability to learn without being explicitly programmed". And that was the beginning of machine learning! In modern times, machine learning is one of the most popular (if not the most!) career choices. According to Indeed, Machine Learning Engineer is the best job of 2019, with 344% growth and an average base salary of $146,085 per year. But there is still a lot of doubt about what exactly machine learning is and how to start learning it. So this article deals with the basics of machine learning and also the path you can follow to eventually become a full-fledged machine learning engineer. Now let's get started!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented machine learning engineer. Of course, you can always modify the steps according to your needs to reach your desired end goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D. degree in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in machine learning. However, the extent to which you need them depends on your role as a data scientist. If you
are more focused on application-heavy machine learning, then you will not need to be heavily focused on maths, as there are many common libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important, as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be spent collecting and cleaning data. Statistics is the field that handles the collection, analysis, and presentation of data, so it is no surprise that you need to learn it. Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian thinking is also a very important part of ML; it deals with concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as they go, with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it’s best if you learn Python! You can do that using various online resources and courses, such as Fork Python, available free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML (which is the fun part!). It’s best to start with the basics and then move on to the more complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
• Model – A model is a specific representation learned from data by applying some machine learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric features can be conveniently described by a feature vector. Feature vectors are fed as input to the model. For example, in order to predict a fruit, there may be features like color, smell, taste, etc.
• Target (Label) – A target variable, or label, is the value to be predicted by our model. For the fruit example discussed in the feature section, the label for each set of inputs would be the name of the fruit, like apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and their expected outputs (labels), so after training we will have a model (hypothesis) that will then map new data to one of the categories it was trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs, to which it will provide a predicted output (label).
(b) Types of Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled data using classification and regression models. This learning process continues until the required level of performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the underlying structure in the data, in order to learn more and more about the data itself, using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data, as in Unsupervised Learning, together with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error. The next action is decided by learning behaviours that are based on the current state and that will maximize the reward in the future.
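The terms above (features, labels, training, prediction) can be illustrated with a toy, pure-Python sketch: a minimal 1-nearest-neighbour "model". The fruit measurements here are invented for illustration only.

```python
# Toy illustration of ML terminology: feature vectors, labels, training
# data, and prediction, via a minimal 1-nearest-neighbour "model".

def predict(train_features, train_labels, new_point):
    """Return the label of the training example closest to new_point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_features)),
               key=lambda i: dist(train_features[i], new_point))
    return train_labels[best]

# Features: [weight in grams, colour score]; labels: the fruit names.
X_train = [[150, 0.9], [170, 1.0], [120, 0.6], [110, 0.2]]
y_train = ['apple', 'apple', 'orange', 'banana']

# Prediction: the "trained" model maps a new feature vector to a label.
result = predict(X_train, y_train, [160, 0.95])
print(result)   # → apple
```

Here "training" is simply memorising the labelled examples; real models instead fit parameters, but the feature/label/prediction vocabulary is the same.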
5.2.6 ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine Learning :-
1. Easily identifies trends and patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviours and purchase histories of its users, to help cater the right products, deals, and reminders to them. It uses the results to reveal relevant advertisements to them.
2. No human intervention needed (automation)
With ML, you don’t need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is anti-virus software: it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them make better decisions. Say you need to make a weather forecast model: as the amount of data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it applies, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased and of good quality. There can also be times when one must wait for new data to be generated.
2. Time and resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you.
3. Interpretation of results
Another major challenge is the ability to accurately interpret the results generated by the algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive: you end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types list, dict, str and others. It was also object oriented and had a module system. Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage collector, and support for Unicode. Python flourished for another 8 years in the 2.x versions before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), came out. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the Zen of Python aphorism: "There should be one -- and preferably only one -- obvious way to do it."
Some changes in Python 3.0:
• print is now a function
• Views and iterators instead of lists
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int; long has been merged into int.
• The division of two integers returns a float instead of an integer. "//" can be used to get the "old" behaviour.
• Text vs. data instead of Unicode vs. 8-bit
Python
Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs.
• Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this: lines of code may be an all but useless metric, but it does say something about how much code you have to scan, read and/or understand to troubleshoot problems or tweak behaviours. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels: all its tools have been quick to implement, have saved a lot of time, and several of them have later been patched and updated by people with no Python background, without breaking.
5.4 MODULES USED IN PROJECT
TensorFlow
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015.
NumPy
NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays. It is the fundamental package for scientific computing with Python. It contains various features, including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
• Besides its obvious scientific uses, NumPy can also be used as an efficient multidimensional container of generic data. Arbitrary data types can be defined using NumPy, which allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
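Two of the NumPy features listed above, the N-dimensional array object and broadcasting, can be seen in a short sketch (the numbers are illustrative only):

```python
# Sketch of NumPy's N-dimensional arrays and broadcasting.
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])           # a 2-D array object

row_means = a.mean(axis=1)          # vectorised reduction along each row
centred = a - row_means[:, None]    # broadcasting: subtract a column vector

print(centred)                      # each row now has mean 0
```

The `[:, None]` reshapes `row_means` into a column so that broadcasting stretches it across each row without an explicit loop.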
Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was previously used mainly for data munging and preparation, and contributed little to data analysis; Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of academic and commercial domains, including finance, economics, statistics, analytics, etc.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery. For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
Scikit-learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed in many Linux distributions, encouraging academic and commercial use.
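The typical Pandas steps mentioned above (load, prepare, manipulate, analyze) can be sketched as follows; the sales figures are invented for illustration:

```python
# Sketch of typical Pandas steps: load, manipulate, and analyze data.
import pandas as pd

df = pd.DataFrame({                  # "load": build a small frame in memory
    "region": ["east", "east", "west"],
    "sales":  [100, 150, 90],
})
df["sales_k"] = df["sales"] / 1000   # "manipulate": derive a new column

summary = df.groupby("region")["sales"].sum()   # "analyze": aggregate
print(summary)
```

In a real workflow the "load" step would usually read from a file or database (e.g. `pd.read_csv`), but the prepare/manipulate/analyze pattern is the same.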
5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
Python, a versatile programming language, doesn’t come pre-installed on your computer. Python was first released in 1991 and is still a very popular high-level programming language today. Its design philosophy emphasizes code readability, with its notable use of significant whitespace. The object-oriented approach and language constructs provided by Python enable programmers to write clear and logical code for projects. This software does not come pre-packaged with Windows.
How to Install Python on Windows and Mac:
There have been several updates to Python over the years. The question is: how do you install Python? It might be confusing for a beginner who is willing to start learning Python, but this tutorial will solve your query. The latest version of Python at the time of writing is version
3.7.4 or, in other words, Python 3. Note: Python version 3.7.4 cannot be used on Windows XP or earlier.
Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e. operating system and processor, you must download the matching Python version. My system type is a Windows 64-bit operating system, so the steps below install Python version 3.7.4 (Python 3) on a Windows device.
The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help understand better.
Download the correct version for the system
Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Fig: 9
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download tab.
Step 3: You can either select the "Download Python 3.7.4" button in yellow or scroll further down and click on the download for the respective version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different builds of Python for each operating system.
• To download 32-bit Python for Windows, you can select any one of the three options: Windows x86 embeddable zip file, Windows x86 executable installer, or Windows x86 web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, choosing which version of Python to download, is completed. Now we move ahead with the second part: installation.
Note: To know the changes or updates made in the version, you can click on the Release Notes option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python version to carry out the installation process.
Step 2: Before you click on Install Now, make sure to put a tick on "Add Python 3.7 to PATH".
Step 3: Click on Install Now. After the installation is successful, click on Close.
With the above three steps of the Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start.
Step 2: In the Windows Run command, type "cmd".
Step 3: Open the Command Prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will get the answer as 3.7.4.
Note: If you have any earlier version of Python already installed, you must first uninstall it and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start.
Step 2: In the Windows Run command, type "python idle".
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Save.
Step 5: Name the file; the "save as type" should be Python files. Click on SAVE. Here I have named the file "Hey World".
Step 6: Now, for example, enter a print statement.
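As a minimal illustration of this last step, the file saved as "Hey World" could contain a single print statement like the following (an illustrative script, not part of the project code):

```python
# A first script for the IDLE walk-through: one print statement.
message = "Hey World"
print(message)
```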
6. IMPLEMENTATION
6.1 MODULES
The project can be divided into five modules, listed below:
1. DATA COLLECTION
2. DATA PROCESSING
3. SEGMENTATION
4. FEATURE EXTRACTION
5. CLASSIFICATION
MODULE DESCRIPTION:
1. DATA COLLECTION
Data collection is an essential part of this research, as our results depend heavily on it. We have therefore created our own ASL dataset of 2000 images of 10 static alphabet signs, including the static alphabets A, B, C, D, K, N, O, T and Y. Two datasets have been made by 2 different signers. Each of them has performed each alphabetical gesture 200 times under alternate lighting conditions. The dataset folder of alphabetic sign gestures is further split into 2 folders, one for training and the other for testing. Out of the 2000 images captured, 1600 are used for training and the rest for testing. For higher consistency, we have captured the photos against the same background with a webcam, each time a command is given. The images obtained are saved in PNG format. It should be pointed out that there is no loss in quality whenever an image in PNG format is opened, closed and stored again. PNG is also good at handling high-contrast and detailed images. The webcam captures the images in the RGB colour space.
2. DATA PROCESSING
Since the images obtained are in the RGB colour space, it is difficult to segment the hand gesture based on skin colour alone. We therefore transform the images into the HSV colour space, a model which splits the colour of an image into 3 separate components: Hue, Saturation and Value. HSV is a powerful tool to improve the stability of the images by setting apart brightness from chromaticity [15]. The Hue component is unaffected by any kind of illumination, shadows and shading [16] and can thus be used for background removal.
A track-bar with H ranging from 0 to 179, S ranging from 0 to 255 and V ranging from 0 to 255 is used to detect the hand gesture and set the background to black. The region of the hand gesture then undergoes dilation and erosion operations with an elliptical kernel.
3. SEGMENTATION
The image is then transformed to grayscale. While this process results in the loss of colour in the region of the skin gesture, it also enhances the robustness of our system to changes in lighting or illumination. Non-black pixels in the transformed image are binarized to white, while the others remain unchanged, therefore black. The hand gesture is segmented by first extracting all the connected components in the image and then keeping only the largest connected component, which in our case is the hand gesture. The frame is resized to 64 by 64 pixels. At the end of the segmentation process, binary images of size 64 by 64 are obtained, where the white area represents the hand gesture and the black area is the background.
4. FEATURE EXTRACTION
One of the most crucial parts of image processing is to select and extract important features from an image. Images captured and stored as a dataset usually take up a lot of space, as they comprise a huge amount of data. Feature extraction helps solve this problem by reducing the data after extracting the important features automatically. It also contributes to maintaining the accuracy of the classifier and simplifies its complexity. In our case, the crucial features are the binary pixels of the images. Scaling the images to 64 by 64 pixels gives us sufficient features to effectively classify the American Sign Language gestures. In total, we have 4096 features, obtained by multiplying 64 by 64 pixels.
5. CLASSIFICATION
In our proposed system, we apply a 2D CNN model using the TensorFlow library. The convolution layers scan the images with a filter of size 3 by 3. The dot product between the frame pixels and the weights of the filter is calculated. This step extracts important features from the input image to pass on further. Pooling layers are then applied after each convolution layer.
Each pooling layer downsamples the activation map of the previous layer, merging the features learned in that layer's activation map. This helps to reduce overfitting of the training data and generalises the features represented by the network. In our case, the input layer of the convolutional neural network has 32 feature maps of size 3 by 3, and the activation function is a Rectified Linear Unit. The max pool layer has a size of 2×2. The dropout is set to 50 percent and the layer is flattened. The last layer of the network is a fully connected output layer with ten units, and the activation function is SoftMax. We then compile the model using categorical cross-entropy as the loss function and Adam as the optimiser.
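The architecture described above can be sketched with tf.keras. This is a sketch under stated assumptions, not the project's exact training script: the single conv/pool block and the 1-channel 64×64 input are assumed from the description:

```python
# Sketch of the described classifier: 3×3 convolutions with ReLU, 2×2 max
# pooling, 50% dropout, flatten, and a 10-unit softmax output, compiled with
# categorical cross-entropy and the Adam optimiser.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),               # 64×64 binary images (assumed 1 channel)
    layers.Conv2D(32, (3, 3), activation='relu'),  # 32 feature maps, 3×3 filter
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.5),                           # dropout set to 50 percent
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),        # one unit per static sign
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Training would then call `model.fit` on the segmented 64×64 images with one-hot labels.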
6.2 SAMPLE CODE

from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
from tkinter.filedialog import askopenfilename
import cv2
import random
import numpy as np
from keras.utils.np_utils import to_categorical
from keras.layers import MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D
from keras.models import Sequential
from keras.models import model_from_json
import pickle
import os

main = tkinter.Tk()
main.title("Non-Binary Image Classification using Convolution Neural Networks")
main.geometry("1300x1200")

global filename
global classifier

# class names for the ten gesture categories
names = ['Palm', 'I', 'Fist', 'Fist Moved', 'Thumb', 'Index', 'OK',
         'Palm Moved', 'C', 'Down']

# background subtractor used to isolate the moving hand from the scene
bgModel = cv2.createBackgroundSubtractorMOG2(0, 50)

def remove_background(frame):
    fgmask = bgModel.apply(frame, learningRate=0)
    kernel = np.ones((3, 3), np.uint8)
    fgmask = cv2.erode(fgmask, kernel, iterations=1)
    res = cv2.bitwise_and(frame, frame, mask=fgmask)
    return res

def uploadDataset():
    global filename
    global labels
    labels = []
    filename = filedialog.askdirectory(initialdir=".")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, filename + " loaded\n\n")

def trainCNN():
    global classifier
    text.delete('1.0', END)
    X_train = np.load('model/X.txt.npy')
    Y_train = np.load('model/Y.txt.npy')
    text.insert(END, "CNN is training on total images : " + str(len(X_train)) + "\n")
    if os.path.exists('model/model.json'):
        # reload a previously trained model instead of retraining
        with open('model/model.json', "r") as json_file:
            loaded_model_json = json_file.read()
            classifier = model_from_json(loaded_model_json)
        classifier.load_weights("model/model_weights.h5")
        classifier._make_predict_function()
        print(classifier.summary())
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[-1] * 100  # accuracy from the final training epoch
        text.insert(END, "CNN Hand Gesture Training Model Prediction Accuracy = " + str(accuracy))
    else:
        classifier = Sequential()
        classifier.add(Convolution2D(32, 3, 3, input_shape=(64, 64, 3), activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Convolution2D(32, 3, 3, activation='relu'))
        classifier.add(MaxPooling2D(pool_size=(2, 2)))
        classifier.add(Flatten())
        classifier.add(Dense(output_dim=256, activation='relu'))
        # ten output units, one per gesture class in `names`
        classifier.add(Dense(output_dim=10, activation='softmax'))
        print(classifier.summary())
        classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
        hist = classifier.fit(X_train, Y_train, batch_size=16, epochs=10, shuffle=True, verbose=2)
        classifier.save_weights('model/model_weights.h5')
        model_json = classifier.to_json()
        with open("model/model.json", "w") as json_file:
            json_file.write(model_json)
        f = open('model/history.pckl', 'wb')
        pickle.dump(hist.history, f)
        f.close()
        f = open('model/history.pckl', 'rb')
        data = pickle.load(f)
        f.close()
        acc = data['accuracy']
        accuracy = acc[-1] * 100  # accuracy from the final training epoch
        text.insert(END, "CNN Hand Gesture Training Model Prediction Accuracy = " + str(accuracy))

def classifyFlower():
    filename = filedialog.askopenfilename(initialdir="testImages")
    img = cv2.imread(filename, cv2.IMREAD_COLOR)
    img = cv2.flip(img, 1)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (41, 41), 0)  # the tuple gives the blur kernel size
    ret, thresh = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    thresh = cv2.resize(thresh, (224, 224))
    thresh = np.array(thresh)
    frame = np.stack((thresh,) * 3, axis=-1)  # replicate the binary mask into 3 channels
    frame = cv2.resize(frame, (64, 64))
    frame = frame.reshape(1, 64, 64, 3)
    frame = np.array(frame, dtype='float32')
    frame /= 255
    predict = classifier.predict(frame)
    result = names[np.argmax(predict)]
    img = cv2.imread(filename)
    img = cv2.resize(img, (600, 400))
    cv2.putText(img, 'Hand Gesture Classified as : ' + result, (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 0, 0), 2)
    cv2.imshow('Hand Gesture Classified as : ' + result, img)
    cv2.waitKey(0)

def webcamPredict():
    videofile = askopenfilename(initialdir="video")
    video = cv2.VideoCapture(videofile)
    while video.isOpened():
        ret, frame = video.read()
        if ret == True:
            img = frame
            img = cv2.flip(img, 1)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (41, 41), 0)  # the tuple gives the blur kernel size
            ret, thresh = cv2.threshold(blur, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            thresh = cv2.resize(thresh, (64, 64))
            thresh = np.array(thresh)
            img = np.stack((thresh,) * 3, axis=-1)
            img = cv2.resize(img, (64, 64))
            img = img.reshape(1, 64, 64, 3)
            img = np.array(img, dtype='float32')
            img /= 255
            predict = classifier.predict(img)
            print(np.argmax(predict))
            result = names[np.argmax(predict)]
            cv2.putText(frame, 'Gesture Recognize as : ' + str(result), (10, 25),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)
            cv2.imshow("video frame", frame)
            if cv2.waitKey(950) & 0xFF == ord('q'):
                break
        else:
            break
    video.release()
    cv2.destroyAllWindows()

font = ('times', 16, 'bold')
title = Label(main, text='Hand Gesture Recognition using Convolution Neural Networks', anchor=W, justify=CENTER)
title.config(bg='yellow4', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)
font1 = ('times', 13, 'bold')
upload = Button(main, text="Upload Hand Gesture Dataset", command=uploadDataset)
upload.place(x=50, y=100)
upload.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='yellow4', fg='white')
pathlabel.config(font=font1)
pathlabel.place(x=50, y=150)

markovButton = Button(main, text="Train CNN with Gesture Images", command=trainCNN)
markovButton.place(x=50, y=200)
markovButton.config(font=font1)

lexButton = Button(main, text="Upload Test Image & Recognize Gesture", command=classifyFlower)
lexButton.place(x=50, y=250)
lexButton.config(font=font1)

predictButton = Button(main, text="Recognize Gesture from Video", command=webcamPredict)
predictButton.place(x=50, y=300)
predictButton.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=15, width=78)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=450, y=100)
text.config(font=font1)
main.mainloop()  # start the Tkinter event loop so the window stays open

7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit, before integration. This is structural testing, which relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, and special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing
White box testing is testing in which the tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
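The valid-input and invalid-input classes listed above translate naturally into table-driven functional checks. The sketch below assumes a hypothetical lookup function `label_for` that maps a predicted class index to a gesture name (the labels are taken from the gestures shown in the screenshots chapter; the function itself is illustrative, not the project's code).

```python
# Hypothetical function under test: maps a predicted class index to a
# gesture label. The label list is illustrative.
GESTURE_LABELS = ["OK", "Peace", "Reverse", "Right Reverse", "Right Forward"]

def label_for(index):
    if not isinstance(index, int) or not 0 <= index < len(GESTURE_LABELS):
        raise ValueError("unknown gesture class: %r" % (index,))
    return GESTURE_LABELS[index]

# Table-driven functional checks: one row per identified input class.
valid_cases = [(0, "OK"), (1, "Peace"), (4, "Right Forward")]
invalid_cases = [-1, 5, "two", None]

for index, expected in valid_cases:
    assert label_for(index) == expected       # valid input must be accepted
for bad in invalid_cases:
    try:
        label_for(bad)
    except ValueError:
        pass                                  # invalid input must be rejected
    else:
        raise AssertionError("accepted invalid input: %r" % (bad,))
```

Keeping the cases in tables makes it easy to extend coverage as new input classes are identified during test preparation.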
The software under test is treated as a black box: the tester cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing
Unit testing is usually conducted as part of a combined code-and-unit-test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

7.2 TESTING STRATEGIES

Field testing will be performed manually, and functional tests will be written in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages, and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

Acceptance Testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.
7. SCREENSHOTS

Fig 1: In the above screen, click on the 'Upload Hand Gesture Dataset' button to upload the dataset and get the screen below.

Fig 2: In the above screen, select and upload the 'Dataset' folder, then click the 'Select Folder' button to load the dataset and get the screen below.
Fig 3: In the above screen, the dataset has been loaded; now click on the 'Train CNN with Gesture Images' button to train the CNN model and get the screen below.

Fig 4: In the above screen, the CNN model was trained on 2000 images and its prediction accuracy came out as 100%. The model is now ready; click on the 'Upload Test Image & Recognize Gesture' button to upload an image for gesture recognition.
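The exact CNN architecture is not reproduced in this chapter, but a quick way to sanity-check any candidate architecture before training is to compute the feature-map size after each layer. The sketch below does this for an assumed 50x50 input and an assumed two-conv, two-pool stack with 32 filters in the last convolution; these numbers are illustrative assumptions, not the report's actual configuration.

```python
# Sanity-check sketch (assumed architecture, for illustration only):
# compute the spatial size of the feature map after each conv/pool layer.
def conv_out(size, kernel, stride=1, padding=0):
    """Output width/height of a convolution: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, window, stride=None):
    """Output width/height of a pooling layer (stride defaults to window)."""
    return conv_out(size, window, stride or window)

size = 50                      # assumed 50x50 input images
size = conv_out(size, 3)       # 3x3 conv -> 48
size = pool_out(size, 2)       # 2x2 pool -> 24
size = conv_out(size, 3)       # 3x3 conv -> 22
size = pool_out(size, 2)       # 2x2 pool -> 11
flattened = size * size * 32   # 32 assumed filters in the last conv layer
print(size, flattened)         # 11 3872
```

This arithmetic determines the input size of the first dense layer, which is where mismatched dimensions most often surface as training-time errors.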
Fig 5: In the above screen, select and upload the '14.png' file, then click the 'Open' button to get the result below.

Fig 6: In the above screen, the gesture is recognized as 'OK'; similarly, you can upload any image and get its result. Now click on the 'Recognize Gesture from Video' button to upload a video and get the result.
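Behind the 'Upload Test Image & Recognize Gesture' button, the trained CNN returns one probability per gesture class and the highest-scoring label is displayed. The decoding step can be sketched as below; the label list and the example probabilities are illustrative assumptions, not the report's actual model output.

```python
import numpy as np

# Decode a softmax-style output vector into (label, confidence).
# Labels and scores here are illustrative assumptions.
GESTURE_LABELS = ["OK", "Peace", "Reverse", "Right Reverse", "Right Forward"]

def decode_prediction(probs, labels=GESTURE_LABELS):
    """Return (label, confidence) for the highest-scoring class."""
    probs = np.asarray(probs, dtype=float)
    best = int(np.argmax(probs))
    return labels[best], float(probs[best])

label, confidence = decode_prediction([0.91, 0.03, 0.02, 0.02, 0.02])
print(label, confidence)   # OK 0.91
```

The same decode step is reused for every uploaded image, which is why any image from the dataset classes can be recognized once the model is trained.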
Fig 7: In the above screen, select and upload the 'video.avi' file, then click on the 'Open' button to get the result below.

Fig 9
Fig 10: In the above screen, as the video plays, the recognition result is shown. Newly added signs are shown in the screen below.
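When a model classifies a video frame by frame, the displayed label can flicker between classes on noisy frames. One common remedy, sketched below as an illustration rather than as part of the report's code, is a sliding-window majority vote over the last few per-frame predictions.

```python
from collections import Counter, deque

# Illustrative sketch: stabilise per-frame gesture predictions on video by
# taking a majority vote over a sliding window of recent labels.
def smooth_predictions(frame_labels, window=5):
    """Yield the majority label over the last `window` per-frame predictions."""
    recent = deque(maxlen=window)
    for label in frame_labels:
        recent.append(label)
        yield Counter(recent).most_common(1)[0][0]

frames = ["OK", "OK", "Peace", "OK", "OK", "Peace", "Peace", "Peace"]
print(list(smooth_predictions(frames, window=3)))
# ['OK', 'OK', 'OK', 'OK', 'OK', 'OK', 'Peace', 'Peace']
```

A single misclassified frame no longer changes the on-screen label; the displayed gesture only switches once the new sign dominates the window.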
Fig 11: In the above screen, show all five fingers to get the 'Five' recognition, as in the screen below.

Fig 12: Show two fingers to get the 'Peace' sign, as in the screen below.
Fig 13: For the 'Reverse' sign, two fingers should be shown, and three fingers for 'Right Reverse'.

Fig 15: Show four fingers for 'Right Forward', as in the screen below.
Fig 16: Similarly, whatever signs are present in the new dataset folder can be shown to the camera to get the corresponding output.

Fig 17
8. CONCLUSIONS

Our study on sign language recognition using Convolutional Neural Networks (CNNs) highlighted the immense diversity and complexity of sign languages, which differ across countries in terms of gestures, body language, and sentence structure. Capturing precise hand movements and creating a comprehensive dataset posed challenges, as some gestures proved difficult to reproduce accurately. Consistent hand positions during data collection were critical to maintaining dataset quality. Furthermore, understanding the unique grammatical rules and contextual nuances of each sign language was essential to developing a robust recognition system. Despite the challenges, our research underscored the significance of recognizing and preserving the richness and expressiveness of sign languages, and we remain committed to advancing assistive technologies for improved communication and inclusivity in the future.
9. REFERENCES

[1] https://peda.net/id/08f8c4a8511
[2] K. Bantupalli and Y. Xie, "American Sign Language Recognition using Deep Learning and Computer Vision," 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp. 4896-4899, doi: 10.1109/BigData.2018.8622141.
[3] Cabrera, Maria, Bogado, Juan, Fermín, Leonardo, Acuña, Raul, and Ralev, Dimitar (2012). Glove-Based Gesture Recognition System. doi: 10.1142/9789814415958_0095.
[4] He, Siming (2019). Research of a Sign Language Translation System Based on Deep Learning, pp. 392-396. doi: 10.1109/AIAM48774.2019.00083.
[5] International Conference on Trendz in Information Sciences and Computing (TISC), pp. 30-35, 2012.
[6] Herath, H.C.M., Kumari, W.A.L.V., Senevirathne, W.A.P.B., and Dissanayake, Maheshi (2013). Image Based Sign Language Recognition System for Sinhala Sign Language.
[7] M. Geetha and U. C. Manjusha, "A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation," International Journal on Computer Science and Engineering (IJCSE), vol. 4, no. 3, pp. 406-415, 2012.
[8] Pigou, L., Dieleman, S., Kindermans, P.J., and Schrauwen, B. (2015). Sign Language Recognition Using Convolutional Neural Networks. In: Agapito, L., Bronstein, M., Rother, C. (eds) Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture Notes in Computer Science, vol 8925. Springer, Cham. https://doi.org/10.1007/978-3-319-16178-5_40
[9] Escalera, S., Baró, X., Gonzàlez, J., Bautista, M., Madadi, M., Reyes, M., . . ., and Guyon, I. (2014). ChaLearn Looking at People Challenge 2014: Dataset and Results. Workshop at the European Conference on Computer Vision (pp. 459-473). Springer, Cham.
[10] Huang, J., Zhou, W., and Li, H. (2015). Sign Language Recognition using 3D Convolutional Neural Networks. IEEE International Conference on Multimedia and Expo (ICME) (pp. 1-6). Turin: IEEE.
[11] Carreira, J., and Zisserman, A. (2017). Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on (pp. 4724-4733). IEEE, Honolulu.
[12] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on (pp. 248-255). IEEE, Miami, FL, USA.
[13] Soomro, K., Zamir, A. R., and Shah, M. (2012). UCF101: A Dataset of 101 Human Action Classes From Videos in the Wild. Computer Vision and Pattern Recognition, arXiv:1212.0402v1, pp. 1-7.
[14] Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., and Serre, T. (2011). HMDB: A Large Video Database for Human Motion Recognition. Computer Vision (ICCV), 2011 IEEE International Conference on (pp. 2556-2563). IEEE.
[15] Zhao, Ming, Bu, Jiajun, and Chen, C. (2002). Robust Background Subtraction in HSV Color Space. Proceedings of SPIE MSAV, vol. 4861. doi: 10.1117/12.456333.
[16] Chowdhury, A., Sang-jin Cho, and Ui-Pil Chong (2011). A Background Subtraction Method Using Color Information in the Frame Averaging Process. Proceedings of 2011 6th International Forum on Strategic Technology. doi: 10.1109/i