
Commit 844d69c

Update README.md
1 parent f3ada4d commit 844d69c


README.md

Lines changed: 8 additions & 7 deletions
@@ -18,20 +18,21 @@ The projects are divided into various categories listed below -
- [Linear Regression Single Variables.](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Linear%20Regression/LinearRegressionSingle%20Variables.ipynb) : A simple Linear Regression model of the linear relationship between Population and Profit for plot sales.
- [Linear Regression Multiple Variables.](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Linear%20Regression/LinearRegressionMultipleVariables.ipynb) : In this project, I build a Linear Regression model with multiple variables to predict the house price from acres and number of rooms.

-- [Logistic Regression](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Logistic%20Regression/Logistic/Untitled.ipynb) : In this project, I train a binary Logistic Regression classifier to predict whether a student will get selected based on mid-semester and end-semester marks.
+- [**Logistic Regression**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Logistic%20Regression/Logistic/Untitled.ipynb) : In this project, I train a binary Logistic Regression classifier to predict whether a student will get selected based on mid-semester and end-semester marks.

-- [Support Vector Machine](https://github.com/suubh/Machine-Learning-in-Python/blob/master/SVM/Untitled.ipynb) : In this project, I build a Support Vector Machine classifier on the Social Network Ads dataset. It predicts whether a user, given age and estimated salary, will buy the product after watching the ads. It uses the Radial Basis Function (RBF) kernel of SVM. (90.83%)
+- [**Support Vector Machine**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/SVM/Untitled.ipynb) : In this project, I build a Support Vector Machine classifier on the Social Network Ads dataset. It predicts whether a user, given age and estimated salary, will buy the product after watching the ads. It uses the Radial Basis Function (RBF) kernel of SVM. (90.83%)

-- [K Nearest Neighbours](https://github.com/suubh/Machine-Learning-in-Python/blob/master/K-NN/Untitled.ipynb) : K Nearest Neighbours, or KNN, is one of the simplest machine learning algorithms. In this project, I build a kNN classifier on the Iris Species dataset which predicts the three species of Iris from four features: `sepal_length`, `sepal_width`, `petal_length` and `petal_width`.
+- [**K Nearest Neighbours**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/K-NN/Untitled.ipynb) : K Nearest Neighbours, or KNN, is one of the simplest machine learning algorithms. In this project, I build a kNN classifier on the Iris Species dataset which predicts the three species of Iris from four features: `sepal_length`, `sepal_width`, `petal_length` and `petal_width`.

-- [Naive Bayes](https://github.com/suubh/Machine-Learning-in-Python/blob/master/TextClassification/Textclassification.ipynb) : In this project, I build a Naïve Bayes classifier to predict the class of messages from the sklearn dataset [fetch_20newsgroups](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html).
+- [**Naive Bayes**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/TextClassification/Textclassification.ipynb) : In this project, I build a Naïve Bayes classifier to predict the class of messages from the sklearn dataset [fetch_20newsgroups](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html).

-- [Decision Tree Classification](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Decision%20Tree/Untitled.ipynb) : In this project, I used the Iris dataset and tried a Decision Tree Classifier, which gives an accuracy of 96.7%, lower than KNN (98.33%).
+- [**Decision Tree Classification**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/Decision%20Tree/Untitled.ipynb) : In this project, I used the Iris dataset and tried a Decision Tree Classifier, which gives an accuracy of 96.7%, lower than KNN (98.33%).

-- [Random Forest Classification](https://github.com/suubh/Machine-Learning-in-Python/blob/master/RandomForest/RandomForest.ipynb) : In this project, I used a Random Forest Classifier (90.0%) and a Random Forest Regressor (61.8%) on the Social Network Ads dataset.
+- [**Random Forest Classification**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/RandomForest/RandomForest.ipynb) : In this project, I used a Random Forest Classifier (90.0%) and a Random Forest Regressor (61.8%) on the Social Network Ads dataset.

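The regression bullets above all follow the same scikit-learn fit/predict pattern. Below is a minimal sketch of that pattern for the single-variable, multiple-variable, and logistic cases; the arrays are synthetic stand-ins, since the notebooks load their own population/profit, house-price, and marks data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic stand-in data (the notebooks load their own CSV files).
rng = np.random.default_rng(0)
population = rng.uniform(5, 25, size=(100, 1))                      # single feature
profit = 1.2 * population[:, 0] - 4 + rng.normal(0, 2, 100)

# Single-variable linear regression: profit ~ population.
lin = LinearRegression().fit(population, profit)
print(lin.coef_, lin.intercept_)

# Multiple-variable linear regression: price ~ [acres, rooms].
X_house = rng.uniform([0.1, 1], [2.0, 8], size=(100, 2))
price = X_house @ np.array([150_000, 25_000]) + rng.normal(0, 10_000, 100)
multi = LinearRegression().fit(X_house, price)

# Binary logistic regression: selected ~ [mid-semester marks, end-semester marks].
marks = rng.uniform(0, 100, size=(200, 2))
selected = (marks.mean(axis=1) > 55).astype(int)
clf = LogisticRegression().fit(marks, selected)
print(clf.predict([[60, 70]]))
```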
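For the Support Vector Machine bullet, a hedged sketch of an RBF-kernel `SVC` on scaled age/salary features follows; the data here is synthetic and stands in for the Social Network Ads CSV used in the notebook.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in for the Social Network Ads data: age and estimated salary -> purchased.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(18, 60, 400), rng.uniform(15_000, 150_000, 400)])
y = ((X[:, 0] > 40) & (X[:, 1] > 60_000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scaling matters for the RBF kernel because it is distance-based.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
print(accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```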
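The k-NN bullet uses the Iris species data with the four standard measurements, which scikit-learn ships, so a sketch can be fully self-contained; `n_neighbors=5` is an assumption and the notebook's value of k may differ.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Iris: sepal_length, sepal_width, petal_length, petal_width -> species.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.score(X_test, y_test))
```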
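The Naive Bayes bullet classifies `fetch_20newsgroups` posts. A common way to wire that up is a TF-IDF vectorizer feeding `MultinomialNB`, sketched below; the notebook's exact preprocessing may differ, and the dataset downloads on first use.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

# Vectorize the raw text, then fit a multinomial Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train.data, train.target)
print(model.score(test.data, test.target))
```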
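The decision-tree and random-forest bullets differ mainly in the estimator class. The sketch below fits both on Iris so the accuracies are directly comparable; note that the notebook actually runs the forest (and a `RandomForestRegressor`, omitted here) on the Social Network Ads data rather than Iris.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same fit/score loop for the single tree and the ensemble of trees.
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```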
## Unsupervised Learning
-- [K Means Clustering](https://github.com/suubh/Machine-Learning-in-Python/blob/master/K-means/creditcard.ipynb) K-Means clustering is used to find intrinsic groups within the unlabelled dataset and draw inferences. It is one of the most detailed projects: I implement K-Means clustering on a credit card dataset to cluster credit card users based on their features. I scaled the data with `StandardScaler` because normalization improves convergence. I also implemented the [Elbow Method](https://en.wikipedia.org/wiki/Elbow_method_(clustering)) to search for the best number of clusters. To visualize the dataset I used [PCA (Principal Component Analysis)](https://en.wikipedia.org/wiki/Principal_component_analysis) for dimensionality reduction, since the dataset has many features. In the end I used the [Silhouette Score](), which measures clustering performance; it ranges from -1 to 1, and I got a score of 0.203.
+- [**K Means Clustering**](https://github.com/suubh/Machine-Learning-in-Python/blob/master/K-means/creditcard.ipynb)
+  K-Means clustering is used to find intrinsic groups within the unlabelled dataset and draw inferences. It is one of the most detailed projects: I implement K-Means clustering on a credit card dataset to cluster credit card users based on their features. I scaled the data with `StandardScaler` because normalization improves convergence. I also implemented the [Elbow Method](https://en.wikipedia.org/wiki/Elbow_method_(clustering)) to search for the best number of clusters. To visualize the dataset I used [PCA (Principal Component Analysis)](https://en.wikipedia.org/wiki/Principal_component_analysis) for dimensionality reduction, since the dataset has many features. In the end I used the [Silhouette Score](), which measures clustering performance; it ranges from -1 to 1, and I got a score of 0.203.

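The K-Means description above names the full pipeline: `StandardScaler`, an elbow search over k, PCA for a 2-D view, and a silhouette score. A compact sketch of that pipeline, with synthetic features standing in for the credit card CSV and k=4 chosen arbitrarily:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# Stand-in for the credit card features (the notebook loads its own CSV).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))

X_scaled = StandardScaler().fit_transform(X)

# Elbow method: inspect inertia as k grows and look for the "bend".
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_scaled)
    print(k, km.inertia_)

# Final model, silhouette score, and a 2-D PCA projection for plotting.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)
print("silhouette:", silhouette_score(X_scaled, km.labels_))
X_2d = PCA(n_components=2).fit_transform(X_scaled)  # scatter-plot these columns, coloured by km.labels_
```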
## NLP (Natural Language Processing)
- [Text Analytics]()
