This is day 91 of my #100daysofcode and #python learning journey. As for today's progress, I wrote one blog post and pushed it to GitHub, and did some coding on a random topic.
As usual, I also kept learning from the DataCamp Natural Language Processing course, today on the topic of Word Tokenization with NLTK.
Code:
# Import necessary modules
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize

# Split scene_one into sentences: sentences
sentences = sent_tokenize(scene_one)

# Use word_tokenize to tokenize the fourth sentence: tokenized_sent
tokenized_sent = word_tokenize(sentences[3])

# Make a set of unique tokens in the entire scene: unique_tokens
unique_tokens = set(word_tokenize(scene_one))

# Print the unique tokens result
print(unique_tokens)
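The snippet above relies on scene_one, which DataCamp pre-loads in the exercise. For anyone trying it outside that environment, here is a minimal, self-contained sketch; the short placeholder string standing in for scene_one is my own assumption, and the punkt download is the standard NLTK tokenizer data.

# Minimal, self-contained sketch (placeholder text assumed for scene_one)
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt")  # tokenizer models used by sent_tokenize and word_tokenize

# Hypothetical stand-in for the scene text that DataCamp pre-loads
scene_one = ("SCENE 1: [wind] [clop clop clop] KING ARTHUR: Whoa there! "
             "SOLDIER #1: Halt! Who goes there? "
             "KING ARTHUR: It is I, Arthur, son of Uther Pendragon. "
             "SOLDIER #1: Pull the other one!")

sentences = sent_tokenize(scene_one)           # split the scene into sentences
tokenized_sent = word_tokenize(sentences[3])   # tokenize the fourth sentence
unique_tokens = set(word_tokenize(scene_one))  # unique tokens in the whole scene

print(tokenized_sent)
print(len(unique_tokens))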
Day 91 Of #100daysofcode and #Python
— Durga Pokharel (@durgacodes) March 30, 2021
Word Tokenization with NLTK From https://t.co/b2X089pkqc DataCamp #WomenWhoCode #CodeNewbie #100DaysOfCode #DEVCommunity pic.twitter.com/xBLjPrnUT6