Day 91 Of 100DaysOfCode: Word Tokenization with NLTK
Durga Pokharel
Posted on March 30, 2021
This is the 91st day of my #100daysofcode and #python learning journey. Talking about today's progress, I wrote one blog post, pushed it to GitHub, and did some coding on a random topic.
As usual, I also kept learning from the DataCamp Natural Language Processing chapter, today on the topic Word Tokenization with NLTK.
Here is the code from today's lesson:
# Import necessary modules
from nltk.tokenize import sent_tokenize, word_tokenize

# scene_one holds the raw text to tokenize (it is pre-loaded in the DataCamp exercise)

# Split scene_one into sentences: sentences
sentences = sent_tokenize(scene_one)

# Use word_tokenize to tokenize the fourth sentence: tokenized_sent
tokenized_sent = word_tokenize(sentences[3])

# Make a set of unique tokens in the entire scene: unique_tokens
unique_tokens = set(word_tokenize(scene_one))

# Print the unique tokens
print(unique_tokens)
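Since scene_one only exists inside the DataCamp exercise, here is a minimal self-contained sketch of the same steps, assuming a made-up sample_text string in place of the scene text and that the NLTK punkt tokenizer data has been downloaded:

# Stand-alone sketch: sample_text is a made-up stand-in for scene_one
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download('punkt')  # tokenizer models used by sent_tokenize and word_tokenize

sample_text = "NLTK makes tokenization easy. It splits text into sentences, then into words."

# Split the text into sentences
sentences = sent_tokenize(sample_text)

# Tokenize the first sentence into words
tokenized_sent = word_tokenize(sentences[0])
print(tokenized_sent)   # ['NLTK', 'makes', 'tokenization', 'easy', '.']

# Make a set of unique tokens in the entire text
unique_tokens = set(word_tokenize(sample_text))
print(unique_tokens)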