Day 91 Of 100DaysOfCode: Word Tokenization with NLTK

iamdurga

Durga Pokharel

Posted on March 30, 2021


This is my 91st day of my #100daysofcode and #python learning journey. Talking about today's progress, I wrote one blog post and pushed it to GitHub, and also did some coding on a random topic.

As usual, I kept learning from the DataCamp chapter on Natural Language Processing, this time on the topic Word Tokenization with NLTK.

Code:

# Import necessary modules
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize

# Split scene_one into sentences: sentences
sentences = sent_tokenize(scene_one)

# Use word_tokenize to tokenize the fourth sentence: tokenized_sent
tokenized_sent = word_tokenize(sentences[3])

# Make a set of unique tokens in the entire scene: unique_tokens
unique_tokens = set(word_tokenize(scene_one))

# Print the unique tokens result
print(unique_tokens)

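In the DataCamp exercise, scene_one is a string that is pre-loaded for you (a scene from Monty Python and the Holy Grail), so the snippet above won't run on its own. Below is a minimal, self-contained sketch of the same steps, assuming a short placeholder string stands in for scene_one; the placeholder text and the punkt download line are my additions, not part of the exercise.

# Minimal runnable sketch (placeholder text stands in for DataCamp's scene_one)
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

# Both tokenizers rely on the Punkt sentence tokenizer models
nltk.download("punkt")

# Placeholder standing in for the exercise's pre-loaded scene_one variable
scene_one = (
    "SCENE 1: King Arthur rides up to the castle. "
    "He speaks with the guards. They argue about swallows. "
    "Arthur rides away. The guards keep arguing."
)

# Split the text into sentences
sentences = sent_tokenize(scene_one)

# Tokenize the fourth sentence into words
tokenized_sent = word_tokenize(sentences[3])
print(tokenized_sent)

# Collect the set of unique word tokens in the whole text
unique_tokens = set(word_tokenize(scene_one))
print(unique_tokens)

Running this prints the word tokens of the fourth sentence and then the set of unique tokens, which is exactly what the exercise asks for on the full scene.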

