An overview of natural language processing and DL4J

Unstructured data can contain a huge amount of useful information. However, because of the complexities involved in processing and analyzing it, practitioners often avoid spending the extra time and effort needed to venture outside of their comfort zones—i.e., outside of structured datasets—to analyze these unstructured goldmines.
Ran Romano, Co-founder & CPO at Qwak
June 23, 2022

Natural language processing (NLP) is a field within machine learning that focuses on using tools, techniques, and algorithms to process and understand natural language data, such as text and speech, which is inherently unstructured. In this article, we’ll cover some of the basics of NLP and see what ML teams can do with DeepLearning4J.

Components of NLP

There are two important components in NLP: natural language understanding (NLU) and natural language generation (NLG).

Natural language understanding enables the machine to understand and analyze human language by extracting metadata from content such as keywords, concepts, emotions, relations, and semantic roles. It’s mainly used in business applications to help businesses understand customer problems in both spoken and written form. NLU involves tasks like:

  • Mapping a given input into a useful representation
  • Analyzing different aspects of language

Meanwhile, natural language generation acts as a translator that converts computerized data into natural language representation. NLG involves tasks like:

  • Text planning and text realization
  • Sentence planning

In short, NLU is the process of reading and interpreting language, whereas NLG is the process of writing or generating language. The former produces non-linguistic outputs from natural language inputs, while the latter constructs natural language outputs from non-linguistic inputs.

Building an NLP pipeline

Building an NLP pipeline for ML applications involves several steps, which we’ll summarize below.

1. Sentence segmentation

Sentence segmentation is the first step in the NLP pipeline, and it’s used to break paragraphs into individual sentences. Consider this paragraph:

“Machine learning (ML) is a field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.”

Sentence segmentation would produce the following results:

  • Machine learning (ML) is a field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks.
  • It is seen as a part of artificial intelligence.
  • Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.
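
As a rough illustration, sentence segmentation can be sketched in plain Java using java.text.BreakIterator, a simple rule-based splitter from the JDK (real pipelines typically use trained models for this step):

import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SentenceSegmenter {
    public static List<String> segment(String paragraph) {
        List<String> sentences = new ArrayList<>();
        // BreakIterator applies locale-specific rules to find sentence boundaries
        BreakIterator boundary = BreakIterator.getSentenceInstance(Locale.US);
        boundary.setText(paragraph);
        int start = boundary.first();
        for (int end = boundary.next(); end != BreakIterator.DONE; start = end, end = boundary.next()) {
            sentences.add(paragraph.substring(start, end).trim());
        }
        return sentences;
    }
}

Running segment() on the paragraph above would produce the three sentences listed.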

2. Word tokenization

A word tokenizer is used to break a sentence into separate words, or tokens. For example, “It is seen as a part of artificial intelligence.” becomes:

  • “It”, “is”, “seen”, “as”, “a”, “part”, “of”, “artificial”, “intelligence”, “.”
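
A naive tokenizer can be sketched in a few lines of Java; this version simply splits punctuation off as its own token before splitting on whitespace (production tokenizers handle far more edge cases, such as abbreviations and contractions):

import java.util.Arrays;
import java.util.List;

public class NaiveTokenizer {
    public static List<String> tokenize(String sentence) {
        // Pad common punctuation with a space so it becomes its own token, then split on whitespace
        return Arrays.asList(sentence.replaceAll("([.,!?;])", " $1").trim().split("\\s+"));
    }
}

Calling tokenize("It is seen as a part of artificial intelligence.") returns exactly the token list above.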

3. Stemming

Stemming is used to normalize words into their base or root form. For example, the words “intelligence” and “intelligently” are both reduced to a single root form (e.g., “intellig”). The problem with stemming is that the root it produces may not itself be a meaningful word, as in that example.

4. Lemmatization

Lemmatization is quite similar to stemming. It groups the different inflected forms of a word back into its base form, known as the “lemma”. The main difference from stemming is that lemmatization produces a root word that is itself a valid word with a meaning.
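
To make the distinction concrete, here is a toy comparison: a crude suffix-stripping stemmer next to a dictionary-based lemma lookup. Both the suffix rules and the lemma table are illustrative assumptions, not a real implementation:

import java.util.Map;

public class StemVsLemma {
    // Crude suffix stripping: the output may not be a real word ("studies" -> "studi")
    static String stem(String word) {
        if (word.endsWith("ies")) return word.substring(0, word.length() - 3) + "i";
        if (word.endsWith("ing")) return word.substring(0, word.length() - 3);
        if (word.endsWith("s")) return word.substring(0, word.length() - 1);
        return word;
    }

    // Lemma lookup: the output is always a valid dictionary word ("studies" -> "study")
    static final Map<String, String> LEMMAS = Map.of(
            "studies", "study",
            "better", "good",
            "ran", "run");

    static String lemmatize(String word) {
        return LEMMAS.getOrDefault(word, word);
    }
}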

5. Identifying stop words

In English, many words appear very frequently, for example “is”, “and”, “the”, “it”, and “a”. NLP pipelines flag these words as stop words, and they might be filtered out prior to any statistical analysis.

For example, in the sentence:

  • “That is not how you do it.”

nearly every word would be flagged as a stop word, which is one reason stop-word removal has to be applied with care.
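
Filtering stop words is usually just a set lookup over the token list. A minimal sketch, assuming a tiny illustrative stop-word list (real lists contain a hundred or more entries):

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StopWordFilter {
    // Tiny illustrative stop-word list; real lists are much longer
    static final Set<String> STOP_WORDS =
            Set.of("that", "is", "not", "how", "you", "do", "it", "a", "an", "the", "and");

    public static List<String> removeStopWords(List<String> tokens) {
        return tokens.stream()
                .filter(t -> !STOP_WORDS.contains(t.toLowerCase()))
                .collect(Collectors.toList());
    }
}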

6. Dependency parsing

Dependency parsing is used to identify how the words in a sentence are related to one another.

7. POS tags

POS means “parts of speech”, which include nouns, verbs, adverbs, and adjectives. A POS tag indicates how a word functions, both in meaning and grammatically, within a sentence, and a word might take one or more POS tags depending on the context of the sentence.

For example:

  • Please can you Google something for me?

Although “Google” is a proper noun, in this sentence it functions as a verb.

8. Named entity recognition

Named entity recognition detects named entities in speech or written text, such as a person’s name, the name of a place, or the title of a movie. 

For example:

  • Barack Obama was the President of the United States from 2009 to 2017. 
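
Real NER relies on trained models, but a deliberately naive heuristic, matching runs of capitalized words, shows the shape of the task. The regex below is an illustrative assumption, not a usable recognizer; note that it over-matches (on the example above it would also pick up “President”):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaiveEntityMatcher {
    // Matches runs of capitalized words, e.g. "Barack Obama", "United States"
    static final Pattern CANDIDATE = Pattern.compile("[A-Z][a-z]+(?:\\s+[A-Z][a-z]+)*");

    public static List<String> candidates(String sentence) {
        List<String> found = new ArrayList<>();
        Matcher m = CANDIDATE.matcher(sentence);
        while (m.find()) {
            found.add(m.group());
        }
        return found;
    }
}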

9. Chunking

Chunking collects the individual pieces of information, such as POS-tagged tokens, and groups them into larger units like noun phrases and verb phrases.

Phases of NLP

Now that we’ve looked at the pieces of a typical NLP pipeline, let’s look at the five key phases of NLP before putting everything together in a DL4J example. These phases are:

  1. Lexical analysis—Lexical analysis scans the source text as a stream of characters and converts it into meaningful lexemes, dividing the whole text into paragraphs, sentences, and words.
  2. Syntactic analysis—Syntactic analysis checks grammar and word arrangement, and shows the relationships that exist among the words.
  3. Semantic analysis—Semantic analysis looks at the literal meaning of words, phrases, and sentences.
  4. Discourse integration—Discourse integration captures how the meaning of a sentence depends on the sentences that precede it and shapes the meaning of the sentences that follow it.
  5. Pragmatic analysis—Pragmatic analysis applies a set of rules that characterize cooperative dialogues, interpreting what was actually meant in context.

NLP in DeepLearning4J

Eclipse Deeplearning4J (DL4J) is a suite of tools for running deep learning on the Java Virtual Machine (JVM). DL4J is the only framework that enables ML teams to train models from Java while interoperating with the Python ecosystem, through a mix of Python execution via DL4J’s CPython bindings, model import support, and interop with other runtimes such as TensorFlow Java and ONNX Runtime.

Use cases for DL4J include importing and retraining models (PyTorch, TensorFlow, Keras) and deploying them in JVM microservice environments, on mobile devices, on IoT devices, and in Apache Spark. Used effectively, DL4J can be a great complement to an ML team’s Python environment for running models that were built in Python but deployed to, or packaged for, other environments.

Although DL4J isn’t exactly comparable to more high-level tools such as Stanford’s CoreNLP, it does include some text processing tools at its core that are useful for NLP. Let’s take a look at these in more detail. 

SentenceIterator

Processing natural language involves many steps, the first of which is to iterate over your corpus to create a list of documents. These can be as short as a sentence or two or as long as an entire article. 

This is done using something known as a SentenceIterator, which looks like this: 

// Locate the corpus on disk and iterate over it line by line
String filePath = new File(dataLocalPath, "raw_sentences.txt").getAbsolutePath();
SentenceIterator iter = new BasicLineIterator(filePath);

In this code, filePath points to the text file, and the BasicLineIterator treats each line as a sentence, stripping the whitespace before and after each one.

The SentenceIterator encapsulates a corpus of text and organizes it. It’s responsible for feeding text piece by piece into a natural language processor and for creating a selection of strings by segmenting the corpus.
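
Once constructed, the iterator is consumed sentence by sentence. A minimal usage sketch against DL4J’s SentenceIterator interface:

// Consume the corpus one sentence (here, one line) at a time
while (iter.hasNext()) {
    String sentence = iter.nextSentence();
    // hand the sentence off to the next pipeline stage, e.g. tokenization
}
iter.reset(); // rewind so the corpus can be traversed again, e.g. for another training pass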

Tokenizer

A Tokenizer further segments the text at the level of single words, or alternatively as n-grams. ClearTK provides the underlying tokenizers, along with annotations such as parts of speech and parse trees, which allow for both dependency and constituency parsing, such as that used by a recursive neural tensor network (RNTN).

A Tokenizer is created and wrapped by a TokenizerFactory. By default, tokens are words separated by spaces. The tokenization process involves some machine learning to differentiate between ambiguous symbols, such as the period, which both ends sentences and abbreviates words like “vs.” and “Dr.”

Both Tokenizers and SentenceIterators work with Preprocessors to deal with anomalies in messy text, such as Unicode issues, and to render such text uniformly, for example as lowercase characters.

public static void main(String[] args) throws Exception {

        dataLocalPath = DownloaderUtility.NLPDATA.Download();
        String filePath = new File(dataLocalPath, "raw_sentences.txt").getAbsolutePath();

        log.info("Load & Vectorize Sentences....");
        SentenceIterator iter = new BasicLineIterator(filePath);
        TokenizerFactory t = new DefaultTokenizerFactory();

        // CommonPreprocessor lowercases each token and strips punctuation
        t.setTokenPreProcessor(new CommonPreprocessor());
        // ...
}
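
Given the factory, an individual Tokenizer is created per string. A short sketch of how the tokens can then be read back:

// Create a tokenizer for a single sentence and walk its tokens
Tokenizer tokenizer = t.create("Machine learning algorithms build a model based on sample data.");
while (tokenizer.hasMoreTokens()) {
    String token = tokenizer.nextToken(); // already lowercased by the CommonPreprocessor
    log.info(token);
}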

Vocab

Each document must be tokenized to create a vocab, the set of words that are important to that corpus. These words are stored in the vocab cache, which contains statistics about a subset of words counted in the document. 

The line separating significant from insignificant words is not fixed, but the basic premise behind distinguishing the two groups is that words occurring only once or twice are hard to learn, and their presence amounts to unhelpful noise.

The vocab cache stores metadata for methods including Word2vec and Bag of Words, which treat words in radically different ways. For example, Word2vec creates representations of words in the form of vectors that are hundreds of coefficients long. These help neural networks predict the likelihood of a word appearing in any given context. 

Here’s a look at Word2vec configuration:

package org.deeplearning4j.examples.nlp.word2vec;


import org.deeplearning4j.examples.download.DownloaderUtility;
import org.deeplearning4j.models.word2vec.Word2Vec;
import org.deeplearning4j.text.sentenceiterator.BasicLineIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;
import org.deeplearning4j.text.tokenization.tokenizer.preprocessor.CommonPreprocessor;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.File;
import java.util.Collection;

public class Word2VecRawTextExample {

    private static Logger log = LoggerFactory.getLogger(Word2VecRawTextExample.class);

    public static String dataLocalPath;

    public static void main(String[] args) throws Exception {

        dataLocalPath = DownloaderUtility.NLPDATA.Download();
        // Gets Path to Text file
        String filePath = new File(dataLocalPath,"raw_sentences.txt").getAbsolutePath();

        log.info("Load & Vectorize Sentences....");
        SentenceIterator iter = new BasicLineIterator(filePath);
        TokenizerFactory t = new DefaultTokenizerFactory();

        t.setTokenPreProcessor(new CommonPreprocessor());

        log.info("Building model....");
        Word2Vec vec = new Word2Vec.Builder()
                .minWordFrequency(5)
                .iterations(1)
                .layerSize(100)
                .seed(42)
                .windowSize(5)
                .iterate(iter)
                .tokenizerFactory(t)
                .build();

        log.info("Fitting Word2Vec model....");
        vec.fit();

        log.info("Writing word vectors to text file....");

        log.info("Closest Words:");
        Collection<String> lst = vec.wordsNearestSum("day", 10);
        log.info("10 Words closest to 'day': {}", lst);
    }
}

When word vectors are obtained, they can then be fed into a deep network for classification, prediction, sentiment analysis, and more.
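
For instance, once the model above has been fitted, the learned vectors can be queried directly, whether for cosine similarity between two words or to pull out a word’s raw vector as input features for a downstream network:

// Cosine similarity between the learned vectors of two words (closer to 1.0 means more similar)
double sim = vec.similarity("day", "night");
log.info("Similarity between 'day' and 'night': {}", sim);

// The raw vector for a word (length = layerSize, 100 above), usable as downstream features
double[] dayVector = vec.getWordVector("day");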

Manage your NLP projects with Qwak

Want to build your own NLP project, train and evaluate your model in the cloud, and then send it to production, all from the same place? If so, Qwak has you covered! 

Qwak is the full-service machine learning platform that enables teams to take their models and transform them into well-engineered products. Our cloud-based platform removes the friction from ML development and deployment while enabling fast iterations, limitless scaling, and customizable infrastructure.

Want to find out more about how Qwak could help you deploy your ML models effectively? Get in touch for your free demo!

Chat with us to see the platform live and discover how we can help simplify your ML journey.
