Introduction to Topic Modeling in Python

PyTexas 2015

by Christine Doig


About me

Data Scientist at Continuum Analytics

Barcelona & Austin


About Continuum Analytics

Free Python distribution: Anaconda

Open source: conda, blaze, dask, bokeh, numba...

Proud sponsor of PyTexas, PyData, SciPy, PyCon, Europython...

We are hiring!

About this talk

  • Introduction
  • Topic Modeling
  • LDA Algorithm
  • Python libraries
  • Pipelines
  • Other algorithms
  • Additional resources

    Topic Modeling

    Topic Modeling Applications

    Building the NYT Recommendation Engine: From keywords over collaborative filtering to Collaborative Topic Modeling


  • A topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents [1]
  • Topic models are a suite of algorithms that uncover the hidden thematic structure in document collections. These algorithms help us develop new ways to search, browse and summarize large archives of texts [2]
  • Topic models provide a simple way to analyze large volumes of unlabeled text. A "topic" consists of a cluster of words that frequently occur together[3]
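A toy illustration of these definitions: each topic is a probability distribution over words, and each document is a mixture of topics. All numbers and words below are made up for illustration.

```python
import numpy as np

vocab = ["game", "team", "stock", "market"]
topics = np.array([[0.5, 0.5, 0.0, 0.0],   # a "sports" topic
                   [0.0, 0.0, 0.5, 0.5]])  # a "finance" topic
doc_mix = np.array([0.7, 0.3])             # this document: 70% sports, 30% finance

# Word distribution the model implies for this document
word_probs = doc_mix @ topics
print(dict(zip(vocab, word_probs)))  # "game" and "team" get most of the mass
```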





    LDA vs LDA: Latent Dirichlet Allocation (topic modeling) vs Linear Discriminant Analysis (classification)

    LDA Plate notation

  • Parameters and variables

    Understanding LDA

    LDA algorithm

    Iterative algorithm

    1. Initialize parameters
    2. Initialize topic assignments randomly
    3. Iterate
      • For each word in each document:
      • Resample topic for word, given all other words and their current topic assignments
    4. Get results
    5. Evaluate model
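The steps above can be sketched as a minimal collapsed Gibbs sampler. This is a toy illustration, not an optimized implementation; the corpus, number of topics, and hyperparameters alpha/beta are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

docs = [[0, 1, 2, 1], [2, 3, 4, 3], [0, 1, 4, 2]]  # documents as word-id lists
V, K = 5, 2                  # vocabulary size, number of topics
alpha, beta = 0.1, 0.01      # Dirichlet hyperparameters

# Steps 1-2: initialize counts and assign topics randomly
ndk = np.zeros((len(docs), K))   # document-topic counts
nkw = np.zeros((K, V))           # topic-word counts
nk = np.zeros(K)                 # tokens per topic
z = []                           # topic assignment per token
for d, doc in enumerate(docs):
    zd = []
    for w in doc:
        t = rng.integers(K)
        zd.append(t)
        ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    z.append(zd)

# Step 3: iterate, resampling each word's topic given all other assignments
for _ in range(50):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            t = rng.choice(K, p=p / p.sum())
            z[d][i] = t
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1

# Step 4: get results, e.g. per-document topic proportions
theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
print(theta)
```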

    Initialize parameters

    Initialize topic assignments randomly


    Resample topic for word, given all other words and their current topic assignments


  • Which topics occur in this document?
  • Which topics like the word X?
  • Get results

    Evaluate model

    Hard: Unsupervised learning. No labels.


  • Word intrusion [1]: For each trained topic, take first ten words, substitute one of them with another, randomly chosen word (intruder!) and see whether a human can reliably tell which one it was. If so, the trained topic is topically coherent (good); if not, the topic has no discernible theme (bad) [2]

  • Topic intrusion: Subjects are shown the title and a snippet from a document. Along with the document they are presented with four topics. Three of those topics are the highest probability topics assigned to that document. The remaining intruder topic is chosen randomly from the other low-probability topics in the model [1]

  • [1] -
    [2] -
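Generating a word-intrusion task as described above is straightforward to sketch: take a topic's top words, swap one for a randomly chosen word from outside the topic, and ask a human to spot the intruder. The topic and vocabulary here are made-up illustrative data.

```python
import random

random.seed(1)

vocabulary = {"game", "team", "season", "player", "coach",
              "stock", "market", "price", "trade", "league"}
topic_top_words = ["game", "team", "season", "player", "coach"]

def make_intrusion_task(top_words, vocab, rng=random):
    """Replace one top word with an intruder from outside the topic, shuffled."""
    intruder = rng.choice(sorted(vocab - set(top_words)))
    shown = top_words.copy()
    shown[rng.randrange(len(shown))] = intruder
    rng.shuffle(shown)
    return shown, intruder

shown, intruder = make_intrusion_task(topic_top_words, vocabulary)
print(shown, "-> intruder:", intruder)
```

If a human can reliably pick out the intruder, the topic is coherent.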

    Evaluate model




  • Cosine similarity: split each document into two parts, and check that (1) the topics of a document's first half are similar to the topics of its second half, and (2) the halves of different documents are mostly dissimilar

  • [1] -

    Evaluate model



    More Metrics [1]:

  • Size (# of tokens assigned)
  • Within-doc rank
  • Similarity to corpus-wide distribution
  • Locally-frequent words
  • Co-doc Coherence
  • [1] -

    Python libraries


    Warning: Current LDA in scikit-learn refers to Linear Discriminant Analysis!

    [1] -

    [2] -


    import gensim
    # load id->word mapping (the dictionary)
    id2word = gensim.corpora.Dictionary.load_from_text('wiki_en_wordids.txt')
    # load corpus iterator
    mm = gensim.corpora.MmCorpus('')
    # extract 100 LDA topics, using 20 full passes (batch mode, no online updates)
    lda = gensim.models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=100, update_every=0, passes=20)

  • Graphlab

    import graphlab as gl
    docs = gl.SArray('')
    m = gl.topic_model.create(docs,
                              num_topics=20,       # number of topics
                              num_iterations=10,   # algorithm parameters
                              alpha=.01, beta=.1)  # hyperparameters

  • lda

    import lda
    X = lda.datasets.load_reuters()
    model = lda.LDA(n_topics=20, n_iter=1500, random_state=1)
    model.fit(X)  # model.fit_transform(X) is also available

  • sklearn.decomposition.LatentDirichletAllocation

    from sklearn.decomposition import NMF, LatentDirichletAllocation
    X = ...
    lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5, random_state=0)

  • scikit-learn LDA example
  • Pipeline



    Vector Space


    Gensim Models

    Scikit-learn example

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.datasets import fetch_20newsgroups
    # Initialize variables
    n_samples = 2000
    n_features = 1000
    n_topics = 10
    dataset = fetch_20newsgroups(shuffle=True, random_state=1,
                                 remove=('headers', 'footers', 'quotes'))
    data_samples = dataset.data[:n_samples]
    # use tf (raw term count) features for the LDA model
    tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_features,
                                    stop_words='english')
    tf = tf_vectorizer.fit_transform(data_samples)
    lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
                                    learning_method='online', learning_offset=50.,
                                    random_state=0)
    lda.fit(tf)
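Once a model is fitted, its topics can be inspected by listing the highest-weighted words per topic from the topic-word matrix (e.g. `lda.components_` in scikit-learn). The tiny matrix and vocabulary below are made-up illustrative values standing in for a fitted model.

```python
import numpy as np

def top_words(components, feature_names, n_top=3):
    """Return the n_top highest-weighted words for each topic row."""
    return [[feature_names[i] for i in row.argsort()[::-1][:n_top]]
            for row in components]

feature_names = ["game", "team", "stock", "market", "price"]
components = np.array([[5.0, 4.0, 0.1, 0.2, 0.1],    # a "sports"-like topic
                       [0.1, 0.2, 6.0, 3.0, 2.0]])   # a "finance"-like topic

for k, words in enumerate(top_words(components, feature_names)):
    print(f"Topic {k}: {' '.join(words)}")
```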

  • Evaluation - Visualization






  • Resources

    IPython notebooks explaining Dirichlet Processes, HDPs, and Latent Dirichlet Allocation, Timothy Hopper

  • Visualizing Topic Models, Data Science Summit & Dato Conference 2015
  • Video, Ben Mabey
  • Questions?



    Twitter: ch_doig