12 Week 7: Unsupervised learning (word embedding)

This week we will be discussing a second form of “unsupervised” learning: word embeddings. Whereas previous weeks allowed us to characterize the complexity of text, or to cluster documents by potential topical focus, word embeddings permit a more expansive form of measurement. In essence, we produce a dense vector representation of each word in the vocabulary, which, stacked together, gives us a matrix representation of an entire corpus.
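
To make the idea concrete, the sketch below trains a small word-embedding model and inspects the resulting word-by-dimension matrix. It assumes the Python gensim library and a toy corpus, neither of which is prescribed by the course materials; the parameter values are illustrative only.

``` python
# A minimal sketch of training word embeddings with gensim's Word2Vec.
# The toy corpus and all parameter values here are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["the", "senator", "proposed", "a", "new", "tax", "bill"],
    ["the", "minister", "opposed", "the", "tax", "reform"],
    ["voters", "debated", "the", "new", "bill"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of each word vector
    window=3,         # context window size
    min_count=1,      # keep every word in this tiny example
    seed=42,
)

# Each word is now a 50-dimensional vector ...
print(model.wv["tax"].shape)          # (50,)

# ... and stacking all vocabulary vectors gives the word-by-dimension
# matrix representation of the corpus discussed above.
embedding_matrix = model.wv.vectors
print(embedding_matrix.shape)          # (vocab_size, 50)

# Nearest neighbours in the embedding space (not meaningful on a toy corpus).
print(model.wv.most_similar("tax", topn=3))
```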

The reading by Rodriguez and Spirling (2022) provides an effective overview of the technical dimensions of this technique. The articles by Garg et al. (2018) and Kozlowski, Taddy, and Evans (2019) are two substantive applications that use word embeddings to provide insights into prejudice and bias as manifested in language over time.
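
Both substantive readings work by locating words along culturally meaningful directions in the embedding space. The sketch below gestures at that logic: it builds a gender direction from paired word differences and projects other words onto it with cosine similarity. The pretrained GloVe vectors (loaded via gensim's downloader), the word pairs, and the occupation list are illustrative simplifications rather than the papers' exact procedures.

``` python
# A sketch of the "cultural dimension" style of measurement used in the
# readings: average paired word-vector differences to get a gender
# direction, then project other words onto it. Word lists and vectors
# below are illustrative choices, not the authors' specifications.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

pairs = [("she", "he"), ("woman", "man"), ("her", "his"), ("female", "male")]
gender_direction = np.mean(
    [vectors[a] - vectors[b] for a, b in pairs], axis=0
)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Positive values lean toward the "female" pole, negative toward "male".
for word in ["nurse", "engineer", "teacher", "carpenter"]:
    print(word, round(cosine(vectors[word], gender_direction), 3))
```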

Required reading:

  • Garg et al. (2018)
  • Kozlowski, Taddy, and Evans (2019)
  • Waller and Anderson (2021)

Further reading:

Slides: