research
  1. The Plausibility of Sampling as an Algorithmic Theory of Sentence Processing Jacob Louis Hoover, Morgan Sonderegger, Steven T. Piantadosi, and Timothy J. O’Donnell Open Mind: Discoveries in Cognitive Science. 7, 350–391. Jul, 2023. [DOI] [link] [preprint] [code repository] [☛ surprisal explorer]
    bib
    @article{hoover-etal-2023-plausibility,
      title = {The Plausibility of Sampling as an Algorithmic Theory of Sentence Processing},
      author = {Hoover, Jacob Louis and Sonderegger, Morgan and Piantadosi, Steven T. and O'Donnell, Timothy J.},
      year = {2023},
      month = jul,
      journal = {Open Mind: Discoveries in Cognitive Science},
      volume = {7},
      pages = {350--391},
      issn = {2470-2986},
      doi = {10.1162/opmi_a_00086},
      url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10449406/},
      urldate = {2023-07-21},
      copyright = {All rights reserved},
      openaccess = {true},
      pmcid = {PMC10449406},
      myurl = {https://jahoo.shinyapps.io/surprisal_explorer/},
      myurltitle = {surprisal explorer},
      preprint = {https://osf.io/qjnpv}
    }
    abstract

    Words that are more surprising given context take longer to process. However, no incremental parsing algorithm has been shown to directly predict this phenomenon. In this work, we focus on a class of algorithms whose runtime does naturally scale in surprisal—those that involve repeatedly sampling from the prior. Our first contribution is to show that simple examples of such algorithms predict runtime to increase superlinearly with surprisal, and also predict variance in runtime to increase. These two predictions stand in contrast with literature on surprisal theory (Hale, 2001; Levy, 2008) which assumes that the expected processing cost increases linearly with surprisal, and makes no prediction about variance. In the second part of this paper, we conduct an empirical study of the relationship between surprisal and reading time, using a collection of modern language models to estimate surprisal. We find that with better language models, reading time increases superlinearly in surprisal, and also that variance increases. These results are consistent with the predictions of sampling-based algorithms.
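    As a minimal illustration (a sketch, not the paper's released code): the simplest algorithm in this class guesses words from the prior until the observed word is drawn. Its runtime is geometrically distributed, so for a word with probability p = e^{-s} (surprisal s in nats) the expected number of guesses is 1/p = e^{s} and the variance is (1-p)/p^2; both grow superlinearly in s, matching the two predictions above.

    import math
    import random

    def guesses_until_hit(p, rng):
        """Guess from a prior that yields the target word with probability p
        until it is hit; return the number of guesses (geometric in p)."""
        n = 1
        while rng.random() >= p:
            n += 1
        return n

    rng = random.Random(0)
    for s in [1.0, 2.0, 4.0]:                  # surprisal in nats
        p = math.exp(-s)                       # word probability p = e^{-s}
        runs = [guesses_until_hit(p, rng) for _ in range(20_000)]
        mean = sum(runs) / len(runs)
        var = sum((r - mean) ** 2 for r in runs) / len(runs)
        print(f"s={s:.1f}  mean guesses={mean:.1f} (theory 1/p={1/p:.1f})  "
              f"variance={var:.1f} (theory (1-p)/p^2={(1-p)/p**2:.1f})")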

  2. With Better Language Models, Processing Time Is Superlinear in Surprisal Jacob Louis Hoover, Morgan Sonderegger, Steven T. Piantadosi, and Timothy J. O’Donnell Poster at Architectures and Mechanisms for Language Processing (AMLaP 28). York, England. 6 Sep, 2022. [link] [poster]
    bib
    @misc{hoover-etal-2022-amlap,
      type = {Poster},
      title = {With Better Language Models, Processing Time Is Superlinear in Surprisal},
      author = {Hoover, Jacob Louis and Sonderegger, Morgan and Piantadosi, Steven T. and O'Donnell, Timothy J.},
      year = {2022},
      howpublished = {Poster at Architectures and Mechanisms for Language Processing (AMLaP 28)},
      day = {6},
      month = sep,
      address = {York, England},
      url = {https://virtual.oxfordabstracts.com/#/event/3067/submission/297}
    }
  3. Linguistic Dependencies and Statistical Dependence Jacob Louis Hoover, Wenyu Du, Alessandro Sordoni, and Timothy J. O’Donnell In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2941–2963. Association for Computational Linguistics. Online and Punta Cana, Dominican Republic. Nov, 2021. [link] [poster] [slides] [code repository]
    bib
    @inproceedings{hoover-etal-2021-emnlp,
      address = {Online and Punta Cana, Dominican Republic},
      author = {Hoover, Jacob Louis and Du, Wenyu and Sordoni, Alessandro and O{'}Donnell, Timothy J.},
      booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
      month = nov,
      pages = {2941--2963},
      publisher = {Association for Computational Linguistics},
      title = {Linguistic Dependencies and Statistical Dependence},
      url = {https://aclanthology.org/2021.emnlp-main.234},
      year = {2021}
    }
    abstract

    Are pairs of words that tend to occur together also likely to stand in a linguistic dependency? This empirical question is motivated by a long history of literature in cognitive science, psycholinguistics, and NLP. In this work we contribute an extensive analysis of the relationship between linguistic dependencies and statistical dependence between words. Improving on previous work, we introduce the use of large pretrained language models to compute contextualized estimates of the pointwise mutual information between words (CPMI). For multiple models and languages, we extract dependency trees which maximize CPMI, and compare to gold standard linguistic dependencies. Overall, we find that CPMI dependencies achieve an unlabelled undirected attachment score of at most ≈0.5. While far above chance, and consistently above a non-contextualized PMI baseline, this score is generally comparable to a simple baseline formed by connecting adjacent words. We analyze which kinds of linguistic dependencies are best captured in CPMI dependencies, and also find marked differences between the estimates of the large pretrained language models, illustrating how their different training schemes affect the type of dependencies they capture.
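    To make the CPMI estimate concrete, here is a schematic sketch under simplifying assumptions (a generic masked LM and words that survive tokenization as single tokens; the linked code repository is the authoritative implementation): the CPMI of word w_i given word w_j is the drop in the model's log-probability of w_i when w_j is masked out of its context.

    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-cased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
    model.eval()

    def masked_log_prob(ids, pos):
        """Log-probability the masked LM assigns to the original token at
        `pos` once that position is replaced with [MASK]."""
        masked = ids.clone()
        masked[0, pos] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked).logits
        return torch.log_softmax(logits[0, pos], dim=-1)[ids[0, pos]].item()

    def cpmi(sentence, i, j):
        """CPMI(w_i; w_j) = log p(w_i | full context) - log p(w_i | context
        with w_j also masked). i, j are token positions ([CLS] is index 0)."""
        ids = tok(sentence, return_tensors="pt").input_ids
        with_j = masked_log_prob(ids, i)
        ids_wo_j = ids.clone()
        ids_wo_j[0, j] = tok.mask_token_id    # remove w_j from the context
        without_j = masked_log_prob(ids_wo_j, i)
        return with_j - without_j

    # "keys" (position 2) and its agreeing verb "are" (position 6):
    print(cpmi("The keys to the cabinet are on the table.", 2, 6))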

  4. Accounting for Variation in Number Agreement in Icelandic Dative-Nominative Constructions Jacob Louis Hoover In Proceedings of the 38th West Coast Conference on Formal Linguistics. Eds. Rachel Soo, Una Y. Chow, and Sander Nederveen. 231–241. Cascadilla Proceedings Project. Somerville, Mass., USA. Oct, 2021. [link] [handout] [pdf]
    bib
    @inproceedings{hoover-2020-icelandic,
      address = {Somerville, Mass., USA},
      author = {Hoover, Jacob Louis},
      booktitle = {Proceedings of the 38th West Coast Conference on Formal Linguistics},
      editor = {Soo, Rachel and Chow, Una Y. and Nederveen, Sander},
      month = oct,
      pages = {231--241},
      publisher = {Cascadilla Proceedings Project},
      title = {Accounting for Variation in Number Agreement in Icelandic Dative-Nominative Constructions},
      url = {http://www.lingref.com/cpp/wccfl/38/abstract3568.html},
      year = {2021}
    }
    abstract

    Icelandic dative-nominative constructions exhibit a syntactic hierarchy effect known as the Person Restriction: only third person nominatives may control agreement. In these constructions, there is variation between speakers in the extent to which the verb agrees with the nominative for number. Sigurðsson & Holmberg (2008) explain this variation as arising from differences between varieties in the timing of subject raising, using a split phi-probe. This paper revises their approach, using the feature gluttony mechanism for Agree developed in Coon & Keine (2020), and a split phi-probe in which person probing precedes number probing. Within this framework, the observed variation can be captured by allowing variability in two independent parameters: the timing of EPP subject raising, and the visibility of a number feature on dative DPs. The proposed mechanism describes the variation, including predicting the observed optional agreement in certain cases that previous literature had struggled to account for, and makes additional predictions about the differences between varieties in cases of syncretism within the verbal paradigm. An investigation into these predictions should allow this already well-studied area of Icelandic grammar to continue to be a useful test case for crosslinguistic assumptions about the mechanism of Agree, and the status of dative arguments.