Matthew E Peters
Spiffy AI, Allen Institute for Artificial Intelligence
Verified email at
Cited by
Deep contextualized word representations
ME Peters, M Neumann, M Iyyer, M Gardner, C Clark, K Lee, ...
NAACL, Best Paper, 2018
Longformer: The Long-Document Transformer
I Beltagy, ME Peters, A Cohan
arXiv preprint arXiv:2004.05150, 2020
AllenNLP: A Deep Semantic Natural Language Processing Platform
M Gardner, J Grus, M Neumann, O Tafjord, P Dasigi, N Liu, M Peters, ...
arXiv preprint arXiv:1803.07640, 2018
Semi-supervised sequence tagging with bidirectional language models
ME Peters, W Ammar, C Bhagavatula, R Power
arXiv preprint arXiv:1705.00108, 2017
Knowledge enhanced contextual word representations
ME Peters, M Neumann, RL Logan IV, R Schwartz, V Joshi, S Singh, ...
arXiv preprint arXiv:1909.04164, 2019
Linguistic Knowledge and Transferability of Contextual Representations
NF Liu, M Gardner, Y Belinkov, M Peters, NA Smith
arXiv preprint arXiv:1903.08855, 2019
Relationships between water vapor path and precipitation over the tropical oceans
CS Bretherton, ME Peters, LE Back
Journal of Climate 17 (7), 1517-1528, 2004
Transfer Learning in Natural Language Processing
S Ruder, ME Peters, S Swayamdipta, T Wolf
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
M Peters, S Ruder, NA Smith
arXiv preprint arXiv:1903.05987, 2019
Construction of the Literature Graph in Semantic Scholar
W Ammar, D Groeneveld, C Bhagavatula, I Beltagy, M Crawford, ...
arXiv preprint arXiv:1805.02262, 2018
Dissecting Contextual Word Embeddings: Architecture and Representation
ME Peters, M Neumann, L Zettlemoyer, W Yih
arXiv preprint arXiv:1808.08949, 2018
Understanding the origin and analysis of sediment-charcoal records with a simulation model
PE Higuera, ME Peters, LB Brubaker, DG Gavin
Quaternary Science Reviews 26 (13-14), 1790-1809, 2007
Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling
RL Logan IV, NF Liu, ME Peters, M Gardner, S Singh
Adversarial filters of dataset biases
R Le Bras, S Swayamdipta, C Bhagavatula, R Zellers, M Peters, ...
International Conference on Machine Learning, 1078-1088, 2020
Quantifying the source area of macroscopic charcoal with a particle dispersal model
ME Peters, PE Higuera
Quaternary Research 67 (2), 304-310, 2007
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
A Ross, A Marasović, ME Peters
arXiv preprint arXiv:2012.13985, 2020
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
V Joshi, M Peters, M Hopkins
arXiv preprint arXiv:1805.06556, 2018
ATTEMPT: Parameter-efficient multi-task tuning via attentional mixtures of soft prompts
A Asai, M Salehi, ME Peters, H Hajishirzi
arXiv preprint arXiv:2205.11961, 2022
Few-Shot Self-Rationalization with Natural Language Prompts
A Marasović, I Beltagy, D Downey, ME Peters
arXiv preprint arXiv:2111.08284, 2021
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2
H Ivison, Y Wang, V Pyatkin, N Lambert, M Peters, P Dasigi, J Jang, ...
arXiv preprint arXiv:2311.10702, 2023
Articles 1–20