Irina Piontkovskaya
Huawei Noah's Ark Lab
Verified email at huawei.com
Title
Cited by
Year
Revisiting Mahalanobis distance for transformer-based out-of-domain detection
A Podolskiy, D Lipin, A Bout, E Artemova, I Piontkovskaya
Proceedings of the AAAI Conference on Artificial Intelligence 35 (15), 13675 …, 2021
Cited by 65 · 2021
PanGu-Σ: Towards trillion parameter language model with sparse heterogeneous computing
X Ren, P Zhou, X Meng, X Huang, Y Wang, W Wang, P Li, X Zhang, ...
arXiv preprint arXiv:2303.10845, 2023
Cited by 44 · 2023
Artificial text detection via examining the topology of attention maps
L Kushnareva, D Cherniavskii, V Mikhailov, E Artemova, S Barannikov, ...
arXiv preprint arXiv:2109.04825, 2021
Cited by 37 · 2021
Intrinsic dimension estimation for robust detection of AI-generated texts
E Tulchinskii, K Kuznetsov, L Kushnareva, D Cherniavskii, S Nikolenko, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 25 · 2024
Distributed fine-tuning of language models on private data
V Popov, M Kudinov, I Piontkovskaya, P Vytovtov, A Nevidomsky
International Conference on Learning Representations, 2018
Cited by 17 · 2018
Acceptability judgements via examining the topology of attention maps
D Cherniavskii, E Tulchinskii, V Mikhailov, I Proskurina, L Kushnareva, ...
arXiv preprint arXiv:2205.09630, 2022
Cited by 12 · 2022
Ask me anything in your native language
N Sorokin, D Abulkhanov, I Piontkovskaya, V Malykh
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
Cited by 9 · 2022
SumTitles: a summarization dataset with low extractiveness
V Malykh, K Chernis, E Artemova, I Piontkovskaya
Proceedings of the 28th International Conference on Computational …, 2020
Cited by 6 · 2020
Binary autoencoder for text modeling
R Baynazarov, I Piontkovskaya
Artificial Intelligence and Natural Language: 8th Conference, AINL 2019 …, 2019
Cited by 6 · 2019
PanGu-Σ: Towards trillion parameter language model with sparse heterogeneous computing
X Ren, P Zhou, X Meng, X Huang, Y Wang, W Wang, P Li, X Zhang, ...
arXiv preprint arXiv:2303.10845 10, 11-15, 2023
Cited by 5 · 2023
GEC-DePenD: Non-autoregressive grammatical error correction with decoupled permutation and decoding
K Yakovlev, A Podolskiy, A Bout, S Nikolenko, I Piontkovskaya
arXiv preprint arXiv:2311.08191, 2023
Cited by 4 · 2023
Template-based approach to zero-shot intent recognition
D Lamanov, P Burnyshev, E Artemova, V Malykh, A Bout, I Piontkovskaya
arXiv preprint arXiv:2206.10914, 2022
Cited by 4 · 2022
A single example can improve zero-shot data generation
P Burnyshev, V Malykh, A Bout, E Artemova, I Piontkovskaya
arXiv preprint arXiv:2108.06991, 2021
Cited by 4 · 2021
Multiple teacher distillation for robust and greener models
A Ilichev, N Sorokin, I Piontkovskaya, V Malykh
Proceedings of the International Conference on Recent Advances in Natural …, 2021
Cited by 3 · 2021
InFoBERT: Zero-shot approach to natural language understanding using contextualized word embedding
P Burnyshev, A Bout, V Malykh, I Piontkovskaya
Proceedings of the International Conference on Recent Advances in Natural …, 2021
Cited by 3 · 2021
Differentially private distributed learning for language modeling tasks
V Popov, M Kudinov, I Piontkovskaya, P Vytovtov, A Nevidomsky
arXiv preprint arXiv:1712.07473, 2017
Cited by 3 · 2017
Sinkhorn Transformations for Single-Query Postprocessing in Text-Video Retrieval
K Yakovlev, G Polyakov, I Alimova, A Podolskiy, A Bout, S Nikolenko, ...
Proceedings of the 46th International ACM SIGIR Conference on Research and …, 2023
Cited by 2 · 2023
Efficient grammatical error correction via multi-task training and optimized training schedule
A Bout, A Podolskiy, S Nikolenko, I Piontkovskaya
arXiv preprint arXiv:2311.11813, 2023
Cited by 1 · 2023
Topological data analysis for speech processing
E Tulchinskii, K Kuznetsov, L Kushnareva, D Cherniavskii, S Barannikov, ...
arXiv preprint arXiv:2211.17223, 2022
Cited by 1 · 2022
Betti numbers of attention graphs is all you really need
L Kushnareva, D Piontkovski, I Piontkovskaya
arXiv preprint arXiv:2207.01903, 2022
Cited by 1 · 2022
Articles 1–20