Max Ryabinin
Together AI
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Y Sheng, L Zheng, B Yuan, Z Li, M Ryabinin, B Chen, P Liang, C Ré, ...
International Conference on Machine Learning, 31094–31116, 2023
Distributed Deep Learning in Open Collaborations
M Diskin*, A Bukhtiyarov*, M Ryabinin*, L Saulnier, Q Lhoest, A Sinitsin, ...
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
M Ryabinin, A Gusev
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 3659–3672, 2020
Petals: Collaborative inference and fine-tuning of large models
A Borzunov, D Baranchuk, T Dettmers, M Ryabinin, Y Belkada, ...
arXiv preprint arXiv:2209.01188, 2022
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
M Ryabinin*, E Gorbunov*, V Plokhotnyuk, G Pekhimenko
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning
A Tikhonov*, M Ryabinin*
Findings of the ACL 2021, 3534–3546, 2021
Scaling Ensemble Distribution Distillation to Many Classes With Proxy Targets
M Ryabinin, A Malinin, M Gales
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees
A Beznosikov, P Richtárik, M Diskin, M Ryabinin, A Gasnikov
Advances in Neural Information Processing Systems 35, 14013–14029, 2022
Secure Distributed Training at Scale
E Gorbunov, A Borzunov, M Diskin, M Ryabinin
International Conference on Machine Learning, 7679–7739, 2022
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
M Ryabinin, T Dettmers, M Diskin, A Borzunov
arXiv preprint arXiv:2301.11913, 2023
Distributed Inference and Fine-tuning of Large Language Models Over The Internet
A Borzunov, M Ryabinin, A Chumachenko, D Baranchuk, T Dettmers, ...
arXiv preprint arXiv:2312.08361, 2023
RuCoLA: Russian Corpus of Linguistic Acceptability
V Mikhailov, T Shamardina, M Ryabinin, A Pestova, I Smurov, E Artemova
arXiv preprint arXiv:2210.12814, 2022
Training Transformers Together
A Borzunov, M Ryabinin, T Dettmers, Q Lhoest, L Saulnier, M Diskin, ...
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track 176 …, 2022
Embedding Words in Non-Vector Space with Unsupervised Graph Learning
M Ryabinin, S Popov, L Prokhorenkova, E Voita
Empirical Methods in Natural Language Processing (EMNLP 2020), 7317–7331, 2020
Mind Your Format: Towards Consistent Evaluation of In-Context Learning Improvements
A Voronov, L Wolf, M Ryabinin
arXiv preprint arXiv:2401.06766, 2024
Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics
A Voronov, M Khoroshikh, A Babenko, M Ryabinin
Advances in Neural Information Processing Systems 36, 2024
Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding
Z Chen, A May, R Svirschevski, Y Huang, M Ryabinin, Z Jia, B Chen
arXiv preprint arXiv:2402.12374, 2024
Adaptive Prediction Time for Sequence Classification
M Ryabinin, E Lobacheva
SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices
R Svirschevski, A May, Z Chen, B Chen, Z Jia, M Ryabinin
arXiv preprint arXiv:2406.02532, 2024