Example Zoo
Below is a non-exhaustive list of tutorials and scripts showcasing Accelerate.
Official Accelerate Examples:
Basic Examples
These examples showcase the base features of Accelerate and are a great starting point
Feature Specific Examples
These examples showcase specific features that the Accelerate framework offers
Full Examples
These examples showcase every feature in Accelerate at once that was shown in "Feature Specific Examples"
- Full NLP example
- Full computer vision example
- Very complete and extensible vision example showcasing SLURM, hydra, and a very extensible usage of the framework
- Causal language model fine-tuning example
- Masked language model fine-tuning example
- Speech pretraining example
- Translation fine-tuning example
- Text classification fine-tuning example
- Semantic segmentation fine-tuning example
- Question answering fine-tuning example
- Beam search question answering fine-tuning example
- Multiple choice question answering fine-tuning example
- Named entity recognition fine-tuning example
- Image classification fine-tuning example
- Summarization fine-tuning example
- End-to-end examples on how to use AWS SageMaker integration of Accelerate
- Megatron-LM examples for various NLP tasks
Integration Examples
These are tutorials from libraries that integrate with Accelerate:
Don't find your integration here? Make a PR to include it!
- Amphion
- Catalyst
- DALLE2-pytorch
- Diffusers
- fastai
- GradsFlow
- imagen-pytorch
- Kornia
- PyTorch Accelerated
- PyTorch3D
- Stable-Dreamfusion
- Tez
- trlx
- Comfy-UI
In Science
Below is a non-exhaustive list of papers utilizing Accelerate.
Don't find your paper here? Make a PR to include it!
- Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy: "Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation", 2023; arXiv:2305.01569.
- Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim: "Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models", 2023; arXiv:2305.04091.
- Arthur Câmara, Claudia Hauff: "Moving Stuff Around: A study on efficiency of moving documents into memory for Neural IR models", 2022; arXiv:2205.08343.
- Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang: "High-throughput Generative Inference of Large Language Models with a Single GPU", 2023; arXiv:2303.06865.
- Peter Melchior, Yan Liang, ChangHoon Hahn, Andy Goulding: "Autoencoding Galaxy Spectra I: Architecture", 2022; arXiv:2211.07890.
- Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang: "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise", 2023; arXiv:2304.04746.
- Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa: "Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions", 2023; arXiv:2303.12789.
- Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi: "RealFusion: 360° Reconstruction of Any Object from a Single Image", 2023; arXiv:2302.10663.
- Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, Hongsheng Li: "Better Aligning Text-to-Image Models with Human Preference", 2023; arXiv:2303.14420.
- Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang: "HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace", 2023; arXiv:2303.17580.
- Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen: "Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination", 2022; arXiv:2210.12261.
- Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho: "How to Backdoor Diffusion Models?", 2022; arXiv:2212.05400.
- Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim: "Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation", 2023; arXiv:2303.07937.
- Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or: "Localizing Object-level Shape Variations with Text-to-Image Diffusion Models", 2023; arXiv:2303.11306.
- Dídac Surís, Sachit Menon, Carl Vondrick: "ViperGPT: Visual Inference via Python Execution for Reasoning", 2023; arXiv:2303.08128.
- Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, Qifeng Chen: "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing", 2023; arXiv:2303.09535.
- Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi: "NaturalProver: Grounded Mathematical Proof Generation with Language Models", 2022; arXiv:2205.12910.
- Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or: "TEXTure: Text-Guided Texturing of 3D Shapes", 2023; arXiv:2302.01721.
- Puijin Cheng, Li Lin, Yijin Huang, Huaqing He, Wenhan Luo, Xiaoying Tang: "Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement", 2023; arXiv:2303.04603.
- Shun Shao, Yftah Ziser, Shay Cohen: "Erasure of Unaligned Attributes from Neural Representations", 2023; arXiv:2302.02997.
- Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo: "In-Context Instruction Learning", 2023; arXiv:2302.14691.
- Shikun Liu, Linxi Fan, Edward Johns, Zhiding Yu, Chaowei Xiao, Anima Anandkumar: "Prismer: A Vision-Language Model with An Ensemble of Experts", 2023; arXiv:2303.02506.
- Haoyu Chen, Zhihua Wang, Yang Yang, Qilin Sun, Kede Ma: "Learning a Deep Color Difference Metric for Photographic Images", 2023; arXiv:2303.14964.
- Van-Hoang Le, Hongyu Zhang: "Log Parsing with Prompt-based Few-shot Learning", 2023; arXiv:2302.07435.
- Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui: "Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?", 2023; arXiv:2302.07866.
- Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, Prithviraj Ammanabrolu: "Behavior Cloned Transformers are Neurosymbolic Reasoners", 2022; arXiv:2210.07382.
- Martin Wessel, Tomáš Horych, Terry Ruas, Akiko Aizawa, Bela Gipp, Timo Spinde: "Introducing MBIB -- the first Media Bias Identification Benchmark Task and Dataset Collection", 2023; arXiv:2304.13148. DOI: [https://dx.doi.org/10.1145/3539618.3591882 10.1145/3539618.3591882].
- Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or: "Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models", 2023; arXiv:2301.13826.
- Marcio Fonseca, Yftah Ziser, Shay B. Cohen: "Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents", 2022; arXiv:2205.12486.
- Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov: "On the Blind Spots of Model-Based Evaluation Metrics for Text Generation", 2022; arXiv:2212.10020.
- Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham: "In-Context Retrieval-Augmented Language Models", 2023; arXiv:2302.00083.
- Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang: "MPCFormer: fast, performant and private Transformer inference with MPC", 2022; arXiv:2211.01452.
- Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, Jianfeng Gao: "GODEL: Large-Scale Pre-Training for Goal-Directed Dialog", 2022; arXiv:2206.11309.
- Egil Rønningstad, Erik Velldal, Lilja Øvrelid: "Entity-Level Sentiment Analysis (ELSA): An Exploratory Task Survey", 2023, Proceedings of the 29th International Conference on Computational Linguistics, 2022, pages 6773-6783; arXiv:2304.14241.
- Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine: "Offline RL for Natural Language Generation with Implicit Language Q Learning", 2022; arXiv:2206.11871.
- Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig: "Execution-Based Evaluation for Open-Domain Code Generation", 2022; arXiv:2212.10481.
- Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang: "Expeditious Saliency-guided Mix-up through Random Gradient Thresholding", 2022; arXiv:2212.04875.
- Jun Hao Liew, Hanshu Yan, Daquan Zhou, Jiashi Feng: "MagicMix: Semantic Mixing with Diffusion Models", 2022; arXiv:2210.16056.
- Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao: "LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners", 2021; arXiv:2110.06274.