EleutherAI

EleutherAI (/əˈluːθər/[2]) is a grassroots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI,[3] was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute.[4]

EleutherAI
Type of business: Research co-operative
Founded: 3 July 2020[1]
Industry: Artificial intelligence
Products: GPT-Neo, GPT-NeoX, GPT-J, Fairseq, The Pile
URL: eleuther.ai

History

EleutherAI began as a Discord server on July 7, 2020, under the tentative name "LibreAI" before rebranding to "EleutherAI" later that month,[5] in reference to eleutheria, an Ancient Greek term for liberty.[3]

On December 30, 2020, EleutherAI released The Pile, a curated dataset of diverse text for training large language models.[6] While the paper referenced the existence of the GPT-Neo models, the models themselves were not released until March 21, 2021.[7] According to a retrospective written several months later, the authors did not anticipate that "people would care so much about our 'small models.'"[1] On June 9, 2021, EleutherAI followed this up with GPT-J-6B, a six-billion-parameter language model that was again the largest open-source GPT-3-like model in the world.[8] These language models were released under the Apache 2.0 free software license and are considered to have "fueled an entirely new wave of startups".[4]

While EleutherAI initially turned down funding offers, preferring to use Google's TPU Research Cloud Program to source their compute,[9] by early 2021 they had accepted funding from CoreWeave (a small cloud computing company) and SpellML (a cloud infrastructure company) in the form of access to powerful GPU clusters that are necessary for large-scale machine learning research. On February 10, 2022, they released GPT-NeoX-20B, a model similar to their prior work but scaled up thanks to the resources CoreWeave provided.[10]

In 2022, many EleutherAI members participated in the BigScience Research Workshop, working on projects including multitask finetuning,[11][12] training BLOOM,[13] and designing evaluation libraries.[14] Engineers at EleutherAI, Stability AI, and NVIDIA joined forces with biologists led by Columbia University and Harvard University[15] to train OpenFold, an open-source replication of DeepMind's AlphaFold2.[16]

In early 2023, EleutherAI incorporated as a non-profit research institute run by Stella Biderman, Curtis Huebner, and Shivanshu Purohit.[4][17] The announcement framed the shift of focus away from training ever-larger language models as a deliberate push towards work in interpretability, alignment, and scientific research.[17] While EleutherAI is still committed to promoting access to AI technologies, they feel that "there is substantially more interest in training and releasing LLMs than there once was," enabling them to focus on other projects.[18]

Research

According to their website, EleutherAI is a "decentralized grassroots collective of volunteer researchers, engineers, and developers focused on AI alignment, scaling, and open-source AI research".[19] While they do not sell any of their technologies as products, they publish the results of their research in academic venues, write blog posts detailing their ideas and methodologies, and provide trained models for anyone to use for free.

The Pile

The Pile is an 886 GB dataset designed for training large language models. It was originally developed to train EleutherAI's GPT-Neo models but has become widely used to train other models, including Microsoft's Megatron-Turing Natural Language Generation,[20][21] Meta AI's Open Pre-trained Transformers,[22] LLaMA,[23] and Galactica,[24] Stanford University's BioMedLM 2.7B,[25] the Beijing Academy of Artificial Intelligence's Chinese-Transformer-XL,[26] and Yandex's YaLM 100B.[27] Compared to other datasets, the Pile's main distinguishing features are that it is a curated selection of data chosen by researchers at EleutherAI to contain information they thought language models should learn and that it is the only such dataset that is thoroughly documented by the researchers who developed it.[28]
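
The Pile is distributed as compressed JSON Lines files in which each record carries the document text alongside a "meta" field naming the component set it came from. As a minimal sketch rather than an official recipe, streaming it with the Hugging Face datasets library might look like the following; the dataset identifier is an assumption (mirrors exist on the Hugging Face Hub under varying names), as is the exact metadata layout.

    # Minimal sketch: stream documents from a public Pile mirror with Hugging Face
    # `datasets`. The repository name "monology/pile-uncopyrighted" is one known
    # mirror but is an assumption here; availability and schema may differ.
    from datasets import load_dataset

    pile = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

    for i, doc in enumerate(pile):
        print(doc["text"][:80])              # start of the document text
        print(doc["meta"]["pile_set_name"])  # which component set it came from
        if i >= 2:                           # look at just the first few records
            break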

GPT models

EleutherAI's most prominent research relates to its work training open-source large language models inspired by OpenAI's GPT-3.[29] The "GPT-Neo" model series comprises models with 125 million, 1.3 billion, 2.7 billion, 6 billion, and 20 billion parameters; an example of loading these models appears after the list below.

  • GPT-Neo (125M, 1.3B, 2.7B):[30] released in March 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.
  • GPT-J (6B):[31] released in June 2021, it was the largest open-source GPT-3-style language model in the world at the time of release.[32]
  • GPT-NeoX (20B):[33] released in February 2022, it was the largest open-source language model in the world at the time of release.
  • Pythia (12B):[34] While prior models focused on scaling up to close the gap with closed-source models like GPT-3, the Pythia model suite goes in another direction. The Pythia suite was designed to facilitate scientific research on the capabilities of, and learning processes in, large language models.[35] Featuring 154 partially trained checkpoints per model, fully public training data, and the ability to reproduce the exact training order, Pythia enables research on verifiable training,[36] social biases,[37] memorization,[38] and more.[39]
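
All of these models are published on the Hugging Face Hub. The sketch below is a minimal example, not an official recipe: it loads a fully trained model and a partially trained Pythia checkpoint with the Hugging Face transformers library, using repository names and "stepN" revision tags as documented on the public model cards (treat the exact identifiers as assumptions).

    # Minimal sketch: loading EleutherAI models with Hugging Face `transformers`.
    # Repository names and revision tags follow the public model cards; treat the
    # exact identifiers as assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # A fully trained model, e.g. GPT-Neo 125M.
    gpt_neo = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

    # A partially trained Pythia checkpoint: each Pythia repository exposes its
    # intermediate checkpoints as git revisions named after the training step.
    pythia = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/pythia-70m", revision="step3000"
    )
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

    inputs = tokenizer("EleutherAI is", return_tensors="pt")
    outputs = pythia.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))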

While the overwhelming majority of large language models are trained in either English or Chinese, EleutherAI also trains language models in other languages, such as the Korean-language Polyglot-Ko.[40]

VQGAN-CLIP

Artwork created with CLIP-Guided Diffusion, a text-to-image technique developed by Katherine Crowson of EleutherAI[41][42]

Following the release of DALL-E by OpenAI in January 2021, EleutherAI started working on text-to-image synthesis models. When OpenAI did not release DALL-E publicly, EleutherAI's Katherine Crowson and digital artist Ryan Murdock developed a technique for using CLIP (another model developed by OpenAI) to convert regular image generation models into text-to-image synthesis ones.[43][44][45][46] Building on ideas dating back to Google's DeepDream,[47] they found their first major success by combining CLIP with another publicly available model called VQGAN; the resulting model is called VQGAN-CLIP.[48] Crowson released the technology by tweeting notebooks demonstrating the technique that people could run for free without any special equipment.[49][50][51] This work was credited by Stability AI CEO Emad Mostaque as motivating the founding of Stability AI.[52]
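
At its core, the technique uses CLIP as a differentiable critic: generate an image, embed both the image and the text prompt with CLIP, and update the generator along the gradient of their similarity. The sketch below is a simplified illustration, not Crowson's implementation: it optimizes raw pixels instead of VQGAN latent codes and omits CLIP's input normalization and the image augmentations used in practice; it assumes PyTorch and OpenAI's clip package are installed.

    # Minimal sketch of CLIP-guided image synthesis: maximize the CLIP similarity
    # between a generated image and a text prompt by gradient ascent. VQGAN-CLIP
    # proper optimizes VQGAN latent codes instead of raw pixels (a simplification
    # here) and adds augmentations and regularizers omitted for brevity.
    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _ = clip.load("ViT-B/32", device=device)
    model = model.float()  # avoid fp16 issues when backpropagating on GPU

    text = clip.tokenize(["a watercolor painting of a lighthouse"]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    # The "generator" here is just a learnable pixel grid at CLIP's input size.
    image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([image], lr=0.05)

    for step in range(200):
        img_feat = model.encode_image(image.clamp(0, 1))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        loss = -(img_feat * text_feat).sum()  # negative cosine similarity
        opt.zero_grad()
        loss.backward()
        opt.step()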

Public reception

Praise

EleutherAI's work to democratize GPT-3 won the UNESCO Netexplo Global Innovation Award in 2021[53] and InfoWorld's Best of Open Source Software Award in 2021[54] and 2022,[55] and was nominated for VentureBeat's AI Innovation Award in 2021.[56]

Gary Marcus, a cognitive scientist and noted critic of deep learning companies such as OpenAI and DeepMind,[57] has repeatedly[58][59] praised EleutherAI's dedication to open-source and transparent research.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, applauded EleutherAI's efforts to give more researchers the ability to audit and assess AI technology. "If models are open and if data sets are open, that'll enable much more of the critical research that's pointed out many of the flaws and harms associated with generative AI and that's often far too difficult to conduct."[60]

Criticism

Technology journalist Kyle Wiggers has raised concerns about whether EleutherAI is as independent as it claims, or "whether the involvement of commercially motivated ventures like Stability AI and Hugging Face — both of which are backed by substantial venture capital — might influence EleutherAI's research."[61]

References

  1. Leahy, Connor; Hallahan, Eric; Gao, Leo; Biderman, Stella (7 July 2021). "What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective". EleutherAI Blog. Archived from the original on 29 August 2023. Retrieved 1 March 2023.
  2. "Talk with Stella Biderman on The Pile, GPT-Neo and MTG". The Interference Podcast. 2 April 2021. Retrieved 26 March 2023.
  3. Smith, Craig (21 March 2022). "EleutherAI: When OpenAI Isn't Open Enough". IEEE Spectrum. IEEE. Archived from the original on 29 August 2023. Retrieved 8 August 2023.
  4. Wiggers, Kyle (2 March 2023). "Stability AI, Hugging Face and Canva back new AI research nonprofit". TechCrunch. Archived from the original on 29 August 2023. Retrieved 8 August 2023.
  5. Leahy, Connor; Hallahan, Eric; Gao, Leo; Biderman, Stella (7 July 2021). "What A Long, Strange Trip It's Been: EleutherAI One Year Retrospective". EleutherAI Blog. Archived from the original on 29 August 2023. Retrieved 14 April 2023.
  6. Gao, Leo; Biderman, Stella; Black, Sid; et al. (31 December 2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling". arXiv:2101.00027.
  7. "GPT-3's free alternative GPT-Neo is something to be excited about". VentureBeat. 15 May 2021. Archived from the original on 9 March 2023. Retrieved 14 April 2023.
  8. "GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront". www.forefront.ai. Archived from the original on 9 March 2023. Retrieved 1 March 2023.
  9. "EleutherAI: When OpenAI Isn't Open Enough". IEEE Spectrum. Archived from the original on 21 March 2023. Retrieved 1 March 2023.
  10. Black, Sid; Biderman, Stella; Hallahan, Eric; et al. (14 April 2022). "GPT-NeoX-20B: An Open-Source Autoregressive Language Model". arXiv:2204.06745 [cs.CL].
  11. Sanh, Victor; et al. (2021). "Multitask Prompted Training Enables Zero-Shot Task Generalization". arXiv:2110.08207 [cs.LG].
  12. Muennighoff, Niklas; Wang, Thomas; Sutawika, Lintang; Roberts, Adam; Biderman, Stella; Teven Le Scao; M Saiful Bari; Shen, Sheng; Yong, Zheng-Xin; Schoelkopf, Hailey; Tang, Xiangru; Radev, Dragomir; Alham Fikri Aji; Almubarak, Khalid; Albanie, Samuel; Alyafeai, Zaid; Webson, Albert; Raff, Edward; Raffel, Colin (2022). "Crosslingual Generalization through Multitask Finetuning". arXiv:2211.01786 [cs.CL].
  13. Workshop, BigScience; et al. (2022). "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model". arXiv:2211.05100 [cs.CL].
  14. Workshop, BigScience; et al. (2022). "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model". arXiv:2211.05100 [cs.CL].
  15. "Meet OpenFold: Reimplementing AlphaFold2 to Illuminate Its Learning Mechanisms and Generalization". CBIRT. https://cbirt.net/meet-openfold-reimplementing-alphafold2-to-illuminate-its-learning-mechanisms-and-generalization/
  16. "Democratizing AI for Biology with OpenFold". Weights & Biases. https://wandb.ai/openfold/openfold/reports/Democratizing-AI-for-Biology-with-OpenFold--VmlldzoyODUyNDI4
  17. "Year Two Preface". EleutherAI Blog. https://blog.eleuther.ai/year-two-preface/
  18. "AI Research Lab Launches Open-Source Research Nonprofit". The NonProfit Times. https://thenonprofittimes.com/technology/ai-research-lab-launches-open-source-research-nonprofit/
  19. "EleutherAI Website". EleutherAI. Archived from the original on 2 July 2021. Retrieved 1 July 2021.
  20. "Microsoft and Nvidia team up to train one of the world's largest language models". 11 October 2021. Archived from the original on 27 March 2023. Retrieved 8 March 2023.
  21. "AI: Megatron the Transformer, and its related language models". 24 September 2021. Archived from the original on 4 March 2023. Retrieved 8 March 2023.
  22. Zhang, Susan; Roller, Stephen; Goyal, Naman; Artetxe, Mikel; Chen, Moya; Chen, Shuohui; Dewan, Christopher; Diab, Mona; Li, Xian; Lin, Xi Victoria; Mihaylov, Todor; Ott, Myle; Shleifer, Sam; Shuster, Kurt; Simig, Daniel; Koura, Punit Singh; Sridhar, Anjali; Wang, Tianlu; Zettlemoyer, Luke (21 June 2022). "OPT: Open Pre-trained Transformer Language Models". arXiv:2205.01068 [cs.CL].
  23. Touvron, Hugo; Lavril, Thibaut; Izacard, Gautier; Grave, Edouard; Lample, Guillaume; et al. (27 February 2023). "LLaMA: Open and Efficient Foundation Language Models". arXiv:2302.13971 [cs.CL].
  24. Taylor, Ross; Kardas, Marcin; Cucurull, Guillem; Scialom, Thomas; Hartshorn, Anthony; Saravia, Elvis; Poulton, Andrew; Kerkez, Viktor; Stojnic, Robert (16 November 2022). "Galactica: A Large Language Model for Science". arXiv:2211.09085 [cs.CL].
  25. "Model Card for BioMedLM 2.7B". huggingface.co. Archived from the original on 5 June 2023. Retrieved 5 June 2023.
  26. Yuan, Sha; Zhao, Hanyu; Du, Zhengxiao; Ding, Ming; Liu, Xiao; Cen, Yukuo; Zou, Xu; Yang, Zhilin; Tang, Jie (1 January 2021). "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models". AI Open. 2: 65–68. doi:10.1016/j.aiopen.2021.06.001. S2CID 236712622. Archived from the original on 9 July 2021. Retrieved 8 March 2023 via ScienceDirect.
  27. Grabovskiy, Ilya (2022). "Yandex publishes YaLM 100B, the largest GPT-like neural network in open source" (Press release). Yandex. Retrieved 5 June 2023.
  28. Khan, Mehtab; Hanna, Alex (13 September 2022). "The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability". SSRN 4217148. Archived from the original on 29 August 2023. Retrieved 8 March 2023 via papers.ssrn.com.
  29. "GPT-3's free alternative GPT-Neo is something to be excited about". 15 May 2021. Archived from the original on 9 March 2023. Retrieved 10 March 2023.
  30. Andonian, Alex; Biderman, Stella; Black, Sid; Gali, Preetham; Gao, Leo; Hallahan, Eric; Levy-Kramer, Josh; Leahy, Connor; Nestler, Lucas; Parker, Kip; Pieler, Michael; Purohit, Shivanshu; Songz, Tri; Phil, Wang; Weinbach, Samuel (13 August 2021). "GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch". Archived from the original on 13 March 2023. Retrieved 13 March 2023 via GitHub.
  31. "EleutherAI/gpt-j-6B · Hugging Face". huggingface.co. Archived from the original on 12 March 2023. Retrieved 10 March 2023.
  32. "GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront". www.forefront.ai. Archived from the original on 9 March 2023. Retrieved 1 March 2023.
  33. Black, Sidney; Biderman, Stella; Hallahan, Eric; et al. (1 May 2022). "GPT-NeoX-20B: An Open-Source Autoregressive Language Model". Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models. pp. 95–136. Retrieved 19 December 2022.
  34. Biderman, Stella; Schoelkopf, Hailey; Anthony, Quentin; Bradley, Herbie; O'Brien, Kyle; Hallahan, Eric; Mohammad Aflah Khan; Purohit, Shivanshu; USVSN Sai Prashanth; Raff, Edward; Skowron, Aviya; Sutawika, Lintang; Oskar van der Wal (2023). "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling". arXiv:2304.01373 [cs.CL].
  35. Biderman, Stella; Schoelkopf, Hailey; Anthony, Quentin; Bradley, Herbie; O'Brien, Kyle; Hallahan, Eric; Mohammad Aflah Khan; Purohit, Shivanshu; USVSN Sai Prashanth; Raff, Edward; Skowron, Aviya; Sutawika, Lintang; Oskar van der Wal (2023). "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling". arXiv:2304.01373 [cs.CL].
  36. Choi, Dami; Shavit, Yonadav; Duvenaud, David (2023). "Tools for Verifying Neural Models' Training Data". arXiv:2307.00682 [cs.LG].
  37. Biderman, Stella; Schoelkopf, Hailey; Anthony, Quentin; Bradley, Herbie; O'Brien, Kyle; Hallahan, Eric; Mohammad Aflah Khan; Purohit, Shivanshu; USVSN Sai Prashanth; Raff, Edward; Skowron, Aviya; Sutawika, Lintang; Oskar van der Wal (2023). "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling". arXiv:2304.01373 [cs.CL].
  38. Biderman, Stella; USVSN Sai Prashanth; Sutawika, Lintang; Schoelkopf, Hailey; Anthony, Quentin; Purohit, Shivanshu; Raff, Edward (2023). "Emergent and Predictable Memorization in Large Language Models". arXiv:2304.11158 [cs.CL].
  39. Gupta, Kshitij; Thérien, Benjamin; Ibrahim, Adam; Richter, Mats L.; Anthony, Quentin; Belilovsky, Eugene; Rish, Irina; Lesort, Timothée (2023). "Continual Pre-Training of Large Language Models: How to (Re)warm your model?". arXiv:2308.04014 [cs.CL].
  40. ""한국어기반 AI소스 공개합니다 마음껏 쓰세요"". 매일경제. 31 October 2022. Archived from the original on 26 April 2023. Retrieved 10 March 2023.
  41. "CLIP-Guided Diffusion". EleutherAI. Archived from the original on 29 August 2023. Retrieved 20 August 2023.
  42. "CLIP Guided Diffusion HQ 256x256.ipynb - Colaboratory". Google Colab. Archived from the original on 29 August 2023. Retrieved 20 August 2023.
  43. MIRANDA, LJ. "The Illustrated VQGAN". ljvmiranda921.github.io. Archived from the original on 20 March 2023. Retrieved 8 March 2023.
  44. "Inside The World of Uncanny AI Twitter Art". Nylon. Archived from the original on 29 August 2023. Retrieved 8 March 2023.
  45. "This AI Turns Movie Text Descriptions Into Abstract Posters". Yahoo Life. Archived from the original on 27 December 2022. Retrieved 8 March 2023.
  46. Quach, Katyanna. "A man spent a year in jail on a murder charge involving disputed AI evidence. Now the case has been dropped". www.theregister.com. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  47. "Alien Dreams: An Emerging Art Scene - ML@B Blog". Alien Dreams: An Emerging Art Scene - ML@B Blog. Archived from the original on 10 March 2023. Retrieved 8 March 2023.
  48. "VQGAN-CLIP". EleutherAI. Archived from the original on 20 August 2023. Retrieved 20 August 2023.
  49. "We asked an AI tool to 'paint' images of Australia. Critics say they're good enough to sell". 14 July 2021. Archived from the original on 7 March 2023. Retrieved 8 March 2023 via www.abc.net.au.
  50. Nataraj, Poornima (28 February 2022). "Online tools to create mind-blowing AI art". Analytics India Magazine. Archived from the original on 8 February 2023. Retrieved 8 March 2023.
  51. "Meet the Woman Making Viral Portraits of Mental Health on TikTok". www.vice.com. Archived from the original on 11 May 2023. Retrieved 8 March 2023.
  52. @EMostaque (2 March 2023). "Stability AI came out of @AiEleuther and we have been delighted to incubate it as the foundation was set up" (Tweet) via Twitter.
  53. "Request Rejected". Archived from the original on 16 October 2022. Retrieved 8 March 2023.
  54. Borck, James R.; Heller, Martin; Oliver, Andrew C.; Pointer, Ian; Tyson, Matthew; Yegulalp, Serdar (18 October 2021). "The best open source software of 2021". InfoWorld. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  55. Borck, James R.; Heller, Martin; Oliver, Andrew C.; Pointer, Ian; Sacolick, Isaac; Tyson, Matthew; Yegulalp, Serdar (17 October 2022). "The best open source software of 2022". InfoWorld. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  56. "VentureBeat presents AI Innovation Awards nominees at Transform 2021". 16 July 2021. Archived from the original on 8 March 2023. Retrieved 8 March 2023.
  57. "What's next for AI: Gary Marcus talks about the journey toward robust artificial intelligence". ZDNET. Archived from the original on 1 March 2023. Retrieved 8 March 2023.
  58. @GaryMarcus (10 February 2022). "GPT-NeoX-20B, 20 billion parameter large language model made freely available to public, with candid report on strengths, limits, ecological costs, etc" (Tweet) via Twitter.
  59. @GaryMarcus (19 February 2022). "incredibly important result: "our results raise the question of how much [large language] models actually generalize beyond pretraining data"" (Tweet) via Twitter.
  60. Chowdhury, Meghmala (29 December 2022). "Will Powerful AI Disrupt Industries Once Thought to be Safe in 2023?". Analytics Insight. Archived from the original on 1 January 2023. Retrieved 6 April 2023.
  61. Wiggers, Kyle (2 March 2023). "Stability AI, Hugging Face and Canva back new AI research nonprofit". Archived from the original on 7 March 2023. Retrieved 8 March 2023.