Amazon is reportedly investing millions of dollars in training an ambitious large language model (LLM) codenamed “Olympus,” aiming to compete with leading models from OpenAI and Alphabet. Sources familiar with the matter say Olympus has 2 trillion parameters, which would make it one of the largest models under development. For comparison, OpenAI’s GPT-4, widely regarded as one of the best models available, is reported to have 1 trillion parameters. The project, led by Rohit Prasad, the former head of Alexa, is being run in secrecy, and Amazon has declined to comment on the details.
Prasad, now head scientist of artificial general intelligence (AGI) at Amazon, has brought together researchers from Alexa AI and the Amazon science team to collaborate on training the model. The initiative unifies AI efforts across the company and signals Amazon’s intent to establish a foothold in the rapidly evolving landscape of advanced AI models.
While Amazon has previously trained smaller models such as Titan, Olympus represents a significant leap in scale. The company has also partnered with AI model startups such as Anthropic and AI21 Labs, making their models available to Amazon Web Services (AWS) customers. Developing high-performing models in-house is seen as a way to strengthen AWS’s appeal to enterprise clients seeking access to top-performing models.
Training larger AI models demands substantial computing power, but Amazon believes that owning proprietary models will enhance its offerings and competitiveness in the AI space. The company’s commitment to investing in LLMs and generative AI was underscored in an April earnings call, where executives announced increased investment in these areas while scaling back spending on fulfillment and transportation in the retail business. There is currently no specific timeline for the release of Olympus.