‘Better than DeepSeek and OpenAI’: Alibaba touts open-source AI model that beats rivals

Alibaba says its QwQ-32B AI model outperforms DeepSeek’s R1 in coding and problem-solving while using fewer resources

Alibaba Group Holding on Thursday unveiled an open-source artificial intelligence (AI) reasoning model that it said surpassed the performance of DeepSeek’s R1, highlighting the Chinese technology giant’s robust AI capabilities across models and data-center infrastructure.

Following the launch of its QwQ-32B model, Alibaba’s Hong Kong-listed shares surged nearly 8.4 per cent to close at HK$140.80 on Thursday. Alibaba owns the South China Morning Post.

Despite its relatively modest 32 billion parameters, Alibaba’s new model matched or outperformed DeepSeek’s R1, which boasts 671 billion parameters, in areas such as mathematics, coding and general problem-solving, according to a blog post by the team responsible for Alibaba’s Qwen family of AI models.

A smaller parameter count means the model can run with far fewer computing resources, facilitating wider adoption, according to the team.
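
As a rough, illustrative back-of-envelope sketch of why the parameter count matters (the figures are not from Alibaba or DeepSeek), the snippet below estimates the memory needed just to store each model’s weights at 16-bit precision; real deployments differ, not least because R1 is a mixture-of-experts model that activates only part of its weights per token.

```python
# Back-of-envelope estimate (illustration only): memory needed to hold the raw
# model weights at 16-bit precision, i.e. roughly 2 bytes per parameter.
# It ignores activation memory, the KV cache, quantization, and MoE sparsity.

def weight_memory_gb(num_params: float, bytes_per_param: float = 2) -> float:
    """Approximate memory (in GB) required to store the model weights."""
    return num_params * bytes_per_param / 1e9

print(f"QwQ-32B (32B params) : ~{weight_memory_gb(32e9):.0f} GB")    # ~64 GB
print(f"R1 (671B params)     : ~{weight_memory_gb(671e9):.0f} GB")   # ~1342 GB
```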

The lean design of Alibaba’s model aligns with the views expressed by Alibaba chairman Joe Tsai in his recent column for the Post, where he emphasized the importance of practical applications over simply maximizing intelligence in AI model development.

The release of Alibaba’s latest reasoning model – a type of AI system designed to think, reflect and self-critique to solve complex problems – comes less than two months after DeepSeek’s R1 shook the global tech industry and stock markets in January.

It also coincides with a surge in AI adoption across China, with Alibaba announcing last month a plan to invest US$52 billion in cloud computing and AI infrastructure over the next three years, marking the largest-ever computing project financed by a single private business in the country.

Alibaba also said that QwQ-32B outperformed OpenAI’s o1-mini, which was built with 100 billion parameters. QwQ-32B is available on Hugging Face, the world’s largest open-source AI model community.
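
For readers who want to experiment, a minimal sketch of loading the model from Hugging Face with the transformers library might look like the following; the repository id Qwen/QwQ-32B and the generation settings are assumptions based on standard Hugging Face conventions and should be checked against the model card.

```python
# Minimal sketch: load QwQ-32B from Hugging Face and run a single prompt.
# Assumes the `transformers` library is installed and enough GPU memory (or
# CPU/disk offloading) is available; verify details on the model card at
# https://huggingface.co/Qwen/QwQ-32B
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # repository id as announced by the Qwen team
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Strip the prompt tokens and print only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```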

The Qwen team attributed the performance improvements of its new reasoning model to reinforcement learning techniques, similar to those used by DeepSeek in developing its R1 model.

These advancements “not only demonstrate the transformative potential of reinforcement learning but also pave the way for further innovations in the pursuit of artificial general intelligence”, the team said.

The launch follows remarks from Alibaba’s CEO, Eddie Wu Yongming, during a recent earnings call, where he said the company’s primary focus was to develop artificial general intelligence, which he defined as the point at which AI could achieve 80 per cent of human capabilities.
