Leading AI firms, including OpenAI, Microsoft, and Meta, are turning to a technique known as "distillation" to build cheaper, more efficient AI models. The trend, accelerated by competition from China's DeepSeek, has intensified the global race for AI dominance while raising concerns about intellectual property and market stability: DeepSeek's low-cost models have already wiped billions of dollars from the market value of major U.S. tech firms.

Distillation involves training a smaller "student" model using knowledge extracted from a larger "teacher" model. This method significantly reduces costs and computing power while retaining much of the original AI's capabilities. "Distillation is quite magical," remarked Olivier Godement, head of product for OpenAI's platform. "It allows us to take a very large, smart model and create a much smaller, cheaper, and faster version optimized for specific tasks."
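In concrete terms, the student is trained to imitate the teacher's output distribution rather than, or in addition to, hard labels. The sketch below shows one hypothetical distillation training step in PyTorch; the tiny networks, input size, and temperature value are illustrative assumptions, not any firm's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: in practice the teacher is a large pretrained
# model and the student is a much smaller network.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution to expose more signal

def distillation_step(x):
    with torch.no_grad():              # the teacher is frozen; inference only
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between the softened student and teacher distributions
    # (the classic recipe from Hinton et al., 2015); T*T rescales gradients.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()                    # gradients flow only into the student
    optimizer.step()
    return loss.item()

# One step on a random batch, purely for illustration.
print(f"distillation loss: {distillation_step(torch.randn(32, 128)):.4f}")
```

Repeating such steps over a task-specific dataset is what yields the smaller, cheaper, task-optimized models Godement describes.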

AI models such as OpenAI's GPT-4, Google's Gemini, and Meta's Llama require extensive computational resources, often costing hundreds of millions of dollars to train. Distillation lets companies offer comparable capabilities at a fraction of that cost and run models on devices such as smartphones and laptops.

Microsoft, a key OpenAI investor, has already used GPT-4 to distill its Phi family of smaller models. At the same time, OpenAI has accused DeepSeek of distilling its proprietary models in violation of its terms of service, a claim the Chinese firm has yet to address.

Despite its advantages, distillation comes with trade-offs. "If you make the models smaller, you inevitably reduce their capability," said Ahmed Awadallah, a researcher at Microsoft. Distilled models excel at narrow tasks such as email summarization but lack the broad functionality of their larger counterparts. Nonetheless, companies are eager to deploy them in applications like customer service chatbots and mobile software. "Anytime you can reduce costs while maintaining performance, it makes sense," explained David Cox, Vice President of AI Models at IBM Research.

The rise of distillation poses challenges for AI firms' revenue models. Smaller models cost less to develop and maintain, reducing the fees companies like OpenAI can charge. While large AI systems remain essential for complex tasks requiring high accuracy, the growing preference for distilled models among businesses indicates a potential shift in the industry's financial landscape.
