OpenAI has reportedly spent billions training its GPT models, and Meta spent hundreds of millions on LLaMA. Now DeepSeek has open-sourced its comparable V3 model, trained for a reported $5.6 million, without relying on expensive H100 chips (it used the export-compliant H800 instead). The entire training run took less than two months.
That cost and time frame put frontier-class training within reach of well-funded individuals and small teams, not just tech giants. Does this mark a shift from AI models being developed by big corporations to a new era in which powerful models are rapidly created by small players all over the world?
I think the leaked Google ‘no moat’ memo still holds true: while the big players throw huge amounts of money at AI, others are focusing on smarter, more cost-efficient models.
Models will keep getting smaller and smarter, which should bring a big jump in AI capabilities soon.
Whenever one lab achieves a breakthrough, it won't be long before the others catch up. That's great news for those of us watching the developments from the sidelines.