Just saw some buzz about OpenAI’s o3 launch, and people are speculating that AGI might already be here. But here’s the kicker: improving accuracy from 75% to 85% reportedly came at roughly 10x the cost.
When regular employees are expected to hit 99% or even 100% accuracy, it feels like AI might not be as cost-effective as people think. Will companies end up spending more money correcting AI errors, or will they just accept lower-quality output?
Are we about to see a rise in bureaucracy with humans double-checking everything the AI does? Or could some industries realize AI is more expensive to operate than human workers?
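To make the "correcting AI errors" question concrete, here's a minimal back-of-envelope sketch. The only figures taken from the post are the 75% and 85% accuracy points; every dollar amount is a made-up placeholder, so treat this as an illustration of the trade-off, not real pricing.

```python
def cost_per_delivered_task(run_cost: float, accuracy: float,
                            review_cost: float, fix_cost: float) -> float:
    """Expected cost to deliver one correct result, assuming every output is
    human-reviewed and any error is fixed by a human at `fix_cost`."""
    return run_cost + review_cost + (1 - accuracy) * fix_cost

# Hypothetical numbers: cheap AI tier $0.10/task, the 10x-pricier
# high-accuracy tier $1.00/task, human review $0.50/task,
# human rework of a bad output $5.00, a human doing it outright $4.00.
cheap_ai  = cost_per_delivered_task(0.10, 0.75, 0.50, 5.00)
pricey_ai = cost_per_delivered_task(1.00, 0.85, 0.50, 5.00)
human     = cost_per_delivered_task(4.00, 0.99, 0.00, 5.00)

print(f"75%-accurate AI + human review: ${cheap_ai:.2f} per correct task")
print(f"85%-accurate AI + human review: ${pricey_ai:.2f} per correct task")
print(f"99%-accurate human:             ${human:.2f} per correct task")
```

Depending on what you plug in for review and rework costs, either the cheap tier, the expensive tier, or the human can come out ahead, which is exactly why the answer will vary by industry.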
AI is already valuable even without o3-level accuracy. It may not match human precision on every task, but it works far faster. Think about using AI for repetitive tasks so humans can focus on the critical stuff.
AI doesn’t need breaks, never calls in sick, works weekends without complaints, and doesn’t ask for a raise or benefits. It doesn’t have off days, bad attitudes, or personal drama. No hiring, firing, or HR headaches.
Sure, it might not be as accurate as humans now, but its consistency and flexibility make up for it. Plus, AI could affect management roles as much as regular office jobs.
The short answer: companies will start with AI in roles where extreme accuracy isn’t critical but the labor is expensive. Front-end development is one example.
Low-paying jobs that AI struggles with, like janitorial work, will stick around. High-paying jobs that demand precision, like brain surgery, are also safe for now.
Warehouse work and education are some areas where AI doesn’t need perfection to be useful.