It seems unrealistic to expect that AGI (Artificial General Intelligence) will be made available to the public anytime soon after it is developed.

Assume AGI is developed in the US by an organization like OpenAI or Anthropic (a similar argument applies if it were developed in another country, such as China). Given the immense power and implications of AGI, it's more likely that the government would intervene to withhold its release, keeping it under tight control to prevent adversaries from developing their own versions and to secure a lasting strategic advantage. Incremental progress could still be made publicly available while the government quietly pushes toward ASI (Artificial Superintelligence) in a manner akin to the Manhattan Project.

If recent years have taught us anything, it's that society is not ready for the implications of publicly accessible AGI. Many people are either too gullible to distinguish AI from real humans or malicious enough to exploit AI to harm others, problems that are already evident with today's narrow AI models. Moreover, our cybersecurity infrastructure remains remarkably fragile, as highlighted by the CrowdStrike global IT outage caused by a single botched file in a routine update. Publicly accessible AGI could also trigger significant economic and societal upheaval.

As a society, we are simply NOT prepared for AGI, and to think that those in charge of national and economic security aren’t aware of this is delusional.


I think it's more about timing. Even if AGI is developed, it would likely be decades before it's made available to the public. There's too much at stake economically, militarily, and socially for any government to risk a premature release.