What if we make AGI but can't improve human brains? What happens then?

If AGI becomes a reality and can do almost everything humans can, but the human brain turns out to be too complex or biologically limited to upgrade, what happens? And if attempts to upgrade our brains lead to serious mental health problems while AGI keeps getting more efficient, what comes next?

Welcome to this forum

Humans might just end up being pets to AGI.

Kiran said:
Humans might just end up being pets to AGI.

Honestly, that’s probably one of the better outcomes.

Maybe instead of trying to upgrade ourselves, we could focus on working alongside AGI. Technology could help fill in where humans fall short.

If we figure out how to control neuroplasticity with precision, upgrading might be possible and we could become much smarter. But if that's not an option, we'll probably need to accept ourselves as we are and enjoy life alongside AGI. There may also be a biological cap on how much intelligence our brains can support, so even with upgrades we'd eventually hit a ceiling. At that point we might rely on AI, or even build large artificial brains, to tackle our problems. I know that idea sounds risky, though…

The idea behind the singularity is that once AI reaches human-level intelligence, humans probably won't be the ones solving big problems anymore. AI doesn't need sleep, food, or breaks, and it scales easily. If we can simulate one human-level intelligence, it's only a matter of time before we can simulate billions, all working together far faster than we ever could. From there, the AI would likely take over developing the next versions of itself, leaving us far behind in terms of innovation.

@Lyle
Interesting, but this feels a bit off-topic for the question. Could you connect it back to the human brain limitations?

Why would anyone stick with a weaker form of intelligence when a better one exists? :skull:

If AGI turns out to be cheaper and more efficient, businesses would naturally adopt it over human labor.

This reminds me of the shortage of doctors and nurses in places like France. If AGI could act as a reliable doctor, it might fill those gaps in healthcare. Human doctors would still exist but could focus on the more complex cases while AI handles routine care.

I guess we’ll all get free hats that say “I heart AGI” when the singularity happens.

Why not just make another AGI to help out the first one?

You, your kids, or even your grandkids won’t see this happen. It’s way too far off.

West said:
You, your kids, or even your grandkids won’t see this happen. It’s way too far off.

I don’t think that’s true. We’re moving faster than you think.

Finn said:

West said:
You, your kids, or even your grandkids won’t see this happen. It’s way too far off.

I don’t think that’s true. We’re moving faster than you think.

I work as a senior architect in AI/ML projects, and I can tell you we’re nowhere near AGI. There are huge physical and technological hurdles to overcome. It’s not going to happen anytime soon.

@West
Top researchers like Yann LeCun and Ilya Sutskever think differently. They believe AGI is achievable within a few decades, and I’d trust them over most other opinions.

Toni said:
@West
Top researchers like Yann LeCun and Ilya Sutskever think differently. They believe AGI is achievable within a few decades, and I’d trust them over most other opinions.

I’ve seen similar debates before. AI has advanced a lot, but it’s still limited. We don’t fully understand human intelligence ourselves, so creating something truly comparable might take longer than expected. Still, I’m hopeful these advancements will help us understand ourselves better.

Toni said:
@West
Top researchers like Yann LeCun and Ilya Sutskever think differently. They believe AGI is achievable within a few decades, and I’d trust them over most other opinions.

Most people claiming AGI is close either don’t understand the limitations or have something to sell. There’s a lot of hype, and it doesn’t match reality yet.

@West
That’s not true. Just because it’s difficult doesn’t mean it’s impossible.