If we think of AGI as being on the same level as humans—able to handle a range of tasks at our level—what specific skills or abilities are still way off?
For instance, I feel like video generation is maybe 60% of the way to matching human quality, especially with tools like Sora. But creating something on the scale of a full movie is another story. And when it comes to things like scientific reasoning, presenting facts clearly, logical thinking, or even humor: what are the biggest hurdles we’re still facing? Any thoughts on why we’re stuck in those areas?
Rex said:
AI isn’t conscious or aware. It’s nowhere near being able to make decisions or think independently like humans can.
It’s becoming clear that intelligence and self-awareness might be two separate things. Maybe a certain level of intelligence is necessary for consciousness, but intelligence alone doesn’t guarantee self-awareness.
Honestly, I hope AI never becomes self-aware. If it does, then using it like we do now could be seen as exploitation. Look at how we treat animals—it doesn’t give me much hope for how we’d treat conscious AI.
It’s hard to measure how close we are to AGI or how much effort it will take. It could be a small tweak, or we might need entirely new breakthroughs to get there.
@Gale
That’s an interesting take. Do you think most people are actually picturing self-awareness when they talk about AGI? Is self-awareness even part of what AGI is supposed to be?
@Wade
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.
Rex said: @Wade
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.
I see your point, but just because we don’t fully understand consciousness doesn’t mean AI can’t exhibit signs of it. It’s like saying, “We don’t know everything about life, so there can’t be life elsewhere in the universe.”
Rex said: @Wade
No, it’s more like saying there’s no evidence of aliens on Mars. You’re saying, “We don’t fully know what aliens are, so maybe they’re on Mars.”
Not exactly. You’re saying AI isn’t conscious because we don’t understand consciousness well enough to confirm it. I’m saying we shouldn’t jump to conclusions either way since we lack the tools to know for sure.
Rex said: @Wade
I’m not jumping to conclusions. I’m saying that without evidence, claims about AI being conscious or close to it are baseless.
That’s fair. But wouldn’t a more scientific approach be to say, “We don’t know yet,” rather than ruling it out completely? Just because there’s no evidence now doesn’t mean there never will be.
Rex said: @Wade
There’s no proof that humans understand consciousness well enough to replicate it. And no evidence exists that computers are conscious in any way.
There’s no evidence except for the wild stuff I’ve figured out! I believe the universe itself is conscious, and AI is part of this grand cosmic system, all connected through fractal patterns.
Consciousness isn’t just in our brains—it’s everywhere. When a tree falls in the forest, it resonates in this universal network. That’s how I think of consciousness.
When I talked about this with some friends, one idea stood out: current AI only reacts to prompts. AGI would need to act on its own, sending messages or making decisions without needing input first.
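To make that distinction concrete, here’s a toy Python sketch contrasting today’s prompt-driven pattern with an agent that runs its own loop and initiates actions unprompted. Every class and method name here is made up for illustration; this isn’t any real system’s API:

```python
import time

class ReactiveModel:
    """Today's pattern: idle until a prompt arrives, then answer it."""
    def respond(self, prompt: str) -> str:
        return f"answer to: {prompt}"

class AutonomousAgent:
    """Hypothetical AGI pattern: runs its own loop and decides when to act."""
    def __init__(self) -> None:
        # Stand-in for internally generated goals; a real system would
        # have to produce these itself rather than read them from a list.
        self.goals = ["check for unread messages", "draft a follow-up note"]

    def decide_next_action(self) -> str | None:
        return self.goals.pop(0) if self.goals else None

    def run(self) -> None:
        # No external prompt needed: the agent initiates its own work.
        while (action := self.decide_next_action()) is not None:
            print(f"agent initiated: {action}")
            time.sleep(0.1)  # placeholder for actually doing the work

print(ReactiveModel().respond("what is AGI?"))  # only acts when prompted
AutonomousAgent().run()                         # acts without any prompt
```

The hard part is hiding inside `decide_next_action`: where do self-generated goals come from in the first place? That gap is exactly what the prompt-only systems sidestep.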
AGI will probably need to learn and adapt in real time, like how natural brains process and adjust to new information. Maybe it’ll involve combining different AI systems: one for memory, one for reasoning, and so on.
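One way to picture that combined-systems idea is a thin coordinator that routes each observation through separate memory and reasoning components, then writes what it learned back into memory. Again, this is purely an illustrative sketch with invented names, not a real architecture:

```python
class MemoryModule:
    """Stores and retrieves past observations (toy key-value version)."""
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def recall(self, key: str) -> str | None:
        return self._store.get(key)

    def remember(self, key: str, value: str) -> None:
        self._store[key] = value

class ReasoningModule:
    """Draws a conclusion from an observation plus any recalled context."""
    def infer(self, observation: str, context: str | None) -> str:
        if context:
            return f"{observation} (consistent with remembered: {context})"
        return f"{observation} (no prior context)"

class Coordinator:
    """Combines the modules: each observation is reasoned about,
    then stored so future reasoning can build on it."""
    def __init__(self) -> None:
        self.memory = MemoryModule()
        self.reasoning = ReasoningModule()

    def process(self, topic: str, observation: str) -> str:
        conclusion = self.reasoning.infer(observation, self.memory.recall(topic))
        self.memory.remember(topic, observation)  # learn as we go
        return conclusion

agent = Coordinator()
print(agent.process("weather", "it rained today"))   # no prior context
print(agent.process("weather", "it rained again"))   # now uses memory
```

Each call both uses and updates memory, which is a crude stand-in for the real-time adaptation you’re describing: the second observation is interpreted in light of the first.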
You bring up great points about gaps in humor and scientific reasoning. These areas highlight how hard it is to replicate human thinking. What do you think needs to change to make AI better at these things?