I’ve been thinking about AI and cryptography and had this idea. In cryptography, there are one-way functions that are easy to compute in one direction but hard to reverse (like multiplying big numbers vs factoring them). What if consciousness works in a similar way?
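To ground the crypto half of the analogy, here's a rough Python sketch (the specific primes are arbitrary and this is nowhere near real cryptography): the forward direction is a single multiplication, while the naive reverse direction has to search.

```python
# Toy sketch of the asymmetry (not real cryptography): multiplying two primes
# is a single operation, but recovering them from the product by brute-force
# trial division takes work proportional to the smaller prime.
import time

p, q = 1299709, 15485863      # the 100,000th and 1,000,000th primes
n = p * q                     # "forward" direction: effectively instant

def trial_factor(n):
    """Find the smallest factor of n by checking every candidate in turn."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return n, 1               # n itself is prime

start = time.perf_counter()
print(trial_factor(n))        # "reverse" direction: millions of divisions
print(f"factoring took {time.perf_counter() - start:.2f}s")
# With the thousand-bit numbers used in real crypto, this brute-force reversal
# would take longer than the age of the universe.
```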
Here’s what I mean:
Our brains form memories and connections easily, but running that process in reverse, reconstructing the full original experience from what's stored, is really hard.
Consciousness feels like it flows forward only, similar to how time works.
Maybe we’re struggling to replicate consciousness in AI because we’re trying to work backwards on something that’s fundamentally forward-only?
Questions:
Does this make sense as a theory for why consciousness is so difficult to replicate?
Are there other natural processes like this that could give us clues?
Could quantum computing break the “one-way” nature of this?
Causation seems to only move forward. Information, once processed, is tied to its context at that moment. Reversing it fully would require recreating all those past conditions, which is practically impossible.
Consciousness might feel effortless because it flows with the present, but remembering or simulating the past takes effort because we’re recreating it from incomplete data. It’s like surfing a wave—you can ride it forward, but you can’t push it back. Thoughts?
Everything is a one-way function if you don’t know how to reverse it. But does that mean we shouldn’t try? Also, quantum computers have already challenged what we used to think were one-way functions: Shor’s algorithm makes factoring feasible in principle, and that’s exactly the “hard” direction in the multiplication example above.
The claim ‘We still can’t replicate consciousness’ isn’t quite right. We haven’t even defined consciousness clearly enough to start replicating it. How can you recreate something if you don’t know what it is?
@Lane
LLMs already show signs of intelligence. Once they surpass human intelligence, how will we even tell the difference between them and us in terms of consciousness?
Wynn said: @Lane
LLMs already show signs of intelligence. Once they surpass human intelligence, how will we even tell the difference between them and us in terms of consciousness?
Signs of intelligence? How do you figure that? They’re just simulating responses based on patterns we feed them.
The whole universe works like this. You can calculate events forward step by step, but there’s no shortcut and no easy way to run them in reverse. Systems like these are deterministic yet still unpredictable unless you replay everything exactly as it happened.
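If it helps to see that in miniature, here's a sketch using Wolfram's Rule 30 cellular automaton (grid size and step count picked arbitrarily): every row follows deterministically from the one above it, but there's no known shortcut to row N, and a row generally has many possible predecessors, so going backward means searching rather than computing.

```python
# Rule 30: each new cell is left XOR (center OR right) of the three cells above.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 31
row[15] = 1                                    # start from a single live cell
for _ in range(16):                            # stepping forward is trivial...
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
# ...but recovering an earlier row from a later one, or jumping straight to
# row 1,000,000, requires replaying (or searching) every intermediate step.
```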
Vail said: @Delaney
What if we had an infinite energy source? Couldn’t we simulate or reverse things then?
Even with infinite energy, chaos theory (the butterfly effect) makes predicting or reversing things really tough: tiny differences in the initial conditions grow exponentially, so you’d need perfectly precise knowledge of the starting state, not just unlimited energy to run the simulation.
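As a quick sketch of that point, here's the logistic map (a standard chaotic toy model; the parameter and the size of the perturbation are arbitrary choices): two starting values that differ by one part in a trillion become completely uncorrelated within a few dozen steps.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.2, 0.2 + 1e-12                  # almost identical initial conditions
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
# The tiny initial gap roughly doubles every step, so by step ~40-50 the two
# trajectories bear no relation to each other: reversing or predicting this
# needs impossibly precise knowledge of the starting state, not more energy.
```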
LLMs seem similar to what you’re saying: they take a lot of energy to train but are efficient to run. But they don’t learn from conversations; their weights are frozen once training ends. If they could keep learning the way humans do (maybe even ‘sleep’ to consolidate what they’ve processed), they might feel more conscious.
The big question is: would people accept AI that learns and changes independently? Even now, AI without consciousness makes a lot of folks nervous.
If you look at the nervous system, it’s all about signals going in one direction. Consciousness might just be the part where input signals and output signals meet in the middle. What do you think?