According to this article, we don’t have free will, and current AI doesn’t have it either. AI is just algorithms and programs: we feed in a value, a forward pass produces an output, and a backward pass adjusts the weights during training. But could we actually create AI with true free will? I don’t think we even know where to start.
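To make the “just algorithms” point concrete, here’s a minimal sketch in plain Python (the function names and numbers are my own, purely illustrative): a one-neuron network is a pure function, and even its training step is a fixed update rule, so nothing in the loop leaves room for volition.

```python
# Illustrative sketch: a one-neuron "network" is a pure function.
# Same weights + same input always give the same output, and the
# backward pass is itself a deterministic update rule.

def forward(w, b, x):
    # forward propagation: weighted input plus bias
    return w * x + b

def backward(w, b, x, target, lr=0.05):
    # backpropagation for squared error: one gradient-descent step on w and b
    y = forward(w, b, x)
    grad = 2 * (y - target)                    # d(loss)/d(y)
    return w - lr * grad * x, b - lr * grad

w, b = 0.5, 0.0
print(forward(w, b, 2.0))                      # 1.0 -- identical every run
w, b = backward(w, b, x=2.0, target=3.0)
print(forward(w, b, 2.0))                      # 2.0 -- deterministically moved toward 3.0
```

Nowhere in that loop does the system choose anything; it only executes the update we wrote.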
Yes, I’m familiar with Sapolsky’s case against free will. It’s a complex issue philosophers have argued over for ages. His position isn’t universally accepted, but it has drawn some support from physicists, particularly via the ‘block universe’ picture of time. Dennett disagrees, arguing that while we do run on basic survival instincts, we also have reasoning, and that capacity is a large part of what makes us human. Advanced AI models can likewise apply logic and reasoning, and the results can be quite convincing.
Why bother? If free will is an illusion (as all the evidence suggests), why try to give AI free will at all? An AI acting on its own volition, outside our control, could have dangerous side effects.
Sapolsky’s approach is questionable. He defines free will as something magical, a capacity to act outside the chain of cause and effect. Some people do believe humans have that kind of ability, but most philosophers (the ‘compatibilists’) still believe in free will, just not the magical kind. It’s like morality: many believe it comes from God, yet we can discuss morality perfectly well without God. AI is itself a product of human free will, and it’s possible that some future AI could have free will of its own.