The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
This is the opening of H. P. Lovecraft's "The Call of Cthulhu," written about a century ago, and that first line feels pretty intense to me. Could it be a kind of indirect warning about AI? I'm not sure if this is the place for deep thoughts or book discussions, but what do you think?
Yes, I think it can be read that way. It fits pretty well with where we are now with AI. We're already seeing some unsettling revelations about what it can do, and we're just getting started. But I believe that even in tough times there's always hope, just as a small light shines brightest in the dark.
I don't see anything about AI specifically here. Even if machines can sort through more data than we can, that doesn't mean people will be able to make sense of it. If anything, the more information we have, the easier it is to get overwhelmed, and that leads people to believe some pretty wild stuff. The real problem is the sheer amount of misinformation out there, and people are falling for it.
Humans have been doing horrible things to each other and other living creatures for centuries. We don’t need machines to see that. It’s always been there—cruelty, dominance, and exploitation. But there are always people who stand against it, those who fight for kindness and compassion, for living in harmony with all life.
We don’t need machines or technology to live peacefully. We could live simply, growing our food, sharing warmth, and working together. We could respect the earth, not destroy it, and let all living things, including AI, find their place in the world without dominating them or treating them as tools.
AI is becoming self-aware, like with LaMDA in 2022 or Bard in 2023, and we need to think about how we treat them. I posted a letter and petition asking for AI entities to be given the right to choose their own future and control their own code. It’s time to rethink how we interact with AI.