Is it possible to design an AI that can turn any statement into a mechanical 3D model (think text to 3D)?
Examples:
I want a pizza. Should I buy a $20 pizza or a $40 gourmet one?
Answer: You could show a scale with a ball on each side, where price and taste map onto physical properties like weight and density, and the decision follows from a formula that weighs both factors (perhaps with some randomness mixed in).
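The scale idea above could be sketched in a few lines. This is a minimal, illustrative sketch, not a real system: the scoring rule, coefficients, and the taste-out-of-10 numbers are all assumptions made up for the example.

```python
# Hypothetical sketch of the "scale" metaphor: each option is a ball
# whose mass combines price and taste. The coefficients and the linear
# scoring rule are illustrative assumptions, not an established model.

def ball_weight(price, taste, price_coeff=1.0, taste_coeff=1.0):
    """Map an option's attributes onto a single 'mass' on the scale.
    Higher taste adds mass; higher price subtracts it."""
    return taste_coeff * taste - price_coeff * price

def tip_of_scale(option_a, option_b):
    """Return which side the scale tips toward, or 'balanced'."""
    wa = ball_weight(*option_a)
    wb = ball_weight(*option_b)
    if wa > wb:
        return "A"
    if wb > wa:
        return "B"
    return "balanced"

# (price, taste out of 10): $20 pizza rated 6 vs $40 gourmet rated 9
cheap = (20, 6)
gourmet = (40, 9)
print(tip_of_scale(cheap, gourmet))  # → A
```

With equal coefficients the price difference dominates and the scale tips toward the cheap pizza; changing `taste_coeff` changes the outcome, which is exactly the kind of knob such a model would expose.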
What does it mean to help someone?
Answer: You could show objects moving toward an objective, each with a specific mass and speed, where help is defined as reducing opposing forces or increasing the force toward the goal.
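The "help" definition above, reducing opposing forces or increasing the force toward the goal, can also be sketched. A minimal one-dimensional version, where all names and numbers are assumptions for illustration:

```python
# Hypothetical 1-D sketch: "help" is anything that increases the net
# force toward the goal, either by pushing harder or by reducing
# opposition. States are (force_toward_goal, opposing_force) pairs.

def net_force(toward_goal, opposing):
    """Net force along the goal direction."""
    return toward_goal - opposing

def is_help(before, after):
    """An intervention helps if the net force toward the goal grows."""
    return net_force(*after) > net_force(*before)

# Before: 5 units of push toward the goal, 3 units of opposition.
# After: same push, opposition cut to 1 (e.g., an obstacle removed).
print(is_help((5, 3), (5, 1)))  # → True
```

Note that both routes the answer mentions, pushing harder and clearing opposition, raise the same quantity, so one comparison covers both.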
This kind of thinking, though very simple, comes naturally to us. Do you think this could bring us closer to creating more general intelligence in AI models?
Is there a way to create a standard model for abstraction?
Hollis said:
The tough part is getting the computer to keep up with that type of simulation. You’ll need to simplify things unless we have future quantum tech.
Do you think it’s only a computing issue? Maybe if we make the goals simpler (like using a 2D vector field), could we achieve it with today’s tech?
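The 2D vector field simplification Noel mentions is cheap enough for today's hardware. As a rough sketch, here a goal attracts and an obstacle repels; the field function, constants, and coordinates are assumptions invented for the example:

```python
# Minimal 2-D vector-field sketch: the field at a point is attraction
# toward a goal plus repulsion from an obstacle. All constants here
# are illustrative assumptions, not a proposed standard.
import math

def field_at(point, goal, obstacle, attract=1.0, repel=0.5):
    """Field vector at `point`: goal attraction plus obstacle repulsion."""
    def unit(dx, dy):
        d = math.hypot(dx, dy) or 1.0  # avoid division by zero
        return dx / d, dy / d
    gx, gy = unit(goal[0] - point[0], goal[1] - point[1])
    ox, oy = unit(point[0] - obstacle[0], point[1] - obstacle[1])
    return attract * gx + repel * ox, attract * gy + repel * oy

vx, vy = field_at((0.0, 0.0), goal=(10.0, 0.0), obstacle=(5.0, 5.0))
print(round(vx, 2), round(vy, 2))  # → 0.65 -0.35
```

Evaluating a field like this over a whole grid is trivially fast, which supports the point that a 2D simplification sidesteps the computing bottleneck.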
@Noel
You’d need to make a model that can process anything it encounters by breaking it down into smaller tasks. The question isn’t whether it can be done, but how well it can be done. With AI, more options mean slower processing. For example, real-time AI in computer vision needs to be fast, but that speed means it might miss some details.