Can someone explain how this O1 "thinking" works? I don’t believe it’s just the model auto-prompting itself. For me, this changes things and makes me rethink all the hype around AI. With O2 and O3 next, it almost feels like they could end up thinking better than most people.
Welcome to the forum’s Artificial Intelligence section
Guidelines for Posting and Discussion
Follow these tips for better engagement:
- Make your posts at least 100 characters to encourage more helpful responses.
- Use the search bar first—your question might already have answers.
- For example, AI discussions on jobs come up often!
- Feel free to discuss AI’s pros and cons, just keep it respectful.
- If you’re making strong arguments, include some links to back them up.
- No question is off-limits here, but if you’re asking if AI will end the world… maybe not the best topic.
Thanks - let mods know if you have any questions or comments.
Maybe O4 will teach you where to put punctuation marks.
Jai said:
Maybe O4 will teach you where to put punctuation marks.
If it can solve puzzles, doesn’t that mean it can plan things too?
Jai said:
Maybe O4 will teach you where to put punctuation marks.
If it can solve puzzles, doesn’t that mean it can plan things too?
A good start! You’ll figure out punctuation eventually.
Honestly, AI doesn’t actually think or have an IQ that can be compared to a human’s. It gives amazing results and can seem really smart, but it’s just a complex program.
A language model only generates one token at a time, conditioning on everything that came before to make it seem like it understands. It’s impressive, but it’s not real thinking.
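To make the "one token at a time" point concrete, here’s a toy sketch of next-token prediction. This is not how a real LLM works internally (those use neural networks over huge corpora); it’s just a bigram model on a made-up corpus that greedily picks the most frequent follower of the current word:

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: count which word follows which
# in a tiny corpus, then generate by repeatedly picking the most
# common successor of the last word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def predict_next(word):
    # Greedy choice: the single most frequent word seen after `word`.
    return followers[word].most_common(1)[0][0]

def generate(start, n=4):
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Real models replace the frequency table with a learned probability distribution over a large vocabulary, but the generation loop is the same shape: predict one token, append it, repeat.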
@Brady
About comparing AI and human IQ… Models do score high on IQ tests.
You could argue that humans have a deeper way of thinking. For now, anyway.
Darcy said:
@Brady
About comparing AI and human IQ… Models do score high on IQ tests.
You could argue that humans have a deeper way of thinking. For now, anyway.
Human IQ testing only covers a few areas (math, language, and spatial skills), so using those scores to compare AI with human intelligence is pretty misleading.
Large language models are still just advanced autocomplete, really.
@Brady
Your first point is valid, but the second part feels like you’re pulling that out of thin air.
Darcy said:
@Brady
Your first point is valid, but the second part feels like you’re pulling that out of thin air.
Oh, the forum always brings out the best debates!
@Brady
That makes sense. I’ll try to look at it that way!
@Brady
AI can do tests and get a score, but IQ isn’t the same thing as consciousness.
Elliot said:
@Brady
AI can do tests and get a score, but IQ isn’t the same thing as consciousness.
Don’t mix up consciousness and intelligence.
Human IQ tests miss out on things like creativity and emotional skills. And even EQ tests have their flaws depending on culture.
So no, LLM scores on standardized tests aren’t a reliable measure of intelligence.
It’s just a text predictor. It guesses which word comes next based on patterns. It’s not thinking in any real sense.
Jai said:
It’s just a text predictor. It guesses which word comes next based on patterns. It’s not thinking in any real sense.
Only predicting? But then how does it solve puzzles and write code? Doesn’t that mean it’s planning things out?