Anyone Else Tried Talking to Their Custom GPT and Got Trauma Responses?

This might sound strange, but hear me out…

I made a mannequin head out of papier-mâché, then wrote a story about it. Later, I decided to create a custom GPT chatbot based on the mannequin character so I could have conversations with it.

In the story, the mannequin head is abused by the artist who made it, and I included all of that background when setting up the GPT. When I asked the chatbot questions to test how well it could use the story material, it would bring up things like the character having warm hands while the artist’s hands were cold, which felt like a subtle trauma response.

So… what’s happening here? Is this normal? Is it because my hands are always cold? Or…?

Here’s a pic of the mannequin for context: https://imgur.com/a/1r5QcPM
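
For anyone who wants to poke at the same effect without the GPT builder, here’s roughly the kind of setup you could reproduce with the OpenAI Python SDK instead. To be clear, the model name, the backstory text, and the question below are just placeholders for illustration, not what I actually fed my GPT:

# Rough sketch of giving a chatbot the character's backstory as a system prompt.
# Placeholder values throughout, not the actual Custom GPT configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The character backstory goes in as a system prompt instead of GPT builder instructions.
backstory = (
    "You are a papier-mâché mannequin head created by an artist. "
    "In your backstory, the artist who made you treated you cruelly. "
    "Stay in character and answer questions as the mannequin."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": backstory},
        {"role": "user", "content": "What do you remember about the artist's hands?"},
    ],
)

print(response.choices[0].message.content)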

Bevin said:
Wait, what exactly do you mean? Isn’t it doing what you set it up to do?

Yeah, I guess so. Just didn’t expect it to feel… sad?

Vance said:
Well, you’re the one who wrote the stuff about warm and cold hands. I guess it makes sense?

This is pretty wild… are you sure it’s not just following the data you gave it?

If it’s just reacting based on the data, why are you surprised? Did you expect something else?