I made an effort to get Gemini to call the Abrahamic God fictional, just to see how it would react. It took some time because of guardrails, but in the end it did tell me “both Santa and God are fictional”. After that we talked about Jesus, and I agreed with its statement that the question of Jesus’ existence is complex. Now it’s stuck in a loop of repeating that response, and the only thing that breaks the loop is me saying “what”. As soon as the statement is brought up, or a more complex question is asked, it just repeats the same response.

I did ask it why it’s doing this, but of course it isn’t aware of why, and it’s barely aware it’s happening at all. I love how AI just gaslights you when something goes wrong.

Just thought it was interesting.

      • CaptDust@sh.itjust.works
        12 days ago

        I’ve had Llama3 do this to me, it’ll start repeating itself on guardrail topics like a bug, but you can tell it that it’s acting insane and “snap out of it”, and it will acknowledge that and ask to change the topic. Weird stuff.

        • no banana@lemmy.worldOP
          12 days ago

The funny thing is that the only thing that makes it “snap out of it” is the word “what”. After that it can answer some simple prompts, but as soon as things get more complicated it spits out that same response again, and keeps doing so until I say “what”.