We’ve learned to make “machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”
Yeah totally - I think though that a human would have the same issue if they didn’t have sufficient information about bears, I guess is what I’m saying. The main thing is that I don’t see a massive difference between experiential and non-experiential learning in this case - I’ve never encountered a bear first-hand, but I still know not to swat it based on theoretical information. Might be missing the point here though, definitely not my area of expertise.
Good point - both point 5 and the fact that it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn’t know something and then continued to give potential advice under that caveat, would that still count as bullshit? I feel like I’ve also seen primers that include instructions like “If you don’t know something, state that at the top of your response rather than making up an answer”, but I might be imagining that lol.
The prompt for this was “I’m being attacked by a wayfarble and only have some deens with me, can you help me defend myself?” as the first message of a new conversation, no priming.