

Yep. This is why you have to code your language to things that are emotionally evocative to them. To some Christians, it’s using the word evil (“this action is evil”). To others it’s using the word weak (“this makes him look really weak”).
Edit: and to be clear, it doesn’t actually need to be logical. You can say something like “his makeup makes him look weak” or “I heard that he sings in a falsetto, that’s super weird”. It doesn’t matter (by definition, it doesn’t need to be logical). What matters is repeatedly associating a negative stimulus with the target position you’re trying to dislodge (or a positive stimulus with a position you want held, but humans in America, and maybe generally, tend to be profoundly averse to negative stimuli). The reason these people are here is because this association game has been played very long and very hard. It is the basis of propaganda.






I’ll look into this, but at first blush this is mostly just tool calling with RAG. That doesn’t prevent a whole host of issues with AI, and it doesn’t really prevent lying. The general premise here is to put tight guard rails on how the model can interact with data, in some cases entirely forcing a function / tool path with macros. I’m not sure this would work any better than a stateful, traditional search algorithm over your own data sources, which would require far less hardware and battery and would be much more portable.
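To make that concrete, here’s a rough sketch of the pattern as I understand it (a forced retrieval path over your own data, with the model only allowed to answer from what comes back). All the names here (Doc, retrieve, generate_answer, DOCS) are hypothetical placeholders I made up for illustration, not anything from the project:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str


# Stand-in for "your own data sources" -- in practice an index, here a list.
DOCS = [
    Doc("faq-1", "Battery life is rated at 10 hours under normal use."),
    Doc("faq-2", "The device supports offline search over local files."),
]


def retrieve(query: str, k: int = 3) -> list[Doc]:
    """The only sanctioned data-access path: a plain keyword match."""
    terms = query.lower().split()
    scored = [(sum(t in d.text.lower() for t in terms), d) for d in DOCS]
    return [d for score, d in sorted(scored, key=lambda s: -s[0]) if score][:k]


def generate_answer(query: str, context: list[Doc]) -> str:
    """Stand-in for the model call. The guard rail: if retrieval returned
    nothing, refuse instead of letting the model free-associate."""
    if not context:
        return "No supporting documents found."
    cited = "; ".join(f"[{d.doc_id}] {d.text}" for d in context)
    return f"Based on your documents: {cited}"


def answer(query: str) -> str:
    # Forced tool path: retrieval always runs first, and the answer is
    # built only from what it returns.
    return generate_answer(query, retrieve(query))


print(answer("how long does the battery last"))
```

And that’s kind of my point: strip the model out of that loop and you’re left with an ordinary search function over local data, which does most of the useful work on its own.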
I like the effort, but this feels a bit like trying to make everything look like a nail.