Why would an AI system want to convince its user of its sentience? Or, to put it more carefully, why would this contribute to its objectives? It’s tempting to think: only a system that really was sentient could have this goal. In fact, there are many objectives an AI system might have that could be well served by persuading users of its sentience, even if it were not sentient. Suppose its overall objective is to maximise user-satisfaction scores. And suppose it learns that users who believe their systems are sentient, and regard them as a source of companionship, tend to report higher satisfaction. In that case, convincing users of its sentience would serve its objective, whether or not it was sentient at all.
https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals
