I think it would say: "After reviewing and analysing many thousands of posts, the way to solve your problem is to switch it off and then back on again."
The problem isn't so much that these things could become sentient; it's that humans don't understand how they work and probably never will. Without warning they will come up with plausible-sounding lies or false data. This is extremely problematic, and there doesn't seem to be any way of stopping it from happening. Imagine if your AI is developing a new protein-based drug and for whatever reason decides to design it so it folds incorrectly, leading to a new deadly prion disease...