

This isn’t the Cthulhu universe. There isn’t some horrible truth ChatGPT can reveal to you that will literally drive you insane. Some people use ChatGPT a lot, some people have psychotic episodes, and there’s going to be enough overlap to write sensationalist stories even if there’s no causal relationship.
I suppose ChatGPT might be harmful to someone who is already delusional by agreeing with the delusion when pressed, but I’m not sure about that, because as far as I know you can’t talk a person into or out of psychosis.
I haven’t noticed this behavior coming from scientists particularly often - the ones I’ve talked to generally accept that consciousness is somehow a product of the human brain, that the brain performs computation and obeys physical law, and that therefore every aspect of the brain, including the currently unknown mechanism that gives rise to consciousness, can in principle be modeled arbitrarily accurately on a computer. They see this as fairly straightforward, but they have no particular desire to convince the public of it.
This does lead to some counterintuitive results. If you have a digital AI, does a stored copy of it have subjective experience, despite the fact that its state is not changing over time? If not, does a series of stored copies losslessly representing a sequence of consecutive states of that AI? If not, does a computer currently in one of those states, awaiting an instruction either to compute the next state or to load it from the stored series? If not (or if the answer depends on whether it computes the state or loads it), then is the presence or absence of subjective experience determined by factors outside the simulation, e.g. something supernatural from the perspective of the AI? I don’t think such speculation is useful except as entertainment - we simply don’t know enough yet to even ask the right questions, let alone answer them.
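
In that entertainment spirit, the compute-it-or-load-it fork is at least easy to make concrete. Here’s a toy sketch (Python; every name and number in it is invented for illustration, not anyone’s actual model): a deterministic update rule produces states that are bit-identical whether recomputed from the previous state or loaded from storage, so no observation made from within the simulation could distinguish the two paths.

    # Toy model of the thought experiment above; all names are made up.
    # A deterministic "simulation" whose next state is a pure function
    # of the current state (a stand-in for a digital AI's update rule).

    def step(state: int) -> int:
        # Advance one tick using a 64-bit linear congruential generator.
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    initial = 42
    history = [initial]            # the "series of stored copies"
    for _ in range(10):
        history.append(step(history[-1]))

    recomputed = step(history[4])  # path A: compute state 5 from state 4
    loaded = history[5]            # path B: load state 5 from storage

    # Bit-identical, so nothing inside the simulation can tell which
    # path was taken; any difference would have to come from outside.
    assert recomputed == loaded

Which of those paths, if either, comes with subjective experience is exactly the part nobody knows how to formalize.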