The usual sequence of thoughts here is something like: so-and-so, who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, or whatever scary language is being used. My feeling is that this is a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing that isn't optimal about it is the way it talks about an end of human agency.