When AI developers talk about the “alignment problem,” they’re generally referring to the risk that the goals of a superintelligent AI will diverge from those of humanity, potentially posing existential risks. But there’s another alignment problem they don’t talk about, one that’s even more clear and present: the goals of AI owners could be at odds with those of the rest of humanity. Frank Herbert, the author of Dune, foresaw this back in the sixties. The problem with letting machines do your thinking for you is that you’re actually letting the oligarchs do your thinking for you by proxy. And they may not have your best interests at heart, but rather their own.