A lot of the people commenting publicly on AI aren’t experts at all. I direct a team that works on AI every day, and most of my reading over the past few years has been related to it. It raises real issues in terms of ethics and social implications, but it can be guardrailed. We are a long, long way from Skynet lol.
A couple good books on the subject:
The Math of Life and Death by Kit Yates (a senior lecturer in mathematical sciences at the University of Bath)
The Alignment Problem by Brian Christian, a Brown computer science graduate who has written a series of books on the human effects of AI, which a number of tech CEOs have recommended, including Musk and the CEO of Microsoft.
I’m currently listening to a few interviews with MIT professor Max Tegmark, who led the call for the six-month pause that Musk and Wozniak signed on to. He’s a very, very intelligent person, but I do have a few points of contention with the strategy and philosophy he espouses. Some of that may be me needing to learn more about AI security, but some of it has to do with the unseen implications of putting AI control policy in federal hands.
Ultimately, I think the AI alignment problem is still very much in our control, though as Tegmark notes, our runway for addressing it is shortening due to advances in AI architecture and training. I don’t agree that AI is the most pressing or greatest issue humanity faces, because climate change involves finite resources and irreversible consequences whose effects we’re already seeing. Our ability to reach a solution that lets humanity continue to thrive is slipping out of our hands faster because of climate than because of AI, and we have less control over it.