Artificial intelligence is becoming ever more sophisticated. Recent projects have produced robots that can identify items concealed in clutter or create flawless sand art, and we already have robots that clean our floors autonomously. And if Amazon has its way, robots will soon be delivering packages to our doors. So should we be afraid of our creations?
Elon Musk, Stephen Hawking, and many other industry leaders think so, and have signed an open letter from the Future of Life Institute (FLI). In the letter, they praise the advances we have made in artificial intelligence (AI) and the way we are turning such discoveries into “economically valuable technologies”. But they also call attention to the need to maximize the “societal benefit” of AI, and to make sure that “our AI systems… do what we need them to do”. In a document attached to the letter, they present several research directions we should pursue, ranging from short-term priorities, like managing the economic impact of AI, to long-term priorities, such as maintaining control over AI.
But enough about them: what are your thoughts? Should we be scared for our future, or are we worrying needlessly? How do we make sure the robots of the future don’t turn their backs on us? How can we integrate future robots safely and ethically into our society? I invite you all to share your thoughts on artificial intelligence below!
In my opinion, we should definitely be wary of AI, but not in a bad way. AI is certainly useful, and we can engineer systems that do things better than we ever could. But we should also have regulatory guidelines on how we design them. I think AI should be created for discrete, well-defined purposes. In other words, we shouldn’t start creating robots that are self-aware and have emotions just because we can, or to prove that we can.
I don’t know if fear is necessarily the right word, but we should be cautious.