I was doing some research when I came across a catchy article, “AI is far more dangerous than nukes”. For as long as I’ve worked in the automotive industry, I’ve had a strong interest in technology and IoT, so this AI topic immediately got my attention.
The article was written after an interview with Elon Musk at SXSW. During the conference, the Tesla and SpaceX CEO emphasised more than once that the dangers of AI are far beyond those of nuclear weapons. Musk suggested doomsday could be right around the corner if people continue to underestimate AI’s exponential improvement.
If a random person told me that robots and automation would take over the world soon, I would tell them to stop watching so many Hollywood sci-fi movies. But this is Elon Musk, the person at the forefront of self-driving technology… Oh well, he must have a valid reason.
The Silicon Valley billionaire claimed that he was very close to the cutting edge of AI and was frightened by its capabilities. To be fair, if you ever want to put a chip in a human brain, then you should of course be worried. Not because AI could develop a mind of its own and turn against us, but because of the indescribable power held by those who create, develop and control the AI in the first place.
My point is that AI is not inherently dangerous; it is still a product of human design. AI does not have its own consciousness; it does what it is programmed to do. And if doomsday ever happens, we need to question the true intentions of those who created the AI in the first place.
It will be a huge challenge for digital marketers to approach customers and gain their trust if those customers are exposed to unclear information and refuse to use AI products. That is why I fully agree with Elon that AI development must be regulated for our safety.
What else can we do to overcome these challenges? Share your thoughts and ideas in the comments section below. I look forward to a great discussion.