It’s a good news, bad news scenario: IT teams across government, agriculture, healthcare, and every industry in between are receiving an abundance of threat notifications from their next-gen firewalls and operating systems. The cause? Malware lurking in their organizations’ networks.
- A cyber-attack launched on government sites throws mission-critical systems into disarray and affects public safety.
- A hostage situation gets out of hand and law enforcement wants to get up close to the suspect without risking the lives of the hostages or officers.
In both scenarios, government and law enforcement organizations are looking to artificial intelligence (AI) to solve critical issues.
In the summer of 2017, two behemoths of technology had a very public debate over whether artificial intelligence could spell doom for humanity.
Elon Musk of Tesla and SpaceX has warned that AI needs an “alarm bell” and poses an “existential risk for human civilization.” As such, Musk is calling for proactive regulation so that AI doesn’t destroy humanity, whether through AI-powered robots killing or enslaving humans or, at the very least, replacing human jobs.
This bleak attitude toward AI prompted Facebook’s Mark Zuckerberg to call Musk’s view “irresponsible” and dismiss his doomsday fears as unnecessary negativity, according to The Atlantic.