This article makes some interesting points about how understanding the process of malevolent AI development is the best way the technology community can combat future issues with poorly or maliciously designed AI. While it's not a step-by-step guide to converting your Raspberry Pi and toaster oven into a Terminator, it does point out some red flags that could open up R&D of new technologies to potential abuse. I know the idea of malicious machines might seem far-fetched to some, but I couldn't help being reminded of the rise of malware, which is probably the closest example to date of technology being used maliciously against people (or at least our data). History shows that malware started as little more than benign pranks in the early days of the Internet, only to escalate into things like the ransomware we see today.
In an effort to learn from history, I ask this shamelessly impossible question: What would our current security posture look like had there been more aggressive research on the impact of malware from the beginning?
Before letting commercialism drive technology, I ask that the technology community (myself included) continue to look offensively at what we are producing and its potential misuses BEFORE putting it on the market, be it AI, IoT, or space travel.