How hackers could use machine learning against itself

Read more on MIT Technology Review

Featured contributions

  • As society and industry become more reliant on "always on", industrialised AI, we will need to design and build the next generation of security mechanisms to safeguard against malicious attacks, just as we did during the digital-transformation wave five years ago. This time, though, we will need to be faster and more adaptive to stay ahead of the AI advancement curve.

More contributions

  • The article focuses on a specific form of machine learning called adversarial machine learning, which is actually the academic study of machine-learning techniques in the presence of an adversarial opponent. Just as with hacking, there are both white-hat practitioners and bad actors.

    The authors discuss feeding input into an algorithm to surface the information, or even the iterative process, it has been trained on, and then "distorting input in a way that causes the system to misbehave." That is not all of the discipline, obviously, and it is a manipulative misuse of the technology. There is also a multitude of other, similar machine-learning techniques that can be used with malice.

  • Kind of scary. Actually, it's not kind of scary, it's really scary! I'm in no way an expert in the field, but shouldn't these things be solved before releasing the technology? I'm hoping there are more redundancies in the self-driving cars already on the road. If I understand correctly, you could trick a military AI into targeting the wrong targets?

  • Can anyone say they DIDN'T foresee this happening? 🤔
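The "distorting input in a way that causes the system to misbehave" idea discussed above can be illustrated with a minimal sketch. This is a toy, hypothetical example (not from the article): a hand-picked linear classifier attacked with a fast-gradient-sign-style perturbation, where each input feature is nudged by a small amount in the direction that lowers the model's score, flipping its decision.

```python
import numpy as np

# Toy linear classifier: score = w . x; a positive score means class 1.
# The weights are fixed, as if already trained; the values are illustrative.
w = np.array([1.0, -2.0, 0.5, 1.5])

def classify(x):
    return int(w @ x > 0)

# An input the model confidently places in class 1 (score = 0.9).
x = np.array([0.6, 0.1, 0.4, 0.2])

# Fast-gradient-sign-style attack: for a linear model the gradient of the
# score with respect to the input is just w, so we step each feature by
# epsilon against the sign of w to push the score down.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(classify(x))      # class 1 on the clean input
print(classify(x_adv))  # class 0 on the perturbed input
```

Every feature moves by at most 0.3, yet the decision flips; against an image classifier the same trick uses the network's gradient instead of `w`, and the perturbation can be small enough to be invisible to a human.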