Editor’s note: This article was supported by GeekPwn. We believe in transparency in our publishing and monetization model. Read more here.

You may have heard the machine learning term “adversarial examples”, and perhaps even seen some demonstrations of them. But have you ever seen a real-time contest of adversarial attacks and defenses?

To boost research on adversarial examples, GeekPwn 2018, the AI tech platform working on cutting-edge security issues, has designed a Competition on Adversarial Attacks and Defenses (CAAD) focused on image recognition security. Three sub-competitions are on the agenda for this year’s challenge, which officially launched in May.

When a computer program and a person are shown the same image, they see it differently and may disagree about what it is. Is it a dog or a polar bear, a parrot or an ostrich, a car or a plane? Want to know the answer, and understand how a computer ‘thinks’ and sorts through all the data? Then you’d better come to GeekPwn 2018.

Why are we doing this? Because this is currently one of the most concrete challenges in AI security, and GeekPwn will show you the latest research results in the field.

Adversarial examples that have stumped human intelligence

Already used by millions of consumers as key components of smart homes, common AI devices include facial recognition entry scanners, pupil-recognition safes, cell phones, and door locks. On the surface, AI appears to have made everything easier and more convenient, but in fact researchers have experienced more AI “failures” than successes, and some of those “failures” are caused by adversarial examples. Most AI classifiers are based on machine learning, which can potentially be compromised by hackers. At GeekPwn 2016, Ian Goodfellow, a Senior Researcher at Google Brain, gave a demo of machine vision deception.

Ian Goodfellow, Senior Researcher at Google Brain

He added minor perturbations to an image of a panda, causing a machine learning system to mistake it for an image of a gibbon. These small changes are normally not even noticed by people, but they can be enough to make a classifier get it wrong.

Panda or Gibbon?

Ian’s adversarial example showed that even the slightest change to a sample can deceive a neural network image classifier into making the wrong judgment, demonstrating how vulnerable current AI systems are.
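To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) that Goodfellow introduced, written in PyTorch. The pretrained ResNet-50, the epsilon value, and the random tensor standing in for a real panda photo are illustrative assumptions, not the exact setup of the original demo.

```python
# Minimal FGSM sketch (illustrative; not the original demo's exact model or settings).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()  # any ImageNet classifier works here

def fgsm_attack(image, label, epsilon=0.007):
    """Nudge every pixel in the direction that increases the loss for `label`."""
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # stand-in for a real, preprocessed photo of a panda
y = model(x).argmax(1)           # use the model's own prediction as the "true" label
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # prediction before vs. after
```

Because the perturbation is bounded by epsilon, the adversarial image looks essentially identical to the original to a human viewer.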

Not long ago, the deceptive use of adversarial examples was taken to the next level. Adversarial examples no longer simply deceive machines; now they can even fool humans. In the image below, both AI and humans see a cat in the left-hand image and a dog in the right-hand image. In fact, the right-hand image is simply the left-hand image with adversarial perturbations added.

AI disruption can even fool human eyes

These examples all lead us to realize that machine vision is not as good as it’s made out to be. Adversarial examples can be crafted to exploit existing vulnerabilities, creating a security risk. They can be used to attack machine learning systems even when the attacker has no access to the underlying models. For example, if the visual system of a self-driving car were deceived, how would it distinguish people, vehicles, and road signs? The consequences could be catastrophic.

Recognition Technology: Teaching AI to learn better

In the long run, machine learning and AI systems are destined to become more and more powerful. But machine learning security vulnerabilities like adversarial examples could be used to jeopardize, or even take control of, those powerful AI systems. So, from the perspective of machine learning security, what defenses are possible?

One effective defensive strategy is adversarial training: during training, the model is fed a mix of clean and adversarial samples. As training progresses, the model classifies clean images more accurately while also becoming more robust to adversarial ones.
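As a rough illustration of what that looks like in practice, here is a minimal adversarial training loop in PyTorch. The tiny linear model, the random data standing in for a real data loader, the epsilon value, and the 50/50 loss weighting are all illustrative assumptions rather than a prescribed recipe.

```python
# Minimal adversarial-training sketch (illustrative model, data, and settings).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def fgsm(images, labels, epsilon):
    """Craft adversarial versions of a batch with one fast-gradient-sign step."""
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

for step in range(100):                          # stands in for a real data loader
    images = torch.rand(16, 3, 32, 32)
    labels = torch.randint(0, 10, (16,))
    adv_images = fgsm(images, labels, epsilon=0.03)

    optimizer.zero_grad()
    # Train on a mix of clean and adversarial samples so the model handles both.
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```

In practice, stronger multi-step attacks are often used to generate the training-time adversarial samples, but the principle is the same: the model repeatedly sees perturbed inputs paired with their correct labels.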

Fooling machine eyes with a human smokescreen?

To find the best strategies for defending against adversarial examples, and to explore this exciting field, compete for $100,000 in cash prizes at the 2018 GeekPwn CAAD Challenge, co-directed by Google Brain’s Alexey Kurakin and Ian Goodfellow and by Dawn Song, Professor of Computer Science at the University of California, Berkeley.

The contest will focus on adversarial examples that regularly cause machine learning classifiers to make mistakes. Three sub-competitions will be set up for adversarial attack and defense research in the field of image recognition, helping to reduce risk in AI and promote the healthy growth of the sector. Each sub-competition requires players to submit a program. Players can register independently and participate in more than one sub-competition.

  • The first sub-competition is a non-targeted attack. The goal is to slightly modify an original image so that the classifier fails to identify it correctly.
  • The second sub-competition is a targeted attack. The goal is to slightly modify an original image so that the classifier misidentifies it as a specific class chosen by the attacker (see the sketch after this list for how the two attack types differ).
  • The last sub-competition is a defense contest. The goal is to build a machine learning classifier that offers a strong defense against adversarial examples and can classify them correctly.
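To clarify how the first two tracks differ, here is an illustrative side-by-side sketch of a single FGSM-style step for each, again in PyTorch; the function names, epsilon value, and loss choice are assumptions for illustration, not the competition’s specification.

```python
# Illustrative contrast between the two attack tracks (not the official rules).
import torch
import torch.nn.functional as F

def untargeted_step(model, image, true_label, epsilon=0.007):
    # Non-targeted attack: push the image *away* from its true class.
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), true_label).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

def targeted_step(model, image, target_label, epsilon=0.007):
    # Targeted attack: pull the image *toward* a class chosen in advance.
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), target_label).backward()
    return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
```

The only difference is the sign of the step and whose label drives the loss; the defense track then has to classify images produced by both kinds of attack correctly.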

In simple terms, GeekPwn invites the world’s top hackers to use CAAD as an opportunity to put “deep learning” through “combat training,” thereby improving the robustness of machine learning systems and promoting their healthy growth.

The CAAD Challenge will take place online, with registration open from 10 May to 31 August 2018 and an awards ceremony in Shanghai. There will also be a CAAD showcase challenge at the Las Vegas edition in August. The advisor team and judging panel will be composed of top industry experts, including Alexey Kurakin, senior R&D engineer at Google; Dawn Song, professor of computer science at the University of California, Berkeley; Zhu Jun, associate professor at Tsinghua University and deputy director of the State Key Laboratory of Intelligent Technology and Systems; and Wang Haibing, director of GeekPwn Lab.

Apart from the CAAD challenge, GeekPwn 2018 will also include a data tracking challenge. In the era of AI and big data, linking data from different sources across multiple dimensions and producing accurate results requires advanced technology.

Can you analyze a virus app installed on a victim’s mobile phone, sifting through a huge amount of virus data to discover who is behind it? As long as you can “play AI”, we welcome you to register and use your extraordinary tech powers to compete in these seemingly “impossible” challenges.

GeekPwn 2018 will be held in Las Vegas (USA) and Shanghai (China) on 10 August and 24 October, respectively. Sign up here, and check the official website, geekpwn.org, to find out more!

TechNode Guest Editors represent the best our community has to offer: insight and perspective on how technology is affecting business and culture in China.
