Prof. Dr. Konrad Rieck: "Attacking Code Stylometry with Adversarial Learning"

23 January

As part of the research colloquium Business Informatics and Data Science, Prof. Dr. Konrad Rieck from the Institute for System Security, TU Braunschweig, will speak on "Attacking Code Stylometry with Adversarial Learning".


Date and location: 23 January 2020, 12:15 p.m., Room C 40.255

Abstract:

Source code is a very rich representation of a program. Often, the code contains stylistic patterns that can be used to identify the developer, a task referred to as code stylometry. Methods for code stylometry have made remarkable progress in recent years and enable spotting individuals among thousands of developers with high accuracy. But are these methods reliable and robust against forgeries?

In this talk, we attack code stylometry. We exploit the fact that methods for stylometry rely on machine learning and thus can be deceived by adversarial examples of source code. To this end, our attacks perform a series of semantics-preserving code transformations that mislead learning-based approaches but appear plausible to a human. Our attacks make it possible to arbitrarily transform code and to imitate the coding style of other developers. As a result, we demonstrate that current methods for code stylometry are not reliable and should not be used in practice.
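The abstract does not spell out what a semantics-preserving code transformation looks like. As a minimal illustrative sketch (this is not Rieck's actual attack, and the `AugAssignExpander` class is a hypothetical name), the following Python snippet rewrites augmented assignments such as `s += x` into the equivalent `s = s + x`. The program's behavior is unchanged, but a stylistic feature that a stylometry classifier might rely on has been altered:

```python
import ast

class AugAssignExpander(ast.NodeTransformer):
    """Rewrite `x += e` into `x = x + e`.

    Behavior is preserved, but one stylometric feature
    (use of augmented assignment) is changed.
    Handles only simple variable targets, for brevity.
    """
    def visit_AugAssign(self, node):
        if not isinstance(node.target, ast.Name):
            return node  # leave complex targets (e.g. a[i] += e) untouched
        return ast.copy_location(
            ast.Assign(
                targets=[ast.Name(id=node.target.id, ctx=ast.Store())],
                value=ast.BinOp(
                    left=ast.Name(id=node.target.id, ctx=ast.Load()),
                    op=node.op,
                    right=node.value,
                ),
            ),
            node,
        )

original = (
    "def total(xs):\n"
    "    s = 0\n"
    "    for x in xs:\n"
    "        s += x\n"
    "    return s\n"
)

# Parse, transform, and turn the tree back into source code
# (ast.unparse requires Python 3.9+).
tree = ast.fix_missing_locations(AugAssignExpander().visit(ast.parse(original)))
transformed = ast.unparse(tree)

# Both variants compute the same result.
ns_a, ns_b = {}, {}
exec(original, ns_a)
exec(transformed, ns_b)
assert ns_a["total"]([1, 2, 3]) == ns_b["total"]([1, 2, 3])
```

A real attack of the kind described in the talk would chain many such rewrites and search for a combination that flips the classifier's attribution while remaining plausible to a human reader.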