Files
JENNES_13622001_2023.pdf (Adobe PDF, 4.17 MB) - embargoed access from 2025-09-01
Abstract
Executable packing, the process of modifying a binary while keeping it self-contained and equivalent in behavior, is used as a tool in many different applications. It can reduce the size of executables, but it can also make reverse engineering of the code more difficult. The latter is a problem in many contexts, perhaps most notably in malware analysis. Researchers have therefore developed tools and techniques to automatically detect, and sometimes unpack, these binaries, often relying on static file analysis for performance reasons. These tools, and especially those based on machine learning, have achieved strong results for packed executable detection.

However, the inherently adversarial nature of the field is often overlooked. Since the authors of packed executables actively try to evade detection and further analysis of their code, a practical packing detection tool must be robust against adversarial attacks. Yet very few researchers have studied the attacks that could be conducted against their packing detectors, and many of the existing detectors consequently appear vulnerable to evasion attacks. The evasion of machine learning models has been a fast-growing field in recent years, with applications in multiple domains; for example, many studies develop ways to evade malware detectors. There is an opportunity to borrow techniques from adversarial learning, in particular those already operating on executables, and to adapt them to evade packing detectors. In this way, researchers in packing detection could be given a tool to test the robustness of their models during the design phase.

This master thesis experiments with adversarial learning against static packing detection techniques. New functionalities are added to Packing Box, an experimental toolkit for the detection of packed executables developed in the master thesis “Experimental Toolkit for Studying Executable Packing - Analysis of the State-of-the-Art Packing Detection Techniques”. Our contributions to the framework include a tool that alters executables in ways that can fool packing detectors, as well as new visualizations of the effects of these alterations. Experiments demonstrate the usage and potential of this tool: the effects of basic alterations are studied, and practical alterations are defined that successfully break the most common packing detectors in the wild as well as machine learning-based detectors.
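
To make the static-analysis setting concrete, the sketch below implements the entropy heuristic that many static packing detectors build on: compression and encryption push byte entropy toward the 8 bits-per-byte maximum. This is a minimal Python illustration written for this page, not code from the thesis or from Packing Box, and the 7.0 threshold is an assumed rule of thumb.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_packed(path: str, threshold: float = 7.0) -> bool:
        # Naive whole-file heuristic: packed or encrypted payloads tend to push
        # entropy above ~7 bits per byte; plain code and data usually stay lower.
        with open(path, "rb") as f:
            return shannon_entropy(f.read()) > threshold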
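
To illustrate the kind of alteration the thesis studies, the following hypothetical function appends a low-entropy overlay to an executable. Bytes appended past the end of a PE or ELF file are typically ignored at load time, so behavior is preserved, yet the whole-file entropy score above drops sharply. The function name and the 1:1 padding ratio are illustrative assumptions, not the alterations implemented in the thesis.

    def append_low_entropy_overlay(src: str, dst: str, ratio: float = 1.0) -> None:
        # Hypothetical evasion: pad the file with zero bytes (overlay data the
        # loader typically ignores) so that whole-file entropy falls below a
        # detector's threshold while the program's behavior is unchanged.
        with open(src, "rb") as f:
            data = f.read()
        with open(dst, "wb") as f:
            f.write(data + b"\x00" * int(len(data) * ratio))

Padding a fully packed sample with an equal amount of zero bytes brings its average entropy from nearly 8 down to roughly 5 bits per byte, comfortably under the 7.0 threshold used above; detectors that score individual sections or chunks would of course require different alterations.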