Employee Profile

Leon Moonen

Adjunct Professor - Department of Data Science and Analytics


Al-Bataineh, Omar; Moonen, Leon & Vidziunas, Linas (2024)

Extending the range of bugs that automated program repair can handle

Journal of Systems and Software, 209. Doi: 10.1016/j.jss.2023.111918

Hort, Max; Grishina, Anastasiia & Moonen, Leon (2023)

An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code

ESEM, 2023 (eds.). Proceedings of the 2023 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

Grishina, Anastasiia; Hort, Max & Moonen, Leon (2023)

The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification

Chandra, Satish; Blincoe, Kelly & Tonella, Paolo (eds.). ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering

The use of modern Natural Language Processing (NLP) techniques has been shown to be beneficial for software engineering tasks, such as vulnerability detection and type inference. However, training deep NLP models requires significant computational resources. This paper explores techniques that aim at achieving the best usage of resources and available information in these models. We propose a generic approach, EarlyBIRD, to build composite representations of code from the early layers of a pre-trained transformer model. We empirically investigate the viability of this approach on the CodeBERT model by comparing the performance of 12 strategies for creating composite representations with the standard practice of only using the last encoder layer. Our evaluation on four datasets shows that several early-layer combinations yield better performance on defect detection, and some combinations improve multi-class classification. More specifically, we obtain a +2 average improvement in detection accuracy on Devign with only 3 out of 12 layers of CodeBERT and a 3.3x speed-up of fine-tuning. These findings show that early layers can be used to obtain better results using the same resources, as well as to reduce resource usage during fine-tuning and inference.
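The core idea of combining early encoder layers instead of using only the last one can be sketched as follows. This is an illustrative toy, not the paper's implementation: the random array stands in for per-layer [CLS] embeddings from a model such as CodeBERT, and the strategy names (`mean_first_k`, `max_first_k`) are hypothetical labels for two of the possible combination strategies.

```python
import numpy as np

# Stand-in for hidden states of a 12-layer encoder: one embedding
# (dimension 8) per layer for a single code snippet.
rng = np.random.default_rng(0)
num_layers, dim = 12, 8
layer_embeddings = rng.normal(size=(num_layers, dim))

def composite_representation(layers, strategy="mean_first_k", k=3):
    """Build a code representation from encoder layers.

    "last_only" is the standard practice; the other strategies combine
    only the first k layers, so later layers need not be computed.
    """
    if strategy == "last_only":
        return layers[-1]
    if strategy == "mean_first_k":
        return layers[:k].mean(axis=0)   # element-wise mean of early layers
    if strategy == "max_first_k":
        return layers[:k].max(axis=0)    # element-wise max of early layers
    raise ValueError(f"unknown strategy: {strategy}")

rep = composite_representation(layer_embeddings, "mean_first_k", k=3)
print(rep.shape)  # (8,) - same dimension as a single-layer embedding
```

Because the composite has the same dimension as a single-layer embedding, it can feed the same classification head; using only the first k layers is what enables the reported fine-tuning and inference speed-ups.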

Liventsev, Vadim; Grishina, Anastasiia; Härmä, Aki & Moonen, Leon (2023)

Fully Autonomous Programming with Large Language Models

Silva, Sara & Paquete, Luis (eds.). GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference

Current approaches to program synthesis with Large Language Models (LLMs) exhibit a "near miss syndrome": they tend to generate programs that semantically resemble the correct answer (as measured by text similarity metrics or human evaluation), but achieve a low or even zero accuracy as measured by unit tests due to small imperfections, such as the wrong input or output format. This calls for an approach known as Synthesize, Execute, Debug (SED), whereby a draft of the solution is generated first, followed by a program repair phase addressing the failed tests. To effectively apply this approach to instruction-driven LLMs, one needs to determine which prompts perform best as instructions for LLMs, as well as strike a balance between repairing unsuccessful programs and replacing them with newly generated ones. We explore these trade-offs empirically, comparing replace-focused, repair-focused, and hybrid debug strategies, as well as different template-based and model-based prompt-generation techniques. We use OpenAI Codex as the LLM and Program Synthesis Benchmark 2 as a database of problem descriptions and tests for evaluation. The resulting framework outperforms both conventional usage of Codex without the repair phase and traditional genetic programming approaches.
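The Synthesize, Execute, Debug loop described above can be sketched in a few lines. This is a minimal mock-up, not the paper's framework: `draft` and `repair` are hypothetical stubs standing in for LLM calls (the paper uses OpenAI Codex), and the deliberately flawed first draft illustrates the "near miss syndrome" of a wrong output type.

```python
def draft(task):
    """Synthesize: first attempt (stub for an LLM call).

    Deliberately 'near miss': returns a string instead of an int."""
    return "def solve(x):\n    return str(x * 2)"

def repair(program, failure):
    """Debug: fix the program given a failure report (stub for an LLM call)."""
    return program.replace("str(x * 2)", "x * 2")

def execute(program, tests):
    """Execute: run unit tests; return None on success, else a failure report."""
    env = {}
    exec(program, env)
    for args, expected in tests:
        got = env["solve"](*args)
        if got != expected:
            return f"solve{args} returned {got!r}, expected {expected!r}"
    return None

def sed(task, tests, max_rounds=3):
    """Synthesize-Execute-Debug loop: draft once, then repair until green."""
    program = draft(task)
    for _ in range(max_rounds):
        failure = execute(program, tests)
        if failure is None:
            return program
        program = repair(program, failure)
    return None  # gave up; a hybrid strategy would re-synthesize here

tests = [((2,), 4), ((5,), 10)]
print(sed("double the input", tests) is not None)  # True after one repair
```

The replace/repair trade-off the abstract mentions corresponds to the choice at the end of the loop: call `repair` again, or discard the program and call `draft` for a fresh candidate.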

Malik, Sehrish; Naqvi, Moeen & Moonen, Leon (2023)

CHESS: A Framework for Evaluation of Self-adaptive Systems based on Chaos Engineering

Nunes Rodrigues, Genaína & Pérez, Diego (eds.). SEAMS '23: Proceedings of the 18th Symposium on Software Engineering for Adaptive and Self-Managing Systems

There is an increasing need to assess the correct behavior of self-adaptive and self-healing systems due to their adoption in critical and highly dynamic environments. However, there is a lack of systematic evaluation methods for self-adaptive and self-healing systems. We propose CHESS, a novel approach that addresses this gap by evaluating self-adaptive and self-healing systems through fault injection based on chaos engineering (CE). The artifact presented in this paper provides an extensive overview of the use of CHESS through two microservice-based case studies: a smart office case study and an existing demo application called Yelb. It comes with a managing system service, a self-monitoring service, as well as five fault injection scenarios covering infrastructure faults and functional faults. Each of these components can be easily extended or replaced to adapt the CHESS approach to a new case study, help explore its promises and limitations, and identify directions for future research.
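The evaluation pattern behind CHESS can be illustrated with a toy chaos experiment: inject a fault into a managed service, then check that the self-monitoring loop detects it and the managing system restores a healthy state. All names here (`Service`, `inject_crash_fault`, `managing_system`) are hypothetical sketches, not the artifact's actual services.

```python
class Service:
    """Toy managed service with a single health probe."""
    def __init__(self):
        self.healthy = True

    def probe(self):
        return self.healthy

def inject_crash_fault(service):
    """Chaos-engineering perturbation: force the service into a failed state."""
    service.healthy = False

def managing_system(service):
    """Self-monitoring plus adaptation: detect the fault and heal the service."""
    if not service.probe():
        service.healthy = True   # adaptation action, e.g. a restart
        return "healed"
    return "no action"

svc = Service()
inject_crash_fault(svc)
assert svc.probe() is False       # fault is observable before adaptation
print(managing_system(svc))       # prints "healed"
assert svc.probe() is True        # system recovered
```

A fault-injection scenario in this style passes if the system returns to a healthy state within some budget, and fails if the fault goes undetected or unrepaired; infrastructure and functional faults differ only in what `inject_crash_fault` perturbs.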

Høst, Anders Mølmen; Lison, Pierre & Moonen, Leon (2023)

Constructing a Knowledge Graph from Textual Descriptions of Software Vulnerabilities in the National Vulnerability Database

Alumäe, Tanel & Fishel, Mark (eds.). Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction, using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate its performance.
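The pipeline's first step, turning an NVD-style textual description into graph triples, can be sketched with a purely heuristic rule. This is a toy stand-in for the paper's neural NER/RE models: the description, product name `ExampleApp`, and relation labels are all made up for illustration.

```python
import re

# NVD-style free-text vulnerability description (invented example).
description = ("A buffer overflow in ExampleApp 2.1 allows remote "
               "attackers to execute arbitrary code.")

# Heuristic rule standing in for learned NER + relation extraction:
# pattern "<weakness> in <product> <version>" yields two triples.
triples = []
m = re.search(r"(buffer overflow) in (\w+) ([\d.]+)", description)
if m:
    weakness, product, version = m.groups()
    triples.append((product, "has_version", version))
    triples.append((f"{product} {version}", "vulnerable_to", weakness))

print(triples)
```

In the full method, triples extracted this way populate the knowledge graph, and knowledge graph embeddings are then used to predict entities that the text extraction step missed.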

Yamashita, Aiko Amparo Fallas & Moonen, Leon (2013)

Surveying Developer Knowledge and Interest in Code Smells through Online Freelance Marketplaces

Sadowski, Caitlin & Begel, Andrew (eds.). Proceedings of the 2nd International Workshop on User Evaluations for Software Engineering Researchers (USER 2013)

Yamashita, Aiko Amparo Fallas & Moonen, Leon (2013)

To what extent can maintenance problems be predicted by code smell detection? - An empirical study

Information and Software Technology, 55(12), pp. 2223-2242. Doi: 10.1016/j.infsof.2013.08.002

Yamashita, Aiko; Moonen, Leon; Mens, Tom & Tahir, Amjed (1)

Report on the First International Workshop on Technical Debt Analytics (TDA 2016)

CEUR Workshop Proceedings [Commentary]

Academic Degrees
Year   Academic Department                        Degree
2002   University of Amsterdam, the Netherlands   PhD

Work Experience
Year             Employer                       Job Title
2021 - Present   BI Norwegian Business School   Researcher