Title: Evaluating workflow managers for network performance evaluation
Authors: Barbette, Tom; Cornelis, Simon
Date: 2024 (record dated 2025-05-14)
Handle: https://hdl.handle.net/2078.2/41560
Type: text::thesis::master thesis
Identifier: thesis:48705
Keywords: Network; Performance evaluation; Benchmark; Experiment workflow; NPF; Evaluation; Orchestration; Reproducibility

Abstract:
Nowadays, a broad range of tools is available to help researchers assess network performance. Choosing which one to incorporate into an experimental workflow has become a challenging endeavor, leading many projects to be evaluated with purpose-built scripts instead of relying on the frameworks, extensions, and applications contributed by others. To guide primarily researchers, but also anyone else who needs to conduct network experiments, I cover and discuss tools that can play a role in this workflow, reducing the exploration needed to find a good match for orchestrating the benchmarking process. With the objective of discovering a handful of diverse workflows, I had the opportunity to survey students and researchers: students provide a fresh perspective, whereas researchers have greater expertise and can point to promising solutions. After this exploration, I cover a new feature I implemented in the Network Performance Framework (NPF) that aims to automate the experimental workflow. This improvement simplifies the manipulation of graphs produced by NPF by providing a Jupyter Notebook, a commonly used interactive environment.
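As a rough illustration of the notebook-based post-processing the abstract describes, the sketch below loads NPF-style results in a Jupyter cell and replots them with pandas and matplotlib. This is a minimal sketch, not the thesis's actual implementation: the file path, the column names (build, rate, throughput), and the units are assumptions made for illustration, not NPF's real output schema.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical CSV export of an NPF run; path and columns are assumptions.
    df = pd.read_csv("results/throughput.csv")

    # One line per tested build, mimicking the kind of comparison graph NPF produces.
    fig, ax = plt.subplots()
    for build, group in df.groupby("build"):
        ax.plot(group["rate"], group["throughput"], marker="o", label=str(build))
    ax.set_xlabel("Offered rate (Gbps)")  # assumed unit
    ax.set_ylabel("Throughput (Gbps)")    # assumed unit
    ax.legend()
    plt.show()

Working in a notebook cell like this, rather than regenerating figures through the command line, is what lets users tweak a produced graph interactively, which is the convenience the abstract attributes to the new NPF feature.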