Machine Learning Engineer | DevOps | Full-Stack Developer
About me:
I’m a creative, enthusiastic, and competitive developer with a strong foundation in software engineering and a passion for machine learning and music tech. I'm currently wrapping up my master’s thesis at the Faculty of Electrical Engineering and Computing, where I’ve delved into advanced audio signal processing and deep learning. My research focuses on emulating guitar effects using neural networks. I’m deeply motivated by the challenge of learning new things, which fuels my technical versatility, leadership, and drive to turn bold ideas into reliable, scalable solutions. As Cal Newport puts it, "The ability to learn quickly is a superpower in today's fast-paced world."
More about me:
Besides programming, I enjoy listening to, playing, and composing music. I am a self-taught guitarist, and I recently bought a piano, which has become my new favorite obsession. I believe programming and playing instruments engage the same creative part of the brain. I train regularly to keep my body and mind healthy. Naturally, these full-body workouts make me hungry, so I also enjoy cooking. I'm a casual D&D player and enjoy kicking ass in Catan.
Genetic Algorithm for Image Recreation
Developed a genetic algorithm in Python using Pygame to recreate target images. Successfully approximated the Mona Lisa by evolving polygon-based images whose fitness was scored with a human-perception-based color-difference metric.
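A minimal sketch of the evolutionary loop behind a project like this. It is not the original Pygame implementation: it evolves axis-aligned rectangles instead of polygons to stay dependency-free, uses a random stand-in target image, and the "redmean" weighting is just one common perceptual approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
# Stand-in target; the real project loaded the Mona Lisa.
target = rng.integers(0, 256, (H, W, 3)).astype(np.float64)

def render(genome):
    """Rasterise a genome (a list of colored rectangles) onto a white canvas."""
    img = np.full((H, W, 3), 255.0)
    for x, y, w, h, r, g, b in genome:
        img[y:y + h, x:x + w] = (img[y:y + h, x:x + w] + np.array([r, g, b])) / 2
    return img

def fitness(genome):
    """Negative perceptually weighted color distance (redmean approximation)."""
    img = render(genome)
    diff = img - target
    r_mean = (img[..., 0] + target[..., 0]) / 2.0
    weights = np.stack([2 + r_mean / 256, np.full_like(r_mean, 4.0),
                        2 + (255 - r_mean) / 256], axis=-1)
    return -np.sum(weights * diff ** 2)

def random_gene():
    x, y = rng.integers(0, W - 1), rng.integers(0, H - 1)
    w, h = rng.integers(1, W - x + 1), rng.integers(1, H - y + 1)
    return (x, y, w, h, *rng.integers(0, 256, 3))

def mutate(genome):
    """Replace one randomly chosen rectangle with a fresh random one."""
    child = list(genome)
    child[rng.integers(len(child))] = random_gene()
    return child

# Simple (1+1) evolutionary loop: keep the mutated child only if it scores better.
best = [random_gene() for _ in range(50)]
best_fit = fitness(best)
for step in range(2000):
    child = mutate(best)
    f = fitness(child)
    if f > best_fit:
        best, best_fit = child, f
print("final fitness:", best_fit)
```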
Neuroevolution of Augmenting Topologies
Implemented the NEAT algorithm to evolve neural networks capable of playing the Chrome Dinosaur Game. The system trained agents to learn jumping and ducking behavior purely through neuroevolution, without any hardcoded rules. After several generations, the best-performing network achieved a score of over 10,000, effectively beating the game.
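A rough sketch of how such a training loop can be wired up with the neat-python package. The DinoGame wrapper, its input features, and the neat_config.txt path are assumptions made for illustration; the actual project may be structured differently.

```python
import neat  # the neat-python package

class DinoGame:
    """Hypothetical stand-in for the real game wrapper used in the project."""
    def __init__(self):
        self._score, self._t = 0, 0
    def over(self):
        return self._t >= 1000
    def observe(self):
        # Example inputs: distance to next obstacle, its height, current game speed.
        return [1.0, 0.0, 0.5]
    def jump(self):
        pass
    def duck(self):
        pass
    def step(self):
        self._t += 1
        self._score += 1
    def score(self):
        return self._score

def eval_genomes(genomes, config):
    """Assign each genome a fitness equal to the score it reaches in one run."""
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        game = DinoGame()
        while not game.over():
            jump_out, duck_out = net.activate(game.observe())
            if jump_out > 0.5:
                game.jump()
            elif duck_out > 0.5:
                game.duck()
            game.step()
        genome.fitness = game.score()

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "neat_config.txt")        # assumed config file (3 inputs, 2 outputs)
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))
winner = population.run(eval_genomes, 50)      # evolve for up to 50 generations
```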
Mandelbrot & Julia Set Visualiser
Created a real-time Mandelbrot and Julia set visualizer using C++ and the SDL2 library. The Julia set dynamically updates based on the mouse position, allowing users to interactively explore both fractals and zoom in almost indefinitely, revealing their intricate self-similarity and complexity.
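Both fractals come from the same escape-time iteration z ← z² + c: the Mandelbrot set varies c per pixel with z starting at zero, while a Julia set fixes c (in the project, tied to the mouse position) and varies the starting z. Below is a small NumPy sketch of that iteration, not the original C++/SDL2 renderer; the Julia constant is an arbitrary example.

```python
import numpy as np

def escape_counts(c, z, max_iter=200):
    """Iterate z <- z^2 + c per pixel and record when |z| first exceeds 2."""
    z = z.copy()
    c = np.broadcast_to(c, z.shape)
    counts = np.full(z.shape, max_iter)
    alive = np.ones(z.shape, dtype=bool)
    for i in range(max_iter):
        z[alive] = z[alive] ** 2 + c[alive]
        escaped = alive & (np.abs(z) > 2.0)
        counts[escaped] = i
        alive &= ~escaped
    return counts

# Pixel grid over the complex plane.
xs = np.linspace(-2.0, 1.0, 800)
ys = np.linspace(-1.2, 1.2, 640)
grid = xs[None, :] + 1j * ys[:, None]

# Mandelbrot: c varies per pixel, z starts at 0 everywhere.
mandelbrot = escape_counts(grid, np.zeros_like(grid))
# Julia: c is a fixed constant, z starts at each pixel's coordinate.
julia = escape_counts(-0.8 + 0.156j, grid)
```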
I'm a very competitive person and enjoy testing my skills in real-world challenges. Over the years, I've participated in a variety of hackathons and algorithm competitions, including the Algotrade Hackathon 2024, AI-Battlegrounds Hackathon 2024 (which I won), and the National Competition in Algorithms (2017 & 2018), where I gained valuable experience in creative problem-solving under pressure.
Lumen Data Science 2023 - Audio Instrument Classification
This project was a team effort between my brother and me for the Lumen Data Science competition. We tackled the problem of audio instrument classification using deep learning, training models such as VGG16, SqueezeNet, and ResNet18 to classify which instruments appear in a given audio clip, reaching up to 90.4% accuracy.
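A minimal sketch of the general recipe, fine-tuning a ResNet18 on log-mel spectrograms with a multi-label head (several instruments can appear in one clip). The sample rate, spectrogram parameters, number of classes, and learning rate here are illustrative assumptions, not the competition setup.

```python
import torch
import torch.nn as nn
import torchaudio
import torchvision

NUM_INSTRUMENTS = 11          # assumed number of classes, for illustration only

# Log-mel spectrogram front end (parameters are illustrative).
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=1024,
                                           hop_length=512, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def waveform_to_image(waveform):
    """Turn a mono waveform (1, samples) into a 3-channel 'image' for the CNN."""
    spec = to_db(mel(waveform))                # (1, n_mels, frames)
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)
    return spec.repeat(3, 1, 1)                # copy the channel to match ImageNet input

# ResNet18 backbone with a multi-label head (one logit per instrument).
model = torchvision.models.resnet18(weights="DEFAULT")   # torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, NUM_INSTRUMENTS)
criterion = nn.BCEWithLogitsLoss()             # multi-label: instruments can co-occur
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(batch_waveforms, batch_labels):
    """One optimisation step on a list of waveforms and a (batch, classes) label matrix."""
    images = torch.stack([waveform_to_image(w) for w in batch_waveforms])
    logits = model(images)
    loss = criterion(logits, batch_labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```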
Ludum Dare 2025 - Dream Depths
This game was created during a 3-day game jam by a team of 4 developers. The theme challenged us to design a game centered around exploring other people's dreams, a concept inspired by Inception. Dive deeper into the dreams of NPCs, uncover hidden layers of their subconscious, and perhaps even discover The Chosen One. Fight off enemy agents trying to stop you along the way. This was our first time using the Godot game engine, and it was a fun and rewarding learning experience.
Bachelor’s thesis - Deep learning for symbol recognition inside a computer game
I developed a multiplayer combat game with a unique shape-drawing mechanic. The client was written in C++ using the SDL2 library for rendering, while the server was implemented in Python. Communication between the client and server was handled using TCP sockets.
A ResNet18 model was fine-tuned to recognize the symbols that players drew on screen. Based on the classification results (the model achieved 99% accuracy), players could perform different types of attacks.
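To illustrate the client-server side, here is a minimal sketch of a Python server that receives a drawn-symbol image over TCP and replies with a class index. The length-prefixed message format and the classify() stub are assumptions made for illustration, not the thesis's actual wire format or inference code.

```python
import socket
import struct

def classify(image_bytes):
    """Hypothetical stand-in for the fine-tuned ResNet18 inference call."""
    return 0  # index of the recognized symbol

def recv_exact(conn, n):
    """Read exactly n bytes from the socket (TCP may deliver data in pieces)."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("client disconnected")
        data += chunk
    return data

# Assumed protocol: client sends a 4-byte big-endian length followed by raw image
# bytes; the server replies with a 4-byte big-endian class index.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("0.0.0.0", 5000))
    server.listen()
    conn, addr = server.accept()
    with conn:
        (length,) = struct.unpack("!I", recv_exact(conn, 4))
        image_bytes = recv_exact(conn, length)
        label = classify(image_bytes)
        conn.sendall(struct.pack("!I", label))
```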
Master’s thesis - Emulation of Guitar Effects Using Machine Learning
This thesis explores the use of machine learning to emulate guitar effect pedals such as distortion and reverb. Traditional analog and DSP-based designs can be complex and time-consuming to model, especially due to the nonlinear and time-dependent nature of audio processing. The project investigates both black-box and gray-box approaches, starting with fully connected networks and advancing to architectures such as LSTMs, WaveNet, TCNs, and Transformers. Gray-box methods such as genetic algorithms and differentiable DSP are also explored and compared.
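As a taste of the black-box direction, here is a small PyTorch sketch of a TCN-style model that maps a clean guitar waveform to the effected one. The channel count, depth, MSE loss, and placeholder data are illustrative assumptions; the thesis compares several architectures and losses rather than prescribing this one.

```python
import torch
import torch.nn as nn

class CausalTCNBlock(nn.Module):
    """One dilated causal 1-D convolution block, in the spirit of TCN/WaveNet models."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation          # left-pad so the convolution stays causal
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, dilation=dilation)
        self.act = nn.Tanh()

    def forward(self, x):
        y = nn.functional.pad(x, (self.pad, 0))
        return x + self.act(self.conv(y))      # residual connection

class EffectModel(nn.Module):
    """Black-box model: maps a clean waveform to the effected waveform."""
    def __init__(self, channels=16, num_blocks=6):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            [CausalTCNBlock(channels, dilation=2 ** i) for i in range(num_blocks)]
        )
        self.out = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                      # x: (batch, 1, samples)
        h = self.inp(x)
        for block in self.blocks:
            h = block(h)
        return self.out(h)

model = EffectModel()
criterion = nn.MSELoss()                       # error-to-signal style losses are also common
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a (clean, effected) pair of waveform segments.
clean = torch.randn(8, 1, 4096)                # placeholder batch, not real audio
target = torch.randn(8, 1, 4096)
loss = criterion(model(clean), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```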