I'm a software engineer specializing in backend, serverless, cloud, and distributed systems. I wrangle microservices for a living.
Software Engineer
Brought on board to help migrate our monolithic Django application to AWS. After redeveloping the backend (price scraping, price calculations, price delivery, client reports), we deployed it as a collection of AWS Lambda functions and Glue ETL jobs. Some highlights:
- Developed a new bespoke web scraper (BeautifulSoup, PyQuery, Requests, MongoDB) to improve the observability, scalability, and accuracy of scraped data. This boosted scraping throughput up to 4x and improved overall data health.
- Redeveloped the internal pricing engine (pandas/NumPy/SciPy), speeding up the price delivery cycle from once a day to once every 6 hours.
- Built new ELK pipelines and dashboards, improving the quality of our clients' pricing reports by exposing new pricing metrics and reinforcing their confidence in our SaaS.
- Developed and maintained third-party integrations with our stack (e.g. Shopify and Magento).
- Improved our SRE/DevOps procedures: introduced postmortems and exposed new performance and health metrics to underpin SLAs and SLOs.
- Championed the adoption of standard software development practices such as code reviews, pytest, TDD, CI/CD, and retrospectives, vastly increasing the tech team's productivity and stakeholders' visibility.
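To illustrate the extraction step at the heart of a scraper like the one above, here is a minimal sketch using BeautifulSoup. The HTML structure, CSS selectors, and field names are invented for the example; the real scraper's selectors and MongoDB persistence layer are not shown.

```python
from bs4 import BeautifulSoup

# Hypothetical product-listing markup standing in for a real scraped page.
SAMPLE_HTML = """
<html><body>
  <div class="product" data-sku="SKU-1">
    <span class="name">Widget</span>
    <span class="price">£19.99</span>
  </div>
  <div class="product" data-sku="SKU-2">
    <span class="name">Gadget</span>
    <span class="price">£4.50</span>
  </div>
</body></html>
"""

def extract_products(html: str) -> list[dict]:
    """Parse product cards into structured records, ready to insert into MongoDB."""
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for card in soup.select("div.product"):
        records.append({
            "sku": card["data-sku"],
            "name": card.select_one(".name").get_text(strip=True),
            # Strip the currency symbol and store the price as a number.
            "price": float(card.select_one(".price").get_text(strip=True).lstrip("£")),
        })
    return records
```

Keeping parsing as a pure function of the raw HTML (rather than interleaving it with HTTP calls) is what makes throughput and data-health metrics easy to attach: fetched pages can be archived and re-parsed, and every record can be validated before it reaches the database.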
Senior Software Engineer
As part of the Production Engineering team, my role revolved around developing and maintaining the servers and APIs of our 'private cloud' infrastructure. It was a formative role that honed my skills in design patterns, distributed systems, client-server communication, databases and more. During my tenure at Imagination I took part in an array of wildly different projects (backend, frontend, AI/ML, and even DevOps/SRE and Unix administration). Some of my favourite highlights:
- Developed an in-house alternative to Ansible to control a large fleet of servers, abstracting away bare-metal and providing a simple API to execute various workloads.
- Applied machine learning concepts to our job scheduler to better predict job resource utilisation.
- Integrated Docker into our job scheduling engine, enabling hardware engineers to migrate and run even their more exotic jobs on our grid engine.
- Led development of a custom NFS disk crawler capable of multi-process filesystem traversal at lightning speed (benchmarked at ~30 minutes to crawl 50 TB of NFS storage). The stack comprised Redis, Dramatiq, and MongoDB.
- Designed and implemented the monitoring and 'billing' dashboards of our private cloud.
- Improved observability of our internal compute engine's APIs by building tooling on InfluxDB, Grafana, React, and MongoDB.
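The core idea behind the parallel crawler above can be sketched with the standard library alone: treat each directory as an independent unit of work and fan the units out to a worker pool. In the real system, Dramatiq workers pulling tasks from Redis played the role of the thread pool below, and results were written to MongoDB rather than collected in memory; everything here is a simplified stand-in.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def scan_dir(path: str):
    """Scan one directory: return its files as (path, size) pairs plus subdirs to enqueue."""
    files, subdirs = [], []
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                subdirs.append(entry.path)
            elif entry.is_file(follow_symlinks=False):
                files.append((entry.path, entry.stat().st_size))
    return files, subdirs

def crawl(root: str, max_workers: int = 8):
    """Breadth-first traversal: each level's directories are scanned in parallel.

    Threads suit the NFS case because the work is I/O-bound; a distributed
    queue (Redis + Dramatiq) generalises the same pattern across machines.
    """
    results, pending = [], [root]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending:
            batch, pending = pending, []
            for files, subdirs in pool.map(scan_dir, batch):
                results.extend(files)
                pending.extend(subdirs)
    return results
```

Because directories are independent tasks, the traversal parallelises naturally: over NFS, where per-call latency dominates, fanning out many concurrent `scandir` calls is what turns a multi-hour walk into minutes.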