Are you someone who loves working in a hands-on environment without getting stuck in meetings all day? We’re looking for a lead data engineer to take on hands-on strategic challenges: migrating ETL scripts from SQL Server to Airflow, evaluating the current team composition, or testing the viability of a new Apache project in a production environment, for example. If delivering value to millions of customers and applying big data tools and techniques to real-world problems make you happy, then we want to talk to you.
What does an average week look like?
We like to keep a healthy balance between focus and variety. At the start of the day, you grab a coffee together with a team of big data engineers who build open-source solutions that serve millions of customers. You optimize the time spent developing, but when the situation calls for it you check in with the Analytics Translator to make sure everyone is still heading in the right direction. On Friday the variety kicks in: you get together with your colleagues, incredibly smart data engineers, to discuss new developments in the field, and you facilitate or participate in knowledge sharing through training.
- Work together with Xomnians to lead the design and implementation of large-scale big data projects for our clients
- Shape our big data capabilities, especially engineering, together with the lead data scientist and Chief Technical Officer
- Refine our existing data engineering talent programs
- Provide training to our talents on topics of your expertise (e.g. Kafka, Spark, Airflow)
- Host events (e.g. VR gaming) for team building
You are professional and passionate about big data, with a healthy business understanding. You are a proactive problem solver, and you like to share knowledge. You test out new ideas, automate as much as possible, and think about how to make your life, and the lives of the people around you, easier. You like to question why we do the things we currently do, but you are also able to find pragmatic solutions to complex problems.
- 3+ years of work experience as a hands-on big data engineer
- A Bachelor's or Master's degree in computer science, computer engineering, or equivalent
- Experience with one of the major cloud environments (GCP, AWS, Azure)
- Extensive knowledge of at least one of the following: setting up data pipelines, deploying machine learning models, designing big data infrastructure, or leading software engineering projects
- Experience with Big Data Tools/Platforms (e.g. Spark, Kafka, Airflow, Hadoop)
- Fluent in a programming language applicable to Data Engineering (e.g. Scala, Python, Java)
- Knowledge of handling different project lifecycle stages (e.g. staging, production)
- Knowledge of agile workflows and how they can be applied to data engineering