In this role as Data Infrastructure Engineer you will be responsible for the design, build, and technical support of our cloud solutions on Google Cloud Platform and Azure. You will build the data infrastructure for Data & Analytics services, covering both business data and machine data.
As a member of the IT Big Data & Analytics team, you join a group of best-in-class engineers organized in Agile teams. Our mission is to translate business challenges into Data & Analytics solutions. You will be part of one of our 50 (!) DevOps teams, working together in a Scaled Agile (SAFe) environment. Our teams are a mix of young talent and senior specialists, sharing a mission to deliver business value for our stakeholders using modern data analytics technology. In addition, you will work closely with the Cloud Centre of Excellence (GCP & Azure) to ensure maximum reusability and adherence to security standards.
Our client is a crucial player in the fast-moving semiconductor industry, which means constant change. You will work with your team to capture the continuous, dynamic refinement of business priorities and data & analytics user stories. You are looked to, and rewarded, for being able to constantly switch between business value and technical implementation. Together with your team you will drive ASML’s ambition by building a modern Data & Analytics platform, using modern cloud components and fully leveraging the Agile mindset.
What do we expect from you?
You have a strong software engineering background, for example an MSc in Computer Science or equivalent. You should have at least 2-4 years of experience in a DevOps / Cloud Engineering role, preferably in a highly complex environment like ours. You work with cloud platforms such as MS Azure and Google Cloud on a daily basis. You should not only feel very comfortable with some (not all!) of the technologies below, but also be able to take your colleagues along in driving cloud-native services. A big plus is experience with enabling Data Science & Data Engineering use cases!
You should recognize yourself in:
- Being experienced in and passionate about working with Big Data technologies:
- Distributed data processing technologies like Spark and Kafka
- Data storage at scale, for example HDFS, HBase, Druid, Cassandra
- Being fluent in at least one programming language, such as Python, Julia, R, Scala, or Java
- Having experience with productization and software system automation: CI/CD, configuration management, logging, alerting and monitoring, Kubernetes & Docker
What can you expect from us?
- A competitive salary based on your experience and education
- Good secondary benefits, such as 25 holiday days, flexible working hours, and an 8% holiday allowance
- Courses to develop yourself professionally and personally
- Discount on your healthcare and referral bonuses
Benefits of applying via Trinamics
- A choice of more than 400 technical vacancies.
- You benefit from our large network of companies.
- Once you start working, we stay in touch with you.
- Always a personal consultant for all your questions.