About the Client
We are one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world.
About the Role
As an Application Engineer, you'll be at the forefront of implementing business services using Python on our Azure Datalake platform. You'll design and develop backend applications that enable our customers to run their accounting processes efficiently while generating data for critical reporting needs. This is a true DevOps position where you'll own your code from development through production deployment.
The team builds on a lakehouse architecture using Azure Databricks, Azure Data Factory, and Azure Data Lake (Delta Lake). Python is the main programming language, and PySpark is used for large-scale data engineering.
Responsibilities
Design and develop data transformation services and data flows using Python/PySpark
Translate business requirements into technical specifications and implement robust solutions
Architect end-to-end applications that combine data elements from multiple domains and systems
Break down complex work into manageable tasks, estimate timelines, and plan delivery
Contribute to the design and implementation of robust application architecture
Implement high levels of test automation to ensure quality and reliability
Conduct peer code reviews and ensure adherence to best practices
Collaborate with stakeholders to understand requirements and advise on efficient solutions
Apply Agile Scrum and DevOps methodologies throughout the development lifecycle
Support operations during quarterly closing cycles
Ensure technical designs meet quality requirements for performance, reliability, and availability
Requirements
Bachelor's or Master's degree in a relevant quantitative field (Computer Science, Mathematics, Engineering, or equivalent)
6+ years of experience in designing and implementing reliable, scalable software through the full development lifecycle
Strong Python proficiency with hands-on experience in PySpark and SQL
In-depth experience with Spark-based data lake technologies, including Azure Databricks, Azure Data Factory, and Azure Data Lake
Expertise with relational databases such as Oracle or PostgreSQL
Experience with integration technologies such as REST and SOAP APIs
Nice-to-Have Skills
Experience with test-driven development practices
Knowledge of financial or accounting systems
Cloud platform experience beyond Azure
Experience working in international teams
Agile/Scrum certification