About the Client
Our client is a provider of IT solutions, systems, hardware and software services in the banking sector. It operates data centres, banking systems, POS terminals and hardware (servers and PCs), and is responsible for the development, implementation, support and servicing of banking software and its operation in Central and Eastern Europe.
About the Role
We are looking for an experienced Databricks Data Engineer to join our team. The Group Data Acquisition department is a reliable and professional IT partner for business customers, fulfilling the group's BI, regulatory and non-regulatory analytical needs. It serves as the primary integration point for acquiring all relevant data sources (group data systems, local AT banks, CEE countries) for further processing and analysis in support of our business partners and users.
Responsibilities
Team responsibilities:
- Building the Data Lake and other Databricks-based solutions to maximize their usability and benefit for the bank
- Building know-how in cloud technologies to develop cloud-based solutions
- Supporting data lake consumers in their free-form analytics, data discovery and data science activities
- Acquiring data from the local AT banks, CEE countries and various group source systems
- Acting as the first data quality (DQ) gate, ensuring the data fulfills predefined constraints and rules
Individual responsibilities:
- Integrating and implementing ETL processes for various source systems, covering both batch and streaming data integrations, within Databricks on MS Azure
- Contributing to the architecture and design of our Databricks-based Data Lake and its extensions
- Collaborating with business and IT users in the areas of Machine Learning, Data Science, BI and DWH
- Identifying and implementing innovations and improvements that bring stability, robustness and future extensibility to the solution
Requirements
- A person passionate about data and technical disciplines
- A reliable person who drives topics independently, with full responsibility for the quality and efficiency of the final outcome
- Knowledge of cloud or Hadoop technologies
- Proficiency in Python and SQL
- Experience with data ingestion procedures
- Strong analytical skills with keen attention to detail
- Fluent English (both written and spoken communication required)
Nice to Have Skills
- Experience with DataOps or MLOps implementations