Data Engineering

Full Time · IT

The primary function of the data engineering role is to deliver high-quality data to business and end users across the entire Specialist Bank. This is achieved either through self-service data products or by collaborating closely with the Analytics squad to provide modelled data warehouses for reporting and analytics. The role primarily operates in cloud environments, particularly Azure, but also involves on-premises environments; the long-term goal is to migrate all on-premises systems to the cloud and eventually decommission them. The Data Engineering squad falls under the Global Channel Data and Insights team. In addition to working closely with the Analytics squad, the candidate will collaborate with Architecture, DevOps, Database Administrators, all other Data Engineering squads, and upstream application teams.

Key Responsibilities:

- Maintain and enhance existing SQL Server workloads (stored procedures, views, ETL processes).
- Extract and rationalise business logic embedded in T-SQL, SSIS packages, SSRS reports, and SSAS models.
- Refactor SQL-based transformations into scalable Spark-based ELT pipelines in Fabric.
- Translate dimensional models into modern Lakehouse structures.
- Support phased migration and controlled decommissioning of legacy assets.
- Translate architecture designs into technical implementations.
- Build dynamic metadata-driven data ingestion patterns using SQL Server SSIS, Azure Data Factory, and Databricks.
- Build and maintain the Enterprise Data Warehouse (using the Kimball methodology).
- Build and maintain business-focused data products and data marts.
- Build and maintain Azure Analysis Services databases and cubes.
- Perform ad hoc data analysis and 'data wrangling' using Azure Synapse Analytics and Databricks.
- Implement and deliver all stages of data engineering using Investec Data Experience (IDX) templates into ODP (One Data Platform).
- Develop, implement, and maintain relevant documentation, guidelines, checklists, and policies to promote continuous integration, ensure and improve data security, and reduce the possibility of "human error".
- Share support and operational duties within the team.
- Work closely with end users to understand their business and their data requirements.

Core Skills and Knowledge:

- Knowledge of distributed data mesh deployment methodology.
- 6+ years of experience as a data engineer.
- Experience in building robust and performant ETL processes.
- Excellent data analysis and exploration skills using T-SQL.
- Demonstrated expertise in developing and optimising large-scale data processing applications using Apache Spark; proficient in writing complex Spark jobs to process and analyse vast datasets efficiently.
- Skilled in leveraging Pandas for data manipulation and analysis tasks; capable of transforming raw data into actionable insights using Pandas' data structures and functions.
- Extensive experience with SQL Server and SSIS.
- Knowledge and experience of data warehouse modelling methodologies (Data Vault 2.0, Kimball).
- Experience in Azure with one or more of the following: Azure Data Factory, Databricks, Azure Synapse Analytics, ADLS Gen2.
- Understanding and experience of Azure Bicep templates for automating the deployment of pipelines.
- Python and SQL (stored procedures, functions, RDBMS architecture).
- Understanding of systems development and project management approaches.
- Ability to build and maintain Analysis Services databases and cubes (both multidimensional and tabular).
- Experience using source control, preferably Git.
- Basic knowledge of Infrastructure as Code (IaC): scripting and automation via Azure CLI, PowerShell, and Bash, with ARM (JSON) and Bicep templates.
- Understanding and experience of DevOps deployment pipelines.
- Effective communication and collaboration skills to work in cross-functional teams, participate in code reviews, and communicate technical concepts to non-technical stakeholders.
- Willingness to learn new technologies, keep up with industry trends, and adapt to evolving development practices and tools.
- Understanding of GitHub Copilot or Codex tools.
- Understanding of prompts and contexts.
- Basic understanding of Markdown files for custom agents and agent skills.
