Support Centre

Tech Lead - Data Engineering

Location: London
Hours Per Week: 40
Vacancy Type: Permanent
22 Jan 2021
Job Description
Technical Lead - Data Engineering

We are the UK's biggest homeware retailer and the largest adopter of AWS serverless technology in Europe. We have recently transformed our digital platform, using the latest technology to build highly scalable, performant cloud-based data infrastructure, and we now have an exciting opportunity for another Data Engineer to join our rapidly growing agile team.

This is a technical leadership practitioner role. You will lead and coach technical best practice within a squad and will understand and help to define the overall technical strategy and the application of technical standards across the domain.

You will have excellent depth and breadth of skills: depth in a core skill, and enough breadth in other skills to allow the team to deliver greater value and increased flow.

You will stay on top of tech trends, experimenting and learning. You will drive the use of innovative technologies, learning from members of our Communities.

You are an expert in Lean and Agile tools, techniques and ways of working, and how they can be continuously improved, fostering a culture of experimentation and learning. You will coach, mentor and train colleagues, driving continuous improvement and adoption of new techniques.

You will work closely with other technical leads, engineers and architects to shape the future of the data domain. You will mentor and coach the squad's engineering capability, understanding development areas for individuals and helping to shape personal development plans with the Principal Engineer.


You will be joining our Data Insight, Science and Engineering Team. Your primary focus will be on building, expanding and optimising our data pipelines. You will develop high-performance data products to further enable our data-driven approach, and you will support the improvement of our data self-service capability, building the technology that allows users to access the data they need on demand.

You will be part of a highly skilled, self-organising team building forward-thinking solutions and creating new capabilities to support multiple cross-functional teams. We are continuously looking to improve our technology stack, data quality and reliability, and your vision and ambition will help shape our solutions towards data-driven decisions across the business.

You will be working to define AWS cloud-based infrastructure as code using CI/CD practices, developing high-quality code in Python, and designing data solutions that align with business goals. The ideal candidate is self-directed, comfortable with challenging and leading on best practice, and able to adapt to regularly shifting business requirements and occasional ambiguity.
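
As a purely illustrative sketch of this style of work (not Dunelm's actual stack), the snippet below defines a small piece of serverless data infrastructure as code with the AWS CDK v2 in Python; every bucket, function and asset name here is hypothetical:

# Illustrative only: a minimal AWS CDK v2 stack wiring an S3 landing
# bucket to a transform Lambda. All resource names are hypothetical.
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_s3_notifications as s3n
from constructs import Construct

class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Landing bucket for raw files
        raw_bucket = s3.Bucket(self, "RawData")

        # Lambda that transforms each new object as it arrives
        transform_fn = _lambda.Function(
            self, "TransformFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(5),
        )

        # Invoke the Lambda whenever an object lands in the bucket
        raw_bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED,
            s3n.LambdaDestination(transform_fn),
        )
        raw_bucket.grant_read(transform_fn)

app = App()
IngestStack(app, "IngestStack")
app.synth()

A stack like this would typically be deployed through a CI/CD pipeline rather than by hand, which is the "infrastructure as code using CI/CD practices" pattern described above.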

This is a fast-paced, hands-on role, well suited to someone who loves coding, clean design, clean architecture and using the latest tools and technology to tackle constantly evolving business and tech challenges.

Responsibilities 
● Create and maintain optimal data pipeline architecture
● Assemble large, complex data sets that meet functional / non-functional business requirements
● Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, redesigning infrastructure for greater scalability, etc.
● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a sketch of this kind of ETL step follows this list)
● Work with stakeholders, including Analytics and BI reporting teams, to assist with data-related technical issues and delivery
● Work with data and analytics experts to strive for greater functionality in our data systems
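
As a hedged illustration of the ETL responsibility above, here is a minimal PySpark sketch of the kind of job that might run on EMR or Glue: it extracts raw JSON from S3, transforms it with Spark SQL and loads partitioned Parquet. All paths, table and column names are invented for the example:

# Illustrative only: extract raw events, aggregate with Spark SQL,
# load curated Parquet. Paths and schema are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON events landed in S3
orders = spark.read.json("s3://example-raw-bucket/orders/")
orders.createOrReplaceTempView("orders")

# Transform: aggregate daily revenue per product with Spark SQL
daily_totals = spark.sql("""
    SELECT order_date,
           product_id,
           SUM(quantity)   AS units_sold,
           SUM(line_total) AS revenue
    FROM orders
    GROUP BY order_date, product_id
""")

# Load: write partitioned Parquet for downstream warehouse ingestion
(daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_totals/"))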

Experience required 
We are looking for a candidate with experience in a Data Engineer role. You should also have hands-on experience in most of the following key areas:
● Object-oriented and functional scripting, primarily in Python
● Building and optimising ETL / ELT data pipelines
● Advanced working knowledge of SQL and experience with relational databases, as well as familiarity with one or more cloud-based data warehouses such as Snowflake, Redshift or BigQuery
● Experience using NoSQL databases such as DocumentDB or MongoDB
● Working with AWS cloud services in production (CloudFormation, API Gateway, AWS Lambda, Step Functions, SSM, SNS, SQS, Firehose, S3, EMR/Glue, SageMaker, etc.)
● Working with raw data in structured, semi-structured and unstructured forms
● Experience combining large, disconnected datasets using relevant tools/frameworks such as Spark (see the sketch after this list)
● Experience of source control and of Continuous Integration, Delivery and Deployment through CI pipelines
● Supporting and working with BI and Analytics teams in a dynamic environment
● Able to collaborate and effectively pair program with other engineers
● Strong analytical skills and problem-solving skills
● Knowledge of Scrum, Kanban or other agile frameworks is beneficial, but not required
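
To make the Spark point above concrete, here is a small hedged sketch of combining two disconnected datasets with PySpark; the sources, S3 paths and join key are invented for illustration:

# Illustrative only: joining two independent datasets on a normalised key.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("combine-example").getOrCreate()

# Two hypothetical sources landed separately in S3
web_events = spark.read.parquet("s3://example-bucket/web_events/")
store_sales = spark.read.parquet("s3://example-bucket/store_sales/")

# Normalise the join key so the two sources line up
web = web_events.withColumn("customer_id", F.upper("customer_id"))
store = store_sales.withColumn("customer_id", F.upper("customer_id"))

# A full outer join keeps customers seen in only one channel
combined = web.join(store, on="customer_id", how="full_outer")
combined.write.mode("overwrite").parquet("s3://example-bucket/customer_360/")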
 
If the opportunity to be part of shaping and transforming Dunelm’s Digital presence excites you, please apply for our immediate attention!