Senior Data Engineer

As one of the UK’s leading digital retailers, Sainsbury’s Argos generates a huge amount of data, and we are always looking for better ways to dig into it for insights that deepen our understanding of our customers’ needs and optimise every aspect of our business.

The next stage of our journey is to adopt the latest Big Data tools from the Hadoop ecosystem, accelerating our ability to understand ever-increasing volumes of data.

To help with this journey, we are looking for a Senior Data Engineer with recent experience in Big Data development as well as experience with traditional database and data warehousing environments.

The role requires someone who can provide design leadership for the existing Data Engineers: guiding their work, providing technical assistance where needed, and reviewing and verifying the quality of the delivered work.

The role also involves combining existing internal and external sources of data to construct higher-order data structures that allow specialist Reporting, Analytics and Machine Learning teams to generate actionable intelligence for the wider business.
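
To give a flavour of this work, here is a minimal PySpark sketch of combining two sources into a higher-order, customer-level structure. All table and column names are hypothetical, chosen purely for illustration:

# Minimal PySpark sketch: join a hypothetical internal sales table with an
# external demographics feed to build a per-customer summary table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-summary-sketch").getOrCreate()

sales = spark.table("raw.sales")                 # hypothetical internal source
demographics = spark.table("ext.demographics")   # hypothetical external source

# Aggregate transactional rows to one row per customer, then enrich with
# the external feed so downstream teams get a single, joined structure.
customer_summary = (
    sales.groupBy("customer_id")
         .agg(F.count("*").alias("order_count"),
              F.sum("order_value").alias("total_spend"))
         .join(demographics, "customer_id", "left")
)

# Publish as a Hive table for the Reporting, Analytics and ML teams.
customer_summary.write.mode("overwrite").saveAsTable("curated.customer_summary")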

As well as bringing technical skills, the ideal candidate is someone who can work with people throughout the company: with source system teams, to build a full understanding of the data being loaded, and with the consumers of that data, to ensure we are creating the right data structures for their needs.

Technologies in our stack for this role:

Cloudera Hadoop (Hive, Impala, Spark, HBase, Sqoop, Flume), Talend, Python, Scala, Java, Snowflake, Kafka, Postgres, Jenkins, AWS and Linux.

Team task:

To develop the company’s Big Data Platform so that it becomes the trusted source of data for the entire organisation.

Challenges we're excited about tackling:

Building a holistic view of our customers that helps the Business understand their needs.
Supporting the Data Science communities and Machine Learning teams by supplying cleansed, productionised data feeds.
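
As a rough illustration of what “cleansed, productionised” means in practice, the following PySpark sketch applies typical cleansing steps to a feed; the table and column names are hypothetical:

# Minimal cleansing sketch: dedupe, drop unusable rows and normalise types
# before a feed is handed to Data Science. Names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feed-cleansing-sketch").getOrCreate()

raw = spark.table("raw.customer_events")  # hypothetical raw feed

cleansed = (
    raw.dropDuplicates(["event_id"])                        # remove replayed events
       .filter(F.col("customer_id").isNotNull())            # drop rows we cannot attribute
       .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalise timestamps
       .withColumn("channel", F.lower(F.trim(F.col("channel"))))  # canonical casing
)

cleansed.write.mode("overwrite").saveAsTable("curated.customer_events")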

We are looking for people with:

Exposure to HDFS, YARN, Hive, Impala, Sqoop and Flume
Exposure to the Cloudera stack is a bonus
Experience using Apache Kafka as a streaming data source (see the sketch after this list)
Programming experience in Python or Java, plus Linux shell scripting
Data loading with ETL tools such as Talend or Informatica
Cloud computing knowledge with AWS, including working with S3 and EMR
Traditional database development experience with one or more of the major databases: SQL Server, Oracle, Teradata or MySQL
Knowledge of data warehousing and the creation of OLAP dimensions and facts from OLTP data sources
Experience with data cleansing, data enrichment and reconciliation with source systems
Data modelling with experience in creating optimal data structures
Query performance tuning
An understanding of data security, encryption and GDPR
Experience working within an Agile development team
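
On the Kafka point above, the sketch below shows the shape of a streaming consumer using the kafka-python client; the topic name and broker address are hypothetical placeholders:

# Minimal Kafka consumer sketch (kafka-python). In a real pipeline the
# events would be validated, enriched and landed in HDFS or a Hive table.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "customer-events",                    # hypothetical topic
    bootstrap_servers="broker:9092",      # hypothetical broker address
    group_id="data-engineering-sketch",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value  # already deserialised to a dict
    print(event)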

Working environment:

Flexible approach to working hours, including the ability to work from home when required. Casual dress code. Good social environment, including “Super Social Friday”. Close to all the amenities and transport links in Victoria, London.

Benefits:

Bonus – 20%
Private Healthcare
Holidays – 24 days, with the option to buy up to 5 more each year
Company pension
Discount at Argos, Sainsbury’s and Habitat
Sharesave scheme – a risk-free way to buy shares at a discounted rate
Share Purchase Plan
Childcare vouchers
Cycle to work scheme
Season Ticket Loan


Apply now