The challenge – As an enterprise producing millions of appliances each year and employing more than 50,000 people, Electrolux already generates massive amounts of data in hundreds of systems across the globe, from sales figures to sensors in factory robots and smart home appliances. And with an explosion of IoT devices just around the corner, these data volumes keep growing.

The team – The Electrolux Global Data Science team supports business functions across the entire company and helps them turn raw data into insights and actions. The team is a key enabler of the company's digital transformation. It is the center of excellence for data handling and advanced analytics, and it is also responsible for the Electrolux global data platform.

The role – As a Data Engineer, you are expected to take on either a platform engineering role (designing and implementing stream processing, ETL, and pipeline automation toolchains) or a delivery engineering role (configuring data flows from source systems to the data lake and/or various lakeshore access-layer stacks).

What you will be doing:

  • Work across the entire platform tech stack used in the Electrolux Global Data Platform.
  • Leverage your understanding of software architecture and software design patterns to design and maintain the tools and technologies used by the team.
  • Write maintainable, scalable, and future-proof code.
  • Identify and contribute to best practices and design patterns for data engineering and data asset management.
  • Work within established project management frameworks, including Scrum (Agile/Kanban) and the Atlassian stack (Jira, Confluence, Bitbucket, etc.).

Required experience:

  • Solid programming experience in Scala and Python, with good practices such as writing maintainable code.
  • Experience with DevOps and automating software development processes; comfortable with technologies such as Jenkins, Docker, and Kubernetes.
  • Solid experience working in the cloud (preferably Azure, but AWS/GCP are also relevant).
  • Good, broad knowledge of computer security (certificates, web authentication, etc.).
  • Experience with web development.
  • Comfortable with Linux/Bash and scripting.
  • Good understanding of data structures and database technologies.
  • Strong personal drive, fast learner, technology agnostic, pragmatic.
  • Ability to manage multiple, changing priorities while working effectively in a joint team.
  • Passionate about big data and machine learning technology.
  • Fluent in English, any other language is a plus.
  • B.Sc. or M.Sc. in Computer Science (or equivalent).

Preferred experience:

  • Azure Databricks or Apache Spark.
  • Experience with architecture/cloud solution design.
  • M.Sc. in Computer Science (or equivalent).
  • Experience using data ingestion/engineering tools (both code-based and drag-and-drop ETL, data wrangling, data quality, warehousing, etc.).

