Data Engineer - Tlalnepantla, México - The Clorox Company

The Clorox Company
Verified company
Tlalnepantla, México

2 weeks ago

Posted by:

Rodrigo Fernández

Talent recruiter for beBee


Description

At Clorox, we champion people to be well and thrive by doing the right thing, putting people at the center, and playing to win.

Led by our IGNITE strategy, we build brands that make a positive difference in people's lives around the world.

And we know that success requires head, heart, AND guts — all three, every day — coming together to work simpler, faster, bolder, and more inclusively.

Interested? Join us to #IgniteYourCareer


Your role at Clorox:


Clorox's Enterprise Data Strategy and Operations team is seeking a talented Data Engineer to work in our Data Engineering squad and perform the Data & Analytics pipeline development work that will equip, enable, and empower our business with high-value enterprise data assets.


We are looking for a Data Engineer who is responsible for ensuring the availability and quality of data needed for analysis and business transactions.

This includes data integration, acquisition, cleansing, and harmonization, transforming raw data into curated datasets for data science, data discovery, and BI/analytics.

Responsible for developing, constructing, testing, and maintaining data sets and scalable data processing systems.

Plays a critical role in creating and analyzing deliverables that provide the content needed for fact-based decision making, facilitating successful collaboration with supported stakeholders, and building aligned relationships that drive business results and project delivery.

Works closely with stakeholder teams to develop and maintain business process artifacts, aid project discovery, and deliver projects in a way that ensures ongoing supportability.

Analyzes, designs, and develops best practice business changes through technology solutions.


In this role, you will:

  • Design and build data pipelines that schedule and orchestrate a variety of tasks such as extracting, cleansing, transforming, enriching, and loading data per business needs (see the illustrative sketch after this list).
  • Build the infrastructure required for optimal ETL/ELT of data from a wide variety of data sources using SQL and Azure/GCP 'big data' technologies. Build analytics tools that use the data pipeline to provide actionable insights into key KPI metrics.
  • Develop and maintain complex ETL/ELT pipelines, data models, and standards for data integration and data warehousing projects, from sources to sinks
  • Develop scalable, secure, and optimized data transformation pipelines and integrate them with downstream sinks
  • Work with the DataOps team on code migration and continuous improvements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
  • Work with Senior Data Engineers, onsite/offshore data engineers and data analysts
  • Work with architecture and data governance teams to create solution designs and contribute to the data roadmap.
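
As an illustration of the pipeline work described above, here is a minimal PySpark sketch of a batch extract-cleanse-transform-load job. The file paths, column names, and aggregation are hypothetical placeholders, not Clorox systems or data.

    # Minimal, illustrative PySpark batch pipeline (hypothetical paths and columns).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curated-sales-example").getOrCreate()

    # Extract: read raw files from a hypothetical landing zone.
    raw = spark.read.option("header", True).csv("/landing/sales/*.csv")

    # Cleanse: drop rows missing key fields and remove duplicate orders.
    clean = (
        raw.dropna(subset=["order_id", "order_date", "amount"])
           .dropDuplicates(["order_id"])
    )

    # Transform / enrich: apply types and derive a reporting month.
    typed = (
        clean.withColumn("order_date", F.to_date("order_date"))
             .withColumn("amount", F.col("amount").cast("double"))
             .withColumn("order_month", F.date_format("order_date", "yyyy-MM"))
    )

    # Aggregate into a curated dataset for BI/analytics consumption.
    curated = typed.groupBy("order_month").agg(
        F.sum("amount").alias("total_sales"),
        F.countDistinct("order_id").alias("order_count"),
    )

    # Load: write the curated dataset to a Parquet sink, partitioned by month.
    curated.write.mode("overwrite").partitionBy("order_month").parquet("/curated/sales_monthly")

In practice, a job like this would be scheduled and orchestrated by a tool such as Azure Data Factory or Cloud Composer/Airflow, with monitoring and data quality checks added at each stage.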

What we look for:

  • 4+ years of technology experience, including at least 1 year in a solution design role and 3+ years in data engineering.
  • Excellent collaboration, communication, and visual design skills
  • Hands-on experience working with large datasets and the technology skills to design and build robust big data solutions using Azure/GCP/AWS big data services
  • Strong analytical and problem-solving skills with a proven track record of gathering and documenting comprehensive business requirements.
  • Hands-on experience in data modeling and database design involving any combination of data warehousing and Business Intelligence systems and tools
  • Hands-on experience connecting and integrating with at least one of these platforms: Google Cloud, Microsoft Azure, Amazon AWS
  • Experience with modern cloud data warehouses such as Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, etc.
  • Experience with Python, PySpark, Databricks, and Spark SQL development frameworks
  • Hands-on experience with cloud data infrastructure related to data ingestion, data processing, cloud data warehousing, data monitoring, and data orchestration
  • Web service and API integration experience using REST APIs and JSON
  • Hands-on experience writing complex queries, aggregations, and transformations in ANSI SQL, T-SQL, PL/SQL, or Hive SQL
  • Experience working in Agile or SAFe Agile team structures
  • Bachelor's degree in Computer Science, Computer Engineering, Systems Design Engineering or Mathematics or equivalent working experience.

Additional preferred qualifications:

  • Knowledge of MPP architectures and parallel query processing engines such as Delta Lake, Presto, Azure Synapse, Dremio, etc.
  • Understanding of logical data warehousing and modern data lakehouse architectures
  • Knowledge of legacy data tools such as Oracle Database, ODI, OBIEE, Informatica, Apex, etc.
  • Experience working with low-code analytics and data preparation tools such as Alteryx, Trifacta, Power Apps, etc.
  • Knowledge of governance tools related to data cataloging (Alation
