Job Details
Nationality Requirement: Malaysia
Job Description
Own It:
Architect and implement scalable data solutions using AWS services like S3, Glue, Redshift, Lambda, and Step Functions.
Develop and maintain robust ETL/ELT pipelines that transform raw data into clean, structured, and production-ready datasets.
Ensure the reliability and high performance of all data solutions from design to operation.
Build Trust:
Establish and enforce strong data governance, security, and quality controls across all pipelines and datasets.
Write clean, well-documented code and champion best practices in testing, automation, and version control.
Engage consistently with internal counterparts to resolve issues, fostering strong working relationships.
One Team:
Collaborate closely with analysts, business stakeholders, and IT teams to understand data needs and deliver user-friendly solutions.
Contribute to a knowledge-sharing culture and help elevate the team's technical standards.
Develop positive working relationships with customers, partners, and other department staff.
Move Fast:
Continuously refine and optimize data models (e.g., star schemas) to boost reporting efficiency.
Proactively identify and resolve performance bottlenecks and inefficiencies in data workflows.
Adapt, simplify, and act quickly based on dynamic business needs and market changes.
Thrive in a fast-moving transformation journey, embracing change and driving progress.
Drive Innovation:
Recommend and experiment with new tools or AWS services to enhance engineering efficiency and data capabilities, leading to more robust and future-proof data solutions.
Automate manual processes and champion continuous improvement in data engineering practices, freeing up time for strategic initiatives and accelerating delivery.
Promote a culture of continuous learning and innovation within the team.
Delight the Customers:
Build reliable datasets and pipelines that empower stakeholders to self-serve insights.
Translate business questions into effective data solutions that drive better decision-making and improved customer outcomes.
Requirements:
Minimum of 8-10 years of experience in data engineering, with at least 5 years in cloud-based environments (preferably AWS).
Experience building and managing data models and pipelines for large-scale data environments.
Proficiency in AWS tools: S3, Glue, Redshift, Lambda, Step Functions.
Strong SQL and Python skills for data transformation and automation.
Hands-on experience with various data sources, including structured and unstructured data.
Solid grasp of data warehousing concepts, data modeling, and modern ELT/ETL practices.
Strong problem-solving skills, with the ability to work independently while fostering teamwork and collaboration across cross-functional teams and business units.
Proven ability to thrive in a fast-paced, dynamic, and transformative environment, embracing change and driving progress.
Strategic thinker with strong analytical skills.
Excellent communication and negotiation skills, capable of engaging effectively with diverse stakeholders.
Preferred Requirements:
Experience building data warehouses for a large enterprise.
Experience with Power BI, Snowflake on AWS, or Databricks.
Familiarity with DevOps practices such as CI/CD.
Understanding of AWS-based data security and governance best practices.