Associate Director, Data Engineering

  • Job Category
    Information Technology
  • Salary
    S$9000 - S$13000

Job Description


  • Work with the Engineering team, in collaboration with Aon’s business teams, to define and deliver data solutions that enable and enhance Aon’s Data & Analytics capabilities.
  • Design, build, and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and adding monitoring and observability for data pipelines.
  • Design the data infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL and AWS ‘big data’ technologies.
  • Work with stakeholders, including the Product Owner and the Data Science and Development teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Contribute to the logical/physical design of our data lake and the development of all data assets and downstream warehouses/marts.
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Maintain up-to-date technical and operational documentation.


Requirements

  • Bachelor’s Degree in Computer Science, Information Technology, or equivalent.
  • 5+ years’ experience with Big Data technologies (e.g. Hadoop, Hive, Spark, or other real-time streaming frameworks).
  • Knowledge of architecting large-scale data infrastructure on a cloud platform (e.g. EMR, RDS, Elasticsearch, Redshift, Kinesis, Athena).
  • Good knowledge of the Hadoop ecosystem (HDFS, Hive, Impala, Kafka, Spark, etc.) and associated tools.
  • Proficient at reading, profiling, parsing, transforming, cleansing, and integrating data from various sources (structured and unstructured).
  • Experience designing horizontally scalable, event-driven big data architectures.
  • Experience with big data ingestion and workflow management tools: Hadoop, Spark, Kafka, Airflow, StreamSets, etc.
  • Experience with relational SQL and NoSQL databases (Postgres, MongoDB, Redis, etc.).
  • Programming proficiency in Python, Scala, or Java for automation and data processing.
  • Experience working with Bash scripting, Docker, Kubernetes, and Linux environments.
  • Experience with Cloudera CDP is an added advantage.
  • Knowledge of Agile Scrum and Continuous Delivery practices is desirable.
  • Awareness of Data Security, Data Governance, and Service Operations processes, with the ability to deliver on these key non-functional requirements.
  • Excellent communication skills, sharp analytical abilities, and the ability to think critically about the current system in terms of growth and stability.

Closing on 10 Dec 2021