Data Engineer

NTUC INCOME INSURANCE CO-OPERATIVE LTD
  • Job category
    Consulting, Information Technology, Insurance, Professional Services, Others
  • Job level
    Manager
  • Contract type
    Permanent, Full Time
  • Location
    Central
  • Salary
S$5,000 - S$9,500

Job Description

Two words best describe the Data Analytics journey at Income: Incremental and Impactful.

The Analytics team has grown to ten members in just four years, one reflection of this journey. Becoming “a data-driven organisation” is one of the key themes through which the organisation aims to achieve Goal 2025. Consequently, advanced analytics has to scale up to deliver significant business outcomes.


We are looking for a talented Data Engineer who will focus on:

Design, develop, and operate data ingestion and integration pipelines that provide high-quality datasets for analytical and machine learning use cases (see the sketch after this list)

Collaborate with other data engineers, analysts, data scientists, product specialists, and other stakeholders to build well-crafted, pragmatic, and elegant engineering solutions

Recommend and implement ways to improve data reliability, efficiency, and quality

Manage existing runs and deployments of ML model pipelines

Drive enterprise data foundation requirements for the Data Warehouse and Data Lake

Acquire, store, govern, and process large datasets of structured and unstructured data

Communicate with users, other technical teams, and management to gather requirements, identify tasks, provide estimates, and meet production deadlines
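
For illustration only, here is a minimal sketch of the kind of batch ingestion pipeline this role involves, written in PySpark (Spark and Python both appear under Qualifications below). The bucket paths, the dataset, and the column names are hypothetical and do not describe Income's actual systems.

    # Minimal batch ingestion sketch (PySpark). All paths, the "policies"
    # dataset, and the column names are hypothetical illustrations.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("policy_ingestion").getOrCreate()

    # Ingest raw CSV files landed by an upstream system (hypothetical path).
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("s3://example-landing-zone/policies/")
    )

    # Basic quality gates: drop rows missing the key, then deduplicate on it.
    clean = (
        raw.dropna(subset=["policy_id"])
           .dropDuplicates(["policy_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )

    # Publish a curated, partitioned dataset for analytics and ML use cases.
    clean.write.mode("overwrite").partitionBy("policy_type").parquet(
        "s3://example-curated-zone/policies/"
    )

A production pipeline would add schema enforcement, data quality metrics, and orchestration, but this ingest-clean-publish shape is the core of the pipeline responsibility described above.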


Qualifications

At least 4 years of experience in data engineering, with relevant experience in the big data ecosystem

A bachelor's degree in Computer Science or equivalent

Passionate about technology and always looking to improve yourself

Interested in being the bridge between engineering and analytics


Knowledgeable about system design, data structures, and algorithms

Good knowledge of the big data technology landscape and of concepts related to distributed storage and computing

Experience with big data processing tools such as Spark and MapReduce

Experience with batch ETL jobs to ingest and process data

Experience with data warehouses such as Redshift, BigQuery, and Snowflake

Experience with cloud environments such as AWS, GCP, and Azure

Experience with NoSQL databases such as Elasticsearch, DynamoDB, and Cassandra

Programming experience with SQL, Python, Java, and Scala

Experience with event streaming systems such as Kafka and Kinesis, and with the associated APIs such as Kafka Connect, Kafka Streams, the KCL, and Spark Structured Streaming (see the streaming sketch after this list)

Experience with, or willingness to work on, DevOps practices such as infrastructure-as-code and data-pipeline-as-code

High-level understanding of data science model development topics such as training and deployment
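
Again purely for illustration, here is a minimal sketch of the Kafka-plus-Spark-Structured-Streaming pairing named above. The broker address, topic, event schema, and output paths are hypothetical, and the job assumes the spark-sql-kafka connector is available on the classpath.

    # Minimal Kafka -> Spark Structured Streaming sketch. Broker, topic,
    # schema, and paths are hypothetical; requires the spark-sql-kafka
    # connector package at submit time.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("claims_stream").getOrCreate()

    # Hypothetical schema for JSON events on a "claims" topic.
    schema = StructType([
        StructField("claim_id", StringType()),
        StructField("policy_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "claims")
        .load()
        # Kafka values arrive as bytes: decode, then parse the JSON payload.
        .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Append parsed events to a queryable location; the checkpoint directory
    # tracks progress so the job can restart without reprocessing.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "s3://example-curated-zone/claims/")
        .option("checkpointLocation", "s3://example-checkpoints/claims/")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()

The checkpoint directory is what allows the stream to recover its position after a restart; managing details like this is typical of the streaming work described in this role.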


Closing on 19 May 2021
