Job Requirements:
- 7+ years of data engineering or related database engineering experience on production-scale platforms.
- Experience must include data mining and the development of ETL processes, distributed data architectures, and big data processing technologies such as Spark, EMR, Hadoop, and Hive.
- Experience working with the Kafka stack for real-time frameworks and for collecting and processing real-time data.
Specific skills:
1) Experience with GitHub Actions and CI/CD frameworks.
2) Knowledge of AWS technologies such as Lambda, S3, EC2, Redshift, Glue, Athena, Lake Formation, and Step Functions.
3) Experience with Talend, SQL, T-SQL, PostgreSQL, and relational and multidimensional databases.
4) Experience in Python or a similar scripting language.
5) Knowledge of the Agile software development lifecycle.
6) Knowledge of data warehousing concepts and data modeling.