Minimum Qualifications
4+ years of experience in data integration, with working proficiency in SQL and NoSQL databases.
4+ years of experience programming in Scala, Python, PySpark, Java, or similar languages.
4+ years of experience working with Hadoop or other cloud data platforms (e.g., Snowflake).
Experience building CI/CD pipelines and writing Unix shell scripts.
Demonstrated ability to quickly learn new and open-source technologies to stay current in data engineering.
Experience developing software using agile methodologies such as Scrum or Kanban.
Bachelor's degree in a related field or equivalent experience.
Preferred Qualifications
Experience building batch and streaming pipelines using Sqoop, Kafka, Pulsar, and/or Spark.
Experience with storage systems such as HDFS, AWS S3, or Azure ADLS.
Experience with orchestration and scheduling tools such as Oozie, Airflow, or AWS Glue.
Experience with data transformations using PySpark, dbt, or similar tools.
Experience contributing to open-source projects using collaboration tools such as GitHub.