Your day to day:
Technical Expertise: Deep understanding of data engineering principles, data modeling, ETL processes, and data warehousing/data lakes. Proficiency in SQL and relevant programming languages (e.g., Python, Java).
Data Pipeline Tools: Experience with data pipeline tools such as Apache Airflow, Apache Kafka, or similar technologies; experience with IBM UrbanCode Velocity is a plus.
Cloud Platforms: Familiarity with cloud platforms such as AWS, Azure, or GCP, and their data services.
Problem-Solving Skills: Strong analytical and problem-solving skills, with the ability to troubleshoot data issues and optimize data pipelines.
Attention to Detail: Meticulous approach to data quality and accuracy, ensuring data integrity throughout the pipeline.
Communication Skills: Ability to communicate technical concepts clearly and collaborate effectively with cross-functional teams.
What you need to bring:
5+ years of experience in data engineering, with a focus on data pipeline development and data architecture.
Experience using IBM UrbanCode Velocity or similar enterprise-scale release management applications.
Familiarity with data visualization tools (e.g., Tableau, Looker, Power BI) and analytics frameworks.
Strong proficiency in SQL and experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Proficiency in at least one programming language (e.g., Java).
Experience with cloud platforms (e.g., AWS, GCP, Azure) and data storage solutions (e.g., Redshift, BigQuery, Snowflake).
Customer-centric mindset with a track record of synthesizing customer insights, shaping the product roadmap, empathizing with developers, and influencing developers and stakeholders at all levels of the organization.
Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.