
Data Architect

Cognizant • In Person

Posted 3 days, 3 hours ago

Job Description

About the role

As a Data Architect, you will make an impact by designing and implementing modern data solutions that empower business insights and innovation. You will be a valued member of the Data Engineering team and work collaboratively with cross-functional stakeholders, including cloud engineers, data scientists, and business leaders.

In this role, you will

Architect and build scalable data systems on major cloud platforms (AWS, GCP, Azure)

Design and implement data lakes, lakehouses, and data hubs, ensuring robust ingestion, governance, and observability

Develop and optimize data pipelines for both batch and streaming data using tools such as Kafka, Flink, and Spark

Lead data modeling and transformation initiatives using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow)

Champion data governance, quality, privacy, and security best practices throughout the data lifecycle

Work model

We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 2 days a week in a client or Cognizant office. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.

The working arrangements for this role are accurate as of the date of posting. They may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.

What you need to have to be considered

Experience developing data systems on AWS, GCP, or Azure

Hands-on experience building modern data architectures (data lakes, lakehouses, data hubs)

Proficiency with data ingestion tools (Kafka, AWS Glue) and storage formats (Iceberg, Parquet)

Experience developing data pipelines with streaming architectures and tools (Kafka, Flink)

Expertise in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow)

Strong background with Spark for data transformation, including streaming, performance tuning, and debugging with Spark UI

Advanced programming skills in Python, Java, Scala, or similar languages; expert-level proficiency in SQL

Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies

Strong background in data governance, including data quality, privacy, and security for data product development and consumption

Advanced English proficiency

These will help you stand out

Deep experience with data modeling concepts such as Slowly Changing Dimensions (SCD) and schema evolution

Demonstrated ability to lead cross-functional teams in data architecture projects

Experience with data observability and monitoring solutions

Familiarity with regulatory compliance requirements for data management

Excellent communication skills in English, with the ability to present complex technical concepts to non-technical stakeholders

We're excited to meet people who share our mission and can make an impact in a variety of ways. Don't hesitate to apply, even if you only meet the minimum requirements listed. Think about your transferable experiences and unique skills that make you stand out as someone who can bring new and exciting things to this role.
