About Us
At TCBS, we are redefining how financial services are delivered, using technology to create smarter, faster, and more inclusive financial solutions. Data is at the core of everything we do, from personalized financial products to real-time fraud detection. We're looking for a Data Engineer with a passion for building scalable and secure data infrastructure to help power our next generation of fintech products.
Your Role
As a Data Engineer, you will work closely with data scientists, backend engineers, and product teams to build and optimize our end-to-end data platform. You'll be responsible for architecting and maintaining robust, scalable, and high-performance data pipelines and lakehouse infrastructure to support our analytics, risk models, and customer experiences.
Key Responsibilities
Design and develop ETL/ELT pipelines for structured and unstructured financial data across internal and third-party systems.
Develop and manage large-scale data infrastructure using Apache Spark, Apache Kafka, Apache Flink, and HDFS.
Implement a data lakehouse architecture using AWS services such as S3, Glue, EMR, Athena, Kinesis, Redshift, and Lambda (a minimal pipeline sketch follows this list).
Collaborate with analytics, ML, product, and engineering teams to deliver reliable, scalable data solutions for reporting and real-time applications, and to integrate data products into core financial systems.
Monitor and improve the performance and reliability of pipelines and data services.
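For illustration only, here is a minimal sketch of the kind of streaming pipeline described above, assuming PySpark with the Kafka connector: it reads JSON transaction events from Kafka and lands them as Parquet files on S3. The broker address, topic, bucket paths, and event schema are hypothetical placeholders, not a description of TCBS systems.

    # Illustrative sketch only: minimal Kafka -> S3 streaming ingest with PySpark.
    # Broker, topic, schema, and S3 paths below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("txn-ingest-sketch").getOrCreate()

    # Hypothetical transaction event schema used for this example.
    schema = StructType([
        StructField("txn_id", StringType()),
        StructField("account_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
           .option("subscribe", "transactions")               # placeholder topic
           .load())

    # Kafka delivers bytes; cast the value to string and parse the JSON payload.
    events = (raw
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Land the parsed events as Parquet on S3, with checkpointing for fault tolerance.
    query = (events.writeStream
             .format("parquet")
             .option("path", "s3a://example-lake/raw/transactions/")
             .option("checkpointLocation", "s3a://example-lake/checkpoints/transactions/")
             .trigger(processingTime="1 minute")
             .start())

    query.awaitTermination()

In practice a job like this would typically also add partitioning by event date, schema enforcement, and dead-letter handling for malformed records.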
Required Qualifications
3+ years of experience as a Data Engineer or in a similar backend/data-focused engineering role.
Proficient in Python, with working knowledge of Java or Scala.
Solid understanding of distributed data processing and streaming architectures.
Strong experience with big data tools (Spark, Kafka, Flink, Hive, etc.).
Production experience with the AWS data stack (S3, Glue, EMR, Kinesis, Redshift, Athena).
Strong SQL skills and experience with data modeling and data warehousing concepts.
Familiarity with CI/CD, Docker, Git, and automation in data workflows.
Nice to Have
Experience with open data lakehouse table formats: Apache Iceberg, Delta Lake, or Apache Hudi (see the short Iceberg sketch after this list).
Hands-on experience with Databricks or Snowflake.
Familiarity with data cataloging, data lineage, and governance tooling (e.g., AWS Glue Data Catalog, Apache Atlas, Great Expectations).
Fintech domain knowledge (payments, risk, lending, compliance) is a strong plus.
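As a hedged illustration of those open table formats, the short sketch below writes a small DataFrame to an Apache Iceberg table through Spark's catalog configuration. The catalog name, warehouse path, and table name are hypothetical, and the matching iceberg-spark-runtime package is assumed to be available to the Spark session.

    # Illustrative sketch only: writing to an Apache Iceberg table from Spark.
    # Catalog name, warehouse path, and table name are hypothetical; the
    # iceberg-spark-runtime package must be on the Spark classpath.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("iceberg-sketch")
             .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
             .config("spark.sql.catalog.lake.type", "hadoop")
             .config("spark.sql.catalog.lake.warehouse", "s3a://example-lake/warehouse")
             .getOrCreate())

    df = spark.createDataFrame(
        [("t-001", "acc-42", 125.50)],
        ["txn_id", "account_id", "amount"],
    )

    # DataFrameWriterV2: create (or replace) the Iceberg table in the configured catalog.
    df.writeTo("lake.finance.transactions").createOrReplace()

    # Iceberg tables are then queryable with plain SQL, including snapshot time travel.
    spark.sql("SELECT count(*) FROM lake.finance.transactions").show()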
What You’ll Get
Competitive salary with performance-based bonuses.
Opportunity to work on cutting-edge data architecture in a high-growth fintech environment.
Ownership and influence in a data-driven organization.
Learning budget and access to leading cloud and data tools.