About Terminal
Terminal is Plaid for Telematics in commercial trucking. Companies building the next generation of insurance products, financial services and fleet software for trucking use our Universal API to access GPS data, speeding data and vehicle stats. We are a fast-growing, venture-backed startup supported by top investors including Y Combinator, Golden Ventures and Wayfinder Ventures. Our exceptionally talented team is based in Toronto, Canada.
For more info, check out our website:
https://withterminal.com
Note: This role is only available to students able to relocate to Toronto/GTA for the full term.
About The Role
We're looking for an engineer who's excited about building scalable data platforms and learning how to tackle complex backend challenges. This is an opportunity to work on real production systems, contributing to the data platform that powers Terminal's API and handles everything from data streaming and storage to analytics at petabyte scale.
You'll work closely with our senior software engineers, contributing to projects that directly impact how we process and deliver high-volume telematics data to our customers. This is a hands-on role where you'll gain exposure to production systems, modern data engineering tools, and large-scale distributed architectures.
What You Will Do
Contribute to projects focused on data replication, storage, enrichment, and reporting capabilities
Help build and optimize streaming and batch data pipelines that support our core product and API
Work on scalable storage solutions for handling petabytes of IoT and time-series data
Assist in developing and maintaining real-time data systems to ingest growing data volumes
Support implementation of distributed tracing, data lineage and observability patterns
Participate in code reviews and learn best practices for writing clean, maintainable code in Java and Python
Collaborate with cross-functional teams to understand requirements and deliver solutions
Gain hands-on experience with modern data infrastructure and cloud technologies
The ideal student will have
Availability for co-op/internship of at least 6 months full-time and on-site at our Toronto office
Strong programming fundamentals in Java or Python (or demonstrated ability to learn quickly)
Understanding of data structures, algorithms, and system design basics
Coursework or project experience with databases, distributed systems, or data processing
Curiosity about large-scale data systems and real-time processing
Strong problem-solving skills and eagerness to learn
Ability to work collaboratively in a team environment
Nice-to-have:
- Currently pursuing a Master's degree in Computer Science or a related field
- Prior internship or co-op experience
- Personal or academic projects involving data pipelines or stream processing
- Exposure to cloud platforms (AWS, GCP, or Azure)
- Familiarity with SQL and database concepts
- Interest in or exposure to technologies like Kafka, Flink, Spark, or similar
Tech Stack (You'll Learn and Work With)
Languages: Java, Python
Framework: Spring Boot
Storage: AWS S3, AWS DynamoDB, Apache Iceberg, Redis
Streaming: AWS Kinesis, Apache Kafka, Apache Flink
ETL: AWS Glue, Apache Spark
Serverless: AWS SQS, AWS EventBridge, AWS Lambda and Step Functions
Infrastructure as Code: AWS CDK
CI/CD: GitHub Actions
Benefits
Strong compensation, paid time off + statutory holidays
Brand new MacBook and computer equipment
In-person culture with an office located in downtown Toronto