Role
We are seeking a skilled and detail-oriented engineer to join our dynamic team. In this role, you will be responsible for the development, maintenance, and optimization of real-time systems that enable data collection, processing, analysis, visualization, and alerting. You will work closely with data engineers, software developers, and data analysts to ensure the smooth operation of our data acquisition infrastructure. The ideal candidate has a strong background in web scraping, data extraction, and system monitoring, along with a passion for solving complex technical challenges.
Responsibilities
Design, develop, and maintain web crawlers and data extraction pipelines to collect data from online sources.
Implement and manage monitoring solutions to ensure system observability and enable proactive incident response.
Design and deploy infrastructure for high availability, scalability, and efficiency to support data-intensive workflows.
Create and maintain comprehensive technical documentation, including system specifications, operational procedures, and troubleshooting guides.
Stay current with emerging trends and technologies in web scraping, data extraction, and distributed systems.
Automate repetitive tasks and enhance system reliability through scripting and tool development.
Requirements
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience in web scraping, data extraction, and crawler development using tools such as Scrapy, BeautifulSoup, Selenium, or similar frameworks.
Strong programming skills in Python.
Familiarity with web technologies (HTML, CSS, JavaScript, AJAX) and RESTful APIs.
Understanding of network protocols, IP management, and proxy services.
Strong problem-solving skills and the ability to work independently or as part of a team.
Excellent communication skills and the ability to document technical processes clearly.
Preferred Qualifications
Experience with distributed systems, cloud platforms (e.g., AWS, Azure, GCP), and containerization technologies (e.g., Docker, Kubernetes).
Experience with monitoring tools (e.g., Prometheus, Grafana).
Knowledge of database systems (SQL, NoSQL) and data storage solutions.
Knowledge of data privacy regulations and ethical considerations in web scraping.
Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
Location of work:
Hong Kong Science Park
Salary:
HK$21,000 – HK$26,000 per month
Benefits
Competitive salary
Annual leave and group medical insurance
Join a rapidly growing FinTech company with cutting-edge AI solutions and a supportive team culture
Unlimited snacks and beverages in the office
Assistance with applying for an IANG visa, if applicable
Other:
Fresh graduates are welcome