Job Type
Full-time
Description
Propio is on a mission to make communication accessible to everyone. As a leader in real-time interpretation and multilingual language services, we connect people with the information they need across language, culture, and modality. We’re committed to building high-powered solutions to enhance interpreter workflows, automate multilingual insights, and scale communication quality across industries.
The Data Engineer will play a key role in designing, developing, and maintaining our data infrastructure. This position requires a blend of technical skills, analytical thinking, and the ability to collaborate with cross-functional teams to support data-driven decision-making.
Key Responsibilities:
- Design, implement, and optimize data pipelines for efficient data processing and analysis.
- Develop and maintain ETL (Extract, Transform, Load) processes to ensure data accuracy and availability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure the scalability, reliability, and performance of data systems.
- Implement and maintain data security and privacy measures.
- Troubleshoot and resolve data-related issues and anomalies.
- Continuously improve and document data engineering processes and best practices.
- Act as a subject matter expert and strategic advisor on data-related initiatives, ensuring alignment with organizational objectives.
- Perform ad hoc data analytics and reporting to meet emerging business requirements.
- Develop and optimize complex SQL queries and stored procedures for data extraction and modeling.
Requirements
Qualifications:
- Bachelor’s Degree in Computer Science, Data Science, Mathematics, Statistics, or a related field; or equivalent work experience.
- 3+ years of experience in a data engineering or comparable role.
- Proven experience with designing and building scalable data pipelines and ETL processes.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Proficiency in SQL and experience with relational databases (e.g., MySQL, MSSQL, PostgreSQL).
- Strong programming skills in Python.
- Experience with big data technologies such as Apache Spark (including PySpark).
- Familiarity with cloud data warehousing solutions (e.g., Databricks, Snowflake, Azure Synapse).
- Knowledge of data modeling, data warehousing, and data lake concepts.