Job Description
- Req#: 21791ffe-e64c-492b-831c-582215f760cb
The Role We Need:
PadSplit is hiring a Data Engineer to build and maintain scalable data infrastructure that drives analytics, reporting, and decision-making across the organization. This role is critical to optimizing data pipelines, ensuring data reliability, and enabling cross-functional teams to unlock valuable insights in a remote, high-growth environment.

The Person We Are Looking For:
PadSplit is looking for a highly skilled Data Engineer with expertise in building and maintaining scalable data infrastructure using tools like PostgreSQL, AWS, Snowflake, and dbt. The ideal candidate is a collaborative problem-solver who is eager to optimize data pipelines, enhance query performance, and drive reliable, data-informed decision-making across the organization.

Here’s What You’ll Do Day-to-Day:
- Design, build, and optimize scalable ETL/ELT pipelines for seamless data ingestion and transformation.
- Develop and maintain data models to enable self-service analytics and reporting across the organization.
- Optimize database performance in PostgreSQL, ensuring efficient data storage, retrieval, and query execution.
- Implement and enhance search capabilities using NoSQL technologies like ElasticSearch or Solr to improve data discovery.
- Collaborate with data analysts to create insightful dashboards that support data-driven decision-making.
- Ensure data quality, governance, and security by adhering to best practices in cloud-based data environments.
- Monitor and troubleshoot issues within data pipelines, focusing on optimizing efficiency and reliability.
- Work closely with software engineers and product teams to integrate data solutions into operational workflows and product development.

Here’s What You’ll Need to Be Successful:
- 5+ years of experience in data engineering or a similar role, with a proven track record of designing scalable data solutions.
- Expertise in PostgreSQL, including database management, query optimization, and performance tuning.
- Hands-on experience with AWS cloud services such as S3, Lambda, Glue, Redshift, and IAM.
- Proficiency in data warehousing technologies like Snowflake, Redshift, or BigQuery for cloud-based data storage and analysis.
- Strong skills in data transformation, modeling, and building efficient ETL/ELT pipelines.
- Experience with data visualization tools like Mode, Looker, Tableau, or Hex to support analytics and reporting.
- Knowledge of ElasticSearch or Solr for implementing search indexing and query capabilities.
- Proficiency in SQL and Python, with experience in automation, scripting, and workflow orchestration (e.g., Airflow).
- Understanding of CI/CD pipelines, infrastructure-as-code principles, and cloud-based deployment practices.
- Strong analytical and problem-solving abilities, with a passion for leveraging data-driven insights to inform decisions.
- Nice-to-Have: Experience with streaming data solutions like Kafka or Kinesis, knowledge of machine learning pipelines, and familiarity with data privacy regulations such as GDPR or CCPA.

The Interview Process:
- Your application will be reviewed for possible next steps by the Hiring Manager.
- If you meet eligibility requirements, the next step would be a video screen with a member of the PeopleOps team for about thirty (30) minutes.
- If warranted, the next step would be a video interview with our CTO for forty-five (45) minutes.
- If warranted, the next step would be a video panel interview with key stakeholders at PadSplit for one-and-a-half (1.5) hours.
- The panel interview will require you to complete a technical assessment, showcasing your engineering skills to the panel for discussion.
- If warranted, then we move to offer!

Compensation, Benefits, and Perks:
- Fully remote position - we swear!
- Competitive compensation package including an equity incentive plan
- National medical, dental, and vision healthcare plans
- Company-provided life insurance policy
- Optional accident insurance, FSA, and DCFSA benefits
- Unlimited paid time off (PTO) policy with eleven (11) company-observed holidays
- 401(k) plan
- Twelve (12) weeks of paid time off for both birth and non-birth parents
- The opportunity to do what you love at a company that is at the forefront of solving the affordable housing crisis

$135,000 - $175,000 a year
Compensation is based on the role's scope, market benchmarks, the person's expertise and experience, and the impact of their contributions to our business goals.

About the company
PadSplit offers clean, modern furnished rooms for rent to community members at an affordable weekly fee, including utilities, WiFi, laundry access, 24/7 telemedicine service, and more.