Req#: 5281
Employer Industry: Data Storage and AI Solutions
Why consider this job opportunity:
- Opportunity for career advancement and growth within the organization
- Work remotely with a flexible schedule
- Chance to make a significant impact in the rapidly evolving field of AI and data management
- Collaborative and innovative work environment that values engineering excellence
- Competitive salary and potential for performance-based bonuses
What to Expect (Job Responsibilities):
- Design and implement optimized execution layers and data query engines for high-performance data workflows
- Develop internal systems for high-throughput data access and transformation using various data formats
- Build and tune execution plans to optimize performance for large-scale AI and analytics workloads
- Collaborate with cross-functional teams to deliver integrated, end-to-end data solutions
- Contribute to open-source ecosystems and stay current with advances in open data technologies
What is Required (Qualifications):
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 8+ years of experience in software development, with 5+ years in distributed systems or big data technologies
- Expert-level knowledge of Java, Python, and SQL
- Deep understanding of file formats such as Parquet, ORC, and Avro
- Experience with Apache Spark and distributed query engines
How to Stand Out (Preferred Qualifications):
- Hands-on experience with Apache Iceberg and/or Delta Lake
- Background in real-time data streaming using tools like Apache Kafka
- Previous contributions to open-source projects; committer status is a plus
- Proven ability to lead complex technical initiatives and mentor junior engineers
#DataStorage #AI #SoftwareEngineering #RemoteWork #CareerOpportunity