Here at Lightricks, an award-winning mobile app startup, we’re in the process of scaling up our backend systems to meet the growth of our user base. We are looking for a savvy Data Platform Engineer to join our growing team of data platform experts. The team is responsible for providing high-performance, real-time technological solutions to the various data consumers within the company, including BI analysts, marketing analysts, product managers, data scientists and more.
The ideal candidate is an experienced data pipeline builder or infrastructure engineer who enjoys building near real-time data systems; someone who is excited by the prospect of optimizing and re-designing our company’s data architecture to support our next generation of products and data initiatives.
We’re looking for someone who is resourceful, bright, proactive, who works well both independently and as a part of a team, and who is passionate about what they are doing.
- Build and own the infrastructure and tools required for processing and storing data from a wide variety of sources, including mobile apps and internal and external data producers.
- Design and build processes that reliably integrate a wide range of edge systems across multiple cloud providers and handheld platforms.
- Assist data consumers across Lightricks in defining requirements and refining existing processes using a wide variety of tools.
- Define and own the systems and processes that deliver data to the data warehouse, and maintain these systems as well as the data warehouse itself.
- Identify bottlenecks in internal data processes and work to resolve them by automating processes, optimizing data delivery and storage, re-designing infrastructure for greater scalability, etc.
- Integrate new tools and solutions that empower the company’s data consumers, and help drive their adoption.
- 2+ years hands-on experience designing, building and optimizing data-intensive systems.
- 2+ years of experience managing workloads running on a major cloud provider.
- 5+ years of experience developing in Scala and/or Python.
- Experience with container-based environments (Docker, Kubernetes).
- Experience supporting and working with cross-functional teams in rapidly changing stacks and environments.
- Experience with functional programming and streaming systems, including hands-on experience designing, building and optimizing stream-processing systems (Flink, Storm, Spark Streaming, etc.) or functional systems.
- Experience with SQL, data modeling, and relational databases, as well as a deep understanding of ETL and data warehousing concepts, methodologies, and frameworks.
- Experience working with data-science and research teams or equivalent MLOps tooling experience.