Blend makes the process of getting a loan simpler, faster, and safer. With its industry-leading digital lending platform, Blend helps financial institutions like Wells Fargo and U.S. Bank increase productivity and deliver exceptional customer experiences. The company processes nearly $2 billion in loans daily, helping millions of consumers get into homes and gain access to the capital they need to lead better lives.
At Blend, we’re dedicated to improving lending. We’re an enterprise technology company, but our product affects the most important purchase most people will make in their lifetime—their home. For home buyers, our product means a clear, guided path to a new home. For lenders, it means modern, easy-to-use tools that let employees spend their time helping customers, rather than on repetitive, manual tasks. By aligning and modernizing this archaic industry, we believe everybody wins.
We're looking for a Data Engineer who is driven to solve hard problems: the harder, the better. We're motivated by the fact that our product doesn't just affect the lives of a few people in the Bay Area; it affects people all over the U.S., and touches a foundational part of the U.S. economy. As a Data Engineer, you can define how we instrument our data infrastructure to influence the entire industry. Your contributions to Blend's data architecture and infrastructure will shape the company's ability to innovate in the consumer finance space.
Our ideal Data Engineer has hands-on experience building and architecting data pipelines and distributed systems. Using this expertise, you will work both independently and collaboratively with Engineers, Product Managers, and Analysts to appropriately prioritize, execute, and innovate on the organization’s data needs.
Founded in 2012 by former Palantir leaders, we're currently backed by Founders Fund, Andreessen Horowitz, Temasek, General Atlantic, and other prominent investors, and we're growing quickly!
- 2+ years of relevant industry experience
- Shipped several large-scale projects with multiple dependencies across teams
- Recent, hands-on experience with both relational and NoSQL data stores and with common modeling approaches (logging, columnar storage, star and snowflake schemas, dimensional modeling)
- Experience in ETL development strongly preferred
- Experience working with highly sensitive data a plus
- Experience and interest in working with state-of-the-art data technologies: Hive, Redshift, Snowflake, or other data warehouses strongly preferred
- Spark, Presto, Hadoop or other query engines a plus
- Kafka, Logstash, Spark Streaming or other stream processing technologies a plus
- Experience with Python or Go, Docker, and SQL required; TypeScript a plus
- Experience with AWS or GCP DevOps a plus
- Deep understanding of the infrastructure that powers large-scale analytics and machine learning systems a plus
- Ability to communicate effectively within and across teams
- Design, build, and maintain data pipelines from the ground up that instrument new functionality such as automatic metrics generation, pipeline auditing, and data repopulation
- Develop and scale out reporting solutions that deliver actionable insights to internal and external clients, enabling a data-driven decision-making culture
- Drive and lead complex, large projects independently, partnering with product managers and other stakeholders to understand business requirements and craft technical solutions
- Ensure accuracy, completeness, and consistency of data
- Maximize the impact of our infrastructure
- Partner with other engineers, analysts, and product managers to build systems for effective data exploration and consumption
- Ensure that all data components are designed and implemented in compliance with our information security requirements
- Work in a fast-paced environment where people are valued