Blend helps lenders maximize their digital agility. Our digital lending platform is used by Wells Fargo, U.S. Bank, and other leading financial institutions to increase customer acquisition, improve productivity, and accelerate the delivery of any banking product across every channel. We process more than $3 billion in mortgages and consumer loans daily, helping millions of consumers get into homes and gain access to the capital they need to lead better lives.
We're looking for a Data Engineer who is driven to solve hard problems; the harder, the better. We're motivated by the fact that our product won't just affect the lives of a few people in the Bay Area: it reaches people all over the U.S., not to mention a foundational part of the U.S. economy. As a Data Engineer, you can define how we instrument our data infrastructure and influence the entire industry. Your contributions to Blend's data architecture and infrastructure will shape the company's ability to innovate in the consumer finance space.
Our ideal Data Engineer has hands-on experience building and architecting data pipelines and distributed systems. Using this expertise, you will work both independently and collaboratively with Engineers, Product Managers, and Analysts to appropriately prioritize, execute, and innovate on the organization’s data needs.
Who you are:
- 2+ years of relevant industry experience
- Shipped several large-scale projects with multiple dependencies across teams
- Recent, hands-on experience with relational and NoSQL data stores and with data modeling approaches (logging, columnar storage, star and snowflake schemas, dimensional modeling)
- Experience in ETL development strongly preferred
- Experience working with highly-sensitive data a plus
- Experience and interest in working with state-of-the-art data technologies: Hive, Redshift, Snowflake, or other data warehouses strongly preferred
- Spark, Presto, Hadoop or other query engines a plus
- Kafka, Logstash, Spark Streaming or other stream processing technologies a plus
- Experience with Python or Go, Docker, and SQL required; TypeScript a plus
- Experience with AWS or GCP DevOps a plus
- Deep understanding of the infrastructure that powers large-scale analytics and machine learning systems a plus
- Ability to communicate effectively within and across teams
How you'll contribute:
- Design, build, and maintain data pipelines from the ground up that instrument new functionality such as automatic metrics generation, pipeline auditing, and data repopulation
- Develop and scale out reporting solutions, delivering actionable insights that enable a data-driven decision-making culture for internal and external clients
- Drive and lead complex, large projects independently, partnering with product managers and other stakeholders to understand business requirements and craft technical solutions
- Ensure accuracy, completeness, and consistency of data
- Maximize the impact of our infrastructure
- Partner with other engineers, analysts, and product managers to build systems for effective data exploration and consumption
- Ensure that all data components are designed and implemented in compliance with our information security requirements
- Work in a fast-paced environment where people are valued
Benefits and Perks:
- Meaningful equity and a 401(k) plan
- Comprehensive health benefits
- Sponsored gym memberships, ClassPass credits, or wellness stipend
- Lunch, dinner, snacks, and Pizza Fridays
- On-site meditation, yoga, and massages
- Flexible work schedule and an open vacation policy
- 4 months of paid parental or personal leave
- Convenient location with parking programs and flexible commuter options