Requirements:
- 8+ years of relevant industry experience
- Comfortable with Python
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with AWS cloud services: EC2, RDS, ECS, S3
- Experience with database architecture
- Technical leadership in designing, implementing, and overseeing complex technical processes
- Experience building and designing large-scale applications
- Advanced working SQL knowledge, including query authoring, and working familiarity with a variety of relational databases
- Strong analytical skills for working with unstructured datasets
- Willingness to get your hands dirty, understand a new problem deeply, and build things from scratch when they don't already exist
- Undergraduate degree in Computer Science, Computer Engineering, or a similar discipline from a rigorous academic institution

About Pagaya:
Pagaya is a leading next-generation asset management firm founded in 2015. Combining advanced technology and financial expertise, we use artificial intelligence and state-of-the-art algorithms to uncover exceptional low-risk, high-yield opportunities in alternative credit for investors. Pagaya actively invests in US consumer credit assets.
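As a small, hedged illustration of the kind of SQL query authoring the requirements above call for — the schema, table names, and data here are entirely hypothetical and only stand in for a real relational database:

```python
import sqlite3

# Hypothetical schema: loans(id, borrower_id, amount) and payments(loan_id, paid).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (id INTEGER PRIMARY KEY, borrower_id INTEGER, amount REAL);
    CREATE TABLE payments (loan_id INTEGER, paid REAL);
    INSERT INTO loans VALUES (1, 10, 1000.0), (2, 11, 500.0);
    INSERT INTO payments VALUES (1, 400.0), (1, 100.0), (2, 500.0);
""")

# Join loans to their payments and compute each loan's outstanding balance;
# COALESCE handles loans with no payments at all.
rows = conn.execute("""
    SELECT l.id, l.amount - COALESCE(SUM(p.paid), 0) AS balance
    FROM loans l
    LEFT JOIN payments p ON p.loan_id = l.id
    GROUP BY l.id
    ORDER BY l.id
""").fetchall()
# rows == [(1, 500.0), (2, 0.0)]
```

The same join-and-aggregate pattern applies unchanged on production engines such as PostgreSQL or MySQL; sqlite3 is used here only so the sketch is self-contained.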
The Pagaya team comprises over 100 professionals in New York and Tel Aviv with expertise in artificial intelligence, data-rich alternative assets, and asset management. The team manages over $1.7 billion in assets on behalf of institutional investors around the world.
Pagaya recently completed its Series D financing of over $100M, led by a prominent sovereign wealth fund.
The Data Engineering team is a cross-functional team responsible for data integration, monitoring, and quality. This includes automating data monitoring, alerting, fetching, and checking across the various stages of data transformation and projection. The team serves a vital function, supporting every department with quality data.
Responsibilities:
- Provide strong technical leadership in supervising, planning, and delivering highly complex projects and maintenance requests with many deliverables and complex dependencies
- Build data architecture for ingestion, processing, and surfacing of data for large-scale applications
- Extract data from one database and load it into another
- Use many different scripting languages, understanding the nuances and benefits of each, to combine systems
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation, for all ETLs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader
- Create and maintain optimal data pipeline architecture
- Work with other members of the data team, including data architects, data analysts, and data scientists

Key Takeaways:
- Use the tools and languages best suited to the job - complete flexibility in problem-solving, with novelty and creativity encouraged
- Open-source projects and frameworks recommended
- Be around very bright and lovely people
- It's all about results - working hours are not the focus
- Your intellectual curiosity and hard work will be welcome contributions to our culture of knowledge sharing, transparency, and shared fun and achievement
- Provide education and documentation that enable fellow team members to make the most of our technical resources
- Contribute to our software engineering culture of writing correct, maintainable, elegant, and testable code
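The "extract data from one database and load it into another" responsibility listed earlier can be sketched in Python. This is a minimal, hedged illustration only — the `events` table, its columns, and the in-memory sqlite3 databases are hypothetical stand-ins for real source and target systems:

```python
import sqlite3

def extract_load(source: sqlite3.Connection, target: sqlite3.Connection) -> int:
    """Copy all rows from source's events table into target, returning the row count."""
    rows = source.execute("SELECT id, payload FROM events").fetchall()
    target.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)"
    )
    target.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", rows)
    target.commit()
    return len(rows)

# Demo: two in-memory databases stand in for real source and target systems.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
source.executemany("INSERT INTO events VALUES (?, ?)", [(1, "a"), (2, "b")])
target = sqlite3.connect(":memory:")
copied = extract_load(source, target)
# copied == 2; target now holds the same two rows
```

A production pipeline would add incremental watermarks, batching, and retries, but the extract/load shape stays the same.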