Lead Data Scientist

New York, New York / Remote | Data Science | Posted October 13, 2023
#214921

Client: Entertainment Company
Location: Hybrid, 2 days per week onsite in either San Francisco or NYC
Compensation: $75-$100/hr W2, commensurate with experience; estimate provided by JBC

This is an Individual Contributor role in content recommendations. You will lead recommendation and personalization algorithm research, development, and optimization for product areas; coordinate requirements and manage stakeholder expectations with Product, Engineering, and Editorial teams; help meet KPIs for those product areas; and set and meet deadlines for externally and internally facing tools, such as offline evaluation tools for pre-production algorithms.

Responsibilities:
-Algorithm development and maintenance: Utilize cutting-edge machine learning methods to develop algorithms for personalization, recommendation, and other predictive systems; maintain algorithms deployed to production and be the point person in explaining methodologies to technical and non-technical teams
-Analysis and algorithm optimization: Perform deep-dive analysis on app interactions and user profiles as they relate to algorithm output to drive improvements in key personalization metrics
-MVP development: Develop innovative machine learning products for use in new production features or downstream by production algorithms
-Development best practices: Maintain existing and establish new algorithm development, testing, and deployment standards
-Collaborate with product and business stakeholders: Identify and define new personalization opportunities and work with other data teams to improve how we do data collection, experimentation, and analysis

Qualifications:
-Production experience developing content recommendation algorithms at scale
-Production experience with graph-based models (e.g., node2vec)
-Experience building and deploying full-stack ML pipelines: data extraction, data mining, model training, feature development, testing, and deployment
-Experience with graph-based (DAG) workflow orchestration tools such as Apache Airflow
-Experience engineering big-data solutions using technologies such as EMR, S3, Spark, and Databricks
-Familiar with metadata management, data lineage, and principles of data governance
-Experience loading and querying cloud-hosted databases such as Snowflake
-Familiarity with automated deployment, AWS infrastructure, and Docker or similar container technologies

 
