Engineering | Los Angeles, CA, United States
Scopely is searching for an expert Sr. Data/Infrastructure Engineer who knows how to build pipelines that transport, process, and normalize hundreds of GB of data per day. If cloud hosting, external API consumption, data ingestion, and batch-processing frameworks like Hadoop are your focus, this could be the right position for you.
What will you do?
- Build an A/B testing service to help project revenue outcomes of potential games
- Automate the import of data from a variety of sources (ad providers, for example) into a single data store. (Technical: RESTful APIs, AWS, command-line tooling)
- Aggregate, normalize, and process data, working with Product Managers to gain insight into user behavior and monetization strategy. (Technical: Python, Redshift, MySQL, Hadoop)
- Produce automated high-level reports, dashboards, and visualizations for many teams at Scopely, including Revenue Operations and Product Management. (Technical: Pandas/Python to Tableau, d3)
- Create the infrastructure to drive an ad-mediation service that applies intelligent algorithms to determine the optimal ad selection for each user. (Technical: greenfield)
What do you need?
- Batch-processing experience with Hadoop or similar frameworks (Voldemort, etc.)
- Programming experience in Python, Perl, or shell scripting, including object-oriented design
- SQL mastery. Inner and outer joins, window functions: you should know it all.
- Experience with third-party API integration.
- AWS or similar cloud experience. S3 and Redshift experience are big pluses.
- Experience in high-throughput environments.