Engineering | Los Angeles, CA, United States
We are searching for an expert Sr. Data/Infrastructure Engineer who knows how to build the pipelines that transport, process, and normalize hundreds of GB of data per day. Cloud hosting, external API consumption, data ingestion, and batch-processing frameworks like Hadoop will be your bread and butter if this is the right position for you.
What would you be doing?
- Building an A/B testing service to help project revenue outcomes of potential games
- Automating the import of data from a variety of sources (ad providers, for example) into a single data store. (Technical: RESTful APIs, AWS, command-line tooling)
- Aggregating, normalizing, and processing data, and working with Product Managers to gain perspective on user behavior and monetization strategy. (Technical: Python, Redshift, MySQL, Hadoop)
- Producing automated high-level reports, dashboards, and visualizations for many teams at Scopely, including Revenue Operations and Product Management. (Technical: Pandas/Python to Tableau, d3)
- Creating an ad-mediation service that applies intelligent algorithms to determine the optimal ad selection for each user. (Technical: greenfield)
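To give a flavor of the ingestion and normalization work described above, here is a minimal sketch in Python/Pandas. The provider names, payload shapes, and field names are hypothetical; a real pipeline would pull these records from each provider's REST API before this step.

```python
# Minimal sketch: normalize records from several ad providers into one schema.
# Provider names and field names below are illustrative assumptions.
import pandas as pd

def normalize_ad_records(records, provider):
    """Map one provider's raw records onto a shared schema."""
    # Each provider names its fields differently; rename them to a
    # common (date, placement, impressions, revenue_usd) schema.
    field_maps = {
        "provider_a": {"day": "date", "slot": "placement",
                       "imps": "impressions", "rev": "revenue_usd"},
        "provider_b": {"report_date": "date", "ad_unit": "placement",
                       "impression_count": "impressions",
                       "earnings": "revenue_usd"},
    }
    df = pd.DataFrame(records).rename(columns=field_maps[provider])
    df["provider"] = provider
    return df[["provider", "date", "placement", "impressions", "revenue_usd"]]

a = normalize_ad_records(
    [{"day": "2015-06-01", "slot": "banner", "imps": 1000, "rev": 2.5}],
    "provider_a")
b = normalize_ad_records(
    [{"report_date": "2015-06-01", "ad_unit": "banner",
      "impression_count": 800, "earnings": 1.9}],
    "provider_b")

# Aggregate across providers into a single daily rollup.
combined = pd.concat([a, b], ignore_index=True)
daily = combined.groupby("date")[["impressions", "revenue_usd"]].sum()
```

The same rename-then-aggregate pattern scales from a handful of providers to many; only the field maps grow.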
What would you need to get the job done?
- Batch-processing experience with Hadoop or similar distributed data systems (Voldemort, etc.)
- Programming experience in Python, Perl, or shell scripting; object-oriented design experience
- SQL mastery. Inner and outer joins, window functions; you should know it all.
- Experience with third-party API integration.
- AWS or similar cloud experience. S3/Redshift experience is a big plus.
- Experience in high-throughput environments.
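As an example of the window-function fluency the role calls for, here is a small sketch using Python's stdlib sqlite3 (window functions require a SQLite build of 3.25 or newer). The table and column names are illustrative only; in practice this kind of query would run against Redshift or MySQL.

```python
# Sketch: a per-user running revenue total via a SQL window function.
# Uses in-memory SQLite; schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (user_id TEXT, day TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [("u1", "2015-06-01", 4.99), ("u1", "2015-06-02", 9.99),
     ("u2", "2015-06-01", 0.99)],
)

# SUM(...) OVER (PARTITION BY ... ORDER BY ...) computes a cumulative
# total per user without collapsing the rows, unlike GROUP BY.
rows = conn.execute("""
    SELECT user_id, day, amount,
           SUM(amount) OVER (
               PARTITION BY user_id ORDER BY day
           ) AS running_total
    FROM purchases
    ORDER BY user_id, day
""").fetchall()
```

The key distinction from a plain GROUP BY is that each input row survives with its own running aggregate attached, which is what dashboards and cohort analyses usually need.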