Big Data Engineer
Information Technology | Santa Monica, CA, United States

The Big Data Engineer will be a key member of the EdgeCast Engineering/R&D team. This individual will be responsible for creating, maintaining, and optimizing complex software and systems distributed globally across all EdgeCast datacenters, running on thousands of servers. These systems are mission-critical, fault-tolerant solutions that handle customers' web application traffic and are monitored and maintained by EdgeCast's 24/7 network operations center.

You must be an experienced, self-motivated, top-tier software engineer with proven problem-solving abilities. You will be part of the core engineering team, working in a fast-paced environment to create and invent solutions that make the internet faster, more secure, more scalable, and more reliable. EdgeCast highly values technical expertise, as it is a critical component of our product offerings. This role offers a unique opportunity to work on Big Data platforms such as Elasticsearch, Hadoop, Redis, and Spark.


Responsibilities:
  • Design, build, and deploy highly confidential projects involving cutting-edge technology and massive server clusters
  • Build new applications and enhance existing ones using C/C++
  • Research and analyze application behavior to improve performance and stability
  • Work within our global network to optimize applications for linear scaling
  • Create test cases and monitoring tools for any changes to both new and existing applications
  • Provide tier-3 engineering support to troubleshoot complex problems


Qualifications:
  • BS degree in Computer Science / Engineering or a related field, or equivalent experience
  • MS or PhD from a top-tier school highly preferred
  • Thorough understanding of Linux filesystems
  • Experience with server applications such as nginx and lighttpd
  • Experience with MongoDB administration
  • Expert-level understanding of C/C++
  • 5+ years of experience with at least one scripting language: Python, Perl, or Bash
  • Deep understanding of Internet protocols including TCP/IP and HTTP
  • Deep understanding of multi-threaded and shared resource programming
  • Experience with the complete software development life cycle, from requirements to design, implementation, testing, and release
  • Ability to work on multiple projects simultaneously in a fast-paced environment
  • Knowledge of Operating System internals (memory management, scheduling, TCP/IP stack)
  • 3+ years of experience programming against Hadoop clusters (ideally CDH)
  • 5+ years of Java expertise with a focus on writing MapReduce jobs
  • Operational experience running Hadoop in production, including performance tuning