The Walt Disney Company: Be Part of the Story


Customer Solutions Engineering Data Engineer II

Job ID: 763796BR
Location: Burbank, California, United States
Business: Direct-to-Consumer and International
Date posted: Jul. 28, 2020

Job Summary:

Direct-to-Consumer and International (DTCI) is Disney’s global, multiplatform media, product, technology, and distribution organization for world-class content. DTCI comprises Disney’s international media and direct-to-consumer businesses, including all Disney networks outside the US, ESPN+, Disney+, and the Company’s ownership stake in Hulu. The DTCI Tech organization focuses on ground-breaking innovation, driving the strategic use and development of technology to power the DTCI business and deliver world-class products and experiences to consumers around the world. The Data Platform group within DTCI Tech is charged with developing a consolidated data platform that gives DTCI business stakeholders the means to access, activate, and analyze all of Disney’s consumer data in a secure and compliant way.

The Customer Solutions Engineering team is responsible for product adoption, onboarding, development, and sustainment support for the Data Platform suite of products.
  • Technical Leadership - Serve as the primary technical resource for onboarding new products to the Data Platform
  • Build cool things – Build scalable analytics solutions, including processing, storing, and serving large-scale data through batch and streaming pipelines, with analytics covering both behavioral and ad-revenue data across digital and non-digital channels
  • Harness curiosity – Change the way we think, act, and use our data by performing exploratory and quantitative analytics, data mining, and discovery
  • Innovate and inspire – Think of new ways to make our data platform more scalable, resilient, and reliable, then work across our team to put your ideas into action
  • Think at scale - Lead the transformation of a petabyte-scale batch processing platform into a near real-time streaming platform using technologies such as Apache Kafka, Cassandra, Spark, and other open source frameworks
  • Have pride – Ensure performance isn’t our weakness by implementing and refining robust data processing in Python, Java, or Scala, alongside database technologies such as Redshift or Snowflake
  • Grow with us – Help us stay ahead of the curve by working closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that scale elastically in ways that make other groups jealous
  • Lead and coach – Mentor other software engineers by developing reusable frameworks; review design and code produced by other engineers
  • Build and Support – Embrace the DevOps mentality to build, deploy, and support applications in the cloud with minimal help from other teams
  • Experiment - Drive and maintain a culture of quality, innovation, and experimentation
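The batch-to-streaming transformation mentioned above can be sketched in miniature with plain Python. This is purely illustrative: the actual platform would use Kafka topics and Spark or Flink state stores rather than an in-memory counter, and the event shape and field names here are assumptions, not anything specified in the posting.

```python
from collections import Counter

def batch_counts(events):
    """Batch style: process the complete dataset in one pass."""
    return Counter(e["type"] for e in events)

class StreamingCounts:
    """Stream style: fold events in one at a time, keeping running state.

    In a real deployment the events would arrive via Kafka partitions and
    the state would live in a Spark/Flink state store; here the state is
    just an in-memory Counter to show the shape of the computation."""

    def __init__(self):
        self.state = Counter()

    def consume(self, event):
        # Update running state and return a snapshot after each event,
        # so downstream consumers see near real-time results.
        self.state[event["type"]] += 1
        return dict(self.state)
```

Processed over the same events, the streaming fold converges to exactly what the batch job would have computed, which is the property that makes a batch-to-streaming migration safe to validate.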

Responsibilities:

  • Have 2+ years of experience developing data-driven applications using a mix of languages (Java, Scala, Python, SQL, etc.) and open source frameworks to implement data ingest, processing, and analytics
  • Data and API ninja – You are very handy with big data frameworks such as Hadoop and Apache Spark, NoSQL systems such as Cassandra or DynamoDB, and streaming technologies such as Apache Kafka; you understand reactive programming and dependency-injection frameworks such as Spring for developing REST services
  • Have a technology toolbox – Hands-on experience with newer technologies relevant to the data space, such as Spark, Airflow, Apache Druid, and Snowflake (or other OLAP databases)
  • Cloud First - Plenty of experience developing and deploying in a cloud-native environment, preferably AWS
  • Embrace ML – Work with data scientists to operationalize machine learning models and build applications that harness the power of machine learning
  • Problem solver – Enjoy new and meaningful technology or business challenges that require you to think and respond quickly
  • Passion and creativity – Are passionate about data, technology, and creative innovation
  • Strive for Operational Excellence - Continually strive to deliver operational excellence by leveraging a deep understanding of end-to-end use cases to assist teams with test automation, creation of synthetic tests, and production support
  • Ongoing support - Provide documentation, training, and support for teams managing the platforms
  • Process improvements - Assist in the continual evolution of the team's processes and metrics to make the team more efficient and increase the value we add to the organizations we support
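The "operationalize machine learning models" responsibility, in its simplest form, means wrapping a model a data scientist has trained behind a validated scoring function. A minimal sketch in plain Python follows; the feature names, coefficients, and churn-model framing are entirely hypothetical illustrations, and in production the function would sit behind a REST service and load a serialized model artifact rather than hard-coded weights.

```python
import math

# Hypothetical hand-off from a data scientist: logistic-regression
# coefficients for an illustrative churn model. Names and values are
# made up for this sketch.
WEIGHTS = {"watch_hours": -0.8, "days_since_login": 0.5}
BIAS = 0.1

def score(features):
    """Validate the input, apply the model, return a probability.

    Rejecting malformed requests up front is most of what
    "operationalizing" adds on top of the raw model."""
    missing = set(WEIGHTS) - set(features)
    if missing:
        raise ValueError(f"missing features: {sorted(missing)}")
    z = BIAS + sum(WEIGHTS[k] * float(features[k]) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability in (0, 1)
```

The validation step is the point of the sketch: a notebook model tolerates missing columns silently, while a production scoring path must fail loudly and predictably.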


Basic Qualifications:

  • 2+ years of relevant software development experience
  • 2+ years of cloud application architecture and engineering
  • Experience developing data-driven applications
  • Excellent communication (written and verbal); ability to communicate with other developers as well as business units and internal stakeholders
  • Basic knowledge and understanding of application lifecycle management, software development, and Agile development/testing practices and methodologies
  • Basic knowledge of common performance issues, a solid working knowledge of logs and monitoring tools, and the ability to work with developers to determine root cause
  • Demonstrated ability to work independently as well as part of a cross-functional team
  • Ability to work and participate in a team with open and collaborative style of communications
  • Must be a proactive self-starter

Preferred Qualifications:

  • Bachelor's or Master's degree in Computer Science or a similar field preferred
  • Knowledge of consumer identity and big data platforms
  • Experience with Content Personalization/Recommendation, Audience Segmentation for Linear to Digital Ad Sales, and/or Analytics
  • Experience with open source technologies such as Spring, Hadoop, Spark, Kafka, Druid, and Kubernetes
  • Experience in working with Data Scientists to operationalize machine learning models


About The Walt Disney Company:

The Walt Disney Company, together with its subsidiaries and affiliates, is a leading diversified international family entertainment and media enterprise with the following business segments: media networks, parks and resorts, studio entertainment, consumer products, and interactive media. From humble beginnings as a cartoon studio in the 1920s to its preeminent position in the entertainment industry today, Disney proudly continues its legacy of creating world-class stories and experiences for every member of the family. Disney’s stories, characters, and experiences reach consumers and guests from every corner of the globe. With operations in more than 40 countries, our employees and cast members work together to create entertainment experiences that are both universally and locally cherished.
