
All remote jobs at G5

Skills: Python, Git, data structures, Amazon Web Services, Linux

Who You Are: 

You are an enthusiastic and capable Data Engineer who is excited to join a growing team of data and analytics experts who are central to the suite of products and services G5 offers. You have experience with expanding and optimizing data and data pipeline architecture, as well as optimizing data flow and collection for use by various teams across a software company. Ideally, you are an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. 

Here at G5, the Data Engineer will support our traditional software developers, data scientists, and business analysts on data initiatives and will ensure high standards for data availability and fidelity are met. Your ability to self-direct and the ease with which you support the data needs of multiple teams, systems, and products make you a great fit for this opportunity. In addition, you are excited by the prospect of optimizing, or even re-designing, G5’s data architecture to support the next generation of cutting-edge products and data initiatives.

Does this sound like you? If so, apply today and let’s start the conversation!

What You’ll Do:

  • Create and maintain resilient data pipeline architectures.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with stakeholders including the Product, Data Science, and other Software Engineering teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across multiple data centers and regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our products.
  • Assist in the deployment and maintenance of machine learning and statistical models as ingestible, usable, and actionable products that scale and are highly available.
  • Build and maintain reporting infrastructure, including but not limited to data warehousing and ETL.
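The responsibilities above center on data pipelines: extract raw records, validate and normalize them, and load them into a store analysts can query. A minimal sketch of that pattern in plain Python (all names here — extract, transform, load, the sample records — are invented for illustration, not G5 code):

```python
def extract():
    """Pull raw records from a source (hard-coded here for illustration)."""
    return [
        {"property_id": "1", "visits": "42"},
        {"property_id": "2", "visits": "17"},
        {"property_id": "bad", "visits": None},  # malformed row
    ]

def transform(records):
    """Normalize types and drop rows that fail validation."""
    cleaned = []
    for row in records:
        try:
            cleaned.append({
                "property_id": int(row["property_id"]),
                "visits": int(row["visits"]),
            })
        except (KeyError, TypeError, ValueError):
            continue  # skip malformed rows rather than failing the whole batch
    return cleaned

def load(records, warehouse):
    """Append validated rows to an in-memory 'warehouse' table."""
    warehouse.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

In production this structure is usually expressed as orchestrated tasks (e.g. in Airflow, mentioned below) rather than direct function calls, so each stage can be scheduled, retried, and monitored independently.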


Skills & requirements

What Experience You Have: 

  • 5-7 years of professional software development and/or data engineering experience working with teams
  • Experience with engineering best practices: TDD, CI, and Scrum
  • Degree or applicable experience in computer science, software engineering, or related field

What Skills You Have:

  • Python, Java, Ruby/Rails, JavaScript, and/or other general-purpose programming languages
  • SQL, NoSQL, BQSQL, PostgreSQL, MySQL, GraphQL
  • Airflow, data pipelines, similar technologies
  • Data Warehousing, ETL, star schema
  • Looker, LookML
  • Git, GitHub
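The star schema named in the list above organizes a warehouse around a central fact table (measurements) joined to dimension tables (descriptive attributes). A small self-contained sketch using SQLite; the table and column names are invented for this example:

```python
import sqlite3

# Illustrative star schema: one fact table keyed to a dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_property (property_id INTEGER PRIMARY KEY, market TEXT);
    CREATE TABLE fact_visits (
        property_id INTEGER REFERENCES dim_property(property_id),
        visits INTEGER
    );
    INSERT INTO dim_property VALUES (1, 'Portland'), (2, 'Bend');
    INSERT INTO fact_visits VALUES (1, 40), (1, 2), (2, 17);
""")

# A typical star-schema query: aggregate facts grouped by a dimension attribute.
rows = conn.execute("""
    SELECT d.market, SUM(f.visits) AS total_visits
    FROM fact_visits f
    JOIN dim_property d USING (property_id)
    GROUP BY d.market
    ORDER BY d.market
""").fetchall()
```

The same shape scales to warehouse engines like BigQuery: facts stay narrow and numerous, dimensions stay small and descriptive, and reporting queries join and aggregate across them.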

What Skills You May Have (not required, but nice):

  • GCP, including GCS, Cloud SQL, BigQuery, Cloud Composer, Dataproc, GKE
  • AWS, including Lambda, Step Functions, DynamoDB, CloudFormation, CloudWatch, RDS, EC2, S3
  • Linux, Bash
  • Docker, Kubernetes
  • Magpie, Jupyter
  • Spark, Hive, HiveQL
  • Machine learning concepts and best practices