Job Detail

Analytics Data Developer

  • Job type: Remote
  • Job duration: 3 to 6 months
  • Project level: Basic
  • Project deadline: Expired

Project detail

The Opportunity

    • At Symend, we have a Data Analytics Platform with Snowflake as the repository and Looker as the self-serve layer. We collect data using the EL+T pattern: we bring data in its original format up to the doorstep of Snowflake, load it in, and then run transformations so it moves from raw/bronze to cleansed/silver to conformed/gold format (a hypothetical sketch of one such transformation follows this list). In the gold zone, we build data models that empower reporting, exploratory analysis and discovery of insights by the analysis and science teams that serve the clients (and by the clients themselves).
    • You will join the Analytics Platform Backend team, a team critical to the build, maintenance and continuous improvement of the data platform. This includes the components of the platform, the data pipelines to collect / store / transform the data, and the data models themselves.
    • You’ll follow good SDLC practices: git branching, keeping documents and JIRA tasks updated, commenting your code, writing unit tests, building the code itself, testing it, peer-reviewing others’ code, etc.
    • You’ll factor security, integrity, data quality and governance into your solutions. You’re thinking not just about solving the problem, but about solving it in a way that is scalable and sustainable for the long term. Sometimes you’ll need to ship an MVP or rapid release of your solution; this is encouraged as long as the long-term solution is documented in JIRA and plugged into a future sprint.
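
As a rough sketch of the bronze-to-silver step described above (the model, source and column names here are hypothetical, not Symend’s actual schema), a cleansing model in DBT targeting Snowflake might look like:

    -- models/silver/stg_payments.sql (hypothetical model)
    -- Cleanses raw payment events landed in the bronze zone:
    -- casts types, standardizes names, and deduplicates.
    {{ config(materialized='view') }}

    with source as (
        select * from {{ source('bronze', 'raw_payments') }}
    ),

    cleansed as (
        select
            cast(payment_id as varchar)       as payment_id,
            cast(amount as number(12, 2))     as amount,
            try_to_timestamp_ntz(event_time)  as event_at,
            lower(trim(status))               as status
        from source
        -- Snowflake’s QUALIFY keeps only the latest record per key
        qualify row_number() over (
            partition by payment_id order by event_time desc
        ) = 1
    )

    select * from cleansed

A gold-zone model would then join and conform such silver models into the structures that analysts query through Looker.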

This role is a good fit for you if:

    • You’re the kind of person who likes a good challenge, is ready to roll up your sleeves and dig into a problem, and can pump out code that makes data dance for you.
    • You take pride in writing readable code, add comments so others can follow it, and you’re ready to step up and help when things go sideways.
    • You’re cool with inheriting code that you didn’t write. You empathize that someone in the history of this code had to balance their own constraints; it is what it is and you’re here to help make it better.
    • You appreciate the weekly mix of building solutions towards a roadmap alongside root cause analysis of problems as they arise.
    • You’re part of a team, so you know how to balance independent work with teamwork, and you keep your team informed of your progress.
    • You have a very collaborative attitude and you want to learn as you go!

What You’ll Be Doing:

      • Build data ingest, propagation and transformation code in DBT, thinking about reusability and data modeling as you go;
      • Validate (and sometimes propose) data models to support analysis;
      • Practice diligent git branching, with regular commits and pushes;
      • Keep documents and JIRA tasks updated with comments and caveats;
      • Write JIRAs for opportunities for improvement (or where you took a shortcut and need to clean up the TODO in a future sprint);
      • Write legible code, using descriptive variable names and comments so that any of your peers can pick up your code without significant knowledge transfer;
      • Write unit tests, develop code that will pass those tests, and test the code yourself (a hypothetical DBT test sketch follows this list);
      • Perform code reviews and be comfortable with your code being reviewed by others;
      • Adhere to the design standards that exist, and apply new design standards as they are produced;
      • Perpetually look for areas of improvement with a focus on data quality, integrity and governance;
      • Perform root cause analyses on bugs and on long-running queries and stages in the data pipelines (see the triage query sketch after this list);
      • Perform data mapping activities to describe source data, target data, and the high-level or detailed transformations that need to occur.
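
To make the testing bullet concrete: DBT supports singular tests as SQL files under tests/, and it treats every row returned by such a query as a test failure. The model and column names below are illustrative, not part of the actual project:

    -- tests/assert_no_future_payment_events.sql (hypothetical test)
    -- Asserts that no cleansed event is timestamped in the future;
    -- any rows returned here cause the dbt test to fail.
    select payment_id, event_at
    from {{ ref('stg_payments') }}
    where event_at > current_timestamp()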
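
And as a first triage step for the root-cause-analysis bullet, one common approach on Snowflake is to rank recent queries by elapsed time using the documented ACCOUNT_USAGE.QUERY_HISTORY view (the 7-day window and LIMIT here are arbitrary illustrative choices):

    -- Surface the slowest queries of the past week as RCA candidates.
    -- TOTAL_ELAPSED_TIME is reported in milliseconds.
    select
        query_id,
        left(query_text, 200)              as query_snippet,
        warehouse_name,
        total_elapsed_time / 1000          as elapsed_seconds,
        start_time
    from snowflake.account_usage.query_history
    where start_time >= dateadd('day', -7, current_timestamp())
    order by total_elapsed_time desc
    limit 20;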

Requirements:

    • 2 years of experience with the Snowflake cloud platform;
    • 2 years of experience with agile sprints;
    • 5 years of experience as an integration developer in a SQL Server environment with Visual Studio and git branching patterns;
    • Experience with data modelling patterns (e.g. ODS, Kimball, 3NF, OLTP vs. OLAP);
    • Advanced SQL skills;
    • Experience with Looker would be an asset;
    • Experience with DBT would be an asset;
    • Experience in python-based data development would be an asset;
    • Collaborative with a can-do attitude;
    • Strong familiarity with the Snowflake environment;
    • Able to work independently within the scope of the role, with a good understanding of when to escalate;
    • Ready and willing to share what you’ve produced;
    • Takes pride in the work done, is willing to accept technical criticism of it, and is willing to explain and defend the choices made (e.g. by describing the scenarios considered);
    • Problem-solving aptitude; eager to learn new technologies and new patterns;
    • BSc/BA in Computer Science, Engineering or relevant field would be an asset.
All positions require background screening. This will include criminal and education checks to comply with regulations.
