
Analytics Engineer

Seargin is a dynamic multinational tech company operating in 50 countries. At Seargin, we drive innovation and create projects that shape the future and greatly enhance the quality of life. You will find our solutions in the space industry, in tools supporting scientists developing cancer drugs, and in innovative technology implemented for industrial clients worldwide. These are just some of the areas in which we operate.

Position:

Analytics Engineer

Location:

Remote

Country:

EU

Form of employment:

B2B

Experience level:

Senior

Responsibilities:

  • Data Pipeline Design and Implementation

    Design, develop, and implement efficient EL(T) data pipelines using SQL, with a strong emphasis on leveraging dbt for data transformation. Ensure these pipelines align with business requirements and are optimized for performance and scalability.

  • Data Modeling and Architecture

    Create and maintain conceptual, logical, and physical data models to support business needs. Utilize best practices in data modeling, such as star schema, snowflake schema, and slowly changing dimensions (SCDs), to ensure robust and efficient data structures.

  • Data Warehouse Development

    Design, build, and optimize distributed data warehousing solutions using Amazon Redshift. Implement best practices in data warehousing, including data partitioning, distribution keys, and performance tuning.

  • Git Version Control and Collaboration

    Manage all project code and SQL scripts using Git, ensuring version control, code reviews, and collaboration among team members. Implement Git best practices, including branching strategies and merge conflict resolution.

  • DataOps Process Implementation

    Apply a DataOps mindset to all data engineering processes, ensuring continuous integration, continuous deployment (CI/CD), and monitoring are part of the development lifecycle. Collaborate with cross-functional teams to automate and streamline data operations.

  • Data Vault 2.0 Methodology

    Utilize the Data Vault 2.0 methodology in data modeling and warehousing projects when applicable. Leverage its benefits for handling large volumes of data, historical tracking, and auditability.

  • Python and Jinja Scripting

    Develop and maintain Python scripts for data processing tasks and automation. Utilize Jinja for dynamic SQL generation and templating within dbt or other SQL environments.

  • CI/CD Pipeline Development

    Design and implement CI/CD pipelines for automated testing, deployment, and monitoring of data pipelines and models. Ensure these pipelines integrate with existing development tools and processes.

  • DevOps Integration

    Collaborate with the DevOps team to integrate data engineering processes with broader IT infrastructure. Implement monitoring, alerting, and logging practices to maintain data pipeline reliability and uptime.

  • Pharma Commercial Data Expertise

    Leverage knowledge of the Pharma Commercial Data landscape to design and implement data solutions tailored to industry-specific needs. Ensure compliance with regulatory requirements and optimize data models for commercial analytics.

  • Continuous Learning and Certification

    Stay up-to-date with the latest advancements in data engineering, data warehousing, and related technologies. Maintain and pursue certifications (e.g., dbt, Redshift) to ensure skills are current and aligned with industry standards.
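To illustrate one of the data-modeling responsibilities above: a Type 2 slowly changing dimension (SCD) preserves history by closing out the current row and appending a new version whenever a tracked attribute changes. The sketch below is a minimal, hypothetical illustration of that pattern in plain Python (in practice this would be a dbt snapshot or SQL merge); all names and fields are invented for the example.

```python
from datetime import date

def scd2_apply(dim_rows, incoming, key, tracked, today):
    """Type 2 SCD merge (illustrative): close the current version and
    append a new one when any tracked attribute changes."""
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for rec in incoming:
        old = current.get(rec[key])
        if old is None:
            # brand-new key: open its first version
            out.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
        elif any(old[c] != rec[c] for c in tracked):
            # tracked attribute changed: close the old version, open a new one
            old["valid_to"] = today
            old["is_current"] = False
            out.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
    return out

dim = [{"cust_id": 1, "segment": "retail", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
rows = scd2_apply(dim, [{"cust_id": 1, "segment": "hospital"}],
                  key="cust_id", tracked=["segment"], today=date(2024, 6, 1))
# rows now holds the closed "retail" version and a current "hospital" version
```

The same close-and-append logic is what dbt snapshots generate under the hood for timestamp- or check-based strategies.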


Requirements:

  • dbt Certification

    Must hold a dbt certification, demonstrating proficiency in using dbt for data transformation and pipeline design.

  • Proficiency in Data Processing

    Demonstrated ability to process and manipulate large datasets efficiently, with a strong understanding of data processing techniques and tools.

  • Experience in Data Modeling

    Proven experience in designing conceptual, logical, and physical data models. Ability to apply best practices in data modeling, including the use of dimensions, facts, star schema, snowflake schema, and slowly changing dimensions (SCDs).

  • Expertise in SQL for EL(T) Pipeline Design

    Extensive experience in designing and implementing EL(T) data pipelines using SQL. Familiarity with using dbt for data pipeline automation and optimization is highly preferred.

  • Knowledge of Traditional Data Warehousing Concepts

    Solid understanding of traditional data warehousing (DW) relational concepts, including dimensions, facts, star schema, snowflake schema, and SCDs.

  • Understanding of Distributed Data Warehousing (Redshift)

    Strong grasp of the fundamentals of distributed data warehousing, with specific experience in Amazon Redshift or similar distributed data warehouse technologies.

  • Proficiency in Git for Version Control

    Essential knowledge and hands-on experience with Git for version control, including branch management, code merging, and collaboration in a team environment.

  • DataOps Mindset

    Demonstrated ability to apply a DataOps mindset to data engineering processes, ensuring a focus on automation, continuous integration, and continuous deployment in the data lifecycle.
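The requirements above center on SQL-based EL(T) and star-schema modeling. As a hedged sketch of that pattern (table and column names are invented for illustration, and Python's built-in sqlite3 stands in for Redshift), raw rows are landed first, then transformed into a dimension and a fact table that join back together:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "extract/load": raw landing table, as an EL(T) pipeline would stage it
cur.execute("CREATE TABLE raw_sales (sale_id INTEGER, product TEXT, amount REAL)")
cur.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)",
                [(1, "aspirin", 10.0), (2, "aspirin", 5.0), (3, "ibuprofen", 7.5)])

# "transform": derive a dimension and an aggregated fact, star-schema style
cur.execute("""CREATE TABLE dim_product AS
               SELECT DISTINCT product AS product_name FROM raw_sales""")
cur.execute("""CREATE TABLE fct_sales AS
               SELECT product AS product_name,
                      SUM(amount) AS total_amount,
                      COUNT(*)    AS n_sales
               FROM raw_sales GROUP BY product""")

rows = cur.execute("""SELECT d.product_name, f.total_amount
                      FROM fct_sales f
                      JOIN dim_product d USING (product_name)
                      ORDER BY d.product_name""").fetchall()
# rows == [('aspirin', 15.0), ('ibuprofen', 7.5)]
```

In a dbt project, each of the two transform statements would typically live in its own model file, with the dependency graph, tests, and materialization handled by dbt rather than written by hand.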

Nice to have:

  • Experience with Data Vault 2.0 Methodology

    Familiarity with the Data Vault 2.0 methodology for data modeling, particularly in environments requiring scalable, auditable, and historical data storage.

  • Python and Jinja Experience

    Experience with Python for data processing and automation tasks. Proficiency in using Jinja for SQL templating and dynamic query generation within dbt or other SQL environments.

  • CI/CD Pipeline Experience

    Hands-on experience in designing and implementing CI/CD pipelines, particularly for automated testing, deployment, and monitoring of data pipelines and models.

  • DevOps & DataOps Skills

    Skills in integrating DevOps practices with data engineering processes, focusing on automating, monitoring, and improving the reliability of data workflows.

  • Knowledge of the Pharma Commercial Data Landscape

    Understanding of the Pharma Commercial Data landscape, enabling the design of industry-specific data solutions and ensuring compliance with regulatory standards.
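Among the nice-to-haves, Data Vault 2.0 models typically key hubs and links with a deterministic hash of the business key rather than a surrogate sequence. The sketch below is illustrative only (implementations vary; MD5 is one common choice, and the normalization rules and names here are hypothetical assumptions, since real projects fix casing, trimming, and delimiter conventions team-wide):

```python
import hashlib

def hash_key(*business_keys, sep="||"):
    """Illustrative Data Vault-style hash key: normalize each business
    key, join multi-part keys with a delimiter, and hash the result."""
    normalised = sep.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

hub_customer_hk = hash_key(" cust-0042 ")        # hub: single business key
link_sale_hk = hash_key("CUST-0042", "PROD-7")   # link: composite key
```

Because the key is derived purely from the business key, hubs and links can be loaded in parallel from independent sources and still agree on identity, which is one of the scalability benefits the methodology claims.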

Apply & join the team



