Seargin is a dynamic multinational tech company operating in 50 countries. At Seargin, we drive innovation and create projects that shape the future and greatly enhance the quality of life. You will find our solutions in the space industry, in research supporting scientists developing cancer drugs, and in innovative technological solutions for industrial clients worldwide. These are just some of the areas in which we operate.
Senior Data Engineer
Remote
EU
B2B
Senior
Design, develop, and implement efficient EL(T) data pipelines using SQL, with a strong emphasis on leveraging dbt for data transformation. Ensure these pipelines align with business requirements and are optimized for performance and scalability.
Create and maintain conceptual, logical, and physical data models to support business needs. Utilize best practices in data modeling, such as star schema, snowflake schema, and slowly changing dimensions (SCDs), to ensure robust and efficient data structures.
Design, build, and optimize distributed data warehousing solutions using Amazon Redshift. Implement best practices in data warehousing, including data partitioning, distribution keys, and performance tuning.
Manage all project code and SQL scripts using Git, ensuring version control, code reviews, and collaboration among team members. Implement Git best practices, including branching strategies and merge conflict resolution.
Apply a DataOps mindset to all data engineering processes, ensuring continuous integration, continuous deployment (CI/CD), and monitoring are part of the development lifecycle. Collaborate with cross-functional teams to automate and streamline data operations.
Utilize the Data Vault 2.0 methodology in data modeling and warehousing projects when applicable. Leverage its benefits for handling large volumes of data, historical tracking, and auditability.
Develop and maintain Python scripts for data processing tasks and automation. Utilize Jinja for dynamic SQL generation and templating within dbt or other SQL environments.
Design and implement CI/CD pipelines for automated testing, deployment, and monitoring of data pipelines and models. Ensure these pipelines integrate with existing development tools and processes.
Collaborate with the DevOps team to integrate data engineering processes with broader IT infrastructure. Implement monitoring, alerting, and logging practices to maintain data pipeline reliability and uptime.
Leverage knowledge of the Pharma Commercial Data landscape to design and implement data solutions tailored to industry-specific needs. Ensure compliance with regulatory requirements and optimize data models for commercial analytics.
Stay up to date with the latest advancements in data engineering, data warehousing, and related technologies. Maintain and pursue certifications (e.g., dbt, Redshift) to ensure skills are current and aligned with industry standards.
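To illustrate the slowly-changing-dimension handling mentioned in the responsibilities above, here is a minimal Python sketch of SCD Type 2 logic. The record fields (`key`, `city`, `valid_from`, `valid_to`, `is_current`) are invented for illustration and are not part of any actual project codebase:

```python
from datetime import date

def apply_scd2(dimension, key, new_attrs, as_of):
    """Apply an SCD Type 2 change: expire the current row for `key`
    and append a new current row carrying `new_attrs`.
    History is preserved rather than overwritten."""
    for row in dimension:
        if row["key"] == key and row["is_current"]:
            row["is_current"] = False
            row["valid_to"] = as_of
    dimension.append({
        "key": key,
        **new_attrs,
        "valid_from": as_of,
        "valid_to": None,
        "is_current": True,
    })

# Hypothetical example: a customer moves city; the old row is
# closed out and a new current row is appended.
dim_customer = [{"key": 1, "city": "Gdansk",
                 "valid_from": date(2020, 1, 1), "valid_to": None,
                 "is_current": True}]
apply_scd2(dim_customer, 1, {"city": "Warsaw"}, date(2024, 6, 1))
```

In a dbt project this pattern is typically expressed declaratively (e.g., via snapshots) rather than in imperative Python; the sketch only shows the underlying idea.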
Employment based on a B2B contract
Opportunity to work in a stable, dynamically developing international company
Chance to participate in interesting projects and work with the latest information technologies
Attractive remuneration rates
Involvement in the most prestigious international projects
Access to Multisport benefits and private healthcare services
Must hold a dbt certification, demonstrating proficiency in using dbt for data transformation and pipeline design.
Demonstrated ability to process and manipulate large datasets efficiently, with a strong understanding of data processing techniques and tools.
Proven experience in designing conceptual, logical, and physical data models. Ability to apply best practices in data modeling, including the use of dimensions, facts, star schema, snowflake schema, and slowly changing dimensions (SCDs).
Extensive experience in designing and implementing EL(T) data pipelines using SQL. Familiarity with using dbt for data pipeline automation and optimization is highly preferred.
Solid understanding of traditional data warehousing (DW) relational concepts, including dimensions, facts, star schema, snowflake schema, and SCDs.
Strong grasp of the fundamentals of distributed data warehousing, with specific experience in Amazon Redshift or similar distributed data warehouse technologies.
Essential knowledge and hands-on experience with Git for version control, including branch management, code merging, and collaboration in a team environment.
Demonstrated ability to apply a DataOps mindset to data engineering processes, ensuring a focus on automation, continuous integration, and continuous deployment in the data lifecycle.
Familiarity with the Data Vault 2.0 methodology for data modeling, particularly in environments requiring scalable, auditable, and historical data storage.
Experience with Python for data processing and automation tasks. Proficiency in using Jinja for SQL templating and dynamic query generation within dbt or other SQL environments.
Hands-on experience in designing and implementing CI/CD pipelines, particularly for automated testing, deployment, and monitoring of data pipelines and models.
Skills in integrating DevOps practices with data engineering processes, focusing on automating, monitoring, and improving the reliability of data workflows.
Understanding of the Pharma Commercial Data landscape, enabling the design of industry-specific data solutions and ensuring compliance with regulatory standards.
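The Jinja-templated SQL generation listed in the requirements can be sketched in a self-contained way. dbt itself uses Jinja; the example below substitutes Python's stdlib `string.Template` as a stand-in to show the idea, and the schema, table, and column names are invented:

```python
from string import Template

# Hypothetical extract template: one SELECT per source table,
# parameterized by schema, table, column list, and incremental cutoff.
# (dbt would express this with Jinja; string.Template keeps the
# sketch dependency-free.)
SQL_TEMPLATE = Template(
    "SELECT ${columns} FROM ${schema}.${table} "
    "WHERE updated_at >= ${cutoff}"
)

def render_extract_sql(schema, table, columns, cutoff):
    """Render an incremental-extract statement from the template."""
    return SQL_TEMPLATE.substitute(
        columns=", ".join(columns),
        schema=schema,
        table=table,
        cutoff=f"'{cutoff}'",
    )

sql = render_extract_sql("raw", "orders", ["id", "amount"], "2024-01-01")
# -> "SELECT id, amount FROM raw.orders WHERE updated_at >= '2024-01-01'"
```

The same pattern scales to generating one statement per table in a source manifest, which is essentially what Jinja loops inside dbt models and macros provide.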