Seargin is a dynamic multinational tech company operating in 50 countries. At Seargin, we drive innovation and create projects that shape the future and greatly enhance the quality of life. You will find our solutions in the space industry, in research supporting scientists developing cancer drugs, and in innovative technological deployments for industrial clients worldwide. These are just some of the areas in which we operate.
Data Virtualization Software Engineer
Wroclaw/Krakow
Poland
Senior
B2B
Develop solutions emphasizing data virtualization, distributed processing, and large-scale data storage in cloud-based architectures.
Construct and deploy robust, scalable data distribution systems utilizing tools like Denodo, Python, and Spark, ensuring efficient data delivery to various business components.
Manage complex datasets using Denodo, ADLS, Databricks, and Kafka, focusing on data modeling, replication, and SQL query optimization.
Apply advanced programming skills in VQL and Python, integrating machine learning and AI technologies such as TensorFlow to enhance data functionalities.
Collaborate with architects and analysts to design and deliver high-quality, bug-free software modules, ensuring adherence to project specifications and SDLC standards.
Effectively communicate technical decisions and project status to ensure alignment with the project management team and stakeholders across multiple sites.
Cooperation based on a B2B contract.
Opportunity to work in a stable, fast-growing international company.
Chance to participate in interesting projects and work with the latest technologies.
Attractive remuneration.
Involvement in prestigious international projects.
Access to Multisport benefits and private healthcare services.
8-10 years of experience in the tech industry, including at least 4-5 years focused on data virtualization and streaming technologies.
In-depth knowledge and hands-on experience with Denodo, Python, Spark, and open-source technologies used for large-scale data processing and distribution.
Proficiency in managing and optimizing large datasets with tools such as ADLS, Databricks, and Kafka, including experience with data libraries such as NumPy, SciPy, and pandas.
Strong capability in designing innovative solutions using machine learning and AI libraries to improve data processes and outputs.
Proven ability to work effectively in a collaborative, agile development environment, with a strong emphasis on cross-functional team cooperation.
Detailed understanding of SDLC processes, including version control, code inspection, and deployment, ensuring high-quality software development.