Careers At Seargin


Data Virtualization Software Engineer

Seargin is a dynamic multinational tech company operating in 50 countries. At Seargin, we drive innovation and create projects that shape the future and greatly enhance the quality of life. You will find our solutions in the space industry, supporting scientists in the development of cancer drugs, and implementing innovative technological solutions for industrial clients worldwide. These are just some of the areas in which we operate.


Your responsibilities:

  • System Architecture Design

    Develop solutions emphasizing data virtualization, distributed processing, and large-scale data storage in cloud-based architectures.

  • Solution Development and Deployment

    Construct and deploy robust, scalable data distribution systems utilizing tools like Denodo, Python, and Spark, ensuring efficient data delivery to various business components.

  • Data Management and Optimization

    Manage complex datasets using Denodo, ADLS, Databricks, and Kafka, focusing on data modeling, replication, and efficient SQL query optimization.

  • Technology Integration and Application

    Apply advanced programming skills in VQL and Python, integrating machine learning and AI technologies such as TensorFlow to enhance data functionalities.

  • Project Delivery and Quality Assurance

    Collaborate with architects and analysts to design and deliver high-quality, bug-free software modules, ensuring adherence to project specifications and SDLC standards.

  • Communication and Team Collaboration

    Effectively communicate technical decisions and project status to ensure alignment with the project management team and stakeholders across multiple sites.
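
    The responsibilities above center on data virtualization: exposing a single SQL layer over separate data sources, which is the core idea behind tools like Denodo. As a toy illustration only, using Python's built-in sqlite3 rather than Denodo's VQL, and with made-up table and column names:

    ```python
    # Illustrative sketch of the data-virtualization concept: one queryable
    # view unifying separate "source" systems, without copying data.
    # Uses stdlib sqlite3; all schema names here are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Two source tables standing in for separate backend systems.
    cur.execute("CREATE TABLE crm_customers (id INTEGER, name TEXT)")
    cur.execute("CREATE TABLE erp_customers (id INTEGER, name TEXT)")
    cur.executemany("INSERT INTO crm_customers VALUES (?, ?)",
                    [(1, "Alice"), (2, "Bob")])
    cur.executemany("INSERT INTO erp_customers VALUES (?, ?)",
                    [(3, "Carol")])

    # A view presents both sources behind one name; consumers query it
    # like a table, while the data stays in the underlying sources.
    cur.execute("""
        CREATE VIEW all_customers AS
        SELECT id, name, 'crm' AS source FROM crm_customers
        UNION ALL
        SELECT id, name, 'erp' AS source FROM erp_customers
    """)

    count = cur.execute("SELECT COUNT(*) FROM all_customers").fetchone()[0]
    print(count)  # 3
    ```

    In a production setting, a platform such as Denodo applies the same principle across heterogeneous remote sources (databases, APIs, files), adding caching and query pushdown that this sketch omits.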

Our requirements:


  • Professional Experience

    8-10 years in the tech industry, including at least 4-5 years focused on data virtualization and streaming technologies.

  • Technical Proficiency

    In-depth knowledge and hands-on experience with Denodo, Python, Spark, and other open-source technologies used for large-scale data processing and distribution.

  • Advanced Data Handling Skills

    Proficiency in managing and optimizing large datasets with tools such as ADLS, Databricks, and Kafka, including experience with data-handling libraries such as NumPy, SciPy, and pandas.

  • Innovative Approach

    Strong capability in designing innovative solutions using machine learning and AI libraries to improve data processes and outputs.

  • Collaborative Skills

    Proven ability to work effectively in a collaborative, agile development environment, with a strong emphasis on cross-functional team cooperation.

  • Quality and Standards Adherence

    Detailed understanding of SDLC processes, including code control, inspection, and deployment, ensuring high-quality software development.
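
    The NumPy requirement above refers to vectorized data handling: operating on whole arrays at once instead of looping row by row. A minimal sketch, with made-up figures chosen purely for illustration:

    ```python
    # Illustrative sketch of vectorized computation with NumPy:
    # one array expression replaces an explicit per-row Python loop.
    import numpy as np

    prices = np.array([10.0, 20.0, 30.0, 40.0])
    quantities = np.array([1, 2, 3, 4])

    # Element-wise multiply and reduce, all inside NumPy's C loops.
    revenue = prices * quantities
    total = float(revenue.sum())
    print(total)  # 300.0
    ```

    The same pattern scales to the large datasets mentioned above, where avoiding Python-level loops is what makes processing tractable.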

Apply & join the team

    Ready to take the next step in your career? Let’s start the conversation.

    Reach out to learn more