Contract Java Developer – Big Data

Opus Recruitment Solutions

Job Description

Opus has partnered with a major financial services organization to recruit a Contract Senior Java Data Engineer for a high-performing engineering squad. This is a 12-month mission-critical project focused on distributed data processing and large-scale workflow modernization.

The role is ideal for a software engineer who has moved from a strong JVM/Java background into Big Data and cloud-native pipelines. You will be at the forefront of re-platforming legacy workflows into modern, scalable AWS and Spark-based workloads that feed critical reporting pipelines across the business.


Key Requirements

  • Core Engineering: Strong Java background, specifically in Core Java, multithreading, and distributed systems.

  • Data Processing: Hands-on experience with Apache Spark or other large-scale data processing frameworks.

  • Cloud Native: Proficiency with AWS services, particularly Glue, S3, Lambda, and Step Functions.

  • Enterprise Modernization: Proven experience re-engineering legacy pipelines or upgrading business-critical workflows.

  • Collaboration: Ability to work within enterprise-scale squads, collaborating with platform and controls teams to set engineering standards.


What You’ll Be Doing

In this role, you will act as a bridge between high-level software engineering and complex data architecture. Day to day, you will build and enhance the Java-based data services and distributed processing components that form the backbone of the company’s reporting infrastructure, and re-engineer legacy pipelines into scalable AWS/Spark workloads optimized for performance and reliability. Beyond coding, you will lead delivery on individual workstreams, defining technical patterns and standards for adoption across the enterprise engineering environment.
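To make the "re-engineering legacy pipelines" work concrete, here is a minimal, purely illustrative sketch of the kind of side-effect-free record transform that ports cleanly from a legacy batch job into a Spark map step. The record format, field names, and class name are all hypothetical, not taken from the role.

```java
import java.util.List;
import java.util.Locale;

// Illustrative only: a pure, stateless transform of the kind that can be
// lifted out of a legacy batch loop and reused as a Spark map() function.
public class TradeNormalizer {

    // Normalizes a hypothetical raw CSV trade line "symbol,amountInPence":
    // trims and uppercases the symbol, converts pence to pounds.
    public static String normalize(String rawLine) {
        String[] parts = rawLine.split(",");
        String symbol = parts[0].trim().toUpperCase(Locale.ROOT);
        double pounds = Long.parseLong(parts[1].trim()) / 100.0;
        return symbol + "," + String.format(Locale.ROOT, "%.2f", pounds);
    }

    public static void main(String[] args) {
        List<String> raw = List.of(" vod , 12345", "bp ,500");
        // In a Spark job the same method reference would be passed to
        // dataset.map(TradeNormalizer::normalize, Encoders.STRING()).
        raw.stream().map(TradeNormalizer::normalize).forEach(System.out::println);
    }
}
```

Keeping transforms pure like this is what makes a legacy workflow portable: the same logic runs in a unit test, a local batch job, or a distributed Spark stage without change.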

Technical Environment & Stack

You will be working within a sophisticated tech stack designed for high-throughput financial data:

  • Languages: Core Java (distributed systems focus).

  • Processing: Apache Spark for heavy-lift data transformations.

  • AWS Serverless: Utilizing AWS Glue for ETL, Lambda for event-driven logic, and Step Functions for workflow orchestration.

  • Data Warehousing: Snowflake for scalable storage and reporting.

  • DevOps: Mature CI/CD pipelines and production monitoring within a large-scale enterprise framework.
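As a rough illustration of how these services fit together, here is a minimal Amazon States Language sketch in which Step Functions runs a Glue ETL job and then invokes a Lambda. The job and function names are placeholders, not details of this role's actual environment.

```json
{
  "Comment": "Illustrative only: run a Glue ETL job, then invoke a Lambda",
  "StartAt": "RunGlueEtl",
  "States": {
    "RunGlueEtl": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "example-etl-job" },
      "Next": "NotifyReporting"
    },
    "NotifyReporting": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "example-notify-fn" },
      "End": true
    }
  }
}
```

The `.sync` integration makes Step Functions wait for the Glue job to finish before moving on, which is the typical pattern for orchestrating ETL ahead of downstream reporting steps.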


Contract Details & Benefits

  • Duration: 12-Month Contract.

  • Rate: £500–£600 per day.

  • IR35 Status: Outside IR35.

  • Location: London or Birmingham (dual hubs).

Industry Context: Java in Big Data

While many data roles have shifted toward Python, the financial services sector remains heavily invested in the JVM for its performance in multithreaded and distributed environments.

Feature        | Java/JVM for Data               | Why it matters in Fintech
Concurrency    | Mature multithreading models    | Essential for high-frequency data ingestion.
Spark Support  | Native Scala/Java API           | Direct integration with Spark’s core engine.
Performance    | JIT compilation & static typing | Reduces runtime errors in critical reporting pipelines.
Ecosystem      | Spring, Maven, Gradle           | Seamless integration with existing enterprise services.
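The concurrency row above is the kind of thing interviewers probe. As a minimal, hypothetical sketch (feed names and the work simulated are invented), Core Java's executor and futures APIs let you fan per-feed ingestion across a thread pool and join the results:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: fan out per-feed ingestion across a fixed thread pool
// and aggregate the results, the basic JVM concurrency pattern for ingestion.
public class FeedIngestor {

    // Stand-in for real I/O: pretend each feed yields some record count.
    static int ingest(String feed) {
        return feed.length() * 100;
    }

    public static int ingestAll(List<String> feeds) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<CompletableFuture<Integer>> futures = feeds.stream()
                    .map(f -> CompletableFuture.supplyAsync(() -> ingest(f), pool))
                    .toList();
            // join() blocks until each feed's work completes.
            return futures.stream().mapToInt(CompletableFuture::join).sum();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(ingestAll(List.of("fx-spot", "equities", "rates")));
    }
}
```

The same fan-out/join shape scales from a single JVM thread pool up to distributed frameworks like Spark, which is why a strong multithreading background transfers so directly to Big Data work.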

How to Apply

If you are an engineer who thrives at the intersection of Core Java and Big Data, please click apply to start the process with Opus.

To apply for this job please visit uk.linkedin.com.