Big Data Engineer - Sydney
Application Close Date: 25-Apr-2019
About the role
The Big Data Engineer will expand and optimise our clients' data and data pipeline architecture, as well as their data flow and collection for cross-functional teams. Your responsibilities include:
- Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into the Hadoop data platform.
- Design and develop real-time streaming and batch processing pipeline solutions.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Design, develop and implement data pipelines for data migration & collection, data analytics and other data movement solutions.
- Work with stakeholders including the Product Owner and data analyst teams to assist with data-related technical issues and support their data infrastructure needs.
- Collaborate with Architects to define the architecture and technology selection.
You will have the ability to optimise data systems and build them from the ground up. You will support software developers, database architects, data analysts and data scientists on data initiatives, and will ensure that the data delivery architecture remains consistent and optimal across ongoing projects.
Essential skills and experience
- 2+ years of proven working experience as a Big Data Engineer, preferably building data lake solutions by ingesting and processing data from various source systems on the AWS cloud
- Experience with multiple Big Data technologies and concepts such as HDFS, NiFi, Kafka, Hive, Spark, Spark Streaming, HBase, EMR and Redshift on AWS
- Experience in one or more of Java, Scala, Python and Bash.
- Ability to work in a team in a diverse, multi-stakeholder environment
- Experience in working in a fast-paced Agile environment
- BS in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
- Implement test cases and test automation.
- Apply DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform.
- Share knowledge with immediate peers and build communities and connections that promote better technical practices across the organisation
Preferable skills and experience
- Knowledge of and/or experience with Big Data integration and streaming technologies (e.g. Kafka, NiFi, Flume, etc.)
- Experience in building various frameworks for an enterprise data lake is highly desirable
- Knowledge of building self-contained applications using Docker and OpenShift
What can we offer you?
Working at Capgemini, you'll find the rewards are more than just financial. Not only will you work alongside inspiring colleagues with a world of experience, but you will also have access to great benefits including salary continuance insurance, paid parental leave, education assistance, salary packaging and the ability to purchase additional leave, as well as discounts on entertainment, financial and wellbeing services, travel and shopping.
Capgemini is one of the world's foremost providers of consulting, technology, outsourcing services and local professional services. Present in over 40 countries with more than 180,000 people, the Capgemini Group helps its clients transform in order to improve their performance and competitive positioning.
Ranked among Ethisphere's 2019 Most Ethical Companies in the World. Our seven values are at the heart of everything we do - Honesty, Boldness, Trust, Team Spirit, Freedom, Fun and Modesty.
If you believe you have “La Niaque” to go the extra mile, then apply by submitting your resume and cover letter.
Want to know more? To learn more about Capgemini and find out what makes our people unique, visit www.capgemini.com.au
Proof of work entitlements and visa status will be required prior to or at offer time. Successful applicants will be required to complete Criminal Record and Reference checks prior to commencement of employment.