AWS Big Data Engineer | Sydney
Application Close Date: 26-Oct-2018
About the Team
Our Insight and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight. Our clients are among Australia's top performing companies and they choose to partner with Capgemini for a very good reason - our exceptional people.
About the role
The Big Data Engineer will expand and optimise our clients' data and data pipeline architecture, and optimise their data flow and collection for cross-functional teams. Your responsibilities include:
- Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into an AWS-based data lake platform
- Design and develop real-time streaming and batch processing pipeline solutions
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Design, develop and implement data pipelines for data migration & collection, data analytics and other data movement solutions.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Build DevOps pipelines
- Work with data and analytics experts to strive for greater functionality in our data systems
You will have the ability to optimise data systems and build them from the ground up. You will support software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
Essential skills and experience
- 2+ years' proven working experience as a Big Data Engineer, preferably building data lake solutions using the AWS Big Data stack
- Experience with multiple Big Data technologies and concepts such as HDFS, Hive, MapReduce, Spark, Spark Streaming, and NoSQL databases such as HBase
- Experience with specific AWS technologies (such as S3, Redshift, EMR, and Kinesis)
- Experience in one or more of Java, Scala, Python and Bash
- Ability to work in a team within a diverse, multi-stakeholder environment
- Experience in working in a fast-paced Agile environment
- BS in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
Preferable skills and experience
- Knowledge of and/or experience with Big Data integration and streaming technologies (e.g. Kafka, Flume, etc.)
- Experience building a data ingestion framework for an enterprise data lake is highly desirable
- Experience building CI/CD pipelines using Jenkins
- Knowledge of building containerised applications using Docker, Kubernetes or similar technologies
What can we offer you?
Capgemini is a world leader in technology-enabled change. We can offer our consultants:
- Formal training with industry recognized certifications
- The ability to interact with peers in a sharing and inclusive community
- Exciting and challenging projects
- A well-structured and tailored career framework
- A culture of collaboration and recognition
- Excellent remuneration
Capgemini is one of the world's foremost providers of consulting, technology, outsourcing services and local professional services. Present in over 40 countries with more than 180,000 people, the Capgemini Group helps its clients transform in order to improve their performance and competitive positioning.
Ranked among Ethisphere's 2018 World's Most Ethical Companies. Our seven values are at the heart of everything we do - Honesty, Boldness, Trust, Team Spirit, Freedom, Fun and Modesty.
If you believe you have “La Niaque” to go the extra mile, then apply by submitting your resume and cover letter.
Want to know more? To learn more about Capgemini and find out about what makes our people unique, log onto www.capgemini.com.au
Proof of work entitlements and visa status will be required prior to or at offer time. Successful applicants will be required to complete Criminal Record and Reference checks prior to commencement of employment.