As a Big Data Developer on our team, you will help build one of the largest Big Data systems in the world. You will collaborate with Product Managers, Architects, and Software Engineers to research, design, and improve core Big Data analytics functionality in the areas of ingestion, enrichment, search, and analytics.
Who you are:
• You are passionate about applying software engineering skills to solve real-life Big Data analytical problems.
• You can write code to process billions of data records daily from heterogeneous sources.
• You are an ETL champion, deploying solutions to clusters with hundreds of nodes storing petabytes of data.
• You are creative, translating user stories into innovative solutions that provide an excellent client experience and align with the architectural roadmap.
• You are detail-oriented, maintaining thorough technical designs that account for security, scalability, maintainability, and performance.
• You transform ambiguity into clarity.
• You are a team player, participating in all phases of the development process – analysis, design, construction, testing, and implementation – in Agile development lifecycles.
• You enjoy collaborating in a diverse, multicultural environment that spans multiple geographic locations.
• You have stellar communication skills, conveying and receiving information in a clear, credible, and consistent manner.
What you’ll need:
• Bachelor’s degree or higher with a minimum of 5 years of experience working with Big Data technologies; a Master’s degree with a minimum of 2 years of experience is preferred.
• Experience in Java and Scala with OOD/OOP design skills, proficiency in Linux environments, and the ability to write unit tests and testable code.
• Hands-on experience building Big Data applications.
• Strong experience with Spark Core and Spark Streaming, Flink, Storm, or Apache Beam.
• Strong experience in Kafka.
• Strong experience with NoSQL databases such as Elasticsearch, Solr, HBase, or Cassandra.
• Strong experience working with Cloudera, Hortonworks, or similar distributions.
• Experience using the Spring framework.
• Experience debugging and resolving performance issues and coordinating with external teams.
• Proven experience working with various data formats, such as columnar formats (e.g., Parquet), Avro, XML, and JSON, as well as dataset and data frame concepts.
Bonus if you have:
• Experience with cloud and infrastructure stacks, such as OpenStack, AWS, Kubernetes (K8s), or Google Cloud.
• Telecom and/or government data set experience/knowledge.
http://pax-ai.ae
• Knowledge in Spark SQL, Spark ML, or Spark GraphX.
• Experience working in an Agile development environment (CSD, CSM, SA, or ASE certifications).
• Experience working with or setting up CI tools such as Jenkins, Bamboo, TeamCity, or GitLab.