Fields of study: Electrical, Electronics and Communications Engineering
Employment type: Full Time
Job Description
Application Deadline: June 03, 2022
About Us
Safaricom Telecommunications Ethiopia Plc is a company supporting Ethiopia’s digital transformation. As a member of the Vodacom family, we have a wealth of experience connecting over 334 million people globally and over 180 million people in Africa across our network. We look forward to partnering with Ethiopians as we build a new network in Ethiopia.
We are laying the groundwork in readiness for the launch of our services next year and are looking to work with purpose-led teams that put the community at the heart of service.
Safaricom Ethiopia is offering a wide range of careers, whether you’re looking to join our technology, commercial or corporate teams. If you would like a challenge and the promise of a digital future for the people of Ethiopia, we are looking for you.
We are pleased to announce the following vacancy for a Data Engineer (Big Data) in the Technology Function in Ethiopia. In keeping with our current business needs, we are looking for a person who meets the criteria indicated below.
Detailed Description
Reporting to the Senior Specialist – Big Data, the role holder will work closely with all stakeholders to ensure that all data ingestion pipelines are optimally built, deployed, and operationally supported.
Job Responsibilities
Deploying and maintaining a Hadoop cluster, adding and removing nodes using cluster management and monitoring tools such as Apache Ambari and Apache Airflow, configuring NameNode high availability, and keeping track of all running Hadoop jobs.
Implementing, managing, and administering the overall Hadoop infrastructure.
Taking care of the day-to-day running of the Hadoop clusters and ecosystem.
Working closely with the database, network, BI, and application teams to make sure that all big data applications are highly available and performing as expected.
Handling capacity planning and estimating the requirements for lowering or increasing the capacity of the Hadoop cluster.
Deciding the size of the Hadoop cluster based on the data to be stored in HDFS (a sizing sketch follows this list).
Ensuring that the Hadoop cluster is up and running at all times.
Monitoring cluster connectivity and performance.
Managing and reviewing Hadoop log files.
Performing backup and recovery tasks.
Managing resources and security.
Troubleshooting application errors and ensuring they do not recur.
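To give a feel for the cluster-sizing responsibility above, here is a minimal sketch of the arithmetic typically involved: logical data is multiplied by the HDFS replication factor (3 by default), then inflated to leave headroom for shuffle/temporary files and growth. The function name, the 25% overhead figure, and the example inputs are illustrative assumptions, not Safaricom's actual sizing method.

def estimate_raw_hdfs_capacity_tb(
    logical_data_tb: float,
    replication_factor: int = 3,      # HDFS default block replication
    overhead_fraction: float = 0.25,  # headroom for temp data and growth (assumption)
) -> float:
    """Estimate the raw disk capacity a Hadoop cluster needs.

    Logical data is multiplied by the replication factor, then
    inflated so the stated overhead fraction remains free.
    """
    replicated_tb = logical_data_tb * replication_factor
    return replicated_tb / (1.0 - overhead_fraction)

if __name__ == "__main__":
    # e.g. 100 TB of logical data, 3x replication, 25% headroom -> 400.0 TB raw
    print(f"{estimate_raw_hdfs_capacity_tb(100.0):.1f} TB raw")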
Job Requirements
Qualifications
BS or MS degree in Computer Science (or related fields such as Electronic Engineering, Physics, or Mathematics).
3+ years of experience with Big Data technologies in application administration, monitoring (Prometheus, Grafana, Kibana, etc.), operations, management, and support.
Hands-on experience with private cloud computing (Docker, Kubernetes/K8s).
Excellent knowledge of UNIX/Linux operating systems.
Knowledge of cluster monitoring tools such as Ambari, Ganglia, or Nagios.
Knowledge of troubleshooting core Java applications is a plus.
Good understanding of OS concepts, process management, and resource scheduling.
Basic understanding of networking, CPU, memory, and storage.
Good command of shell scripting (a brief health-check sketch follows this list).
Working knowledge of the components of the Hadoop ecosystem, such as HDFS, Apache Hive, Apache HBase, Apache Airflow, Apache NiFi, Apache Kafka, and Apache Spark.
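To illustrate the scripting and monitoring fluency described above, here is a minimal sketch that shells out to the standard HDFS CLI to check NameNode HA state and count live datanodes. It assumes HA service IDs nn1/nn2 (hypothetical), that the hdfs binary is on PATH, and that the "Live datanodes (N):" line format of dfsadmin output (which varies by Hadoop version) is present.

import subprocess

def run(cmd: list[str]) -> str:
    """Run a CLI command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def namenode_states(service_ids=("nn1", "nn2")) -> dict[str, str]:
    # `hdfs haadmin -getServiceState <id>` prints "active" or "standby"
    return {sid: run(["hdfs", "haadmin", "-getServiceState", sid]).strip()
            for sid in service_ids}

def live_datanodes() -> int:
    # Parse the "Live datanodes (N):" line from `hdfs dfsadmin -report`
    for line in run(["hdfs", "dfsadmin", "-report"]).splitlines():
        if line.startswith("Live datanodes"):
            return int(line.split("(")[1].split(")")[0])
    return 0

if __name__ == "__main__":
    print(namenode_states())  # e.g. {'nn1': 'active', 'nn2': 'standby'}
    print(live_datanodes())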