Hadoop (HDFS, Hive, Impala, SQL, Sqoop), Python, Spark, RDBMS, Data Mining, Linux/Unix, Spark Architecture, Azure.
- Minimum 3+ years of experience in Big Data.
- Strong communication skills (verbal/written); should be able to interact with customers
- Should have strong analytical and problem-solving skills
- Should be data savvy; knowledge of JSON and other data formats is essential
- Must have Hadoop knowledge (Hive, HDFS, Oozie, Sqoop, Impala, MapReduce)
- PySpark (Python, Spark Framework, Spark Core, Spark SQL, Data Frames, Spark Streaming)
- Experience in handling structured and unstructured/semi-structured data (flat files, JSON, XML, binary files)
- Strong Linux/Unix knowledge
- Azure Cloud (experience migrating data from an on-premises Hadoop cluster to Azure)
- Experience in batch processing and streaming
- Strong in writing SQL queries and designing database tables
- Should have experience in handling large data volumes (bulk processing)
- Critical thinking and problem-solving skills
- Team player with good time-management skills
Competitive Package, Free Accommodation, Medical Insurance, Accident Insurance, Free Gym, Subsidized Meal facility, Power Nap during noon, Work From Home option, Technology Grooming Community and more.