Our Big Data capability team needs hands-on developers who can produce beautiful & functional code to solve complex analytics problems. If you are an exceptional developer with an aptitude for learning and applying new technologies, and you love pushing boundaries to solve complex business problems innovatively, we would like to talk with you.
- You would be responsible for evaluating, developing, maintaining and testing big data solutions for advanced analytics projects
- The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analyzing and visualizing large sets of data to turn information into business insights
- The role would also involve testing various machine learning models on Big Data, and deploying learned models for ongoing scoring and prediction. An appreciation of the mechanics of complex machine learning algorithms would be a strong advantage.
QUALIFICATIONS & EXPERIENCE
- 3+ years of demonstrable experience designing technological solutions to complex data problems, and developing & testing modular, reusable, efficient and scalable code to implement those solutions.
Ideally, this would include work on the following technologies:
- Expert-level proficiency in at least one of Java, C++ or Python (preferred); Scala knowledge is a strong advantage.
- Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce & HDFS) and associated technologies (one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.).
- Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage; see the sketch after this list for a flavour of that work.
- Operating knowledge of cloud computing platforms (AWS, especially the EMR, EC2, S3 and SWF services, and the AWS CLI)
- Experience working in a Linux computing environment and with command-line tools, including shell/Python scripting for automating common tasks
- Ability to work in a team in an agile setting, familiarity with JIRA and a clear understanding of how Git works
In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.
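To give a flavour of the Spark work referenced above, here is a minimal PySpark sketch; the input path, column names and the daily aggregation are illustrative assumptions rather than part of the actual role:

```python
# Minimal PySpark sketch: aggregate illustrative clickstream data with the DataFrame API.
# Paths, column names and the aggregation itself are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-aggregation").getOrCreate()

# Read semi-structured JSON events from HDFS (path is an assumption)
events = spark.read.json("hdfs:///data/raw/clickstream/")

# Count events and distinct sessions per customer per day
daily = (events
         .withColumn("event_date", F.to_date("event_timestamp"))
         .groupBy("customer_id", "event_date")
         .agg(F.count("*").alias("events"),
              F.countDistinct("session_id").alias("sessions")))

# Persist the aggregate as Parquet for downstream reporting or model training
daily.write.mode("overwrite").parquet("hdfs:///data/agg/clickstream_daily/")

spark.stop()
```

The same aggregate could equally be expressed in Spark SQL against a registered view; the point is comfort with the DataFrame/SQL APIs rather than any particular pipeline.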
Must Have (hands-on) experience:
- Java or Python or C++ expertise
- Linux environment and shell scripting (a small automation sketch follows this list)
- Distributed computing frameworks (Hadoop or Spark)
- Cloud computing platforms (AWS)
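As a rough illustration of the scripting and AWS items above, the following Python sketch compresses the previous day's application logs and ships them to S3 with boto3; the log directory, bucket name and key layout are hypothetical:

```python
#!/usr/bin/env python
# Illustrative automation script: compress yesterday's logs and upload them to S3.
# The log directory, bucket and key prefix are assumptions, not real infrastructure.
import datetime
import glob
import gzip
import os
import shutil

import boto3

LOG_DIR = "/var/log/myapp"        # assumed log location
BUCKET = "example-log-archive"    # assumed S3 bucket
s3 = boto3.client("s3")

yesterday = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

for path in glob.glob(os.path.join(LOG_DIR, "*%s*.log" % yesterday)):
    gz_path = path + ".gz"
    # gzip the log file
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    # upload under a date-partitioned prefix, then remove the local compressed copy
    key = "logs/%s/%s" % (yesterday, os.path.basename(gz_path))
    s3.upload_file(gz_path, BUCKET, key)
    os.remove(gz_path)
```

In practice a script like this would typically be scheduled from cron and is the kind of "common task" automation the role calls for.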
Desirable (would be a plus):
- Statistical or machine learning DSL like R
- Distributed and low latency (streaming) application architecture
- Row store distributed DBMSs such as Cassandra
- Familiarity with API design
- B.E/B.Tech in Computer Science or related technical degree
What you will be doing:
– Analyze structured/semi-structured/unstructured customer behaviour metadata, clickstream data and transactional data from eCommerce or PoS systems
– Write MapReduce jobs in Java, Python or Scala (Scalding/Cascading); a bare-bones Hadoop Streaming example follows this list
– Build fault-tolerant, highly performant data pipelines capable of ingesting 100s of GBs of data every day
– Experiment with massive data sets to surface valuable insights about customer behaviour/eCommerce systems/content management systems
– Implement a mixed batch / near-real-time architecture to analyze, index, and publish customer-facing data
– Bring experience with Hadoop (MapReduce, Sqoop, Hive, HBase, Pig, Oozie, Storm) to augment team’s capability and accelerate the rollout of data features
– Practice test-driven development on an Agile team
– BS or MS in Computer Science, Engineering, Mathematics, Science or equivalent work experience
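For the MapReduce item above, here is a bare-bones Hadoop Streaming job in Python; the input layout (tab-separated events with a product ID in the third field) is an assumption made purely for illustration:

```python
#!/usr/bin/env python
# Minimal Hadoop Streaming job in one file: run as "map" or "reduce".
# Illustrative only; the input layout (tab-separated, product ID in field 3) is assumed.
import sys


def mapper():
    # emit (product_id, 1) for every event line read from stdin
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 2:
            print("%s\t1" % fields[2])


def reducer():
    # input arrives sorted by key, so counts are accumulated per run of equal keys
    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t", 1)
        if key != current_key:
            if current_key is not None:
                print("%s\t%d" % (current_key, count))
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print("%s\t%d" % (current_key, count))


if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

On a cluster the same file would typically be submitted through the hadoop-streaming jar, passed once as the -mapper command ("job.py map") and once as the -reducer command ("job.py reduce").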