We need to develop a wrapper around Apache Spark for an existing application that currently runs on a custom distributed framework. The work involves the following tasks:
1) Create job-execution code that reads a JSON file containing the list of tasks to be executed in order.
2) Each task will have a push method that generates RDDs from the task.
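The two requirements above can be sketched as a small JSON-driven job runner. This is a minimal illustration, not the actual deliverable: the `Task` class, its `push` method, and the JSON layout (`tasks`, `name`) are all hypothetical names chosen for the sketch, and the real implementation would call the Spark API (e.g. `SparkContext.textFile` or `parallelize`) inside `push` to produce RDDs.

```python
import json

class Task:
    """Hypothetical task wrapper built from one entry in the JSON job file."""

    def __init__(self, spec):
        self.name = spec["name"]
        self.params = spec.get("params", {})

    def push(self):
        # Placeholder: in the real Spark wrapper this would build and
        # return an RDD from the task's parameters, e.g.
        #   return sc.textFile(self.params["path"])
        return f"rdd({self.name})"

def run_job(config_text):
    """Read a JSON job config and execute its tasks in the listed order."""
    spec = json.loads(config_text)
    results = []
    for task_spec in spec["tasks"]:   # order of the JSON list is preserved
        results.append(Task(task_spec).push())
    return results

# Example job file content (assumed layout):
config = '{"tasks": [{"name": "load"}, {"name": "filter"}, {"name": "save"}]}'
print(run_job(config))  # tasks run in order: load, filter, save
```

Because `json.loads` preserves array order, the runner executes tasks exactly in the sequence they appear in the file, which is the ordering guarantee the job description asks for.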
10 freelancers are bidding on average ₹29694 for this job
Hello, I have been working with Big Data/Hadoop technologies for years, including Spark Streaming, MLlib, and MapReduce. Can we talk more about this? Thanks
I am an IITK graduate with 11 years of experience in software development. I have a 100% completion rate and have finished projects with the highest level of customer satisfaction. I have a team of rock-star developers.
I specialize in this area. Please check my blog [login to view URL] for some of my work in this space.
I have 10 years of IT experience, with more than 4.5 years in Hadoop technologies such as Hive, Pig, Spark, Sqoop, MapReduce, and [login to view URL]. I also have very good experience in Java, Scala, Python, and shell scripting.
I have good experience with distributed computing, and I have been developing a Spark application that runs on IoT devices as part of my research project.
I have very good experience developing applications using Spark, Scala, and Kafka, along with other technologies such as Java Spring. I have a very good understanding of your requirements and am very confident.
I have been working in Hadoop and Spark, and I have knowledge of RDDs, DataFrames, and Datasets. I'll build this application in a highly optimized manner, distributed across the cluster.