The Scala object's main method should take 6 arguments, as below:
val awsAccessKeyId: String = args(0)
val awsSecretAccessKey: String = args(1)
val csvFilePath: String = args(2)
val host: String = args(3)
val username: String = args(4)
val password: String = args(5)
* Based on these args, the job should read a CSV file from S3 and load it into Cassandra.
* You can use any CSV file, say with 2 fields; the Cassandra table has 3 columns.
* We need the complete project code, dependency jars, tools used, and all versions, etc.
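As a starting point for the "dependency jars, tools and versions" part of the brief, a minimal sbt build could look like the following. All version numbers here are illustrative assumptions (a Spark 2.4.x line with a matching DataStax connector), not requirements stated in the posting:

```scala
// build.sbt -- a minimal sketch; every version below is an assumption
name := "s3-csv-to-cassandra"
version := "0.1.0"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // Spark SQL for reading the CSV into a DataFrame
  "org.apache.spark"   %% "spark-sql"                 % "2.4.8" % "provided",
  // DataStax connector for writing DataFrames to Cassandra
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.3",
  // Hadoop S3A filesystem support for reading from S3
  "org.apache.hadoop"  %  "hadoop-aws"                % "2.7.7"
)
```

One would typically build a fat jar with sbt-assembly and run it via spark-submit, passing the six arguments in the order given above.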
20 freelancers are bidding on average $186 for this job
⭐⭐⭐ Hi, Dear client. ⭐⭐⭐ I'm very interested in your project and read your description carefully. I'm a very talented Scala/Spark developer. You can see my Scala work history at the following URL: [login to view URL] …
Dear Customer, my name is Yuriy Tumakha. I am interested in your project. I am a Senior Scala/Java Developer with 14 years of experience. You can see my code examples on GitHub: [login to view URL]
Hi, I have 8 years of experience working on Hadoop, Spark, NoSQL, Java, BI tools (Tableau, Power BI), and cloud (Amazon, Google, Microsoft Azure). I have done end-to-end data warehouse management projects on the AWS cloud with ha…
Hi, I have more than 5 years of experience with Hadoop ecosystem tools like HDFS, MapReduce, Hive, Spark, etc. I can complete your project. Please contact me for more details.
Hi, I am a big data developer and a module lead at a reputed MNC. I have been in the IT industry for more than 12 years. I have tons of experience developing projects using Java, Apache Spark, Hive, Kafka, Sqoop, Pig, Scala, AWS…
Hi, I am a Data Scientist who has worked in machine learning for the past 3 years. I have done many projects such as recommendation systems, anomaly detection, fraud detection, etc. I can do your task in Python very efficiently.
Hi, I am writing to you today as I would like to draw your attention to my company, Data Lamp. Our company works in Big Data, Spark, Flow Designing/Optimizations, Research & Development, Algorithms (Graph Theory, D…
Hi, I have worked with the same stack before (Scala, S3, and Cassandra) and can easily deliver a solution within the next day.
Hi, I have over 5 years of experience in Scala, Java, Spark, Storm, and HDFS. I have written many live-streaming and batch-mode projects. I can complete this work in 5 days. Thanks, Devesh Kumar
Hi, I am interested in this project. I have more than 5 years of experience with big data tools and technologies, and I have already implemented Spark/Scala pipelines on AWS for my company.
Hello Sir, I feel that I have the relevant skills to develop your application. I have worked as a Spark developer for 3 years using the most common Hadoop applications in the ecosystem. Please find attached a copy of …
Hi, I can help you in this project as I have good expertise in Spark jobs to read CSV and export data to NoSQL dbs. Thanks
Hi, hope you are doing well! Thanks for sharing your project requirement with us. It will be our great pleasure to work on your project. I have checked your requirement and yes, we can do it, because we have already worked on si…
Our team of experts can write the Spark/Scala code to industrial standards. We would like to know more about the work.
I have been working for more than 3 years in the Hadoop/Big Data stack. It will be my pleasure if you give me an opportunity to work on your project. Please contact me for further details to start working on this.
I have read your project requirements and understood them clearly. I am a Scala/Spark developer with 3 years of experience. I have worked on a lot of machine learning and data science projects; some of them are listed belo…
I have been doing data processing using Spark and Scala for the last 4 years, so this suits me very well. Deliverables: application code and packaged jar, along with a tutorial on how to maintain the code in …
Hi, this is a straightforward requirement and I would like to take up the project. Please reach out to me for further discussion. Thanks.
I have 4 years of experience with Scala/Spark technologies, including real-time experience; you can check my LinkedIn profile.
I have more than 4 years of experience writing Spark code in Scala and Python, and I have been working on AWS projects for a year. There are two ways to go about solving this problem: 1. Using Spark DataFrame's …
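For reference, the DataFrame route the last bid alludes to could be sketched roughly as below. This is a sketch under assumptions, not a finished deliverable: it assumes Spark 2.4 with hadoop-aws and the DataStax spark-cassandra-connector on the classpath, an s3a:// path to a CSV with a header row, and placeholder keyspace/table names (my_keyspace, my_table); the brief's third Cassandra column is assumed here to be a load timestamp.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.current_timestamp

object Main {
  def main(args: Array[String]): Unit = {
    require(args.length == 6, "expected 6 arguments")
    val awsAccessKeyId: String     = args(0)
    val awsSecretAccessKey: String = args(1)
    val csvFilePath: String        = args(2) // e.g. s3a://bucket/data.csv (assumed s3a scheme)
    val host: String               = args(3) // Cassandra contact point
    val username: String           = args(4)
    val password: String           = args(5)

    val spark = SparkSession.builder()
      .appName("s3-csv-to-cassandra")
      .config("spark.cassandra.connection.host", host)
      .config("spark.cassandra.auth.username", username)
      .config("spark.cassandra.auth.password", password)
      .getOrCreate()

    // S3 credentials for the Hadoop S3A filesystem
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3a.access.key", awsAccessKeyId)
    hadoopConf.set("fs.s3a.secret.key", awsSecretAccessKey)

    // Read the 2-field CSV; the third Cassandra column is derived here
    // as a load timestamp (an assumption, since the brief leaves it open)
    val df = spark.read
      .option("header", "true")
      .csv(csvFilePath)
      .withColumn("load_ts", current_timestamp())

    // Append into the (placeholder) Cassandra keyspace/table
    df.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
      .mode("append")
      .save()

    spark.stop()
  }
}
```

The Cassandra table's column names would need to match the DataFrame's column names (the two CSV headers plus load_ts); running it requires a live Cassandra instance and S3 access, so it cannot be exercised standalone.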