Total Experience : 4 to 7 years
Designation : Sr. Data Engineer
Mandatory skills : PySpark & EMR
Location : Pune / Remote
Job Description -
1) Hands-on experience with Python, Spark, EMR
2) Proficient understanding of distributed computing principles
3) Proficiency with Data Processing: HDFS, Hive, Spark, Scala/Python
4) Independent thinker, willing to engage, challenge and learn new technologies.
5) Understanding of the benefits of data warehousing, data architecture, data quality processes, and data warehouse design and implementation
6) Knowledge of table structures, fact and dimension tables, logical and physical database design, data modeling, reporting process metadata, and ETL processes
1) Client-facing skills: Solid experience working with clients directly, to be able to build trusted relationships with stakeholders.
2) In-depth understanding of data warehouse and ETL concepts and data modeling principles
3) Expertise in AWS cloud native services
4) Hands-on experience developing data processing tasks using Spark on cloud-native services such as Glue/EMR
5) Good to have: experience writing Snowflake SQL queries and developing JavaScript scripts to extract, load, and transform data in Snowflake
6) Good to have: experience with Snowflake features and utilities such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, the query optimizer, metadata management, data sharing, and stored procedures
7) Excellent verbal and written communication skills
8) Ability to collaborate effectively across global teams
Please apply if you're an independent freelancer only & available for a full-time contractual job.
*Agencies please do not apply*
11 freelancers are bidding an average of ₹109,091 for this job
PYSPARK EXPERT HERE!!! "Satisfy the client with my ability and passion": that is my slogan. I hope you will be interested in me. Thanks.
Hi, I have more than 7 years of experience in Hadoop technologies such as HDFS, MapReduce, Python, PySpark, Kafka, Java, Scala, Hive, etc. Please review my profile for skills and contact me. Streaming: Azure Event Hubs, Kafka …
Hello there, I'm a software engineer and I have 6 years of experience building data engineering pipelines using Spark, PySpark, and Scala. I've worked on GCP and AWS platforms. Let's discuss in more detail, thanks.
Hello: After reading the requirements of your project in detail and concluding that they match my areas of knowledge and skills, I would like to introduce myself. My name is Anthony Muñoz and I am the lead engineer …
I can do this for you. Please view my past submitted work in my portfolio's Portfolio Items section.
Hi, I came across your job posting. It is interesting. I have over 12 years of experience in Big Data and 8 years in Cloud. I have worked in Oracle Cloud, AWS, and Rackspace. Already have implemented multiple projects i…
• I have 4+ years of experience as a Data Engineer. • I have hands-on experience with Python, PySpark, PySpark SQL, SQL, AWS services, data visualization, data modeling, the Talend ETL tool, Excel, etc. • I have hands-on experien…
I'm a fresher looking for opportunities. I have good knowledge of Hive, HDFS, and Spark. Please consider accepting my bid.
I have an experienced Python team, and I trust that my team can deliver all of your requirements on time with the desired quality. Hire us and you will not regret it.