
Spark Scala JSON data parsing

₹600-2000 INR

Closed
Posted over 4 years ago


Paid on delivery
Parsing a JSON file produces duplicate rows because of multiple array items. The data needs to be flattened so each entry becomes a single record, using Spark or Spark SQL.
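As a starting point, here is a minimal sketch of one way to do this with the Spark DataFrame API, assuming a hypothetical schema with scalar columns `id` and `name` plus an `items` array of structs with a `value` field; the real column names would come from the actual file.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, collect_list, concat_ws, explode}

object FlattenJson {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FlattenJson")
      .master("local[*]")
      .getOrCreate()

    // Read the JSON file; multiLine handles pretty-printed JSON documents.
    val df = spark.read.option("multiLine", "true").json("input.json")

    // Exploding the array yields one row per array element,
    // which is where the duplicate rows described above come from.
    val exploded = df.withColumn("item", explode(col("items")))

    // Collapse back to a single record per id by aggregating the
    // array values into one delimited column.
    val flattened = exploded
      .groupBy("id", "name") // hypothetical scalar columns
      .agg(concat_ws(",", collect_list(col("item.value"))).as("item_values"))

    flattened.show(truncate = false)
    spark.stop()
  }
}
```

The same logic can also be written in Spark SQL over a temporary view if SQL is preferred.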
Project ID: 21834901

About the project

12 proposals
Remote project
Active 4 yrs ago

12 freelancers are bidding on average ₹5,777 INR for this job
Hi, I have 8 years of experience working on Hadoop, Spark, NoSQL, Java, BI tools (Tableau, Power BI), and cloud (Amazon, Google, Microsoft Azure). I have done end-to-end data warehouse management projects on the AWS cloud with Hadoop, Hive, Spark, and PrestoDB, and have worked on multiple ETL projects involving Spring Boot, Angular, Node, PHP, Kafka, NiFi, Flume, MapReduce, Spark with XML/JSON, Cassandra, MongoDB, HBase, Redis, Oracle, SAP HANA, ASE, and many more. Let's discuss the requirements in detail. I am committed to getting the work done and am strong at issue resolution as well. Thanks
₹2,250 INR in 1 day
5.0 (3 reviews)
3.5
Hi, I have good experience in Scala and have been working with it for 2.6 years. I have developed REST APIs and a Spark streaming project in Scala, and have used Kafka, Akka, and spray-json with it. We can discuss the requirement in more detail.
₹1,750 INR in 1 day
5.0 (3 reviews)
3.1
Hi, I have about 15 years of experience in the Java stack and 2 years in Spark. I have recently implemented a solution to flatten a hierarchical JSON structure. Kindly share your JSON file and I will share the solution with you in about 2-3 hours, although in Java. Please let me know if you absolutely need this in Scala. Regards, Rabiya
₹1,250 INR in 1 day
5.0 (4 reviews)
3.0
Don't worry about the money. I will do it for you. Please be ready with the test data and the expected output. I'm waiting!
₹750 INR in 1 day
0.0 (0 reviews)
0.0
I have been working on data cleansing with Spark for the past 6 years. This requires the expertise to set up a cluster of machines and to understand its cost. I have worked on a deduplication framework that removes duplicates from a file or, for a stream of data, within a particular time frame. Even though I will be getting only Rs. 500 from this bid, I am doing it to build a reputation among clients and gain trust, as I am just starting my freelance work (despite having more than 6.5 years of expertise). I will fix any issues reported and provide all the support needed.
₹750 INR in 5 days
0.0 (0 reviews)
0.0
I have almost 3 years of experience in Spark Scala and have handled JSON data in my two previous projects. I can do this, as I have flattened JSON, read JSON files through Spark Scala, and used array explode in Spark Scala as well. I am confident I can handle this.
₹1,700 INR in 7 days
0.0 (0 reviews)
0.0
I can do it. Please share the input and expected output, and I can code it as per the requirement. Let me know the other details in chat.
₹750 INR in 2 days
0.0 (0 reviews)
0.0
Send me a sample JSON and the development language (Scala/Python), and I will do it in a few hours. 8+ years of big data experience.
₹1,750 INR in 1 day
0.0 (0 reviews)
0.0
I have worked on the same scenario in my company projects, so I think I can do this in less time. I can do it using Spark Core or Spark SQL as per your requirements (a rough Spark SQL sketch follows this bid).
₹650 INR in 2 days
0.0 (0 reviews)
0.0
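For the Spark SQL route mentioned in the bid above, a rough sketch under the same hypothetical schema (an `items` array of structs and a scalar `id` column) could register the JSON file as a temporary view and use LATERAL VIEW explode:

```scala
import org.apache.spark.sql.SparkSession

object FlattenJsonSql {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FlattenJsonSql")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input with an "items" array column, as in the brief.
    spark.read.option("multiLine", "true").json("input.json")
      .createOrReplaceTempView("raw")

    // LATERAL VIEW explode gives one row per array element;
    // collect_list + concat_ws folds them back into one record per id.
    val flattened = spark.sql(
      """
        |SELECT id,
        |       concat_ws(',', collect_list(item.value)) AS item_values
        |FROM raw
        |LATERAL VIEW explode(items) t AS item
        |GROUP BY id
        |""".stripMargin)

    flattened.show(truncate = false)
    spark.stop()
  }
}
```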
Hi, I can deliver your requirement in 2 days. I also need to understand whether it is for real-time data or a simple flat file.
₹2,250 INR in 2 days
0.0 (0 reviews)
0.0
I am currently doing a very similar role for another client on an hourly rate and am available immediately. Below is my summary. I have extensive big data experience and have designed and delivered a metadata-driven data ingestion solution: I developed a number of PySpark/Scala frameworks which ingest data from various Westpac data sources into the Westpac Data Hub (HDFS), then integrate, transform, and publish to target sources including Kafka, RDBMS (Teradata, Oracle, and SQL Server), SFTP, etc. Technologies used for the project include Python, Spark, Spark SQL, Hadoop, HDFS, Hive, Cassandra, HBase, Kafka, NiFi, and Atlas. I created a Scala/PySpark-based framework which ingests data from customer rating bureaus including Equifax, Illion, and Experian. I designed the entire JSON/XML explosion pattern, which involves multi-level JSON/XML explosion and normalized table creation on the HDFS platform using Scala/PySpark, Hive, Spark SQL, and HBase. I also created the downstream conceptual, logical, and physical data models for users including credit risk analysts and data scientists.
₹53,221 INR in 3 days
0.0 (0 reviews)
0.0

About the client

Tempe, United States
0.0
0
Member since Oct 17, 2019

Client Verification
