Closed

Fix Play Framework Spark Slick Error!!! java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef rejected from slick.util.AsyncExecutor

Hello

We are facing a random error with Slick 3.

Here is a brief history.

We are building an ETL pipeline with Spark, MinIO as S3 storage (MinIO is an open-source, S3-compatible alternative to AWS S3), and Delta tables. The pipeline has a web interface created using Play Framework (Scala).

The cluster consists of:

7 worker nodes with 16 cores and 64 GB RAM each, configured in client mode.

1 Storage node

[login to view URL] and [login to view URL] are both set to 600

[login to view URL] is disabled

Application data (session data, user data, and some other records) is saved in PostgreSQL using the Slick 3 mapper.

The size of the processed data is growing exponentially and is now around 50 GB. (In production, we aim to process terabytes of data.)

The data processing flow essentially consists of aggregating data using group-by and saving it to S3 storage, following these steps:

1. Read CSV data from storage and create the read_df DataFrame

2. Read main_db from storage and create main_df

3. Merge read_df with main_df

4. Group by a specific key (let’s say user_id)

5. Save the records to storage, replacing main_db. To guarantee data integrity, this stage is split into three phases (see the sketch after this list):

- Write records to a temp object referenced by datetime

- Back up the existing main_db object (copy it to another object)

- Rename temp object to main_db (copy and delete)

6. Then update the PostgreSQL history table with information about the processed job, such as time_started, time_ended, number_of_rows_processed, size, etc. This is where the issue occurs.
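As referenced in step 5, here is a minimal sketch of the flow with the three-phase save made explicit. This is not our exact code: the bucket and object names, the placeholder aggregation, and the method names are illustrative assumptions (it also assumes Spark 3.1+ for unionByName with allowMissingColumns). Hadoop's FileUtil.copy stands in for the copy-and-delete "rename", since S3-style stores have no atomic rename.

import org.apache.hadoop.fs.{FileUtil, Path}
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

object PipelineSketch {
  // Hypothetical bucket and object names; adjust to the real MinIO layout.
  val bucket     = "s3a://etl-bucket"
  val mainDbPath = s"$bucket/main_db"

  // Steps 1-4 (sketch): read CSV, read main_db, merge, group by user_id.
  def aggregate(spark: SparkSession): DataFrame = {
    val readDf = spark.read.option("header", "true").csv(s"$bucket/incoming/")
    val mainDf = spark.read.format("delta").load(mainDbPath)
    // Schemas differ (see note 2 below), so align columns by name and fill gaps.
    val merged = readDf.unionByName(mainDf, allowMissingColumns = true)
    merged.groupBy("user_id").agg(count(lit(1)).as("rows")) // placeholder aggregation
  }

  // Step 5 (sketch): write temp -> back up main_db -> replace main_db.
  def saveWithBackup(spark: SparkSession, aggregated: DataFrame): Unit = {
    val stamp = LocalDateTime.now.format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"))
    val tempPath   = s"$bucket/tmp/main_db-$stamp"
    val backupPath = s"$bucket/backup/main_db-$stamp"

    // Phase 1: write records to a temp object referenced by datetime.
    aggregated.write.format("delta").save(tempPath)

    val conf = spark.sparkContext.hadoopConfiguration
    val fs   = new Path(bucket).getFileSystem(conf)
    // Phase 2: back up the existing main_db (copy to another object).
    FileUtil.copy(fs, new Path(mainDbPath), fs, new Path(backupPath), false, conf)
    // Phase 3: "rename" temp to main_db (copy, then delete the temp object).
    fs.delete(new Path(mainDbPath), true)
    FileUtil.copy(fs, new Path(tempPath), fs, new Path(mainDbPath), true, conf)
  }
}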

The error is random, but we noticed it happens when a shuffle occurs after the group-by. Sometimes we end up with 1000+ partitions. In those cases, step 5 does not complete and throws the following exception:

java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$3@291fad07 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2@7345bd2c[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 26]

The completed tasks value is sometimes low and sometimes reaches the hundreds.

Below is the code executed in step 5:

Await.result(update("[login to view URL]", mainDb), 10.seconds)

Googling the exception, we found it could happen because connections are closed before the code is executed when using transactionally. Note that we do not use transactionally in our code. Below is the code executed when update() is called:

val updateQuery = table.filter(_.id === id).update(db)

db.run(updateQuery)
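For context, here is a minimal self-contained sketch of what such a Slick 3 update typically looks like against the history table described in step 6. JobHistory, Histories, HistoryRepo, and the column names are illustrative assumptions, not our actual mapping:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import slick.jdbc.PostgresProfile.api._

// Hypothetical mapping for the history table described above.
case class JobHistory(id: Long, timeStarted: Long, timeEnded: Long,
                      rowsProcessed: Long, sizeBytes: Long)

class Histories(tag: Tag) extends Table[JobHistory](tag, "history") {
  def id            = column[Long]("id", O.PrimaryKey)
  def timeStarted   = column[Long]("time_started")
  def timeEnded     = column[Long]("time_ended")
  def rowsProcessed = column[Long]("number_of_rows_processed")
  def sizeBytes     = column[Long]("size")
  def * = (id, timeStarted, timeEnded, rowsProcessed, sizeBytes).mapTo[JobHistory]
}

object HistoryRepo {
  private val histories = TableQuery[Histories]
  private val db = Database.forConfig("slick.dbs.default.db") // illustrative config path

  // Same shape as the snippet above: filter by id, then update the row.
  def update(id: Long, row: JobHistory): Future[Int] =
    db.run(histories.filter(_.id === id).update(row))
}

// The caller then blocks on the Future, as step 5 does:
// Await.result(HistoryRepo.update(1L, someRow), 10.seconds)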

This is the current Slick configuration:

connectionPool = "HikariCP"

dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"

numThreads = 100

Initially, before the errors started, it was:

numThreads = 20

maxConnections = 20

We tried queueSize = 2000, but that did not fix it.
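For comparison, here is a sketch of a Play-Slick style configuration block with the thread pool and connection pool kept aligned (with Slick's built-in HikariCP integration, maximumPoolSize is derived from maxConnections, and Slick recommends maxConnections >= numThreads). All values, the host, and the database name are illustrative assumptions, not a verified fix:

slick.dbs.default {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    connectionPool  = "HikariCP"
    dataSourceClass = "org.postgresql.ds.PGSimpleDataSource"
    properties {
      serverName   = "localhost"  # hypothetical host
      databaseName = "appdb"      # hypothetical database name
    }
    numThreads     = 20    # Slick AsyncExecutor worker threads
    maxConnections = 20    # HikariCP maximumPoolSize is taken from this
    queueSize      = 1000  # pending DBIO actions buffered before rejection
  }
}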

Does anyone have a solution for us?

Furthermore, we suspect step 5 is responsible for the connection-closed issue, because it does not happen when step 5 is turned off. What is the link between the threads that read/write to S3 storage (on another server) and the Hikari (Slick) threads that are killed?

And is there a better way to guarantee data integrity (in case of failure while writing data) without this time-consuming copy-restore-and-delete process?

Notes:

1. After aggregation, we call repartition() to reduce the number of partitions and avoid data skew before saving the results (see the sketch after these notes). coalesce() crashed the driver JVM with an OOM.

2. main_df and read_df do not have the same schema, so overwriting using Delta's built-in method is not possible.

3. The update() function's Await timeout was 10 s; after the issue appeared, we increased it, but that did not fix the issue.
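To make note 1 concrete, here is a sketch of the repartition step, assuming the user_id key from step 4; the target of 200 partitions is an illustrative assumption, not a tuned value:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

object RepartitionSketch {
  // Shrink the 1000+ post-shuffle partitions before writing the results.
  def compactForWrite(aggregated: DataFrame): DataFrame =
    aggregated.repartition(200, col("user_id")) // full shuffle; spreads skewed keys
  // coalesce(200) would skip the shuffle, but it merges partitions in place and,
  // as noted above, crashed the driver JVM with an OOM in this pipeline.
}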

Skills: Spark, PostgreSQL, Big Data, ETL, Scala

About the client:
(5 reviews) SAINT DENIS, France

Project No.: #34784651
