Concurrent Processing Protocol | Züs Weekly Debrief June 28, 2023

Tiago Souza
June 28, 2023
News & Updates

Cloud Cover AMA:

Hello Züs Community! We trust this message finds you well. We are excited to invite you to the upcoming Cloud Cover AMA, Ecclesia #18, set for tomorrow, Thursday, June 29th, at 9 AM PST. During this session, Saswata is eager to present the latest updates on the Züs Mainnet and Active Set, and to share more about the Apps’ concurrent processing protocol. Please make sure to submit your questions on the Discord channel or directly to me via Telegram. Looking forward to your active participation!

Storm of the Week:

Concurrent Processing Protocol

Today, we are excited to introduce a powerful new feature for our storage Apps (Vult and Blimp): the Concurrent Processing Protocol.

With the Concurrent Processing Protocol, you will be able to upload and download multiple files simultaneously, significantly accelerating these processes. Whether handling large data files, sharing important documents, or preserving digital memories, this new feature is designed to facilitate a faster, more efficient experience.

No more waiting for one file to finish processing before starting another. This protocol enables concurrent processing, allowing all your files to be uploaded or downloaded at the same time, providing speed and efficiency like never before.
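The idea behind concurrent processing can be sketched in a few lines of Go. This is an illustration only, assuming a hypothetical `uploadFile` function standing in for a real Vult/Blimp upload call; it is not the actual protocol implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// uploadFile is a hypothetical stand-in for a real upload call; here it
// simply reports that the file was processed.
func uploadFile(name string, results chan<- string) {
	results <- name + ": uploaded"
}

// uploadAll starts one goroutine per file, so every upload proceeds at
// the same time instead of waiting for the previous one to finish.
func uploadAll(files []string) []string {
	results := make(chan string, len(files))
	var wg sync.WaitGroup
	for _, f := range files {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			uploadFile(name, results)
		}(f)
	}
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := uploadAll([]string{"a.pdf", "b.mp4", "c.jpg"})
	fmt.Println(len(out)) // 3
}
```

With one goroutine per file, total transfer time is bounded by the slowest file rather than the sum of all files.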

Your feedback is invaluable in our pursuit to continuously refine and improve our products, so please share it on our Discord channel. Do not hesitate to share your thoughts or report any issues you come across while testing this new feature.

Thank you for your ongoing support and for being an integral part of the Züs community. Your enthusiasm continues to motivate us in our mission to deliver top-tier solutions.

Blockchain Updates:

Last week, the blockchain team continued its focus on smart contract optimization and successfully completed all allocation-related smart contract (SC) optimizations. Here are the results:

benchmarks-benchmark-1 | storage.cancel_allocation,3.442529ms OK
benchmarks-benchmark-1 | storage.finalize_allocation,3.426934ms OK
benchmarks-benchmark-1 | storage.free_allocation_request,4.415771ms OK
benchmarks-benchmark-1 | storage.free_update_allocation,4.298039ms OK
benchmarks-benchmark-1 | storage.new_allocation_request,4.207143ms OK
benchmarks-benchmark-1 | storage.update_allocation_request,4.250859ms OK

These results are based on a configuration of 9 blobbers per allocation. For comparison, the results before optimization were:

benchmark_1 | storage.cancel_allocation,11.020000ms OK
benchmark_1 | storage.finalize_allocation,10.510000ms OK
benchmark_1 | storage.free_allocation_request,12.862500ms OK
benchmark_1 | storage.free_update_allocation,16.192308ms OK
benchmark_1 | storage.new_allocation_request,21.203125ms OK
benchmark_1 | storage.update_allocation_request,16.473684ms OK

Finalize/cancel allocation

The finalize/cancel allocation SCs were slow primarily because they had to update stake pools to distribute reward tokens. With N blobbers per allocation, this created an O(N) performance issue due to the slow rate of stake pool saving. To optimize this, the blockchain team applied the same technique used in the new_allocation_request SC: rewards are saved to each blobber’s info list (all blobbers in one MPT node) and distributed to each staking pool later, in the collect_rewards SC, thereby reducing SC execution time from 11.0ms to 3.4ms.
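The batching idea above can be sketched as follows. This is an illustrative model of the approach, not the actual Züs smart-contract code; the `BlobberInfo` and `InfoList` types are hypothetical.

```go
package main

import "fmt"

// BlobberInfo holds per-blobber data, including rewards that have been
// credited but not yet moved into the blobber's stake pool.
type BlobberInfo struct {
	ID            string
	PendingReward int64
}

// InfoList models the single MPT node that stores all of an
// allocation's blobbers together, so crediting N blobbers costs one
// node write instead of N stake-pool writes.
type InfoList struct {
	Blobbers []*BlobberInfo
}

// CreditRewards records each blobber's share on the shared info list.
// Only this one node needs saving during finalize/cancel.
func (l *InfoList) CreditRewards(share int64) {
	for _, b := range l.Blobbers {
		b.PendingReward += share
	}
}

// CollectReward is the deferred step: the expensive stake-pool update
// happens only when the provider calls the collect_rewards SC.
func CollectReward(b *BlobberInfo) int64 {
	r := b.PendingReward
	b.PendingReward = 0
	return r
}

func main() {
	list := &InfoList{Blobbers: []*BlobberInfo{{ID: "b1"}, {ID: "b2"}}}
	list.CreditRewards(100)
	fmt.Println(CollectReward(list.Blobbers[0])) // 100
}
```

The O(N) stake-pool writes are replaced with a single write to the shared node, and the per-pool cost is paid lazily, one pool at a time, on collection.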

Update Allocation Smart Contract

The update allocation smart contract was also enhanced using the blobber info list technique, which saved CPU time by avoiding updates to blobbers and stake pools. Furthermore, a new mechanism, concurrentReader, was implemented to speed up MPT loading through concurrency. It enables all MPT reads to be performed in a multithreaded way, which proved beneficial for improving performance in numerous SCs.
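A minimal sketch of the concurrent-read idea looks like this. It assumes a hypothetical `fetchNode` function standing in for a real MPT read; the actual concurrentReader implementation will differ.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fetchNode is a hypothetical stand-in for loading one MPT node.
func fetchNode(key string) string {
	time.Sleep(10 * time.Millisecond) // simulate storage latency
	return "node:" + key
}

// readAll issues every read on its own goroutine, so total latency is
// roughly the cost of one read instead of len(keys) sequential reads.
func readAll(keys []string) map[string]string {
	var mu sync.Mutex
	var wg sync.WaitGroup
	out := make(map[string]string, len(keys))
	for _, k := range keys {
		wg.Add(1)
		go func(k string) {
			defer wg.Done()
			v := fetchNode(k)
			mu.Lock()
			out[k] = v
			mu.Unlock()
		}(k)
	}
	wg.Wait()
	return out
}

func main() {
	nodes := readAll([]string{"alloc", "blobbers", "pools"})
	fmt.Println(nodes["alloc"]) // node:alloc
}
```

Because SC execution typically needs several independent MPT nodes (allocation, blobbers, pools), overlapping those reads hides most of the storage latency.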

Both the free_allocation_request and free_update_allocation SCs were improved using the concurrentReader and blobber info list techniques.

Optimization of the encryption.Hash function

Also, some optimization was performed on the encryption.Hash function, which improved overall MPT writing performance. MPT reading and saving speed for specific nodes was also enhanced by storing them under a short path, which avoids loading and decoding unnecessary intermediate MPT nodes and is particularly useful when the MPT becomes large. Although this short-path storage is not used in the current implementation, it is being considered.
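The short-path idea can be illustrated with a toy trie. This is purely an illustration under simplified assumptions, not the MPT code itself: a normal lookup decodes every node along the path, while indexing a hot node by its full path makes the read a single lookup.

```go
package main

import "fmt"

// trieNode is a toy trie node; a real MPT node also carries hashes
// and RLP encoding, which is what makes intermediate loads expensive.
type trieNode struct {
	children map[byte]*trieNode
	value    string
}

// getViaTrie walks the path node by node and counts how many nodes
// had to be loaded to reach the value.
func getViaTrie(root *trieNode, path string) (string, int) {
	loads := 0
	n := root
	for i := 0; i < len(path); i++ {
		loads++ // load and decode this intermediate node
		n = n.children[path[i]]
	}
	loads++ // load the leaf itself
	return n.value, loads
}

func main() {
	// A tiny trie holding "abc" -> "v".
	leaf := &trieNode{value: "v"}
	b := &trieNode{children: map[byte]*trieNode{'c': leaf}}
	a := &trieNode{children: map[byte]*trieNode{'b': b}}
	root := &trieNode{children: map[byte]*trieNode{'a': a}}

	v, loads := getViaTrie(root, "abc")
	fmt.Println(v, loads) // v 4

	// Short-path alternative: index the node directly by its full
	// path, skipping the intermediate loads entirely.
	shortPath := map[string]*trieNode{"abc": leaf}
	fmt.Println(shortPath["abc"].value) // v
}
```

The deeper the trie grows, the more intermediate loads the direct index saves, which is why the technique matters most once the MPT becomes large.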

The following were the main PRs the team merged in the previous week:

  • PR#2541 – upgraded herumi/mcl and herumi/bls
  • PR#2551 – added error checking when saving transactions with merge
  • PR#2548 – fixed 400 handling
  • PR#2536 – removed allocation and blobber from the partition on allocation finalization
  • PR#2550 – added contributing guidelines
  • PR#1049 – fixed miner’s set
  • PR#789 – fixed lint errors: literal copies lock value from consensus
  • PR#1136 – removed custom nonce managing logic

Learn more about the Concurrent Processing Protocol tomorrow.

Thanks for tuning into our weekly update. We hope you got the gist of how the new concurrent processing protocol feature will assist businesses in optimizing their workflows. We look forward to seeing how this technology transforms businesses over time and to answering your questions tomorrow. So, mark your calendars and set a reminder for our upcoming AMA session about these exciting changes. We will see you all tomorrow!
