
QoS Protocol | Weekly Debrief — April 12, 2023

Tiago Souza
April 12, 2023
News & Updates
Zus QoS Protocol

Cloud Cover AMA:

Happy Wednesday! Tomorrow we will be hosting our Cloud Cover AMA (Ecclesia #13). Be sure to attend on Thursday, April 13, at 9 am PST, as Saswata will be giving an update on Züs Mainnet and the Züs App demos! Do not forget to drop your questions on the Discord channel for Saswata to answer or send them directly to me on Telegram. Now let’s dive into this week’s update and learn more about Züs’ QoS Protocol!

Mainnet and Apps Update:

We plan to release all the web apps first, followed by mobile versions (available for Bolt and Vult) and desktop versions of Vult. After that, we’ll introduce our hackathon sample web and mobile development apps. We’ll provide a mainnet timeline once we’ve assessed the impact of the changes we’ve made so far.

We’re in the process of merging three large PRs related to transaction fees, optimized block management, and final configuration changes. We plan to incorporate these changes into all our test networks soon so that we can assess their impact.

Our team has been working hard to improve data consistency and correctness across our apps. We’re pleased to report that we’ve made significant progress, and our apps, except for Chalk, are almost ready for demo.

In addition to these improvements, we’re making headway on other key tasks, such as 2-phase commit and blockchain sync robustness on the blobber side, and fixing unusual MPT state mismatches on the blockchain side. Once we complete these tasks, we’ll be better positioned to provide a mainnet timeline.


Storm of the Week

Quality of Service (QoS) Protocol

“In the QoS protocol, the blockchain randomly challenges storage providers (blobbers) for proof of storage and performance. Blobbers earn rewards for passing challenges; failed challenges result in slashed stakes.”

The Züs Network will offer a decentralized cloud service with enterprise-grade quality that is specifically designed for demanding applications such as hot storage, high-resolution streaming, and other resource-intensive services. This unique proposition sets Züs apart from other decentralized cloud services.

To achieve top performance, privacy, and data protection, the Quality of Service (QoS) protocol was implemented in the network. The QoS protocol constantly challenges Blobbers (service providers) on the network to incentivize optimal service and penalize suboptimal service. This ensures that customers receive reliable and performant cloud services with data availability and utmost integrity.

Service providers are expected to deliver an enterprise-grade quality of service to maintain customer satisfaction and meet industry standards. By committing ZCN collateral, service providers put their stake at risk but are also rewarded for providing optimal service and for contributing to the QoS as validators. Whenever a Blobber is challenged, a storage allocation is chosen at random and its files are randomly tested, protecting both the privacy and the performance of the network.
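
To make the flow concrete, here is a minimal Go sketch of the challenge/reward/slash loop described above. Every name and number in it (Blobber, Allocation, proveStorage, the reward, slash rate, and pass rate) is a simplified assumption for illustration only; the real protocol lives in the 0chain smart contracts with different structures and economics.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Hypothetical, simplified types for illustration only.
type Blobber struct {
	ID    string
	Stake int64 // staked ZCN collateral, in the smallest token unit
}

type Allocation struct {
	ID      string
	Blobber *Blobber
	Files   []string
}

// challenge picks a random file from a randomly chosen allocation and asks
// the blobber for a proof of storage. Passing earns a reward; failing
// slashes a fraction of the stake.
func challenge(allocs []Allocation, reward, slashRate int64) {
	a := allocs[rand.Intn(len(allocs))]      // random allocation
	file := a.Files[rand.Intn(len(a.Files))] // random file within it

	if proveStorage(a.Blobber, a.ID, file) {
		a.Blobber.Stake += reward
		fmt.Printf("blobber %s passed challenge, rewarded %d\n", a.Blobber.ID, reward)
	} else {
		penalty := a.Blobber.Stake * slashRate / 100
		a.Blobber.Stake -= penalty
		fmt.Printf("blobber %s failed challenge, slashed %d\n", a.Blobber.ID, penalty)
	}
}

// proveStorage stands in for the proof-of-storage verification performed by
// validators; here it simply simulates a pass/fail outcome.
func proveStorage(b *Blobber, allocID, file string) bool {
	return rand.Float64() > 0.1 // assume a 90% pass rate for the sketch
}

func main() {
	b := &Blobber{ID: "blobber-1", Stake: 1000}
	allocs := []Allocation{{ID: "alloc-1", Blobber: b, Files: []string{"a.bin", "b.bin"}}}
	challenge(allocs, 10, 5)
}
```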

Blockchain Update

Last week, the development team focused on going through the blockchain protocol implementation code and documenting in detail each of the `VRF share`, `Block notarization`, and `Block finalization` phases. The team documented how the round timeout is handled, how sharders sync missing blocks after restarting, and how miners perform a fast state sync so they can quickly join the consensus protocol. Once this document is complete, the team will analyze it to help address any remaining issues.

Chain Stuck Issue

In addition, going through the protocol gave the blockchain team new ways to debug the testnet stuck issue. The network gets stuck because the error on loading the last partition is treated as a chargeable error, even though this error should only occur when the partitions are corrupted. It can therefore be reclassified as an internal error so that the transaction is not packed into blocks; if all miners and sharders are working well, none of them should see this error. This change could fix the state hash mismatch caused by the chain getting stuck.
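
As a rough illustration of this fix, the sketch below shows the idea of reclassifying the partition-load failure from a chargeable error (recorded on-chain, transaction still packed into a block) to an internal one (transaction kept out of the block). The error and wrapper types (ErrLoadLastPartition, txnError) are hypothetical, not the actual 0chain types.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical error for illustration; the real code uses its own error types.
var ErrLoadLastPartition = errors.New("failed to load last partition")

// txnError marks whether a transaction error is internal (kept out of blocks)
// or chargeable (recorded on-chain).
type txnError struct {
	err      error
	internal bool
}

func (e txnError) Error() string { return e.err.Error() }

// classify treats the partition-load failure as an internal error: the node
// has a local problem, so the transaction should not be packed into a block
// where other miners and sharders would compute a different state hash.
func classify(err error) txnError {
	if errors.Is(err, ErrLoadLastPartition) {
		return txnError{err: err, internal: true}
	}
	return txnError{err: err, internal: false} // chargeable by default
}

func main() {
	e := classify(ErrLoadLastPartition)
	fmt.Printf("internal=%v: %v\n", e.internal, e)
}
```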

Furthermore, the team discussed a possible way to roll back the network in the event the chain gets stuck. They suggested marking notarized blocks as corrupted when sharders hit a `state hash mismatch` error while finalizing blocks. These corrupted notarized blocks would never be broadcast again, so miners would have a chance to generate new notarized blocks for the stuck round. After discussion, however, the team found that this change would break the protocol, so they decided to focus on fixing the root cause of the stuck issue instead.

Other Issues

Another issue the team worked on last week was checking whether any tokens are burned after each transaction. This is a critical issue that must be addressed before launching the mainnet. The fix comes in three parts: 1) assert client balances before and after executing a transaction to make sure no tokens are burned and no unexpected tokens are minted; 2) run benchmark/system tests against specific SCs, covering the total tokens locked, movements between pools, and finally the amounts collected and cashed out to clients’ balances; 3) code auditing. Part 1 is almost done.
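
A minimal sketch of the part 1 balance assertion, assuming a hypothetical in-memory state map; the real check runs inside the chain’s transaction-execution path and also has to account for fees, explicit mints, and burns.

```go
package main

import "fmt"

// Hypothetical state for illustration: client ID -> balance in the smallest unit.
type state map[string]int64

// total sums all client balances.
func total(s state) int64 {
	var sum int64
	for _, b := range s {
		sum += b
	}
	return sum
}

// assertNoBurnOrMint compares the total supply before and after executing a
// transaction; a plain transfer must conserve tokens exactly.
func assertNoBurnOrMint(before, after state) error {
	if total(before) != total(after) {
		return fmt.Errorf("token mismatch: before=%d after=%d", total(before), total(after))
	}
	return nil
}

func main() {
	before := state{"alice": 100, "bob": 50}
	after := state{"alice": 70, "bob": 80} // alice pays bob 30
	if err := assertNoBurnOrMint(before, after); err != nil {
		panic(err)
	}
	fmt.Println("balances conserved")
}
```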

The team will continue with the other parts in the following weeks. They are also preparing configurations for the mainnet launch; check the PR for details. The next focus is the manual view change, so that new miners/sharders can be added manually after the mainnet launch.

Beyond the work mentioned above, the team closed 17 PRs on the 0chain repo; see details on the core PRs below:

Fixed duplicate blobber allocation removing

The team now includes the allocation ID when generating readMarker keys. Previously, a key consisted of only the blobber ID and client ID, so if a client created two readMarkers for the same blobber under different allocations, the keys would be identical and cause errors when redemption happens.
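
A small sketch of the key change, using hypothetical helper names (readMarkerKeyOld, readMarkerKey) and a simple string format; the actual key layout is defined in the storage smart contract.

```go
package main

import "fmt"

// Old scheme: blobber ID + client ID only. Two readMarkers from the same
// client to the same blobber but for different allocations collide.
func readMarkerKeyOld(blobberID, clientID string) string {
	return blobberID + ":" + clientID
}

// Fixed scheme: the allocation ID is part of the key, so readMarkers for
// different allocations no longer collide at redemption time.
func readMarkerKey(blobberID, clientID, allocationID string) string {
	return blobberID + ":" + clientID + ":" + allocationID
}

func main() {
	a := readMarkerKey("blobber-1", "client-1", "alloc-1")
	b := readMarkerKey("blobber-1", "client-1", "alloc-2")
	fmt.Println(a != b) // true: distinct keys per allocation
}
```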

Returned `is_available=true` blobbers only

Added user snapshots table and used it in calculating user aggregates.

Corrected the validator selection when generating challenges.

Fixed ‘Insufficient free capacity’ error.

Added CORS headers to miner endpoints.
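
For the CORS change, here is a generic Go sketch of wrapping an HTTP handler with CORS headers; the handler path, port, and wiring are illustrative assumptions, not the actual miner code.

```go
package main

import "net/http"

// withCORS adds permissive CORS headers to every response and short-circuits
// preflight OPTIONS requests.
func withCORS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
		w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Illustrative endpoint path, not the real miner API surface.
	mux.HandleFunc("/v1/chain/stats", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"status":"ok"}`))
	})
	http.ListenAndServe(":7071", withCORS(mux))
}
```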
