
Generating Rewards | Züs Weekly Debrief (August 24, 2022)

Clarke McMakin
October 31, 2022
News & Updates

Happy Wednesday! This week, we were featured in an article by Investor's Business Daily, where our CEO Saswata Basu discussed the value of enterprise data security and privacy solutions. In this debrief, we delve into the process of generating rewards on the network and get into some of those juicy technical details.

Calling all action fans: we are looking for high-resolution videos to showcase our decentralized streaming capabilities. If you have footage or would like to create a video, let us know by the end of next week, and we would like to feature it on our streaming demo! Note that the content must be your own or unrestricted. Message us on Telegram if you are interested!

Last week our devs continued working on frontend support, load test optimization, and bug fixes. Let's dive into this week's update!

Development Team

Generating Rewards

During load tests, the team detected that block finalization was being slowed by the storing of block reward update events, which took about 1.3 seconds to complete. This was unsatisfactory, as it would hinder the performance of the blockchain, so the team tracked down the code responsible for the slow finalization.

When a block is generated, rewards are distributed to every client that staked on the miner that generated it, and the load test network allows a maximum of 200 delegation clients per miner. Distributing the rewards therefore generated 200 reward update events. Saving 200 events to the database is not a problem in itself, but the events were stored one by one: each time, the code first queried the client's delegate pool from the database, updated its reward, and then saved it back. The team fixed this by collapsing all the updates into a single database action (i.e., a batch/bulk update). This change brought the process down to about 3 milliseconds in the local benchmark and to about 50 milliseconds in the load test.
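As a rough illustration of the fix, here is a minimal sketch of the two approaches, assuming a GORM-backed event database; the DelegatePoolReward struct, its table, and its column names are illustrative stand-ins, not the actual 0chain schema:

```go
package events

import (
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

// DelegatePoolReward is an illustrative event-DB row, not the real schema.
type DelegatePoolReward struct {
	PoolID string `gorm:"primaryKey;column:pool_id"`
	Reward int64  `gorm:"column:reward"`
}

// Slow path: one SELECT and one UPDATE per delegate pool (2N round trips).
func updateOneByOne(db *gorm.DB, rewards []DelegatePoolReward) error {
	for _, r := range rewards {
		var pool DelegatePoolReward
		if err := db.First(&pool, "pool_id = ?", r.PoolID).Error; err != nil {
			return err
		}
		pool.Reward += r.Reward
		if err := db.Save(&pool).Error; err != nil {
			return err
		}
	}
	return nil
}

// Fast path: a single bulk upsert that inserts or increments every pool at once.
func updateInBulk(db *gorm.DB, rewards []DelegatePoolReward) error {
	return db.Clauses(clause.OnConflict{
		Columns: []clause.Column{{Name: "pool_id"}},
		DoUpdates: clause.Assignments(map[string]interface{}{
			"reward": gorm.Expr("delegate_pool_rewards.reward + excluded.reward"),
		}),
	}).Create(&rewards).Error
}
```

Collapsing the 200 per-pool round trips into one statement is what the batch/bulk update buys.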

Partition Bug

Furthermore, the team detected and fixed a partition bug that could cause data loss. The bug appeared when a partitioned data structure contained more than one partition: updating an item in one partition and calling the Save() method caused all the other partitions to be removed from the MPT. The bug had gone unnoticed before because our data previously fit into a single partition, so that partition was always the one being saved and there were no others to remove.
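The following sketch shows the shape of the bug, under the assumption that Save() walks every partition; the types and method names are simplified stand-ins for the actual 0chain partition code:

```go
package partition

// StateContext is a simplified stand-in for the MPT-backed state interface.
type StateContext interface {
	InsertTrieNode(key string, node interface{}) error
	DeleteTrieNode(key string) error
}

type Partition struct {
	Key     string
	Changed bool
}

type Partitions struct {
	Parts []*Partition
}

// Buggy version: only the partition that changed is written back; every
// other partition node is deleted from the MPT, losing its data.
func (p *Partitions) saveBuggy(sc StateContext) error {
	for _, part := range p.Parts {
		if part.Changed {
			if err := sc.InsertTrieNode(part.Key, part); err != nil {
				return err
			}
			continue
		}
		// BUG: unchanged partitions were removed instead of left in place.
		if err := sc.DeleteTrieNode(part.Key); err != nil {
			return err
		}
	}
	return nil
}

// Fixed version: unchanged partitions are simply left untouched in the MPT.
func (p *Partitions) save(sc StateContext) error {
	for _, part := range p.Parts {
		if !part.Changed {
			continue
		}
		if err := sc.InsertTrieNode(part.Key, part); err != nil {
			return err
		}
	}
	return nil
}
```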

In addition, the team refactored the stake pool code to use txn.ClientID as the miner/sharder stake pool id. Previously, the staking transaction hash was used as the pool id, so each staking transaction occupied one stake pool position on the miner or sharder. Since each miner has a limited number of stake pool positions, a miner that allows a maximum of 200 stake pools could only ever receive 200 staking transactions, which is not what our protocol design intends. After the refactoring, the total number of stake pools is still limited, but each client can perform as many staking transactions as they want, since repeat stakes from the same client go into the same pool.
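A simplified sketch of the keying change, with Transaction and StakePool as illustrative stand-ins for the real types:

```go
package stakepool

// Transaction is a simplified stand-in for the chain transaction type.
type Transaction struct {
	Hash     string // unique per transaction
	ClientID string // stable per wallet
}

// StakePool is a simplified stand-in for a miner/sharder's stake pools.
type StakePool struct {
	Pools map[string]int64 // pool id -> staked tokens
	Max   int              // max stake pool positions per miner/sharder
}

func newStakePool(max int) *StakePool {
	return &StakePool{Pools: make(map[string]int64), Max: max}
}

// Before: every staking transaction created a new pool keyed by its hash,
// so a miner with Max=200 could only ever receive 200 staking transactions.
func (sp *StakePool) stakeByTxnHash(txn *Transaction, amount int64) bool {
	if len(sp.Pools) >= sp.Max {
		return false
	}
	sp.Pools[txn.Hash] = amount
	return true
}

// After: pools are keyed by client id, so repeat stakes from the same
// wallet top up one pool and only distinct clients consume positions.
func (sp *StakePool) stakeByClientID(txn *Transaction, amount int64) bool {
	if _, ok := sp.Pools[txn.ClientID]; !ok && len(sp.Pools) >= sp.Max {
		return false
	}
	sp.Pools[txn.ClientID] += amount
	return true
}
```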

Furthermore, the team fixed the 'previous block state is not computed' bug. For reasons still unknown, the block held by the round and the block stored in chain.blocks could be different instances of the same block. This made it possible for the round's block state to be computed without being synced to chain.blocks; in the next round, fetching the previous block from chain.blocks would then return the un-computed instance and raise the 'not computed' error.
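One way to picture the fix, assuming the root cause was two live instances of the same block; the types below are simplified and the actual repair in the codebase may differ:

```go
package chain

import "sync"

// Block, Round, and Chain are simplified stand-ins for the real types.
type Block struct {
	Hash          string
	StateComputed bool
}

type Round struct {
	Block *Block
}

type Chain struct {
	mu     sync.Mutex
	blocks map[string]*Block // chain.blocks: hash -> block cache
}

// After computing a round's block state, publish that exact instance to
// chain.blocks, so the next round's previous-block lookup never sees a
// stale, un-computed duplicate.
func (c *Chain) syncRoundBlock(r *Round) {
	c.mu.Lock()
	defer c.mu.Unlock()
	b := r.Block
	cached, ok := c.blocks[b.Hash]
	if !ok || (cached != b && b.StateComputed) {
		c.blocks[b.Hash] = b
	}
}
```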

Merged PRs

All the changes above can be found here. Beyond those, the team also merged the following PRs:

- Fixed the logic for returning the requested data points to the front end, and merged it into the snapshot branch.
- Added new endpoints that return a list of blobber ids ordered by rank.
- Added the round number to the transaction event DB.
- Fixed a bug in the evaluation of the dominant response in MakeSCRestAPICall in gosdk (see the sketch below).
- Exposed the getWalletBalance API to WASM in gosdk.
- Fixed a lock/unlock panic issue in the InitAllocation function in gosdk.
- Exposed preferred blobber ids on allocation creation in gosdk.
- Modified the consensus logic to match the delete operation, and fixed a lock issue on the blobber.

These changes are intended to ensure stability and performance upon launch.
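For context on the dominant-response item, here is a minimal sketch of the idea, assuming MakeSCRestAPICall queries several sharders and keeps the body that a majority agree on; the helper and threshold below are illustrative, not the actual gosdk code:

```go
package sharders

// dominantResponse returns the response body that a simple majority of the
// queried sharders agree on, and false if no body reaches the threshold.
func dominantResponse(bodies []string, numSharders int) (string, bool) {
	counts := make(map[string]int)
	for _, b := range bodies {
		counts[b]++
	}
	threshold := numSharders/2 + 1 // simple majority of queried sharders
	for body, n := range counts {
		if n >= threshold {
			return body, true
		}
	}
	return "", false
}
```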

About Züs

Züs is a high-performance storage platform that powers limitless applications. It’s a new way to earn passive income from storage.
