Welcome back to another week! Over the past week, a lot of work has been ongoing in the background. We saw an update provided by Züs CEO Saswata Basu. This update offers insight into the ongoing work, upcoming release of products, and dStorage solutions. Progress continues on the blockchain benchmark testing with code fixes to address the issues that Züs dev Yury Dabasov identified. Sculptex Corner returns this week where he discusses his ongoing testing of our storage layer. He requests community assistance from those who have interacted with other platforms. Read on to learn more!
An Update from Züs CEO Saswata Basu
A perspective and update.
While the market is down, I believe we will have a significant market impact with our holistic #dStorage solution.
Let me explain.
In a bear market, both #web3 and #web2 apps look for a cost effective performant storage option. $ZCN
— saswata basu (@saswata_0chain) June 20, 2022
Recently, Saswata popped into the Züs Telegram and then took his message to Twitter to provide some insight into the ongoing progress the team has made. Saswata notes that current dStorage solutions are struggling to compete with traditional centralized storage speeds. This indicates the need for a fast, reliable dStorage protocol. Following major upgrades to our blockchain layer, recent blockchain benchmark testing has revealed lightning-fast speeds that support the scaling of millions of wallets, allocations, and files.
We are happy with the enhanced performance of our blockchain and dStorage solutions. In addition, Saswata notes that an ecosystem of applications is needed to compete with others. The team will launch not only its blockchain-storage network tandem, but also six products: a personal storage application, a crypto wallet, an NFT platform, an enterprise-facing storage application, a service provider application, and a block explorer. The devs continue their work on blockchain and Active Set testing. We are emphasizing thorough testing to ensure we can scale transactions and APIs while closing potential Byzantine loopholes. To coincide with the dev progress, the business development team continues to ramp up its work, preparing for the rollout of products, marketing campaigns, and business strategies.
Thank you again to everyone who has signed up for any of our campaigns for our wallet, storage application, and blobber platform. If you have not done so already, sign up for our storage application! This referral program has been extended until August 15, 2022. Make sure to take advantage of the free storage space! Interested in becoming a storage provider on the Züs network? Sign up now to be part of the first cohort of Blobbers on our network!
Sculptex Corner: Blobber Testing
“Continuing my blobber stress and performance testing for another week, my Blobbers have continued to scale up to nearly a million files in just a few days without a hitch. When performing multiple simultaneous uploads/downloads to the same pair of blobbers, however, my poor little blobber VMs (2 vCPUs each) get maxed out at 100% CPU and performance noticeably deteriorates.
However, vCPUs are notoriously underpowered. They are just a share of a real CPU after all, so I needed to test on some real CPUs. Real tests make sure that our blobbers will be able to scale. A generous offer from Zero Services GmbH was happily accepted. They set up a bunch of Blobbers in Madrid, Spain to perform some further testing.
The first thing to try was higher EC values. With only 2 blobbers, I had to use EC1+1 (EC1/2); with many more Blobbers now available, I tested EC settings up to EC10/16. (If you aren’t already familiar with EC (Erasure Coding), check out my deep dive included in the Sept. 22 Weekly Update.)
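For readers less familiar with the EC notation above, the trade-off is easy to compute: an ECd/t scheme splits a file into d data shards plus (t − d) parity shards, costing t/d times the raw storage but surviving the loss of any (t − d) blobbers. Here is a minimal sketch of that arithmetic (the helper name and notation are illustrative, not from the Züs codebase):

```go
package main

import "fmt"

// ecStats reports the storage expansion factor and fault tolerance
// for an ECdata/total erasure-coding scheme, e.g. EC1/2 or EC10/16.
func ecStats(data, total int) (expansion float64, tolerated int) {
	// total/data shards must be stored; any (total-data) may be lost.
	return float64(total) / float64(data), total - data
}

func main() {
	for _, cfg := range [][2]int{{1, 2}, {10, 16}} {
		exp, tol := ecStats(cfg[0], cfg[1])
		fmt.Printf("EC%d/%d: %.2fx storage, tolerates %d blobber failures\n",
			cfg[0], cfg[1], exp, tol)
	}
}
```

So EC10/16 stores 1.6x the raw data yet survives six failed blobbers, whereas simple EC1/2 mirroring costs 2x and survives only one.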
The bulk of my testing was from my server in Helsinki, Finland. Unfortunately, as Central Europe goes, this is about as far away from Madrid as you can get, so I expected the results to be much slower than those where my Blobbers and client-server were all in Helsinki.
For small files, as you’d expect, the advantage of the low latency of Blobbers in Helsinki resulted in faster uploads and downloads there.
For larger files, however, the benefit of parallel downloads became apparent: faster download speeds were achieved despite the significant latency between Helsinki and Madrid. Uploads did not see as much benefit, though. Because many Blobbers are involved and the client software waits for all chunks to be uploaded, upload speed is effectively restricted by the slowest Blobber. As stated previously, this is not too much of an issue since clients can upload multiple files concurrently, and there are post-mainnet plans to let clients effectively blacklist poorly performing Blobbers that are detrimental to performance.
For downloads, the logic is that the fastest Blobbers win: the client only requires the first n (data) Blobbers to reconstruct the data, so it’s no surprise that higher EC values can improve download speed.
Regarding CPU, at approximately 10–15 concurrent uploads/downloads, the Zero Services GmbH dedicated-server Blobbers were barely breaking a sweat, with a peak of just 6% CPU observed, whereas my vCPU Blobbers hit 100% with much less load! This confirms to me that the Blobber software is light on resources: a decent dedicated server, per the recommended specs, should be capable of handling hundreds of concurrent connections.
Between us, we did also perform some client testing from other locations more local to Madrid; however, the results were mixed, as some errors were observed. The errors related to read and write markers, but the whole read/write marker code has more recently received an overhaul. I have reported the errors back to the team, but because the builds I have been using for testing are a couple of months old, it’s possible that the issues have already been resolved in more recent releases.
For this reason, I have suspended further testing for the time being. The intention is that we retest again with the next stable builds that are released.
A big thanks to Zero Services GmbH for their help!
NOTE: The Blobber testing I am performing is in addition to continuous testing being internally performed by the team. But it is outside the ‘sandbox’ of the traditional Dev environment and therefore is intended to give real-world insight into performance characteristics of a more distributed network and form the basis of comparison with other dStorage solutions.
P.S. If anyone knows anyone who has actually used FileCoin, ArWeave, or Stratos directly via CLI tools, please let me know.
Development Team Updates
The blockchain team continues to make progress, closing 13 PRs and 10 issues in the Züs repo. The team also opened 15 new PRs that are pending review and small change requests; many of these are close to completion and require only simple modifications. With so many new PRs awaiting code review, we expect a flurry of merges from both the blobber and blockchain teams in the coming days and weeks.
Blockchain Testing Fixes
The blockchain team recently implemented fixes for data-loading issues discovered during the blockchain benchmark testing, optimizing REST endpoints and adding pagination to all storage smart contract APIs. Pagination feeds data through APIs in manageable lists that the front end can easily render and the user can scroll through. The team also removed a redundant payer ID field from the readMarker; this field is not needed, since the cost of reads is deducted from the allocation owner’s read pool, and it could have served as a potential attack vector in the future.
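As a rough illustration of the offset/limit pagination pattern described above (endpoint path, handler, and helper names here are hypothetical, not the actual storage smart contract API):

```go
package main

import (
	"encoding/json"
	"net/http"
	"strconv"
)

// paginate returns one page of results; out-of-range offsets collapse
// to an empty page, and the page size is capped server-side.
func paginate(items []string, offset, limit int) []string {
	if limit <= 0 || limit > 50 {
		limit = 50 // cap page size so no call returns a huge payload
	}
	if offset < 0 || offset >= len(items) {
		return []string{}
	}
	end := offset + limit
	if end > len(items) {
		end = len(items)
	}
	return items[offset:end]
}

// markersHandler sketches how a paginated endpoint might expose
// read-marker records stored in the event database.
func markersHandler(markers []string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		offset, _ := strconv.Atoi(r.URL.Query().Get("offset"))
		limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(paginate(markers, offset, limit))
	}
}

func main() {
	markers := make([]string, 95)
	for i := range markers {
		markers[i] = "rm-" + strconv.Itoa(i)
	}
	http.HandleFunc("/v1/readmarkers", markersHandler(markers))
	// http.ListenAndServe(":8080", nil) // uncomment to actually serve
}
```

The client then walks the list with successive `offset` values until it receives a short or empty page.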
In turn, the team also removed the client ID requirement for blobber-validator pairs. This addresses a series of issues encountered during blobber registration, mainly caused by the same wallet being used for both a blobber and a validator. Because both were sending streams of transactions, transactions periodically failed when the two attempted to use the same nonce.
Max Challenge Completion Time (CCT) Removed
The team has recently removed the max challenge completion time (CCT) from the Blobber’s structure; the CCT is now read from Züs’s global configuration in the MPT. This change ensures consistent enforcement of challenges across all Züs nodes. In previous weeks, there was discussion on how to improve the blobber challenge process, considering a blobber’s usage in terms of the number of allocations, size of storage, and overall activity.
The team implemented a database update that removes an allocation from the blobber’s allocation list (blobberAllocPartition) when the allocation is canceled or completed (cancelAllocation and finalizeAllocation), so the list accurately reflects current state. Keeping partitions consistent with the recent tokenomics-based modifications, the team also removed the blobber challenge indexes, which are unused because the event database is always consulted for blobber challenge information.
Small code updates renamed Used to Allocated to reflect the data size the blobber has allocated, while Used now reflects the amount of storage actually being used by the client. Lastly, the team added GitHub Actions to automatically generate and deploy Swagger documents.
The Blobber and gosdk teams have closed an additional 10 PRs over the past week. Some of these updates coincide with the changes mentioned above from the core blockchain testing and smart contract team. Last week we mentioned the ongoing work to change the transaction.Value type from int64 to uint64; that change has now been merged. To reflect it, the team has updated a blobber’s read/write price to uint64, following the currency-type change of past weeks. The team also plans to implement these upgrades on the client end to ensure compatibility across all platforms.
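A signed-to-unsigned migration like the transaction.Value change needs guarded conversions at the boundaries, since a plain Go cast silently wraps negative values into huge unsigned ones. A hedged sketch of such a guard (the `toUint64` helper is illustrative, not the actual gosdk code):

```go
package main

import (
	"errors"
	"fmt"
)

// toUint64 converts a legacy signed value to the new unsigned type,
// rejecting negatives instead of letting uint64(v) silently wrap
// (-1 would otherwise become 18446744073709551615).
func toUint64(v int64) (uint64, error) {
	if v < 0 {
		return 0, errors.New("negative transaction value")
	}
	return uint64(v), nil
}

func main() {
	for _, v := range []int64{100, -1} {
		u, err := toUint64(v)
		if err != nil {
			fmt.Println("rejected:", v, err)
			continue
		}
		fmt.Println("converted:", u)
	}
}
```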
The Blobber team has also implemented the 0nft sdk, our interactive toolset that will be used in conjunction with our NFT platform. In parallel with the blockchain team’s removal of the max challenge completion time (CCT), the blobber team has implemented mirroring changes.
Züs is a high-performance storage platform that powers limitless applications. It’s a new way to earn passive income from storage.