#Manus Brings the Dawn of AGI, but AI Security Is Also Worth Pondering#
Author: 0xResearcher
Manus has achieved state-of-the-art (SOTA) performance on the GAIA benchmark, surpassing OpenAI's models of the same class. In practice, this means it can independently complete complex tasks such as cross-border business negotiations, covering contract-clause decomposition, strategic forecasting, scenario generation, and even coordination of legal and financial teams. Compared with traditional systems, Manus excels at dynamic objective decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task into hundreds of executable sub-tasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve its decision-making efficiency and reduce error rates.
Amid this rapid technological progress, Manus has reignited the industry debate over AI's future evolutionary path: will a single AGI dominate, or will multi-agent systems (MAS) lead collectively?
It all comes back to the design concept of Manus, which implies two possibilities:

One is the AGI path: keep raising the intelligence of a single agent until it approaches the comprehensive decision-making ability of a human.

The other is the MAS path: act as a super-coordinator that directs thousands of vertical-domain agents working in concert.
On the surface we are debating divergent paths, but underneath lies the fundamental tension of AI development: how to balance efficiency against security. As a single intelligence approaches AGI, the risk of black-box decision-making grows; multi-agent collaboration can spread that risk, but communication latency may cause it to miss critical decision windows.
The evolution of Manus has quietly magnified the inherent risks of AI development. For example, data-privacy black holes: in medical scenarios, Manus needs real-time access to patients' genomic data, and in financial negotiations it may touch undisclosed corporate financials. Or algorithmic-bias traps: in recruitment negotiations, Manus may offer below-market salary suggestions to candidates of particular ethnic groups, and its misjudgment rate on emerging-industry terminology in legal contract review approaches fifty percent. Or adversarial-attack vulnerabilities: hackers can implant crafted audio frequencies that cause Manus to misjudge a counterparty's price range during a negotiation.
We must confront an uncomfortable truth about AI systems: the smarter the system, the wider its attack surface.
Security, however, has always been a constant refrain in Web3, and under Vitalik's impossibility triangle (a blockchain network cannot simultaneously achieve security, decentralization, and scalability), the field has produced a range of cryptographic approaches:
Across multiple bull markets, both zero-trust security models and DID (decentralized identity) have produced breakout projects; some succeeded, others were drowned in the crypto wave. Fully Homomorphic Encryption (FHE), the youngest of these cryptographic techniques, is also a powerful weapon for the security problems of the AI era: it allows computation to be performed directly on encrypted data, without ever decrypting it.
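The article names no concrete scheme, so here is a minimal sketch of what "computation on encrypted data" means, using a toy Paillier cryptosystem in Python. Note the hedges: Paillier is only additively homomorphic, a much simpler relative of full FHE, and the primes below are far too small to be secure; this illustrates the idea, it is not an FHE implementation.

```python
import random
from math import gcd

# Toy Paillier keypair: INSECURE demo primes, for illustration only.
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # valid because we take g = n + 1

def encrypt(m):
    """Encrypt an integer m < n with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n2) % n2    # g^m * r^n mod n^2, with g = n+1

def decrypt(c):
    """Recover m as L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic property: operations on ciphertexts act on the plaintexts.
c3, c4 = encrypt(3), encrypt(4)
ciphertext_sum = c3 * c4 % n2    # multiplying ciphertexts adds the plaintexts
scaled = pow(c3, 5, n2)          # exponentiating scales the plaintext
print(decrypt(ciphertext_sum), decrypt(scaled))  # prints: 7 15
```

Real FHE schemes such as BFV, CKKS, or TFHE additionally support multiplying ciphertexts together, which is what makes arbitrary encrypted computation, and hence encrypted model inference, possible.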
How does FHE help?
First, at the data level: everything a user enters (including biometric features and speech patterns) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In the medical-diagnosis example, a patient's genomic data participates in the analysis entirely as ciphertext, so no biological information can leak.
At the algorithmic level: with FHE, "encrypted model training" becomes possible, so even the developers cannot peek into the AI's decision path.
At the collaborative level: agents communicate using threshold encryption, so compromising a single node cannot leak global data. Even in supply-chain attack-and-defense exercises, attackers who infiltrate several agents still cannot assemble a complete view of the business.
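The threshold idea can be sketched with Shamir secret sharing, a standard building block of threshold encryption (the article does not specify a scheme; this pure-Python toy is illustrative, not production cryptography). A secret key is split so that any t of n agents can jointly reconstruct it, while any t-1 compromised shares reveal nothing about it.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = split(key, n=5, t=3)        # five agents, threshold three
print(combine(shares[:3]) == key)    # prints: True
```

Any three shares suffice (`shares[2:]` works equally well), while two shares are consistent with every possible secret, which is exactly the "compromising a single node leaks nothing global" property described above.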
Because of technical barriers, most users never touch Web3 security directly, yet their interests are entangled with it indirectly. In this dark forest, those who do not arm themselves will never shed the role of "leeks" (retail investors harvested by the market).
uPort and NKN are projects this editor had never heard of; it seems security projects simply do not attract speculators. Can Mind Network escape this curse and become a leader in the security field? Let's wait and see.
The future is already here. As AI approaches human intelligence, it increasingly needs a defense system beyond human capability. The value of FHE lies not only in solving today's problems but in paving the way for the era of strong AI. On the treacherous road to AGI, FHE is not optional; it is a necessity for survival.