Evaluation Level: Secure and Verifiable Biomedical Computation

Although decentralized biomedical model fine-tuning can be achieved securely with Equivariant Encryption (EE), verifiable and privacy-preserving evaluation remains an open challenge.

Institutions must not only compute correct results, but also prove correctness to regulators and collaborators—without revealing sensitive patient data or proprietary model details.


EE-Enabled Encrypted Inference and Verification

Given an encrypted input ( x' = T x + \delta ) and EE-transformed model parameters ( (A', b') ), inference proceeds as:

y' = R(A' x' + b')

No plaintext parameters or activations are revealed at any point.

The result is decrypted only by the requester:

y = T^{-1}(y' - \delta)
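To make the round trip concrete, below is a minimal NumPy sketch of the encrypt-infer-decrypt cycle. It assumes a single affine layer with ( R ) taken as the identity (a nonlinear ( R ) such as ReLU requires restricting ( T ) to transformations that ( R ) commutes with, e.g. permutation matrices); the dimension and all variable names are illustrative, not part of the Rexis specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative feature dimension

# Plaintext model: one affine layer, y = A x + b (R = identity here).
A = rng.normal(size=(d, d))
b = rng.normal(size=d)

# Secret encryption material, held only by the requester.
T = rng.normal(size=(d, d))        # invertible with probability 1
T_inv = np.linalg.inv(T)
delta = rng.normal(size=d)

# EE-transformed parameters (A', b') handed to the untrusted host,
# constructed so that A' x' + b' = T (A x + b) + delta.
A_p = T @ A @ T_inv
b_p = T @ b + delta - A_p @ delta

def encrypt(x):
    return T @ x + delta                 # x' = T x + delta

def host_inference(x_p):
    return A_p @ x_p + b_p               # y' = A' x' + b'

def decrypt(y_p):
    return T_inv @ (y_p - delta)         # y = T^{-1} (y' - delta)

# The decrypted ciphertext result matches the plaintext computation.
x = rng.normal(size=d)
assert np.allclose(decrypt(host_inference(encrypt(x))), A @ x + b)
```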

To guarantee correctness and model integrity:

  • The hash of the deployed model ( \mathrm{hash}(A') ) is stored on-chain

  • A lightweight proof ( \pi ) may be generated to link ( y' ) to the authorized model
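As a sketch of the commitment step, the host's transformed parameters can be hashed over a canonical serialization and compared against the on-chain record. SHA-256 and the float64 byte encoding below are illustrative assumptions; the source does not specify the hash function or serialization format.

```python
import hashlib
import numpy as np

def model_commitment(A_p: np.ndarray, b_p: np.ndarray) -> str:
    """Compute hash(A') as a digest over a canonical encoding of (A', b').

    SHA-256 and the float64 / C-order byte layout are illustrative
    choices, not part of the Rexis specification.
    """
    h = hashlib.sha256()
    for t in (A_p, b_p):
        h.update(np.ascontiguousarray(t, dtype=np.float64).tobytes())
    return h.hexdigest()

# A validator recomputes this digest and checks it against the
# commitment stored on-chain before serving any inference request.
```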


Decentralized Certification Workflow

The evaluation pipeline proceeds through four stages:

  1. Encrypted Inference Query: A user encrypts their private input ( x ) and submits ( x' = T x + \delta ) to the model host.

  2. On-Chain Model Verification: Before inference, smart contracts or validators check that the model hash ( \mathrm{hash}(A') ) matches the authorized version.

  3. Inference and Optional Proof Generation: The encrypted inference is computed as ( y' = R(A' x' + b') ), and the host may also produce a lightweight proof ( \pi ).

  4. Decryption and On-Chain Logging: The requester decrypts the output, and optionally logs ( \mathrm{hash}(y') ) and ( \pi ) to the blockchain for audit purposes.
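The four stages can be strung together as follows, reusing the encrypt / host_inference / decrypt helpers and model_commitment from the sketches above. The on-chain registry is mocked as a plain dictionary, purely for illustration; a real deployment would read and write contract state instead.

```python
import hashlib

# Mocked on-chain state (illustrative only).
chain = {
    "authorized_model": model_commitment(A_p, b_p),
    "audit_log": [],
}

# 1. Encrypted inference query: the user never sends plaintext x.
x_p = encrypt(x)

# 2. On-chain model verification: reject hosts running unauthorized weights.
assert model_commitment(A_p, b_p) == chain["authorized_model"]

# 3. Inference (an optional proof pi linking y' to hash(A') is elided here).
y_p = host_inference(x_p)

# 4. Decryption by the requester, plus optional audit logging of hash(y').
y = decrypt(y_p)
chain["audit_log"].append(hashlib.sha256(y_p.tobytes()).hexdigest())
```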


Security Considerations

  • Privacy Preservation: Intermediate states and final outputs remain encrypted unless explicitly decrypted by the user.

  • Verifiability and Integrity: Cryptographic links between ( A' ), ( y' ), and ( \pi ) allow third-party verification without exposing internal model logic or data.

  • Efficiency and Practicality: This approach avoids the computational burden of full ZK proofs. Only commitments (e.g., hashes) are logged, enabling real-time, scalable inference.


Benefits for Biomedical Applications

This framework supports cryptographically verifiable evaluation in sensitive biomedical domains, including:

  • Genome-wide association studies

  • Regulatory audit of pharmaceutical trials

  • Clinical model validation across institutions

When combined with Rexis’s encrypted training and compute-to-data sharing, this final layer completes an end-to-end, privacy-first platform for decentralized biomedical AI.

It empowers institutions to share, train, and evaluate models—without ever compromising data integrity or privacy.

Figure: EE-enabled encrypted inference and verification at the evaluation level. The evaluator encrypts inputs and queries the deployed EE-protected model. Inference is performed entirely in the encrypted domain, and a lightweight proof ( \pi ) is optionally generated to attest correctness. Validators verify the model's integrity via ( \mathrm{hash}(A') ) and log the proof and ( \mathrm{hash}(y') ) on-chain. Certified results are returned without exposing raw data or intermediate states.