
Home   Components   Cloud   Pricing   Docs

Confidential AI

Confidential is the confidential computing stack. We run your AI workloads (inference, training, agents) in hardware-encrypted Trusted Execution Environments (TEEs). Your data and code stay private while being processed. Your code can't be tampered with. You can cryptographically verify both claims without trusting us.
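The "verify without trusting us" claim rests on remote attestation: the TEE hardware measures the code it loads and signs a report over that measurement, and a client accepts the enclave only if the signature is authentic and the measurement matches a known-good build. A minimal sketch of that verification pattern, with hypothetical field names and an HMAC standing in for the hardware root of trust (real TEEs such as SEV-SNP or TDX use hardware-rooted certificate chains, and none of these helpers are the Confidential API):

```python
import hashlib
import hmac
import json

def verify_attestation(report: dict, signature: bytes,
                       vendor_key: bytes, expected_measurement: str) -> bool:
    """Accept a TEE attestation only if it is authentic and the enclave
    runs exactly the code we expect. Illustrative sketch: field names,
    the shared `vendor_key`, and the HMAC signature are stand-ins for a
    hardware-rooted certificate chain."""
    payload = json.dumps(report, sort_keys=True).encode()
    # 1. Authenticity: the report must be signed by the root of trust.
    expected_sig = hmac.new(vendor_key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected_sig):
        return False
    # 2. Integrity: the measured code hash must match the audited build.
    return report.get("measurement") == expected_measurement
```

Both checks matter: a valid signature over the wrong measurement means authentic hardware running code you did not audit, and the right measurement without a valid signature means the report could have been forged outside a TEE.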

You can run the Confidential stack on your own hardware, or host your workload on our Confidential Cloud.

Use Cases

  • Private inference. Guarantee data privacy during inference. Customer prompts, responses, and model interactions are never visible to you or your infrastructure.
  • Weight protection. Protect proprietary weights from extraction during inference or fine-tuning. Weights never leave hardware-enforced secure enclaves.
  • Private training. Train on sensitive data and cryptographically prove exactly what data was used.
  • Agent security. Agents run inside TEEs with hardware-enforced credential isolation. Tokens and API keys never exist in plaintext outside a TEE.
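One common construction behind a "prove exactly what data was used" guarantee is a hash commitment over the training set, computed inside the enclave and bound into the attested report: publishing the commitment pins the exact records, since changing any one of them changes the value. A sketch using a simple Merkle-style fold (the actual scheme Confidential uses is not specified here, and these helper names are illustrative):

```python
import hashlib

def leaf(record: bytes) -> bytes:
    # Domain-separate leaves from interior nodes with a 0x00 prefix.
    return hashlib.sha256(b"\x00" + record).digest()

def node(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def dataset_commitment(records: list[bytes]) -> str:
    """Merkle root over training records: deterministic for a given
    dataset, and any change to any record changes the root."""
    level = [leaf(r) for r in records] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [node(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

A tree (rather than one flat hash) also allows membership proofs: the enclave can later show that a specific record was, or was not, part of the committed set without revealing the rest.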

Get Started

To add privacy to your existing infra, see components. To run workloads on our infra, see cloud. Or contact us.

