
The optimization stack for cheaper, faster AI inference.
A startup with a real research moat.
GAISSA builds on prior UPC research in efficient AI systems, optimization workflows, and energy-aware evaluation. That foundation matters because it gives the product stronger technical grounding in the tradeoffs teams actually care about: latency, cost, and deployment fit.
Efficiency as a product advantage
GAISSA grows out of work on efficient AI systems, model optimization, and the tradeoffs that matter in production.
Built on peer-reviewed work
The underlying research line includes publications in venues such as ICSE, ESEM, TOSEM, IEEE Software, and Computing.
From lab results to transfer
The platform is shaped by the same team that has been turning optimization and evaluation research into practical assets.
Research, product, and engineering in the same room.
GAISSA is being shaped by a UPC-centered team spanning research, product transfer, technical delivery, and business mentoring.

A Platform for Every Stage
Whether you build models or just need to deploy them, we have the tools to cut costs and boost performance.
Optimization Selector
Intelligently analyzes your architecture and constraints to determine the single best optimization strategy. Prevents over-optimization.
Model Optimization Pipeline
A secure, automated workflow to ingest, quantize, and compile your models. Reduces human error and protects proprietary weights.
Strategic Planning Suite
Forecast impact before deploying. Includes 'What-if Analysis' for simulation and 'ROI Calculator' to estimate dollar savings.
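To illustrate the kind of arithmetic an ROI estimate boils down to, here is a minimal sketch. The function name, prices, setup cost, and horizon are all illustrative assumptions, not GAISSA's actual numbers or API:

```python
def roi_estimate(monthly_savings_usd, setup_cost_usd, horizon_months=12):
    """Payback period and net benefit over a planning horizon,
    given estimated monthly inference savings and a one-time
    optimization/setup cost (all figures hypothetical)."""
    payback_months = setup_cost_usd / monthly_savings_usd
    net_benefit = monthly_savings_usd * horizon_months - setup_cost_usd
    return payback_months, net_benefit

payback, net = roi_estimate(monthly_savings_usd=2000, setup_cost_usd=5000)
print(f"Payback in {payback:.1f} months, net ${net:,.0f} over 12 months")
# Payback in 2.5 months, net $19,000 over 12 months
```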
Energy Labeling
Automated energy efficiency ratings (like appliance labels) for compliance with EU sustainability regulations and ESG reporting.
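Appliance-style labels map a measured quantity onto discrete grades. A minimal sketch of that mapping, with entirely hypothetical thresholds (Wh per 1,000 inferences) standing in for whatever scale the platform actually uses:

```python
def energy_grade(wh_per_1k_inferences: float) -> str:
    """Map measured energy use to an A-G grade, appliance-label
    style. Thresholds below are illustrative placeholders only."""
    thresholds = [(5, "A"), (10, "B"), (20, "C"),
                  (40, "D"), (80, "E"), (160, "F")]
    for limit, grade in thresholds:
        if wh_per_1k_inferences <= limit:
            return grade
    return "G"

print(energy_grade(8.0))    # B
print(energy_grade(200.0))  # G
```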
Live Benchmarking
Compare your baseline vs. optimized models side-by-side. Visualize real-time gains in cost and latency on actual hardware.
Transparent Model Catalog
Choose between our ultra-efficient Slim models or standard open-source versions. Deploy instantly via API or securely on your own cloud.
| Model Name | Type | Input Cost | Output Cost |
|---|---|---|---|
| Llama 3.3 70B Slim (GAISSA Optimized) | Optimized (FP8) | $0.10 / 1M | $0.21 / 1M |
| Mistral Small 3.1 Slim (GAISSA Optimized) | Optimized (INT4) | $0.05 / 1M | $0.08 / 1M |
| DeepSeek R1 Slim (GAISSA Optimized) | Optimized (FP8) | $0.28 / 1M | $0.44 / 1M |
| Llama 3.3 70B Standard | BF16 (Standard) | $0.15 / 1M | $0.31 / 1M |
| Mistral Small 3.1 Standard | BF16 (Standard) | $0.11 / 1M | $0.17 / 1M |
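Using the catalog prices above, the Standard-vs-Slim comparison for Llama 3.3 70B can be sketched as a per-workload cost calculation. The monthly token volumes are made up for illustration:

```python
def monthly_savings(tokens_in_m, tokens_out_m,
                    base_in, base_out, opt_in, opt_out):
    """Estimate monthly dollar savings from moving an inference
    workload (volumes in millions of tokens) from baseline to
    optimized per-1M-token prices."""
    baseline = tokens_in_m * base_in + tokens_out_m * base_out
    optimized = tokens_in_m * opt_in + tokens_out_m * opt_out
    return baseline - optimized

# Example: 500M input / 100M output tokens per month.
# Llama 3.3 70B Standard ($0.15/$0.31 per 1M) vs Slim ($0.10/$0.21 per 1M).
saved = monthly_savings(500, 100, 0.15, 0.31, 0.10, 0.21)
print(f"${saved:.2f} saved per month")  # $35.00 saved per month
```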
Need a Private Deployment?
Organizations that need sovereignty, privacy, and full control over their models can run GAISSA on their own infrastructure. If you want a private setup tailored to your environment, security requirements, and deployment constraints, talk to us.
- Private Infrastructure
- Sovereign Deployment
- Model Privacy
- Tailored Integration
Talk to the team.
If you want to run optimized models on your own infrastructure, explore a private deployment, or understand whether GAISSA fits your stack, send us the context and we will follow up.
Infrastructure-first
Built for teams that care about sovereignty, deployment control, and predictable inference economics.
Direct response
Messages land in the team inbox so you can start with a simple website form instead of a sales maze.
- Private deployments on your own infrastructure
- Custom model optimization and benchmarking
- Hosted API access and model catalog questions
- Pilot projects, partnerships, and enterprise requirements
Tell us what you need.
Share your use case, infrastructure constraints, or deployment goals. We will route it directly to the team.