The Monarch Platform

Monarch is our compute utility for training, fine-tuning, and hosting AI models. It gives you one simple interface to discover GPUs, launch jobs, manage models, and control spend, automatically weaving together providers, regions, billing systems, and deployment tooling.

The Compute Utility

You tell Monarch what you want to run and it handles resource selection, job lifecycle, and the operational glue around training and inference so you can stay focused on the model and the product.

We expose a unified set of region identifiers across Africa and global hubs, so the same workflow works everywhere. You can target a preferred region when it’s available, and keep a consistent interface as local capacity expands.
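A minimal sketch of what region discovery could look like over HTTP. The base URL, the /regions endpoint, and the response fields here are assumptions for illustration, not a documented Monarch API:

```python
import requests

# Hypothetical base URL and auth; Monarch's real API surface may differ.
BASE_URL = "https://api.monarch.example/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# List the unified region identifiers, then target one when submitting work.
resp = requests.get(f"{BASE_URL}/regions", headers=HEADERS, timeout=30)
resp.raise_for_status()

for region in resp.json():
    # Each entry is assumed to carry an ID and an availability flag.
    status = "available" if region.get("available") else "unavailable"
    print(region["id"], status)
```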

GPU Catalog

Monarch maintains a fast, cached catalog of available GPU options with clear specs and pricing. You can filter by region, GPU family, memory, and price, then either pin exact hardware or let Monarch choose automatically from your preferred set.
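A sketch of catalog filtering under the same assumptions as above; the /gpus endpoint, the parameter names, and the example region and GPU family are all hypothetical:

```python
import requests

BASE_URL = "https://api.monarch.example/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Filter the cached catalog by region, GPU family, memory, and price ceiling.
params = {
    "region": "af-south",       # illustrative region identifier
    "gpu_family": "A100",       # illustrative GPU family
    "min_memory_gb": 40,
    "max_price_per_hour": 3.00,
}
resp = requests.get(f"{BASE_URL}/gpus", headers=HEADERS, params=params, timeout=30)
resp.raise_for_status()

# Pin exact hardware by ID, or pass several IDs as a preferred set and let
# Monarch choose among them at scheduling time.
preferred = [gpu["id"] for gpu in resp.json()]
print(preferred)
```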

Models & Deployment

Your trained models live in your account alongside a registry of public base models available for fine-tuning. When you’re ready to serve, deploy a model and call inference endpoints for real-time or batch predictions.
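A sketch of the deploy-then-predict flow; the /deployments endpoint, the payload fields, the model ID, and the endpoint_url response field are assumed for illustration:

```python
import requests

BASE_URL = "https://api.monarch.example/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Deploy a trained model from your account, then call its inference endpoint.
deploy = requests.post(
    f"{BASE_URL}/deployments",
    headers=HEADERS,
    json={"model_id": "my-finetuned-model"},  # illustrative model ID
    timeout=30,
)
deploy.raise_for_status()
endpoint = deploy.json()["endpoint_url"]  # assumed response field

# Real-time prediction against the returned endpoint.
result = requests.post(
    endpoint,
    headers=HEADERS,
    json={"input": "Summarize this support ticket ..."},
    timeout=30,
)
print(result.json())
```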

Training Jobs

Training is a single job submission: choose a base model, point to data, and set parameters. Fine-tuning is the default for adapting existing models; pre-training is available when you need full control and custom initialization.
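A sketch of a single-call fine-tuning submission; the /jobs endpoint, the field names, and the base model and dataset identifiers are illustrative assumptions:

```python
import requests

BASE_URL = "https://api.monarch.example/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# One submission carries the base model, the data, and the parameters.
job = requests.post(
    f"{BASE_URL}/jobs",
    headers=HEADERS,
    json={
        "type": "fine-tune",               # "pre-train" when you need full control
        "base_model": "monarch-1",         # illustrative base model name
        "dataset_id": "ds-support-chats",  # illustrative dataset ID
        "params": {"epochs": 3, "learning_rate": 2e-5},
    },
    timeout=30,
)
job.raise_for_status()
print("submitted job:", job.json()["id"])  # assumed response field
```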

Datasets

Browse and inspect datasets before you train so you can validate fields, splits, and metadata early. Use a dataset ID or a direct HTTPS file, and override the text field mapping when your data isn’t in a standard format.
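A sketch of pre-training inspection and the text field override; the /datasets endpoint, the dataset ID, the file URL, and the field names are assumptions for illustration:

```python
import requests

BASE_URL = "https://api.monarch.example/v1"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Inspect a dataset before training to validate fields, splits, and metadata.
info = requests.get(f"{BASE_URL}/datasets/ds-support-chats", headers=HEADERS, timeout=30)
info.raise_for_status()
meta = info.json()
print(meta.get("fields"), meta.get("splits"))

# A direct HTTPS file can stand in for a dataset ID; override the text field
# mapping when your data isn't in the standard format. Pass this as the data
# source in the training job payload sketched above.
source = {
    "url": "https://example.com/data/train.jsonl",  # illustrative file URL
    "text_field": "body",                           # map a non-standard field
}
```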
