The Monarch Platform

Monarch is our compute utility for training, fine-tuning, and hosting AI models. It gives you one simple interface to discover GPUs, launch jobs, manage models, and control spend, automatically weaving together providers, regions, billing systems, and deployment tooling.


The Revenue Program

The Revenue Program is for developers who are building AI-enabled products for African users and want a monetization model that fits prepaid behavior and avoids unpredictable billing risk.

Instead of the developer paying first and recovering money later, the end user tops up a balance and spends it as they use AI inside your product. Usage is metered, credits are deducted automatically, and when the balance runs out, AI usage pauses until the user tops up again.
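The top-up-and-deduct flow can be sketched as a small state machine: credits go up on top-up, go down as usage is metered, and AI access pauses when the balance can't cover the next charge. This is a minimal illustration of that logic only; the class and method names are assumptions, not the Monarch API.

```python
from dataclasses import dataclass


@dataclass
class PrepaidBalance:
    """Illustrative prepaid credit balance; names are hypothetical."""
    credits: float = 0.0

    def top_up(self, amount: float) -> None:
        """User adds credits up front."""
        self.credits += amount

    def charge(self, cost: float) -> bool:
        """Deduct metered usage; return False (AI pauses) if funds are short."""
        if cost > self.credits:
            return False  # usage pauses until the next top-up
        self.credits -= cost
        return True


balance = PrepaidBalance()
balance.top_up(10.0)
ok = balance.charge(3.5)        # allowed; 6.5 credits remain
paused = not balance.charge(9)  # insufficient credits, so usage pauses
```

The key property for developers is that a failed `charge` never goes negative: there is no post-paid exposure to recover later.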

If you’re building for SMEs, education, creators, customer support, logistics, research workflows, or any product where AI creates direct day-to-day value, this model can fit naturally.


The Compute Utility

Monarch lets you specify the workload, and the platform handles placement, provisioning, execution, and lifecycle management across training and inference jobs. It abstracts the operational layer around compute orchestration so teams can focus on models, data, and application logic rather than infrastructure coordination.

We expose a unified set of region identifiers across Africa and global hubs, so the same workflow works everywhere. You can target a preferred region when it’s available, and keep a consistent interface as local capacity expands.

GPU Catalog

Monarch maintains a fast, cached catalog of available GPU options with clear specs and pricing. You can filter by region, GPU family, memory, and price, then either pin exact hardware or let Monarch choose automatically from your preferred set.
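The filtering described above amounts to narrowing the catalog by a few attributes. The sketch below models that with an in-memory list; the field names, region identifiers, and prices are made-up examples, not real Monarch catalog data.

```python
# Illustrative in-memory GPU catalog; entries and fields are assumptions.
catalog = [
    {"gpu": "A100", "region": "af-south", "memory_gb": 80, "price_hr": 2.40},
    {"gpu": "L4",   "region": "af-south", "memory_gb": 24, "price_hr": 0.70},
    {"gpu": "H100", "region": "eu-west",  "memory_gb": 80, "price_hr": 4.10},
]


def filter_catalog(entries, *, region=None, min_memory_gb=0, max_price_hr=None):
    """Narrow the catalog by region, memory, and price."""
    out = []
    for e in entries:
        if region and e["region"] != region:
            continue
        if e["memory_gb"] < min_memory_gb:
            continue
        if max_price_hr is not None and e["price_hr"] > max_price_hr:
            continue
        out.append(e)
    return out


matches = filter_catalog(catalog, region="af-south", min_memory_gb=40)
```

A pinned-hardware workflow would take the single match; a preferred-set workflow would pass a looser filter and let the platform pick from the results.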

Models & Deployment

Your trained models live in your account alongside a registry of public base models available for fine-tuning. When you’re ready to serve, deploy a model and call inference endpoints for real-time or batch predictions.
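Serving a deployed model typically means POSTing inputs to its inference endpoint in either real-time or batch mode. The sketch below only builds a plausible request shape; the endpoint path, host, field names, and model ID are all assumptions for illustration, not the documented Monarch API.

```python
import json


def build_inference_request(model_id: str, inputs: list, batch: bool = False) -> dict:
    """Assemble a hypothetical inference request for a deployed model."""
    return {
        # Hypothetical endpoint layout; the real URL scheme may differ.
        "url": f"https://api.example.com/v1/models/{model_id}/infer",
        "body": json.dumps({
            "inputs": inputs,
            "mode": "batch" if batch else "realtime",
        }),
    }


req = build_inference_request("my-finetuned-model", ["Hello"], batch=False)
```

The same request shape covers both cases: a single-item list for real-time calls, a large list with `batch=True` for batch predictions.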

Training Jobs

Training is a single job submission: choose a base model, point to data, and set parameters. Fine-tuning is the default for adapting existing models; pre-training is available when you need full control and custom initialization.
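The three inputs named above (base model, data, parameters) can be captured in a single job spec. This sketch shows one plausible shape with fine-tuning as the default mode; the keys, model name, and dataset ID are illustrative assumptions, not the Monarch schema.

```python
def make_training_job(base_model: str, dataset: str, *, mode: str = "fine-tune", **params) -> dict:
    """Assemble a hypothetical training-job spec; fine-tuning is the default."""
    if mode not in ("fine-tune", "pre-train"):
        raise ValueError("mode must be 'fine-tune' or 'pre-train'")
    return {
        "base_model": base_model,  # existing model to adapt (or initialize from)
        "dataset": dataset,        # dataset ID or file reference
        "mode": mode,
        "params": params,          # e.g. epochs, learning rate
    }


job = make_training_job("example-base-8b", "dataset-123",
                        epochs=3, learning_rate=2e-5)
```

Switching `mode="pre-train"` keeps the same submission shape while signaling that the platform should run from custom initialization rather than adapt an existing checkpoint.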

Datasets

Browse and inspect datasets before you train so you can validate fields, splits, and metadata early. Use a dataset ID or a direct HTTPS file URL, and override the text-field mapping when your data isn’t in a standard format.
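The field-mapping override boils down to telling the loader which key holds the training text when records don't use the standard one. A minimal sketch of that check, with a hypothetical default field name and a made-up sample record:

```python
def extract_text(record: dict, text_field: str = "text") -> str:
    """Pull the training text from a record; 'text' is an assumed default."""
    if text_field not in record:
        raise KeyError(
            f"record has no field {text_field!r}; pass a text_field override"
        )
    return record[text_field]


# Non-standard record: the text lives under "body" instead of "text".
sample = {"body": "Invoice follow-up message", "split": "train"}
text = extract_text(sample, text_field="body")  # override for this format
```

Validating one sample record like this before submitting a job catches mapping mistakes early, instead of mid-training.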