The Monarch Platform

Monarch is our compute utility for training, fine-tuning, and hosting AI models. It gives you one simple interface to discover GPUs, launch jobs, manage models, and control spend, automatically weaving together providers, regions, billing systems, and deployment tooling.


The Revenue Program

The Revenue Program is for developers who build AI-enabled products for African users and want a monetization model that fits prepaid behavior and avoids unpredictable billing risk.

Instead of you paying first and recovering money later, the end user tops up a balance and spends it as they use AI inside your product. Usage is metered, credits are deducted automatically, and when the balance runs out, AI usage pauses until the user tops up again.
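
The top-up-then-spend loop can be sketched in a few lines. This is a minimal illustration of the behavior described above, not the real billing implementation; the class, field names, and credit prices are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PrepaidWallet:
    """Hypothetical per-user credit balance; prices are illustrative."""
    balance: int  # credits

    def charge(self, units: int, price_per_unit: int) -> bool:
        """Deduct metered usage; return False (pause AI) when credits run out."""
        cost = units * price_per_unit
        if cost > self.balance:
            return False  # AI usage pauses until the user tops up
        self.balance -= cost
        return True

    def top_up(self, credits: int) -> None:
        self.balance += credits

wallet = PrepaidWallet(balance=100)
assert wallet.charge(units=30, price_per_unit=3)      # 90 deducted, 10 left
assert not wallet.charge(units=5, price_per_unit=3)   # insufficient: usage pauses
wallet.top_up(50)
assert wallet.charge(units=5, price_per_unit=3)       # resumes after top-up
```

The key property is that the balance can never go negative: the user's exposure is capped at what they have already loaded, which is what removes the billing risk for both sides.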

If you’re building for SMEs, education, creators, customer support, logistics, research workflows, or any product where AI creates direct day-to-day value, this model can fit naturally.


The Tenders API

The Tenders API turns tender documents into a submission-ready compliance pack.

Upload the tender documents and your supporting company files, and it generates:

  • A compliance checklist (pass/fail with evidence links),

  • A gap report (what’s missing and how to fix it),

  • A submission pack structure (annexures, naming, indexing),

  • A clear audit trail you can share internally.

Use it as a simple web workflow, or integrate via API if you run tenders at scale.
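
To make the checklist-and-gap-report idea concrete, here is a small sketch of how pass/fail items with evidence links could be split into the two outputs. The real API's response schema is not documented here, so every field name (requirement, evidence, fix) is a placeholder.

```python
def build_gap_report(checklist):
    """Split checklist items into passes (with evidence links)
    and gaps (what's missing, plus how to fix it)."""
    passes = [c for c in checklist if c["evidence"] is not None]
    gaps = [{"requirement": c["requirement"], "fix": c["fix"]}
            for c in checklist if c["evidence"] is None]
    return {"passes": passes, "gaps": gaps}

# Illustrative checklist items, not real tender requirements.
checklist = [
    {"requirement": "Valid tax clearance", "evidence": "docs/tax.pdf", "fix": None},
    {"requirement": "Signed declaration form", "evidence": None,
     "fix": "Have a director sign and re-upload annexure A"},
]
report = build_gap_report(checklist)
assert len(report["passes"]) == 1 and len(report["gaps"]) == 1
```

In the real workflow these structures are generated from your uploaded documents; the point of the sketch is only the shape of the output: every item is either a pass with evidence or a gap with a concrete fix.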


The Compute Utility

You tell Monarch what you want to run and it handles resource selection, job lifecycle, and the operational glue around training and inference so you can stay focused on the model and the product.

We expose a unified set of region identifiers across Africa and global hubs, so the same workflow works everywhere. You can target a preferred region when it’s available, and keep a consistent interface as local capacity expands.
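
The prefer-then-fall-back behavior can be expressed as a simple ordered lookup. The region identifiers below are invented for illustration; Monarch's actual identifier set is not listed in this document.

```python
def pick_region(preferred, available):
    """Target the first preferred region with capacity, else fall back in order."""
    for region in preferred:
        if region in available:
            return region
    raise RuntimeError("no preferred region currently has capacity")

preferred = ["af-south", "eu-west", "us-east"]   # hypothetical identifiers
available = {"eu-west", "us-east"}                # capacity right now
assert pick_region(preferred, available) == "eu-west"
```

Because your code only ever deals in the unified identifiers, the same preference list keeps working unchanged as local capacity comes online: "af-south" simply starts winning the lookup.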

GPU Catalog

Monarch maintains a fast, cached catalog of available GPU options with clear specs and pricing. You can filter by region, GPU family, memory, and price, then pin exact hardware, or let Monarch automatically choose from your preferred set.
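
A catalog query of this kind reduces to filtering and sorting. The entries, field names, and prices below are made up; the real catalog supplies its own specs and pricing.

```python
catalog = [
    {"gpu": "A100", "region": "eu-west", "memory_gb": 80, "price_hr": 2.4},
    {"gpu": "L4",   "region": "eu-west", "memory_gb": 24, "price_hr": 0.7},
    {"gpu": "H100", "region": "us-east", "memory_gb": 80, "price_hr": 3.9},
]

def filter_catalog(catalog, region=None, min_memory_gb=0, max_price_hr=float("inf")):
    """Filter by region, GPU memory, and price, cheapest first."""
    hits = [g for g in catalog
            if (region is None or g["region"] == region)
            and g["memory_gb"] >= min_memory_gb
            and g["price_hr"] <= max_price_hr]
    return sorted(hits, key=lambda g: g["price_hr"])

picks = filter_catalog(catalog, region="eu-west", min_memory_gb=40)
assert [g["gpu"] for g in picks] == ["A100"]
```

Pinning exact hardware corresponds to filtering down to one entry; letting Monarch choose corresponds to passing the whole filtered list as your preferred set and accepting whichever option it schedules.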

Models & Deployment

Your trained models live in your account alongside a registry of public base models available for fine-tuning. When you’re ready to serve, deploy a model and call inference endpoints for real-time or batch predictions.
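
The real-time versus batch distinction comes down to the shape of the request. This sketch builds the request body only; the endpoint paths, field names, and model identifier format are assumptions, not Monarch's documented schema.

```python
def inference_request(model_id, inputs, batch=False):
    """Build a request body for a deployed model: a single input for
    real-time predictions, a list of inputs for batch."""
    return {
        "model": model_id,
        "mode": "batch" if batch else "realtime",
        "inputs": inputs if batch else [inputs],
    }

realtime = inference_request("my-org/support-bot-v2", {"text": "Where is my order?"})
assert realtime["mode"] == "realtime" and len(realtime["inputs"]) == 1

batch = inference_request("my-org/support-bot-v2",
                          [{"text": "hi"}, {"text": "bye"}], batch=True)
assert batch["mode"] == "batch" and len(batch["inputs"]) == 2
```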

Training Jobs

Training is a single job submission: choose a base model, point to data, and set parameters. Fine-tuning is the default for adapting existing models; pre-training is available when you need full control and custom initialization.
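
A single job submission bundles exactly those three things: base model, data pointer, and parameters. The function and field names below are illustrative, as are the default hyperparameters; they are not Monarch's real schema.

```python
def fine_tune_job(base_model, dataset, **params):
    """Assemble a fine-tuning job spec: base model + data + parameters.
    Pre-training would be a separate job type with its own initialization."""
    return {
        "type": "fine-tune",
        "base_model": base_model,
        "dataset": dataset,
        # Hypothetical defaults; any keyword argument overrides them.
        "params": {"epochs": 3, "learning_rate": 2e-5, **params},
    }

job = fine_tune_job("registry/base-7b", "ds_12345", learning_rate=1e-5)
assert job["params"]["learning_rate"] == 1e-5   # override applied
assert job["params"]["epochs"] == 3             # default kept
```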

Datasets

Browse and inspect datasets before you train so you can validate fields, splits, and metadata early. Reference data by dataset ID or by a direct HTTPS file URL, and override the text field mapping when your data isn't in a standard format.
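
The two ways of referencing data, plus the field-mapping override, can be sketched as a small resolver. This is purely illustrative of the behavior described above; the function name, the default field name, and the returned structure are assumptions.

```python
def resolve_dataset(source, text_field="text"):
    """Treat an HTTPS URL as a direct file and anything else as a dataset ID;
    text_field overrides the mapping for non-standard data."""
    kind = "url" if source.startswith("https://") else "dataset_id"
    return {"kind": kind, "source": source, "text_field": text_field}

# Direct file with a non-standard text column:
ref = resolve_dataset("https://example.com/train.jsonl", text_field="body")
assert ref["kind"] == "url" and ref["text_field"] == "body"

# Catalog dataset with the default mapping:
assert resolve_dataset("ds_42")["kind"] == "dataset_id"
```

Inspecting the resolved dataset before submitting a training job is the cheap way to catch a wrong field mapping or a missing split, rather than discovering it mid-run.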