Quadrillion

Research at the speed of thought.

Qualia is the research agent that ideates, runs experiments, and collaborates with you.

~/lm-training › train_transformer.ipynb

Agents

Today (1)
Train 125M transformer · 2h
Starting... · train_transformer
RoPE + longer context · 35m
Final loss: 2.741 · variant_rope
LR warmup + weight tying · 40m
Final loss: 2.709 · variant_warmup
SwiGLU + RMSNorm · 45m
Final loss: 2.641 · variant_swiglu
Yesterday (1)
Tokenizer benchmarks · 1d
Completed successfully
[1]
from lm_lab import TransformerLM, Trainer
from lm_lab.data import WikiText103
import matplotlib.pyplot as plt
Executed in 611ms
[2]
model = TransformerLM(
    n_layers=12, n_heads=12, d_model=768,
    max_seq_len=1024, dropout=0.1,
).cuda()
print(f"Parameters: {model.num_parameters():,}")
Parameters: 124,439,808
Executed in 610ms
[3]
trainer = Trainer(
    model,
    dataset=WikiText103(),
    lr=6e-4, weight_decay=0.1,
    scheduler="cosine", min_lr=6e-5,
)
Executed in 609ms

Set up a training loop for a 125M parameter transformer on WikiText-103 with AdamW and cosine annealing

I’ll wire up the full training pipeline: load/tokenize WikiText-103, build a ~125M decoder-only Transformer, and set up AdamW plus a cosine learning-rate schedule. I’ll also add common Transformer training tricks (mixed precision, gradient clipping, checkpointing, periodic validation perplexity) so the run is stable and results are comparable.
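The Trainer cell above configures this schedule (lr=6e-4 annealing to min_lr=6e-5), and one of the sidebar variants adds LR warmup. As a minimal sketch of what such a schedule computes, assuming linear warmup followed by cosine annealing (`cosine_lr` is a hypothetical helper written for illustration, not part of lm_lab):

```python
import math

def cosine_lr(step, total_steps, peak_lr=6e-4, min_lr=6e-5, warmup=0):
    """Per-step learning rate: linear warmup to peak_lr, then cosine
    annealing down to min_lr over the remaining steps."""
    if step < warmup:
        # Linear ramp from peak_lr / warmup up to peak_lr.
        return peak_lr * (step + 1) / warmup
    # Fraction of the post-warmup schedule completed, in [0, 1].
    t = (step - warmup) / max(1, total_steps - warmup)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))
```

At step 0 this returns the peak rate 6e-4, at the halfway point the midpoint (6e-4 + 6e-5) / 2, and at the final step the floor 6e-5, matching the Trainer's `scheduler="cosine"` and `min_lr` arguments.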

Made 3 changes

Setup is done. I’ll treat validation perplexity as the headline metric, evaluate on a fixed interval, and checkpoint so we can do apples-to-apples comparisons when we start testing improvements.
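Assuming the reported final losses are mean cross-entropies in nats per token (the usual convention for language-model training), validation perplexity is simply exp(loss). Converting the variant results from the sidebar:

```python
import math

# Perplexity = exp(mean cross-entropy loss in nats per token),
# so the final losses reported above convert directly:
final_losses = {
    "variant_rope": 2.741,
    "variant_warmup": 2.709,
    "variant_swiglu": 2.641,
}
perplexities = {name: math.exp(loss) for name, loss in final_losses.items()}
for name, ppl in perplexities.items():
    print(f"{name}: ppl = {ppl:.2f}")
```

Lower loss maps monotonically to lower perplexity, so the SwiGLU + RMSNorm variant (loss 2.641, perplexity about 14.0) leads the comparison.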


Qualia is a love letter to the art of research.

Built by researchers to make practical research workflows faster and easier.

From idea to results.

Qualia runs wherever Jupyter does. Chat with it to clean data, make beautiful visualizations, and fit models—just by asking.

Agent demo screenshot

Explore every direction.

Never leave an idea on the table again. Qualia coordinates parallel agents so you can iterate on promising model variants and re-examine that featurization choice you made last week.

Notebooks demo screenshot

The best research assistant.

Propose a direction in Slack, and Qualia runs for hours—training models, testing variations, and reporting what works.

Collaboration demo screenshot
“This feels like a superpower.”
Lino Le Van, Co-founder at AlphaXiv

Pricing

Qualia Free

Explore Qualia

$0 forever

  • Chat-based data analysis
  • Community support

Pro

For researchers

$20 per month

Everything in Free, plus:

  • Up to 8 parallel agents: explore multiple directions at once
  • Access to GPU nodes
  • Priority support

Max

For power users

$100 per person, billed monthly

Everything in Pro, plus:

  • Autonomous multi-hour research runs
  • Slack / Google Docs integration
  • Higher output limits for all tasks
  • Dedicated support

Enterprise

For organizations with custom needs

Custom

  • On-prem deployment—your IP stays local
  • Integration with your codebase and internal tools
  • Dedicated field engineering
  • 24/7 support with SLAs
  • SSO, audit logs, and compliance controls