Credit system

Complete guide to AI model costs, pricing, and spending management in Cension.

Every time you run a workflow in Cension, you're choosing from a range of AI models provided by different companies like OpenAI, Google, and Anthropic. Each model has different strengths, capabilities, and costs. Understanding how this system works helps you make smart choices that balance quality, speed, and budget.

Model pricing table

Here's a complete list of available AI models with their pricing parameters:

Model                   Base multiplier   Input block tokens   Output block tokens
gemini-2.5-flash        4                 2000                 180
gemini-2.5-flash-lite   1                 800                  160
gpt-4o                  4                 1000                 80
gpt-4o-mini             3                 1800                 160
gpt-5                   4                 3300                 180
gpt-5-mini              3                 3000                 200
gpt-5-nano              1                 2400                 180

How model selection works

When you click the AI model selector in your workspace toolbar, you see a dropdown organized by provider. OpenAI models appear first, followed by Google and Anthropic options. Each model entry shows its name, a credit cost multiplier, and a brief description of what it's good at.

The credit multiplier is the key number to pay attention to: it tells you how much a model costs relative to others. A model with a 3x multiplier costs three times as much per operation as a 1x model. The multiplier reflects the model's capabilities and the computational resources it requires.

Block-by-block charging

Cension charges based on actual token usage, but in 'blocks' rather than raw tokens. Each model has defined token limits for input and output, and you get charged based on how many of these blocks you consume.

  • Each model has `input_block_tokens` (e.g., 2000) and `output_block_tokens` (e.g., 180) limits
  • Your actual usage gets divided by these limits and rounded up
  • Input blocks + output blocks = total blocks used
  • Total blocks × model's multiplier = credits charged

Example calculation

If a model has 2000 input tokens and 180 output tokens per block, with a 3x multiplier:

  • Using 3500 input tokens = CEILING(3500 ÷ 2000) = 2 input blocks
  • Using 400 output tokens = CEILING(400 ÷ 180) = 3 output blocks
  • Total blocks = 2 + 3 = 5
  • Credits charged = 5 × 3 = 15 credits
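The calculation above can be sketched in Python. The block sizes and multiplier come from the worked example; the function name is illustrative, not part of any Cension API:

```python
import math

def credits_charged(input_tokens, output_tokens,
                    input_block_tokens, output_block_tokens, multiplier):
    """Charge per started block, then apply the model's credit multiplier."""
    input_blocks = math.ceil(input_tokens / input_block_tokens)
    output_blocks = math.ceil(output_tokens / output_block_tokens)
    return (input_blocks + output_blocks) * multiplier

# The worked example: 3500 input / 400 output tokens on a 3x model
print(credits_charged(3500, 400, 2000, 180, 3))  # → 15
```

Note that `math.ceil` rounds partial blocks up, which is why 400 output tokens count as 3 blocks of 180 tokens rather than 2.22.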

This system means costs scale with actual usage while staying predictable. Larger requests consume more blocks, and you only ever pay for the blocks your request actually starts.

Choosing the right model

Different models excel at different types of work. Since a model's multiplier reflects both its capability and its cost, a practical approach is to start with a 1x model and move up the multiplier scale only when output quality falls short.
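One way to compare options is to price the same request on every model using the pricing table and the block-charging rule above. This is a sketch; the `MODELS` dict simply restates the table:

```python
import math

# Pricing parameters from the table: (multiplier, input_block_tokens, output_block_tokens)
MODELS = {
    "gemini-2.5-flash":      (4, 2000, 180),
    "gemini-2.5-flash-lite": (1, 800, 160),
    "gpt-4o":                (4, 1000, 80),
    "gpt-4o-mini":           (3, 1800, 160),
    "gpt-5":                 (4, 3300, 180),
    "gpt-5-mini":            (3, 3000, 200),
    "gpt-5-nano":            (1, 2400, 180),
}

def cost(model, input_tokens, output_tokens):
    mult, in_block, out_block = MODELS[model]
    blocks = (math.ceil(input_tokens / in_block)
              + math.ceil(output_tokens / out_block))
    return blocks * mult

# Price a 3500-input / 400-output request on every model, cheapest first
for name in sorted(MODELS, key=lambda m: cost(m, 3500, 400)):
    print(f"{name}: {cost(name, 3500, 400)} credits")
```

For this request size, gpt-5-nano comes out cheapest at 5 credits and gpt-4o most expensive at 36, mostly because gpt-4o's small 80-token output blocks accumulate quickly.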

Managing costs effectively

Since costs depend on actual token usage divided into blocks, you have several levers to control spending: shorter prompts consume fewer input blocks, constrained responses consume fewer output blocks, and lower-multiplier models reduce the price of every block.

Remember that Deep Search doubles all credit costs, so use it strategically for only the most demanding research tasks.
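Assuming Deep Search doubles the final credit charge (the exact mechanism is an assumption based on the sentence above), its effect on a request can be sketched as:

```python
import math

def credits(input_tokens, output_tokens, input_block, output_block,
            multiplier, deep_search=False):
    """Block-based charge, optionally doubled for Deep Search."""
    blocks = (math.ceil(input_tokens / input_block)
              + math.ceil(output_tokens / output_block))
    base = blocks * multiplier
    # Deep Search doubles all credit costs (assumed to apply to the total)
    return base * 2 if deep_search else base

print(credits(3500, 400, 2000, 180, 3))                    # → 15
print(credits(3500, 400, 2000, 180, 3, deep_search=True))  # → 30
```

Because the doubling stacks on top of the model multiplier, Deep Search on a 4x model is effectively an 8x charge per block, which is why it pays to reserve it for demanding research tasks.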

Copyright © 2025 Cension AB. All rights reserved.