Accepting new projects · Senior-only team · Remote-first | PWN-ALL · Custom software studio

Software in Rust and Python that outlives the team that shipped it.

We are a senior-only studio building custom software in two languages on purpose: Rust where being wrong is expensive, and Python where being slow-to-ship is expensive.

  • Avg. p99 improvement across 40 migrations
  • Memory reduction vs. JVM / Node baselines
  • Services in production since 2024
  • 0 security incidents shipped, across our entire history
  • Teams who trust our code
01 Two languages. Not five.

One stack, picked on purpose.

Polyglot agencies sound great in a pitch deck. In production they mean three build systems, four flavours of null, and a graveyard of half-maintained services. We picked two languages that cover 95% of real workloads — and we got very, very good at both.

Hot path · systems · safety

Rust

9.6 internal fit score

Memory-safe without a garbage collector. Data-race-free at compile time. The meanest code reviewer you'll ever have — and once it lets you through, your service doesn't wake you up on Sunday.

When we reach for it

  • Payment rails & anything touching money or PII
  • Hot API gateways with five-nines SLAs
  • Low-latency matching engines, trading, realtime
  • WebAssembly modules shipped to the browser
  • CLI tools & daemons that must start in milliseconds

Trade-offs we won't pretend away

  • Ramp-up for new hires: ~2–4 weeks to productive
  • Compile times on huge workspaces (we mitigate with sccache)
  • Younger ecosystem than Java, but mature where it counts

Glue · data · ML · velocity

Python

9.3 internal fit score

Fastest path from whiteboard to working system. The richest ecosystem on Earth for data, ML and automation. Modern Python (3.12, uv, ruff, pydantic, FastAPI) is precise, typed, and fast enough for most of what you need.

When we reach for it

  • Internal tools, dashboards, admin panels
  • ETL, data pipelines, Airflow / Dagster / Prefect
  • ML — training, serving, evaluation
  • Automations & integrations with vendor APIs
  • MVPs that ship this quarter, not next year

Trade-offs we won't pretend away

  • Single-core throughput 20–50× lower than Rust's
  • Higher memory per request — fatal for some workloads
  • Dynamic typing bites without strict mypy / pydantic
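That last bullet is the one that bites hardest in practice. A stdlib-only sketch of the failure mode, with a hypothetical `total_cents` helper (strict mypy, or a pydantic model at the boundary, catches this before it ships):

```python
def total_cents(prices: list[int]) -> int:
    # mypy --strict flags any call site that passes list[str]; without it,
    # the bug surfaces at runtime, mid-request, deep in the call stack.
    return sum(prices)

rows = [{"price": "1999"}, {"price": "2499"}]  # JSON hands you strings

try:
    total_cents([r["price"] for r in rows])    # type error, but only at runtime
except TypeError:
    print("blew up in production, not in review")
```

The fix is not more tests; it is a typed boundary that rejects or converts the strings the moment they enter the system.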
Rust where wrong is expensive. Python where slow-to-ship is expensive. One team. Zero dogma.
02 The numbers, not the vibes

Rust & Python vs. the usual suspects.

Tap a metric to highlight where each language lands. Scores are 0–10, compiled from our own benchmarks plus public sources (TechEmpower Round 22, the Computer Language Benchmarks Game, and real migrations we shipped).

Criterion                            Rust  Python  Go  C++  Java  Node.js
Raw performance (p99, single core)    10     3      7   10    7      5
Memory safety & data races            10     9      8    2    8      7
Time to working prototype              5    10      7    3    6      8
Ecosystem breadth                      8    10      7    9   10      9
Concurrency without tears             10     6      9    4    6      7
Ops cost per request                  10     5      8    9    5      6
Senior talent availability             6    10      7    8   10      9
10-year maintainability               10     8      8    5    8      5
03 From → To

What changes when you migrate.

Pick a starting language. Watch the impact of moving to Rust or Python. Numbers are medians across our last 40 migration projects, not marketing fluff.

Currently on

C / C++

Fast, yes. But every null pointer is a potential CVE, every thread is a potential data race, and your build system is somebody's full-time job.

Typical pain
  • Memory-safety CVEs
  • Undefined behaviour
  • Build-system sprawl
Move to

Rust

Throughput ×6.4
Memory footprint −78%
Runtime crashes −99%
Monthly compute bill −65%

Best when you currently live with null-pointer bugs, data races, or memory that grows unbounded. Rust keeps the speed and deletes the foot-guns.

Move to

Python

Dev velocity ×3.1
Lines of code −55%
Time to release −60%
Runtime overhead +40%

Best when the real cost is engineering time, not CPU. Trade raw throughput for a shorter feedback loop, richer libraries, and code humans can actually read.

How we measure these numbers

Medians across 40 completed migrations between 2023 and 2026. Throughput measured at the application layer (end-to-end p50 under realistic load, not microbenchmarks). Memory is RSS at steady state. Cost is monthly on-demand compute on AWS/GCP, all else equal. Individual results vary — we publish the ones that didn't go to plan too, on request.
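Why medians and not means: one runaway success would make the average look like marketing. A sketch with toy multipliers (illustrative values, not our dataset):

```python
from statistics import mean, median

# Toy per-migration throughput multipliers, NOT real client data.
# A single outlier project drags the mean; the median stays honest.
speedups = [3.2, 4.1, 5.8, 6.4, 6.9, 7.5, 41.0]

print(f"mean:   x{mean(speedups):.1f}")    # x10.7, skewed by the outlier
print(f"median: x{median(speedups):.1f}")  # x6.4, the kind of number worth reporting
```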

04 Numbers without asterisks

Requests per second under real load.

Identical workload — JSON validate → query Postgres → render — measured on a single AMD Ryzen 7 box. These are not microbenchmarks. Source & methodology ↓

  1. Rust · Axum
     21,030 req/s
  2. C# .NET · ASP.NET Core
     14,707 req/s
  3. Node.js · Fastify
     9,340 req/s
  4. C++ · Drogon
     7,200 req/s
  5. Go · Gin
     3,546 req/s
  6. Python · FastAPI (Uvicorn)
     1,185 req/s
  7. PHP · Laravel
     299 req/s

Read this the right way: Python is near the bottom of this chart, and that's fine. We don't run FastAPI on the hot path. We run it where 1,185 req/s is already ~10× more than the workload needs, and engineer-hours are worth more than CPU cycles. Methodology: AMD Ryzen 7, Linux, Docker, single instance, one popular framework per language. Numbers are averages across multiple runs.

05 The real cost of broken code

What an outage actually costs you.

"Five nines" is not marketing. Below is what one hour of unplanned downtime costs, by industry — with sources. We build Rust where these numbers live.

Finance / healthcare     ~$83k/min · $5M/hr
Automotive               ~$38k/min · $2.3M/hr
Large enterprise         $23,750/min · $1.4M/hr
Mid-size enterprise      ~$5k/min · $300k/hr
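The per-minute and per-hour figures are the same number in different units, and cost accrues linearly over an outage. A quick sanity check with a hypothetical helper:

```python
def downtime_cost(cost_per_hour: float, seconds_down: float) -> float:
    """Linear accrual: cost_per_hour spread evenly over each second down."""
    return cost_per_hour / 3600.0 * seconds_down

# Large enterprise: $1.425M/hr == $23,750/min, i.e. roughly $396 every second.
print(f"${downtime_cost(1_425_000, 1):,.0f}/sec")
```

At that rate, the 87-minute average outage cited below for SMBs would cost a large enterprise about $2.1M.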
Finance & healthcare $5M+ / hour

The highest-stakes verticals. Trading platforms, settlement systems and clinical systems can exceed $5 million per hour during a serious outage — before any regulatory or litigation costs are counted.

Source: Gartner 2024 Fortune 500 study; ITIC 2024 Hourly Cost of Downtime.
Automotive manufacturing $2.3M / hour

A stopped production line burns roughly $640 per second. The CrowdStrike outage in July 2024 cost Delta Air Lines alone $380M in five days.

Source: Erwood Group 2025 industry breakdown; Antithesis CrowdStrike postmortem.
Large enterprise (avg.) $1.4M / hour

BigPanda's 2024 large-enterprise number: $23,750 per minute. ITIC reports 41% of large enterprises lose between $1M and $5M per hour of outage.

Source: BigPanda 2024 research; ITIC 11th Annual Hourly Cost of Downtime.
Global 2000 (Oxford Economics) $400B / year

Total hidden cost of unplanned downtime across the world's 2,000 largest companies, per Oxford Economics' 2024 study — averaging $200M of impact per company when revenue, productivity and remediation are summed.

Source: Oxford Economics 2024, “The Hidden Costs of Downtime”.
Mid-size & large (typical hour) $300k+ / hour

ITIC's 2024 survey: over 90% of mid-size and large enterprises now place a single hour of unplanned downtime above this floor — exclusive of legal, civil or regulatory penalties.

Source: ITIC 2024 Hourly Cost of Downtime Report.
Small & mid-size (SMB) $25k–$150k / hour

The 2025 ITIC / Calyptix joint study finds many SMBs lose this much per hour; Siemens reports SMEs hit by outages can see up to $150,000/hr. The average outage event lasts 87 minutes.

Source: ITIC + Calyptix 2025; Siemens True Cost of Downtime 2024.
06 Selected work

Three projects. Three different fires.

Anonymised where the NDA says so, specific where the results do. These are the engagements we’d point a technical buyer to first.

  1. Case 01 Python Database Crypto GDPR

    Fintech data vault: 4× smaller, 5.5× faster, globally compliant.

    Client carried a 1.8 TB Postgres cluster bloated with legacy columns, dead indices, and inline-encrypted BLOBs that had grown over seven years. Crypto ran on a deprecated library flagged in three separate audits. Regulatory exposure was real; auditors were circling.

    What we did

    • Full schema + usage audit: dropped unused columns and dead indexes, introduced proper partitioning.
    • Migrated the crypto pipeline from a legacy library to a modern, audited AEAD stack with rotating keys.
    • Converted BLOB-inline encryption to referenced envelope encryption with a dedicated KMS.
    • Aligned data retention and subject-access flows with GDPR, CCPA and APPI.
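For readers unfamiliar with envelope encryption: each object gets its own data key (DEK), the object is sealed with that key, and only a KMS-wrapped copy of the DEK is stored beside the ciphertext. A minimal sketch assuming the `cryptography` package; `kms_wrap`/`kms_unwrap` are hypothetical stand-ins for real KMS calls:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

_FAKE_KEK = AESGCM(b"\x00" * 32)  # stand-in only: a real KEK never leaves the KMS

def kms_wrap(dek: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + _FAKE_KEK.encrypt(nonce, dek, b"dek")

def kms_unwrap(wrapped: bytes) -> bytes:
    return _FAKE_KEK.decrypt(wrapped[:12], wrapped[12:], b"dek")

def envelope_encrypt(plaintext: bytes, aad: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)  # fresh data key per object
    nonce = os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(dek).encrypt(nonce, plaintext, aad),
        "wrapped_dek": kms_wrap(dek),  # only the wrapped DEK is persisted
    }

def envelope_decrypt(rec: dict, aad: bytes) -> bytes:
    dek = kms_unwrap(rec["wrapped_dek"])
    return AESGCM(dek).decrypt(rec["nonce"], rec["ciphertext"], aad)
```

The payoff: key rotation means re-wrapping small DEKs, not re-encrypting terabytes of BLOBs.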
    Outcome

    Same data, a quarter of the storage bill, 5.5× the throughput, and a clean bill of health for the next regulator who came knocking.

  2. Case 02 Rust C++ → Rust Security Storage

    C++ service rewritten in Rust: 100+ CVE-class bugs killed in 9 weeks.

    User-facing file-processing service in C++, crashing every 4–5 days and patched in-place each time. Our audit surfaced 100+ real bugs: denial-of-service paths, buffer overflows, unbounded request handling. Peak-hour 503s were a weekly ritual. On the storage side, user uploads had accreted into a swamp of duplicate files eating the bucket.

    What we did

    • Complete rewrite in Rust (axum + tokio) with strict input validation and bounded resource limits.
    • Property-based tests + cargo-fuzz over every parser and wire-format boundary.
    • Content-addressed storage layer with deduplication at write-time.
    • Blue-green rollout behind a 4-hour integration window, no downtime.
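The dedupe win in the third bullet comes from addressing blobs by their content hash. A stdlib-only sketch of the idea (hypothetical `CasStore`; a production version would also need reference counting before deleting anything):

```python
import hashlib
import tempfile
from pathlib import Path

class CasStore:
    """Content-addressed blob store: identical bytes are written exactly once."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        path = self.root / digest
        if not path.exists():  # a second upload of the same bytes is a no-op
            path.write_bytes(data)
        return digest          # callers keep the address, not another copy

    def get(self, digest: str) -> bytes:
        return (self.root / digest).read_bytes()

with tempfile.TemporaryDirectory() as tmp:
    store = CasStore(tmp)
    a = store.put(b"user upload")
    b = store.put(b"user upload")  # duplicate: costs no extra storage
    stored = len(list(Path(tmp).iterdir()))
```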
    Outcome

    Service went from "fragile and patched weekly" to "we stopped looking at the pager". Storage costs dropped with dedupe, support tickets about errors and 503s dried up, and the rewrite paid for itself inside the quarter.

  3. Case 03 Rust Python eBPF / XDP CRM · 4k users

    Enterprise CRM, rebuilt: 18 servers → 5, spend down 60%+.

    Internal CRM serving 4,000+ users across IAM, SOC, centralised logging, chat, file-share, VoIP and end-to-end encrypted data. Eighteen servers, Cloudflare on top, and a cloud bill that kept growing regardless of headcount. We rebuilt the hot path in Rust, kept Python at the integration and reporting layer, and put an eBPF/XDP filter directly in front of the ingress.

    What we did

    • Rust services for auth (IAM), real-time messaging, VoIP signalling, file transfer.
    • Python for admin surfaces, reporting, SOC event correlation, integration with vendor APIs.
    • eBPF/XDP bot & abuse filtering at the kernel — replaced Cloudflare for this workload.
    • Structured logging pipeline rewritten around a zero-copy schema.
    Outcome

    Thirteen fewer servers, no more Cloudflare line-item, the SOC team sees cleaner signal through the logging pipeline, and the CFO stopped asking awkward questions about the infra budget.

07 How we actually work

Turn the dials. Watch the plan change.

Every project balances speed, cost and reliability. The five-stage estimate below is calibrated against industry medians (discovery 2–6 weeks, architecture 1–4 weeks, implementation 4–20 weeks, hardening 2–8 weeks, handover 1–2 weeks — per 2024–2026 reports from NIX United, Agilie, SOLTECH, OTG Lab). Move the sliders; the plan re-weights live.

01

Discovery

3 wks

Read your code, interview your ops, list the unknowns, pick the language per component.

  • Domain interviews & code audit
  • Risk register & SLA targets
  • Per-service language decision
02

Architecture

2 wks

Contracts before code. OpenAPI / protobuf, data models, deployment topology, runbook skeletons.

  • RFCs for every public contract
  • Data model + migration plan
  • Infra-as-code baseline
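"Contracts before code" concretely: the response schema is written down and agreed first, then every implementation is checked against it. A toy stdlib illustration (real projects generate servers and clients from the OpenAPI file rather than hand-rolling checks like this):

```python
# An OpenAPI-style response schema, agreed in an RFC before any code exists.
USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {"id": {"type": "integer"}, "email": {"type": "string"}},
}

_PY_TYPES = {"integer": int, "string": str}

def conforms(payload: dict, schema: dict) -> bool:
    """Toy structural check of a payload against a flat object schema."""
    if any(key not in payload for key in schema.get("required", [])):
        return False
    return all(
        isinstance(payload[key], _PY_TYPES[spec["type"]])
        for key, spec in schema["properties"].items()
        if key in payload
    )
```

With the contract fixed, the Rust and Python halves of a system can be built in parallel against the same schema.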
03

Implementation

8 wks

Small PRs, CI green from day one, deploys on every merge, review by a second senior.

  • Rust: axum · tonic · sqlx
  • Python: FastAPI · Pydantic · SQLAlchemy
  • Weekly demos + changelog
04

Hardening

3 wks

Fuzzing, property-based tests, load tests against realistic traffic, threat model.

  • cargo-fuzz · proptest · hypothesis
  • k6 load tests tied to SLOs
  • Security review & dependency audit
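The property-based idea in miniature: state an invariant ("decode inverts encode") and hammer it with inputs nobody would hand-write. hypothesis and proptest add generation strategies and automatic shrinking; a stdlib-only sketch over a hypothetical length-prefixed wire format:

```python
import random
import string

def encode(fields: list[str]) -> str:
    # Hypothetical wire format: "<len>:<bytes>" per field, concatenated.
    return "".join(f"{len(f)}:{f}" for f in fields)

def decode(wire: str) -> list[str]:
    out, i = [], 0
    while i < len(wire):
        j = wire.index(":", i)           # first ':' ends the digit-only length prefix
        n = int(wire[i:j])
        out.append(wire[j + 1 : j + 1 + n])
        i = j + 1 + n
    return out

# Property: decode(encode(x)) == x, including fields containing ':' and digits.
rng = random.Random(0)
for _ in range(500):
    fields = [
        "".join(rng.choice(string.printable) for _ in range(rng.randrange(8)))
        for _ in range(rng.randrange(5))
    ]
    assert decode(encode(fields)) == fields
```

cargo-fuzz pushes the same idea one level down, feeding raw bytes into every parser boundary.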
05

Handover

1 wk

Runbooks, on-call rotation, ADRs, and a team that has already shipped this once.

  • Runbooks + on-call matrix
  • ADR log & architecture diagrams
  • 30-day post-launch support
08 Same problem, two languages

What the same endpoint looks like in each.

Fetch a user, validate input, persist to Postgres, return JSON. Toggle between languages — both are real code we'd actually ship.

from typing import Annotated

from asyncpg.exceptions import UniqueViolationError
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, EmailStr, StringConstraints

router = APIRouter()
# `pool` is an asyncpg connection pool created at application startup.

SQL_CREATE_USER = "insert into users(email,name) values(lower($1),$2) returning id,email,name"

Name = Annotated[str, StringConstraints(strip_whitespace=True, min_length=1, max_length=120)]

class UserIn(BaseModel):
    email: EmailStr
    name: Name

class UserOut(UserIn):
    id: int

@router.post("/users", response_model=UserOut, status_code=201)
async def create_user(u: UserIn) -> UserOut:
    try:
        row = await pool.fetchrow(SQL_CREATE_USER, str(u.email), u.name)
    except UniqueViolationError as exc:
        raise HTTPException(status_code=409, detail="email already exists") from exc
    if row is None:
        raise HTTPException(status_code=500, detail="insert failed")
    return UserOut.model_validate(dict(row))

use axum::{extract::State, http::StatusCode, Json};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use validator::Validate;

// ValidatedJson is a thin custom extractor (Json + validator checks);
// ApiError maps sqlx errors (unique violation -> 409) to HTTP responses.
// `valid_name` is a custom validator function defined elsewhere.

#[derive(Deserialize, Validate)]
#[serde(deny_unknown_fields)]
pub struct UserIn {
    #[validate(email)]
    pub email: String,
    #[validate(custom(function = "valid_name"))]
    pub name: String,
}

#[derive(Serialize)]
pub struct UserOut {
    pub id: i64,
    pub email: String,
    pub name: String,
}

pub async fn create_user(
    State(pool): State<PgPool>,
    ValidatedJson(u): ValidatedJson<UserIn>,
) -> Result<(StatusCode, Json<UserOut>), ApiError> {
    let user = sqlx::query_as!(UserOut,
        "insert into users(email,name) values(lower($1),$2) returning id,email,name",
        u.email.as_str(),
        u.name.trim(),
    )
    .fetch_one(&pool)
    .await
    .map_err(ApiError::from_db)?;

    Ok((StatusCode::CREATED, Json(user)))
}
                Python         Rust
Lines of code   16             22
Throughput      1,185 req/s    21,030 req/s
p50 latency     21.0 ms        1.6 ms
RAM at idle     41.2 MB        8.5 MB

We reply within 1 business day. Not kidding.

You read the whole page.
Let's build the thing.

Tell us what's wrong with your stack, or what you want to build from scratch. You'll get a real engineering opinion — not a sales deck.

  • No junior engineers. No offshoring.
  • Fixed-price options on scoped work.
  • NDA signed before we ask you anything.