We don't ship features. We ship capabilities.
This page is not a job listing. It is a filter. If you finish it and feel recognised — we should talk. You don't need to cosign every paragraph. If you agree with most of it but one stretch makes you tense, that's ordinary: what we look for is whether you can name the tension, defend the slice you believe, and stay curious about the rest — including ours.
Choose your discipline
Two paths. One philosophy.
How we think, in six lines.
Records, not rows. Data carries ownership, history, context, behavior.
Capabilities, not features. We solve classes of screens, never one.
Engines hosted in adapters. The next module costs an adapter, not a fork.
The Pause is the work. Code written without it is rewritten in six months.
Re-render budget is real. Memos derive. Effects mutate. Don't confuse them.
Empty, error, loading, offline. Four states. Always. Not the follow-up.
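The four-states rule can be made impossible to forget at the type level. A minimal sketch in TypeScript — `ViewState` and `render` are illustrative names, not anything from the real codebase:

```typescript
// Hypothetical sketch: model the four non-ready states as a discriminated
// union so a screen cannot ship while forgetting one of them. The compiler's
// exhaustiveness check on the switch is what enforces "Four states. Always."
type ViewState<T> =
  | { kind: "loading" }
  | { kind: "empty" }
  | { kind: "error"; message: string }
  | { kind: "offline" }
  | { kind: "ready"; data: T };

function render<T>(state: ViewState<T>, show: (data: T) => string): string {
  switch (state.kind) {
    case "loading":
      return "…";
    case "empty":
      return "Nothing here yet.";
    case "error":
      return `Something went wrong: ${state.message}`;
    case "offline":
      return "You are offline. Changes will sync.";
    case "ready":
      return show(state.data);
  }
}
```

Delete one case and the function no longer typechecks — the missing state becomes a compile error, not a follow-up ticket.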
A feature solves one screen.
A capability solves a class of them.
- Feature: Add a CSV export to DataBoard.
  Capability: Add export to TableKit so every record table — DataBoard, contract obligations, FyDrive listings — exports the same way.
- Feature: Add comments to contracts.
  Capability: Build the annotation primitive in FyUI so any record — contract, doc, sheet cell, calendar event — supports threaded comments with the same shell, hooks, permissions.
- Feature: Add a date picker to this form.
  Capability: Adopt FyCal's date input so 'date' means the same thing everywhere — locale, keyboard nav, a11y, time-zones.
- Feature: Add formulas to DataBoard's number column.
  Capability: Wire FyFormula into TableKit's cell layer once, get formulas everywhere a TableKit cell appears.
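The "engines hosted in adapters" idea behind these pairs can be sketched in a few lines of TypeScript. Everything here — `ExportAdapter`, `toCsv`, the `Obligation` shape — is hypothetical, not TableKit's actual API:

```typescript
// Sketch, under assumptions: the export capability lives once in the engine;
// each module pays for it with a thin adapter, not a fork.
interface ExportAdapter<R> {
  columns: string[];
  toRow(record: R): string[];
}

// The engine-side implementation, written once, shared by every record table.
function toCsv<R>(records: R[], adapter: ExportAdapter<R>): string {
  const escape = (cell: string) =>
    /[",\n]/.test(cell) ? `"${cell.replace(/"/g, '""')}"` : cell;
  const rows = [adapter.columns, ...records.map((r) => adapter.toRow(r))];
  return rows.map((row) => row.map(escape).join(",")).join("\n");
}

// The module-side cost of joining the capability: one small adapter.
type Obligation = { id: string; party: string; due: string };
const obligationAdapter: ExportAdapter<Obligation> = {
  columns: ["id", "party", "due"],
  toRow: (o) => [o.id, o.party, o.due],
};
```

The next module — FyDrive listings, say — adds its own `ExportAdapter` and gets CSV export, quoting rules and all, without touching the engine.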
Read the codebase the way you'd onboard.
We don't run brain-teaser interviews. The day-one job is reading someone else's code, slowly, asking why a sentence is shaped the way it is. So that's what this section is. A real working tree. Open files. Watch for annotations in the margin — they're where the actual culture lives.
# Fyboard

You are looking at a real working tree. This is not a sample. The shapes, names, and decisions mirror what runs in production at RCL. Module-by-module, this is the codebase you would touch on day one.

## How to read this

1. Open three or four files. Read them like you read a great book — slowly, asking why a sentence is shaped the way it is.
2. Watch for inline annotations from senior engineers. They explain why we did NOT do the obvious thing.
3. The annotations are the point. The code is the surface; the reasoning underneath is what the job feels like.

## Layout

src/tablekit/      record-table engine
src/fybrain/       local intelligence worker
src/workflow/      stateful, recoverable orchestration
src/people/        HR / identity primitives
src/fydrive/       file storage with structure
ARCHITECTURE.md    the long-form rationale

## A note on AI

Roughly 60% of new code in this tree was first drafted by an LLM. 100% of it was reviewed by a human who could defend every line. We do not romanticise typing. We optimise for taste. That is what the code review in §04 is asking you to demonstrate.
Click any file in the left tree to open it. Try src/tablekit/core/Engine.ts first.
An AI-drafted PR. Review it.
The day-one job in 2026 is not writing code. It's reviewing someone else's — usually an LLM's. So here's a real-looking PR, drafted by Claude and applied by an engineer. Half of it is structurally wrong. Click any line to leave a comment. When you've reviewed every file, submit. We'll show you what a senior at Fyboard flagged and where you and we disagreed.
There is no score. There's overlap.
feat(contracts): add comments thread to obligations
Adds an inline threaded-comments feature to contract obligations so legal teams can negotiate without leaving AlphaCore.
— New AlphaCoreCommentsService (in alphacore module)
— New <ObligationComments /> component (in alphacore module)
— Polls every 3s for new comments
— Stores comments in a new `obligation_comments` table
Closes #1842
src/modules/alphacore/services/AlphaCoreCommentsService.ts (+38)

Working with LLMs.
Most engineering pages either pretend AI doesn't exist or fetishise it. Both miss the point. The bottleneck stopped being typing speed years ago. The bottleneck is taste — knowing what to build, what not to build, and which lines of an LLM's confident output are quietly broken.
- 01
Use it for everything you'd ask a junior to do.
Boilerplate, tests, refactors, type plumbing, documentation, transforms, migration scripts. We do not romanticise typing. The bottleneck stopped being keystrokes years ago.
In practice (ansh): "draft me a typed wrapper around this Express handler" — five minutes. The wrapper is then read line by line, edge cases are added, the tests are written, the PR is opened. The drafting is the cheap part.
- 02
Do NOT use it for the Pause.
An LLM can suggest 30 ways to model a record. It cannot tell you which of them you'll regret in six months. The Pause is taste — and taste is the part that does not delegate.
In practice: When designing FyBrain's recovery model, we did not ask Claude what to do. We asked Claude to argue against three approaches we'd already drafted. The decision was ours.
- 03
Every PR — AI-drafted or not — gets the same review.
Often more, because LLM-drafted code is fluent and confident even when it's structurally wrong. The §04 PR you just reviewed was machine-drafted. Half of it is a wrong-engine mistake. It looked right at first glance. That is the trap.
In practice: We tag the original drafter in commit messages. "co-authored-by: claude" is normal. We do not hide it. We do not credit the human for what the model wrote.
- 04
The new senior skill is review, not write.
Spotting a wrong abstraction. Catching a leaky engine boundary. Knowing when 'looks good' is the wrong answer. We hire for these. We promote for these. The page you are reading was, in places, drafted with an assistant — and reviewed by a human who could defend every choice.
In practice: A "5x productivity" gain from LLMs is a 5x amplifier on the senior reviewer's taste. With bad taste, that's a 5x amplifier on shipping the wrong thing.
- 05
Tenant-isolated, always. No exceptions.
Models we serve to customers are scoped to that customer alone. Their data does not train shared models. Ever. The day this becomes inconvenient is the day we double down — not the day we publish a long blog post about why we changed our minds.
In practice: FyBrain runs locally per tenant. Embeddings, summarisation, retrieval — all in-tenant. The only shared thing is the worker's source code.
Before code, the Pause.
The Pause matters more now, not less. Code is cheap. An LLM will draft three implementations of anything you ask. The bottleneck is not can we build it. The bottleneck is should this exist, in this shape, owned by this engine, with these consequences.
The Pause is a discipline — a refusal to write code (or to ask Claude for code) until we've answered a small list of questions out loud. It is uncomfortable. It looks like nothing happening. Other teams measure throughput in tickets-closed-per-week and would see the Pause as a gap. We see it as the most leveraged hour of the project.
You will be evaluated as much on how often you pause as on how much you ship.
- 01. Is this a feature, or a capability?
- 02. What record shape does this touch?
- 03. Which engine should own this?
- 04. What is the smallest version that proves the idea?
- 05. What does this look like at module #20?
- 06. What does removal look like?
We've paid for some of this clarity.
Early AlphaCore obligations moved fast because the business needed proof in market. That speed taught us where Workflow's graph was too sharp — one path read well on a whiteboard and punished tenants in production. The fix wasn't a manifesto; it was weeks of careful migration work everyone signed up for, including the people who had argued loudest for shipping.
We once let an LLM-assisted draft get too close to a customer-facing edge without enough human review. Nothing headline-level broke — which almost made it worse. The lesson was boring and expensive: models draft fast; institutions move on trust. The Pause on anything that leaves the building with someone else's name on it is stricter now than it was then.
Tradeoffs stay real. Tenant-isolated intelligence costs more per token than a shared shortcut. TableKit cost more than wrapping a grid — until it didn't. If you only want the polished story, this page will disappoint you. If you want the places we've been wrong, tightened, or slower than we wished, we're happier in that conversation.
Lines we will not cross.
Anti-patterns we reject on sight
- Wrapper components that just rename props.
- Index files re-exporting everything in a folder.
- Custom event buses when the engine already provides one.
- `any` as a load-bearing type.
- useEffect chains that re-derive what should be a useMemo.
- Catch-and-ignore — if you're unsure what an error means, log it, surface it to the user, or narrow blast radius; swallowing it hides what you still need to learn about the function.
- "We'll write the tests later."
- Re-implementing TableKit / FyCal / FySheet behaviour inside a module.
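The catch-and-ignore point above can be shown in a few lines: instead of swallowing a failure, return it so the caller has to decide what it means. A sketch, with an illustrative `Result` type that is not claimed to exist in the codebase:

```typescript
// Hypothetical sketch: the error is not swallowed — it is narrowed into a
// value the caller must inspect, so the blast radius is one call site and
// the failure stays visible.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

function parseJson(text: string): Result<unknown> {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (err) {
    // Not catch-and-ignore: the caller decides what a bad payload means here.
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```

The anti-pattern version would be `try { … } catch {}` returning `undefined` — which compiles, reviews clean at a glance, and hides exactly what you still needed to learn.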
Commitments we sign in public
- Training for your org only. Tenant-isolated models. Your data does not feed shared models. Ever.
- No lock-in. APIs are documented. Export buttons are everywhere. The day you want to leave is a day we help you leave.
- No fake intelligence. If it's rules-based, we say so. If it's AI, we say so. No "AI" stickers.
- No hidden limits. Beta is beta. The pricing page is the price.
Who thrives here. Who doesn't.
- Reads the code (and the LLM's output) before asking the question. Then asks the better question.
- Has opinions, holds them lightly, changes them fast when shown a better one — and can challenge ours without either folding or performing cynicism.
- Has built one thing they're proud of and one thing they're embarrassed by.
- A 4px misalignment makes them itch. They fix it without asking.
- Has read the API of one major library all the way through. Not skimmed.
- Treats Claude / Cursor / Copilot output as a draft, not an answer.
- Knows when to walk away from a problem and come back.
- Believes craft is intrinsic. Would write good code if no one were watching.
- Ticket-driven. Closes the issue and stops thinking.
- Ships LLM output unread. "It compiled" is not a review.
- Library-first. Reaches for npm i before reading their own codebase.
- Framework maximalist. "We should rewrite this in [favourite framework]."
- Allergic to design reviews.
- Allergic to backend reality. A button that hits a slow API is a slow button.
- Confuses activity with progress.
- Thinks "senior" means "doesn't write code" — or "doesn't review code."
No gate. No quiz. Just write.
The page itself was the filter. If you got this far, you've read code, sat with where we've been wrong, reviewed an AI-drafted PR, and spent time with our take on LLMs and the Pause. We don't need a form to test you again.
Send four things to careers@fyboard.com
- 01. A short note (under 500 words). What part of this page hit hardest, and why. Honesty over polish. If something irritated you and you can articulate why, that's also a great note.
- 02. One artifact. A real thing you built, wrote, designed, decided against, broke, or rebuilt. The more it shows your thinking, the better. AI-assisted is fine — tell us what was yours and what was the model's.
- 03. Answers to any three self-reflection questions:
  - The most unglamorous problem you've voluntarily worked on, and why.
  - A time you deliberately did not ship something, even though you could have.
  - What you would refuse to build, even if a customer paid for it, even if a manager asked.
- 04. Your engagement summary. Already pre-filled in the email. Don't edit it. We compare what the page recorded with what you tell us — not to grade you, but to see whether you took the page seriously. Both directions are signal.
Preview the engagement summary that will be in your email
— Fyboard /careers · engagement summary —
time on page: ~1 minute
codebase files read: 0/7
PR comments left: 0
PR review submitted: no
side quests found: 0/8
This is a snapshot of how you used the page. We read it. We don't grade it.
We read every email. We reply to most. If we say no, we'll tell you the real reason — not a form letter.
There is a particular feeling you get when you read code written by someone who paused before writing it. The functions are short. The names earn their length. The abstractions appear exactly when they're needed and not one beat sooner. The whole file feels inevitable.
That's what we're optimising for.
Not lines per day. Not features per quarter. Not headcount.
Inevitability of code.