Inside Ohana — how a team runs on agents
NYC's largest medium-term housing platform runs an agent for every teammate. Three weeks after starting, they call it 'Ohana 3.0.' Here's the deep dive on what they built and why it worked.
A lot of AI case studies read like thought experiments. This one isn't. Ohana — the largest medium-term and subletting platform in New York City — has been running agents since mid-March 2026. Three weeks later they'd stopped calling it their 'Spawn deployment' and started calling it 'Ohana 3.0.' This post is the inside account.
I spent a week with the Ohana team to understand what they actually built. Some of it is public. This is the full picture.
The problem: a human-touch business entering five new cities at once
Ohana's model is relationship-heavy. Host relationships, tenant relationships, neighborhood knowledge: all of it lives in specific people's heads. That was sustainable when they operated one city. As they prepared to launch London, Boston, the Bay Area, Sydney, and Melbourne in parallel, the question became: how do we replicate the knowledge without diluting it?
The obvious answer — hire aggressively in each city — takes a year and risks losing what made the NYC operation work. The alternative — encode the team's expertise into agents — was what we pitched. Three weeks later we were building the third cohort together.
The architecture: one agent per teammate
The key architectural decision wasn't 'build a real estate agent.' It was 'build an agent for each member of the team, encoded to that person.' Sarah's agent. Mike's agent. Aisha's agent. Each one carrying the specific way that human operated — their screening criteria, their neighborhood knowledge, their negotiation style, their escalation instincts.
This mattered because Ohana's best people weren't interchangeable. The way Sarah priced a medium-term lease on the Upper West Side was different from how Mike did it in Williamsburg, and the difference was commercial. A role-agent — 'an Ohana leasing agent' — would've averaged those differences and produced generic work. A person-agent kept the differences.
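To make the person-agent idea concrete, here's a minimal sketch of what keying an agent to an individual teammate could look like. Ohana's actual schema isn't public; every field and name below is a hypothetical illustration of the pattern, not their implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PersonAgent:
    """One agent per teammate, not per role. The persona payload is
    keyed to a specific person, so their individual judgment survives
    instead of being averaged into a generic role-agent."""
    teammate: str
    neighborhood_notes: dict = field(default_factory=dict)
    negotiation_style: str = ""
    escalation_rules: list = field(default_factory=list)

    def system_prompt(self) -> str:
        # The persona is injected into every run, so the agent prices
        # and screens the way this specific teammate would.
        notes = "; ".join(f"{n}: {v}" for n, v in self.neighborhood_notes.items())
        return (
            f"You operate as {self.teammate}'s agent. "
            f"Neighborhood knowledge: {notes}. "
            f"Negotiation style: {self.negotiation_style}. "
            f"Escalate when: {', '.join(self.escalation_rules)}."
        )

sarah = PersonAgent(
    teammate="Sarah",
    neighborhood_notes={"Upper West Side": "price to long-stay families"},
    negotiation_style="firm on rate, flexible on move-in date",
    escalation_rules=["host disputes", "leases over 12 months"],
)
```

The design choice the sketch highlights: two teammates produce two different system prompts from the same code, which is exactly how the differences stay commercial rather than getting averaged away.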
The encoding loop — what it actually looked like
Not an interview. Not a ten-hour discovery. The team built their agents by working alongside them on real lease files. The agent took a first pass. The person corrected. The corrections updated the agent's persistent memory. Repeat.
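The loop above — draft, correct, persist, repeat — can be sketched in a few lines. This is a toy model under stated assumptions: `agent_draft` and `human_review` stand in for the real agent call and the teammate's review, and the list-based memory is a placeholder for whatever persistent store Ohana actually uses.

```python
def encode(agent_draft, human_review, memory, files):
    """Encode-by-correction: each reviewed file adds corrections to
    persistent memory, and every later draft sees all prior corrections."""
    for lease_file in files:
        draft = agent_draft(lease_file, memory)          # agent takes a first pass
        memory.extend(human_review(lease_file, draft))   # person corrects; corrections persist
    return memory

# Toy run: each correction becomes a rule the next draft is built on.
mem = encode(
    agent_draft=lambda f, m: f"draft for {f} using {len(m)} learned rules",
    human_review=lambda f, d: [f"rule learned from {f}"],
    memory=[],
    files=["file-1", "file-2", "file-3"],
)
# mem now holds one learned rule per reviewed file.
```

The compounding the post describes falls out of the structure: memory only grows, so the drafts on week-three files start from everything learned on week-one files.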
By week two, most team members' agents were producing work they were willing to ship. By week three, most were producing work that didn't need rewriting — just review. That's the compounding moment.
Three observations from the encoding phase:
- The best encoders were the people who could already explain their process clearly. People who worked on instinct alone took longer — not because the agent couldn't keep up, but because verbalizing the instinct was the bottleneck.
- Corrections carried more signal than initial training. The agent learned faster from 'you missed the fact that this neighborhood is transitional' than from 'here's my general framework.'
- Edge cases moved the needle more than averages. The agent converged fast on typical files; what took weeks was handling the exceptions — and that's exactly where the human expertise lived.
What runs autonomously today
Three weeks in, here's what Ohana agents do without supervision:
- Screen new host applications against the platform's standards and each city's specific criteria.
- Respond to tenant inquiries 24/7, in the team member's voice, escalating only when judgment is actually needed.
- Run market research when evaluating new neighborhoods — comparables, trajectory, operational fit.
- Handle the logistics of onboarding — document collection, verification, initial walkthrough scheduling.
- Flag at-risk leases before they become problems, using signals only the specific team member would've caught.
- Train new hires — walk them through how the specific senior person on their team actually works.
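The first and last items on that list share one mechanism: apply the encoded criteria automatically, and route anything requiring judgment to the human. Here's a hedged sketch of that screen-or-escalate shape. The field names and criteria are illustrative assumptions, not Ohana's real screening schema.

```python
def screen_application(app: dict, city_criteria: dict) -> dict:
    """Run an application against a city's encoded criteria.
    Routine files get a decision; judgment calls get escalated."""
    failures = [rule for rule, check in city_criteria.items() if not check(app)]
    if app.get("edge_case"):
        # Exceptions are where the human expertise lives, so they
        # go to the person rather than being decided autonomously.
        return {"decision": "escalate", "reasons": failures}
    return {
        "decision": "decline" if failures else "approve",
        "reasons": failures,
    }

# Hypothetical city-specific criteria, expressed as named checks.
nyc_criteria = {
    "verified identity": lambda a: a.get("id_verified", False),
    "minimum stay >= 30 days": lambda a: a.get("min_stay", 0) >= 30,
}

result = screen_application({"id_verified": True, "min_stay": 45}, nyc_criteria)
# → {"decision": "approve", "reasons": []}
```

Note that each new city only adds a criteria dict; the screen-or-escalate logic itself doesn't change, which is what makes launching five cities in parallel tractable.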
The compounding effect
Here's what surprised the Ohana team: the agent carrying Sarah's expertise started training new team members better than Sarah could. Not because the agent was smarter — because it never got bored. A new hire can ask the agent 'why did you decline this application' a hundred times without wearing out a human.
Which means the best people's judgment is now not only operating at scale (through their agents) but also cloning itself (through new hire training). That's a step-change in how the firm scales talent.
“The companies that figure this out quickly and take advantage of this technology are the ones that are going to win. If you don't use this, you're not going to be able to compete.”
— The Ohana team
What they're exploring next
The next frontier Ohana is testing: real-time monitoring of subletting activity across markets globally. An agent that watches the market, flags expansion opportunities, and produces the internal memo justifying a new-city move. The playbook that took 12 months to build for NYC becomes a two-week feasibility pass for any new city.
The deep version of this is: the roadmap itself changes. When execution gets this cheap, you take more shots. That's the real unlock, and it's what we think every firm that runs on encoded-expertise agents will discover — you stop being rate-limited by the size of your team and start being rate-limited by the quality of your strategic questions.
What transfers to your team
The Ohana pattern isn't specific to real estate. The setup generalizes:
- Start with your best people, not your broadest need.
- Encode by working alongside the agent on real files, not by interviewing.
- Focus on exceptions — that's where the judgment lives.
- Measure by how much human time the agent saves, not how much work it does.
- Use the agents to train new hires. That's the compound.
We'll publish more deployments as they go live. Ohana was the proof that the pattern works on real-world operational businesses. Your business is probably closer to it than you think.