How Lendi Built Guardian: Agentic AI That Reimagines Mortgage Refinance in 16 Weeks

TL;DR: Lendi built Guardian, an agentic AI system that continuously monitors home loans, surfaces better mortgage rates via a “Rate Radar,” and automates refinance flows so a customer can often lodge an application in about 10 minutes (company-reported). Built in a focused 16-week sprint spanning more than 30,000 development hours, Guardian pairs specialized AI agents with cloud orchestration, observability, and baked-in governance to speed outcomes while preserving auditability and human oversight.

Why mortgage refinance is a fit for agentic AI

Mortgage refinance is a high-value, repetitive, regulation-heavy process that rewards speed and accuracy. Customers want better rates without the paperwork; lenders need compliant, auditable interactions. That combination makes refinance automation an ideal early use case for agentic AI—multi-agent systems where each AI handles a specific task (data collection, eligibility checks, product matching, or customer outreach) and coordinates to complete an end-to-end workflow.

“We’ve built our platform so that refinancing happens at the speed of life, not at the speed of paperwork.” — Devesh Maheshwari, CTO, Lendi

How Guardian works: the customer journey

Example vignette: Jane gets a Rate Radar alert on her morning commute saying market rates moved in her favor. Guardian already has her property valuation, current loan details, and consent on file. A set of AI agents validates eligibility, pulls lender offers, prepares the application, and notifies Jane. By evening she taps one confirmation button and the refinance application is lodged. That is the “10-minute” workflow Lendi reports: rapid, low-friction, and human-approved.

Key agent roles:

  • Mortgage Broker Associate Agent — orchestrates the end-to-end workflow and decides which agent to call next.
  • Customer Information Collection Agent — gathers and validates identity, income, and property context.
  • Product Recommendation Agent — matches customers to lender offers based on eligibility and preferences.
  • Product-Specific Collection Agents — gather any additional documents or deal-specific inputs.
  • Linda (communications agent) — manages re-engagement across SMS, email, WhatsApp and push, using a digital twin of customer data to tailor messages.
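The orchestration pattern behind these roles can be sketched in a few lines. This is an illustrative sketch only: the class names mirror the agent roles above, but the interfaces, fields, and sample data are assumptions, not Lendi's actual code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RefinanceContext:
    """Shared state that each agent reads and enriches in turn."""
    customer_id: str
    details: dict = field(default_factory=dict)
    eligible: bool = False
    offer: Optional[dict] = None

class CollectionAgent:
    def run(self, ctx: RefinanceContext) -> RefinanceContext:
        # Gather and validate identity, income, and property context.
        ctx.details.setdefault("income_verified", True)
        return ctx

class RecommendationAgent:
    def run(self, ctx: RefinanceContext) -> RefinanceContext:
        # Match the customer to a lender offer once data is validated.
        if ctx.details.get("income_verified"):
            ctx.eligible = True
            ctx.offer = {"lender": "ExampleBank", "rate": 5.89}  # placeholder
        return ctx

class BrokerAssociateAgent:
    """Orchestrator: decides which specialist agent to call next."""
    def __init__(self):
        self.pipeline = [CollectionAgent(), RecommendationAgent()]

    def run(self, ctx: RefinanceContext) -> RefinanceContext:
        for agent in self.pipeline:
            ctx = agent.run(ctx)
        return ctx

ctx = BrokerAssociateAgent().run(RefinanceContext(customer_id="jane-001"))
print(ctx.eligible, ctx.offer["lender"])  # True ExampleBank
```

The design choice worth noting is the shared context object: each specialist agent stays narrow, while the orchestrator owns sequencing, which is what makes the roles above composable.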

“A customer can receive a Rate Radar alert about a sharper rate or a shift in property value during their morning commute…by the time they’re heading home, their refinance loan application can be lodged.” — Devesh Maheshwari

Build cadence and team effort

Lendi launched Guardian in 16 weeks from kickoff. That rapid timeline required a focused scope, executive alignment, and a large cross-functional commitment—more than 30,000 development hours across product, engineering, compliance, and partner integration teams. The sprint was governed by three constraints: deliverable customer value, regulatory compliance, and stable integrations with lending partners.

Typical weekly milestones (compressed)

  • Weeks 1–2: Scope, risk assessment, compliance requirements, lender integration priorities.
  • Weeks 3–6: Core agent prototypes and data pipelines; secure consent and data access patterns.
  • Weeks 7–10: End-to-end orchestration, acceptance tests with mock lenders, and human-in-loop checkpoints.
  • Weeks 11–14: Observability, guardrails, and audit trails; pilot with a subset of customers.
  • Weeks 15–16: Go-live readiness, production monitoring, and feedback loops to iterate rapidly.

Technical architecture at a glance

Core pattern: specialized AI agents coordinated by an orchestration layer, with centralized observability and strict guardrails.

  • Foundation models and governance: Amazon Bedrock plus Amazon Bedrock Guardrails to enforce compliance and policy checks for customer-facing output.
  • Agent orchestration: Kubernetes on Amazon EKS runs agent processes; orchestration calls model endpoints and internal services.
  • Integration layer: Model Context Protocol (MCP) servers and the Agno open-source agent framework provide connections to institutional data and lender APIs.
  • Persistence & storage: MongoDB for session and context state; Amazon S3 for documents.
  • APIs & logging: Amazon API Gateway for RESTful calls; CloudWatch for system logs.
  • Observability: Langfuse captures agent traces, reasoning chains, and decision context to provide audit trails for engineers and compliance teams.
  • Cost efficiency: Bedrock batch inference for non-real-time tasks; exploring Bedrock AgentCore to reduce operational overhead as scale grows.
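Two of the components above, platform guardrails and agent tracing, combine into a single pattern: check output before it reaches the customer, and record the full decision context either way. A minimal sketch of that pattern follows; the blocked phrases, trace schema, and `call_model` stub are invented for illustration. In Lendi's stack these roles are played by Bedrock Guardrails and Langfuse, whose real APIs differ.

```python
import time
import uuid

# Assumed, simplified rule set; real guardrails are policy-driven.
BLOCKED_PHRASES = {"guaranteed approval", "no credit check"}

def guardrail_check(text: str) -> bool:
    """Block disallowed content before it reaches the customer."""
    return not any(p in text.lower() for p in BLOCKED_PHRASES)

def call_model(prompt: str) -> str:
    # Placeholder for a foundation-model endpoint call.
    return "Based on current rates, a refinance could lower your repayments."

def traced_agent_step(agent: str, prompt: str) -> dict:
    """Run one agent step and capture its full decision context."""
    trace = {
        "trace_id": str(uuid.uuid4()),
        "agent": agent,
        "prompt": prompt,
        "ts": time.time(),
    }
    output = call_model(prompt)
    trace["output"] = output
    trace["guardrail_passed"] = guardrail_check(output)
    # In production this record would ship to the observability store,
    # giving engineers and compliance teams an auditable chain.
    return trace

record = traced_agent_step("product_recommendation", "Summarise offer for Jane")
print(record["agent"], record["guardrail_passed"])
```

The point of the sketch is ordering: the trace is assembled around the model call, so even a blocked output leaves an audit record rather than vanishing silently.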

Governance, auditability and human-in-the-loop

Lendi treats governance as a design requirement. Every agent action is logged with a decision context, and Langfuse traces link model prompts, intermediate reasoning steps, and outputs to create an auditable chain. Sensitive data is encrypted at rest and in transit; access is role-based, with explicit escalation and override paths for human reviewers.

Important governance patterns used:

  • Platform-level guardrails (Bedrock Guardrails) to block disallowed content or actions before they reach the customer.
  • Immutable or exportable decision trails for regulatory review showing data sources, model outputs, and human sign-offs.
  • Defined thresholds where human review is mandatory—for example, complex product matches, large loan changes, or any flagged anomalies.
  • Bias mitigation through controlled model selection, constrained ranking logic, and monitoring of recommendation patterns.
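The "defined thresholds" pattern above reduces to a small routing predicate: escalate to a human whenever a decision crosses a risk limit or carries a flag. The sketch below shows the shape of that check; the dollar threshold and parameter names are invented for illustration, not Lendi's actual policy.

```python
# Assumed escalation limit for illustration only, not a Lendi figure.
HUMAN_REVIEW_LOAN_DELTA = 250_000

def needs_human_review(current_loan: float, new_loan: float,
                       flagged_anomaly: bool, complex_match: bool) -> bool:
    """Return True when the workflow must pause for a human reviewer."""
    # Anomalies and complex product matches always escalate.
    if flagged_anomaly or complex_match:
        return True
    # Large loan changes escalate regardless of other signals.
    return abs(new_loan - current_loan) > HUMAN_REVIEW_LOAN_DELTA

# A routine refinance proceeds automatically; a large change is escalated.
print(needs_human_review(500_000, 520_000, False, False))  # False
print(needs_human_review(500_000, 900_000, False, False))  # True
```

Keeping the predicate pure and centralized makes the escalation policy itself auditable: reviewers can see exactly which rule paused a given application.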

Outcomes and company-reported metrics

Reported highlights (company-reported):

  • 16-week build time from kickoff to launch.
  • More than 30,000 development hours across teams.
  • Enabled millions in settled home loans (company-reported scale figure).
  • Rate Radar scans thousands of home loans daily to surface better rates.
  • Reduced refinance cycle times versus Lendi’s prior baseline; a common workflow reported as “refinancing in only 10 minutes, with no paperwork, no phone calls, only a single tap.”

Note: Where precise numeric deltas (percentage reduction in cycle time, cost per refinance, or conversion uplift) are material for decision-making, consider requesting the company-reported dashboards or a short briefing to verify sample sizes and segmentation.

What worked, and what was hard

What accelerated success:

  • Clear product boundary (refinance flows), which reduced scope creep.
  • Close partnership with cloud and platform vendors for rapid model access and governance primitives.
  • Instrumented observability so compliance teams could validate behaviors quickly.

Harder than expected:

  • Lender integrations and API heterogeneity—real-world lender systems required adapters and edge-case handling.
  • Data quality and consent plumbing; cleaning and validating legacy data took effort before automation could rely on it.
  • Operationalizing human review—designing interfaces and SLAs for humans to step in without creating bottlenecks.

Playbook: How an executive team can attempt a similar 16-week sprint

  • Define a narrow MVP: Pick one high-value workflow (e.g., refinance application lodging) and a limited customer segment.
  • Secure data and consent early: Lock down access to loan, property, and identity data and confirm exportable consent trails.
  • Partner on models and guardrails: Use managed foundation models and vendor guardrails to accelerate safe deployment.
  • Instrument observability from day one: Capture traces of reasoning, decisions and the data context that produced them.
  • Design human escalation: Define when agents must pause and hand off to a human, and make override actions auditable.
  • Limit integrations initially: Start with a small set of lender partners to validate flows before broadening coverage.
  • Measure what matters: Cycle time reduction, completion rate, conversion uplift, error/rollback rate, and compliance exceptions.
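The final playbook item, "measure what matters," can be made concrete with a few derived KPIs. The sketch below computes three of them from pilot counts; all input numbers are placeholder pilot data, not Lendi figures.

```python
def pilot_metrics(baseline_cycle_days: float, pilot_cycle_days: float,
                  started: int, completed: int, rollbacks: int) -> dict:
    """Derive headline KPIs for a pilot from raw counts."""
    return {
        # How much faster the automated flow is versus the prior baseline.
        "cycle_time_reduction_pct": round(
            100 * (baseline_cycle_days - pilot_cycle_days) / baseline_cycle_days, 1),
        # Share of started applications that reached lodgement.
        "completion_rate_pct": round(100 * completed / started, 1),
        # Share of completed applications later rolled back or reworked.
        "rollback_rate_pct": round(100 * rollbacks / max(completed, 1), 1),
    }

m = pilot_metrics(baseline_cycle_days=21, pilot_cycle_days=3,
                  started=400, completed=312, rollbacks=9)
print(m)  # {'cycle_time_reduction_pct': 85.7, 'completion_rate_pct': 78.0, 'rollback_rate_pct': 2.9}
```

Tracking these per lender and per customer segment, rather than only in aggregate, is what surfaces the integration edge cases the "harder than expected" section warns about.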

Quick checklist to run a 16-week agentic AI sprint

  • Scope: single workflow, limited product set.
  • Team: product lead, 2–4 ML engineers, 4–6 backend/integration engineers, compliance owner, UX designer.
  • Data: lender APIs, property valuations, consent records ready.
  • Platform: managed foundation models, container orchestration, observability tooling.
  • Governance: guardrails, audit logs, human-in-loop checkpoints.
  • Pilot: small customer cohort, defined rollback plan.

Limitations and open questions

Reproducibility: The 16-week timeline is achievable with a firm scope, experienced teams, and existing integrations. Organizations starting from scratch (no lender APIs, fragmented data, smaller engineering capacity) should expect longer timelines and higher cost.

Data & privacy: Public descriptions focus on encryption and role-based access; specifics (retention windows, third-party access, deletion guarantees) need to be confirmed for compliance audits.

Recommendation bias and conflicts: Product recommendation agents require explicit rules and monitoring to avoid systematic bias or conflicts of interest—essential when matching customers to lender products.

FAQ for executives

  • How much does a project like this cost?

    Costs vary by scale and vendor choices. Expect major components: engineering labor, foundation model usage, cloud orchestration, observability, and integration work. Lendi’s 30,000+ hours indicate significant people cost—smaller pilots can reduce engineering hours by narrowing scope.

  • Can other firms match the 16-week timeline?

    Yes, if they have a focused scope, integrated data sources, and vendor support. Without lender APIs or clean customer data, the timeline expands significantly.

  • Is vendor lock-in a concern?

    Using managed services like Bedrock accelerates delivery but introduces dependency. Mitigate with abstraction layers (MCP-like patterns) and exportable model inputs/outputs for portability.

  • How is customer data protected?

    Company-reported controls include encryption in transit and at rest, role-based access, and consent tracking. Validate retention policies and third-party access before productionizing.

  • What ROI can leaders expect?

    ROI depends on volume and conversion uplift. High-value mortgages can yield rapid payback if automation increases completion rates and reduces manual broker time; ask for unit economics per refinance to model ROI realistically.

Questions for your team

  • Which high-value, repetitive workflow would we prioritize for an agentic AI sprint?
  • Do we have lender or partner APIs and consent mechanisms in place?
  • What is our human-in-loop policy for high-risk decisions and escalations?
  • Which observability and audit tools will meet our compliance needs?
  • Can we commit the cross-functional hours needed for a 12–20 week focused sprint?

“Refinancing in only 10 minutes, with no paperwork, no phone calls, only a single tap.” — company-reported description of Guardian’s capability

For leaders, the lesson is straightforward: agentic AI works best when teams combine a narrow product focus with rigorous governance, observability, and rapid integration patterns. The architecture and practices Lendi used—specialized agents, managed foundation models, and traceable decision chains—form a practical blueprint for fintechs and regulated businesses looking to automate customer journeys without sacrificing compliance or trust.