Making Deep Learning + LLMs Useful for CRE: Product Preview
Get a sneak peek of upcoming Trebellar product enhancements, including LLM and AI tools designed to generate actionable insights and custom reports in minutes.
I'm excited to share a preview of some upcoming product enhancements that combine the power of deep learning with large language models (LLMs) across the Trebellar platform. With these new enhancements, our customers can:
- Get instant answers and insights for any question about their portfolio/workplace
- Produce a shareable executive summary + key findings in just a few clicks
- Use AI/LLMs to save time and scale their team's impact
For the past few months, we've worked closely with our customers and advisors to develop useful applications of AI for real estate and workplace teams. Two observations stand out: (1) people are extremely interested in AI, and (2) they are highly skeptical of current AI offerings tailored to real estate and workplace use cases.
As technologists and engineers, we share and appreciate this skepticism. We'll be the first to tell you that it's exceptionally difficult to get an LLM to "behave" consistently enough to solve a particular data or business problem at scale. This is particularly true if your approach involves dumping data tables into ChatGPT, crossing your fingers, and hoping for a miracle.
To build AI/LLM models that actually work – consistently and at scale – we went on a long journey. We studied emerging LLMs and benchmarks. We dissected different protocols and strategies for minimizing hallucinations and returning accurate results. We evaluated comparable products (spoiler: there is a lot of vaporware amid the AI hype). We tested and iterated constantly. And, throughout it all, we stayed close to our customers to understand their needs.
In sum, we learned a TON.
In the coming weeks, I'll be sharing some of our learnings, our core methodologies, and tips for others looking to build LLM-capable products. A preview of what we've learned and how we're applying it:
- Data Foundations: Garbage in, garbage out. Establishing a clean, normalized data pipeline with high integrity is a necessary precursor to success.
- Constraints are King: Asking an LLM about the entire universe is a fool's errand; establishing constraints and boundaries in a structured way is the only way to maximize accuracy and reliability (see the first sketch after this list).
- Metadata & Post-Processing: Even structured data is messy. A metadata-oriented approach and schema-driven architecture not only bring customizability, but LLMs perform best when the data they receive is "packaged" or "curated" with additional context or structure. Many other providers skip this step at their own peril.
- AI Agents are the Future: Rather than relying on a single general-knowledge model, our approach uses a team of LLMs, each with specialized domain expertise and its own set of actions, strategies, and collaborative protocols (third sketch below). While this is an active research area, the approach has yielded significantly better outcomes for us.
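To make the constraints point concrete, here is a minimal sketch of the general idea: validate a question's requested metric, grouping, and date range against a closed vocabulary before any prompt is built, so the model can only be asked about data that actually exists. The names here (`ALLOWED_METRICS`, `QueryPlan`, `build_constrained_prompt`) are illustrative assumptions, not our actual API.

```python
# Minimal sketch: constraining an LLM query to a known schema before it is sent.
# ALLOWED_METRICS, QueryPlan, and build_constrained_prompt are illustrative names.
from dataclasses import dataclass
from datetime import date

ALLOWED_METRICS = {"occupancy_rate", "badge_swipes", "energy_kwh", "desk_utilization"}
ALLOWED_GROUPINGS = {"building", "floor", "weekday"}

@dataclass
class QueryPlan:
    metric: str
    group_by: str
    start: date
    end: date

def validate(plan: QueryPlan) -> QueryPlan:
    """Reject anything outside the closed vocabulary before the LLM ever sees it."""
    if plan.metric not in ALLOWED_METRICS:
        raise ValueError(f"Unsupported metric: {plan.metric}")
    if plan.group_by not in ALLOWED_GROUPINGS:
        raise ValueError(f"Unsupported grouping: {plan.group_by}")
    if plan.start > plan.end:
        raise ValueError("Start date must precede end date")
    return plan

def build_constrained_prompt(question: str, plan: QueryPlan) -> str:
    """Embed explicit boundaries so the model cannot wander off the schema."""
    plan = validate(plan)
    return (
        "Answer ONLY using the data described below. If the question cannot be "
        "answered from this data, say so explicitly.\n"
        f"Metric: {plan.metric} (grouped by {plan.group_by}, "
        f"{plan.start} to {plan.end})\n"
        f"Question: {question}"
    )

print(build_constrained_prompt(
    "Which floors were busiest last quarter?",
    QueryPlan("desk_utilization", "floor", date(2023, 7, 1), date(2023, 9, 30)),
))
```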
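The metadata point works the same way in miniature: instead of pasting raw rows into a prompt, wrap the data slice in an envelope that tells the model where the data came from, what each column means (and in what units), and what time window it covers. The field names and payload shape below are assumptions for illustration, not a description of our pipeline.

```python
# Minimal sketch: "packaging" a data slice with metadata before handing it to an LLM,
# instead of pasting raw rows. Field names and payload shape are illustrative only.
import json

def package_for_llm(rows, column_meta, source, time_range):
    """Wrap raw rows in a schema-described envelope the model can reason about."""
    return json.dumps({
        "source": source,           # where the data came from
        "time_range": time_range,   # explicit coverage window
        "columns": column_meta,     # name -> description + unit
        "row_count": len(rows),
        "rows": rows,
    }, indent=2)

payload = package_for_llm(
    rows=[{"building": "HQ", "date": "2023-09-04", "occupancy_rate": 0.62}],
    column_meta={
        "building": {"description": "Building short name", "unit": None},
        "date": {"description": "Business day (local time)", "unit": "ISO 8601"},
        "occupancy_rate": {"description": "Peak badged occupancy / capacity", "unit": "ratio 0-1"},
    },
    source="badge_system_daily_rollup",
    time_range={"start": "2023-09-01", "end": "2023-09-30"},
)
print(payload)
```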
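And a rough sketch of the agent idea: route each question to a specialist with its own system prompt and domain, rather than sending everything to one generalist model. The agent definitions, keyword router, and `call_llm` stub are all hypothetical simplifications; in practice the routing would itself be model-driven and the specialists would collaborate through defined protocols.

```python
# Minimal sketch: routing a question to specialized agents rather than one
# general-purpose model. Agent names, keywords, and the call_llm stub are hypothetical.
from dataclasses import dataclass

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real model call made through an LLM provider's SDK."""
    return f"[answer generated under persona: {system_prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    system_prompt: str
    keywords: tuple

    def answer(self, question: str) -> str:
        return call_llm(self.system_prompt, question)

AGENTS = [
    Agent("occupancy", "You analyze workplace occupancy and utilization data.", ("desk", "occupancy", "badge")),
    Agent("energy", "You analyze building energy and sustainability data.", ("energy", "kwh", "hvac")),
    Agent("portfolio", "You analyze lease and portfolio financials.", ("lease", "rent", "portfolio")),
]

def route(question: str) -> Agent:
    """Pick the specialist whose domain best matches the question."""
    q = question.lower()
    for agent in AGENTS:
        if any(k in q for k in agent.keywords):
            return agent
    return AGENTS[0]  # fall back to a default specialist

question = "How did desk utilization trend across the portfolio last month?"
print(route(question).answer(question))
```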
By sharing some of our learnings in detail, we hope to accelerate progress for others in the industry. Stay tuned!