Data Pollution Potential to Cause Bubbles
In meetings with the National Association of Realtors and The Appraisal Foundation, much time was spent listening to AVM owners espouse their importance, along with more sober observations about the pitfalls. One presenter seemed to be bragging that 90% of the time, a good Automated Valuation Model (AVM) can come within plus or minus 10% of the actual value. Remember that Zillow’s Zestimate is within 5% of the actual value only 50% of the time. Both numbers are dreadful, and they reflect wildly inconsistent results across the marketplace.
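To make those accuracy bands concrete, here is a minimal sketch of what they mean in dollars. The $400,000 home price is an assumed example for illustration, not a figure from the article:

```python
# Hypothetical illustration: translating the quoted accuracy bands into
# dollar spreads on an assumed $400,000 home (example price, not from the article).
true_value = 400_000

# "Within plus or minus 10% of actual value 90% of the time" (the AVM claim)
avm_low, avm_high = true_value * 0.90, true_value * 1.10

# "Within 5% of actual value only 50% of the time" (the Zestimate figure)
zest_low, zest_high = true_value * 0.95, true_value * 1.05

print(f"AVM band (90% of the time):       ${avm_low:,.0f} to ${avm_high:,.0f} "
      f"(${avm_high - avm_low:,.0f} spread)")
print(f"Zestimate band (50% of the time): ${zest_low:,.0f} to ${zest_high:,.0f} "
      f"(${zest_high - zest_low:,.0f} spread)")
```

In other words, even the "good" AVM claim concedes an $80,000 window on a $400,000 home, and one sale in ten falls outside even that.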
But there is still a place for their use in conjunction with appraisers, just not at the intensity being touted now, especially as their data gets polluted going forward by the impact of waivers.
Here’s a simple scenario showing how data pollution works and, at scale, has the potential to cause bubbles in the future: a sales transaction is granted a waiver by a GSE, and the sale happens to close 5% above current market conditions. That sale closes and is used by AVMs AND BY APPRAISERS as a valid comp. Multiply that by tens of thousands of transactions and we are creating unnecessary market volatility.
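The feedback loop in that scenario can be sketched with a toy simulation. The waiver share, the number of rounds, and the averaging rule are all my assumptions for illustration; only the 5% premium comes from the scenario above:

```python
# Hypothetical sketch of the waiver feedback loop described above.
# Assumptions (mine, not the article's): each round, WAIVER_SHARE of sales
# close 5% above market via appraisal waivers, and the next round's market
# baseline is the average of the prior round's comps.
WAIVER_SHARE = 0.20   # assumed share of transactions with waivers
PREMIUM = 0.05        # the 5% overpayment from the scenario above

value = 100.0  # index of market conditions at the start
for rnd in range(1, 6):
    # Average comp this round: waiver sales close 5% high, the rest at market.
    avg_comp = value * (1 - WAIVER_SHARE) + value * (1 + PREMIUM) * WAIVER_SHARE
    value = avg_comp  # inflated comps become the next round's baseline
    print(f"Round {rnd}: market index {value:.2f}")
```

Under these assumptions the index compounds by about 1% per round, drifting upward with no change in real market conditions, which is exactly the volatility the scenario warns about.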
There was an excellent guest speaker from Columbia University, Josh Panknin, who made the following observations about “big data” and the current whiz-bang “overhype” that seems to be threatening the future of appraisers.
- Computers can’t fill in the blanks
- Computers can’t do qualitative analysis (my interpretation: i.e., views and condition, despite UAD).
- Incomplete data gives us incomplete answers (so throwing more data at big data does not resolve the problem).
He also used a “turkey sandwich” metaphor for AVMs.
The quality of a sandwich of bread, cheese, turkey, and mayo gets better by improving the quality of its components, not by rearranging them.
In other words, we don’t improve quality by simply swapping out technologies.