Working Paper Credibility Questioned
Office of the Director, Honorable Melvin Watt
Federal Housing Finance Agency…
The American Guild of Appraisers is a Guild within our parent union, OPEIU, AFL-CIO, whose family comprises more than 12 1/2 million members, retirees and family members. In addition to representing our own Appraiser Guild members, we also examine the consumer and taxpayer issues arising from appraisals and appraisal policies on behalf of our entire family of unions and of American taxpayers and consumers at large.
Numerous appraisers have reviewed the working paper. The following communication reflects the issues raised from their collective input.
- Alexander Bogin is not listed as an appraiser in the National Registry of Appraisers.
- Ms. Jessica Shui is not listed as an appraiser in the National Registry.
- Andy Leventis is not listed in the National Registry.
- Ms. Michela Barba is not listed in the National Registry.
- A Robert (Bob) S. Witt of Stevensville, MD is listed, though we have no way of knowing if this is the Bob Witt acknowledged in the paper.
- Ming-Yuen and/or Meyer-Fong are not shown in the National Registry, though this could conceivably result from naming conventions not matching the computer's.
The point is that, to the extent it can be confirmed at this writing, none of those listed as having helped with the paper are real estate appraisers.[i]
We have identified numerous errors and weaknesses in the analyses presented that, respectfully, amount to fatal flaws significant enough to undermine the entire paper's credibility.
We partially agree with the first sentence of the Abstract on page two of the Working Paper; everything that follows is open to challenge. Credibly developed, adequately supported, clearly presented opinions of specifically defined values are essential to credit risk management. Our appraisal opinions are not undefined value estimates, as stated. Knowing that subtle distinction is critical for anyone purporting to analyze scientifically whether there is a demonstrable upward bias, or bias of any kind, evident in the sources and datasets cited in the paper.
I can look at a table and estimate the square footage of its surface. While oversimplified, as an appraiser I would measure that surface area and report what it is, factually. Any value opinion developed as to its relative worth would be clearly defined as to the type of "value." This could range from a fast, limited-exposure yard-sale value to a professionally marketed antique value, an auction value, or even a value in use. The Working Paper (WP) conflates undefined value(s) with price paid throughout.[ii]
The WP cites empirical studies as having shown that "property appraisals tend to be biased upwards, and over 90% of the time either confirm or exceed the contract price." The first part of that statement (upward bias) has no support in, and no logical relationship to, the second part.
In America, the real estate market generally functions reasonably well before it gets to Wall Street. Not perfectly, but adequately. Sellers of property consult local experts as to how to market their property and how much to ask for it. Buyers also use professional expertise to help them decide how much to offer or pay for property. It should surprise no one, nor raise any eyebrows, that 90% of the time these experts "got it right." That is what they were hired to do.
That some of these properties subsequently appraise for more than the negotiated contract price is also not an anomaly; it is a normal function of the market. If the normal value of property in my area is $100,000 and it takes 90 days to sell, but I must sell in 30 days to relocate for a new job or other reasons, I may offer a discount. While a 1% discount might not offset other properties' features or generate a very fast sale, a discount of 5% or even 10% certainly would. I might even offer inducements within that range. If I sell my house in one day and an appraiser did NOT appraise it for $100,000 (all other factors being equal to the others in the area), they would be at risk of censure from their state regulatory board. Even the GSEs could cite them for inadequate analysis. A variance in the defined value of more than 5% would likely put the appraiser's license at risk. By all applicable standards that hypothetical property should ONLY be appraised for $100,000, or "10% high" by the Working Paper's interpretation.
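The hypothetical above can be made explicit with a little arithmetic (all figures are the letter's illustrative assumptions, not data from the Working Paper):

```python
market_value   = 100_000                 # assumed normal value of comparable properties
contract_price = market_value * 0.90     # seller accepts a 10% fast-sale discount

appraised_value = market_value           # a credible appraisal still reports market value

# Measured against the discounted contract price, the correct appraisal
# now LOOKS biased upward (roughly the "10% high" the letter describes):
over_contract = appraised_value / contract_price - 1
print(f"Contract: ${contract_price:,.0f}")
print(f"Appraisal: ${appraised_value:,.0f}")
print(f"Apparent 'upward bias': {over_contract:.1%}")
```

The point of the sketch is that the "bias" is an artifact of measuring a defined market value against a discounted transaction price, not an error by the appraiser.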
I note that three of the four studies cited predate the 2008 crash. Any, repeat ANY, housing study prior to September 2008 that did not foresee the crash must be viewed skeptically. Studies from 1996 that failed to predict the rapid increases from circa 2001 through 2004 and even 2005 are also suspect. In any case, no study from before 2008 can reasonably be considered reliable or applicable to the post-2011 era. The 1996 studies would have had only the market influenced by the 1980s S&L scandal and the RTC-aided recovery of 1990-1996 to study. While we are in a similar post-recovery period now, the causes were hugely different. People went to jail back then.
What is clear from the Working Paper is that it is biased in favor of supporting an alternative approach to measuring the loss risks associated with bad lending policies and bad loans, and the ability to mitigate loss through alternative analyses of collateral value. The use of AVMs is clearly favored in both the narrative and the analytical content of the Working Paper.
Let’s discuss the perceived higher rate of upward bias in rural areas. Before doing so, let’s clarify a point: there is NO uniform definition of "urban" in America today among appraisers, Realtors®, agents, recognized educators, state legislators, or the U.S. Census Bureau. If the definition of urban is not universally accepted, it follows that the definitions of suburban and rural are also subjective rather than objective.
The fact that the Appraisal Institute's old dictionary (4th edition) is used to cite their definition does not mean that same definition is, or even should be, used by all appraisers across America. Certainly the North Carolina Board of Real Estate Appraisers does not concur. As this letter is being written, they are considering revising the definition of market area to mean the county or metropolitan statistical area, and mandating that only that definition be used in North Carolina appraisals. IF that option is approved, then the definition used by the U.S. Census Bureau will become the metric. The reality will remain that within the county or MSA there will likely be all three zones. In practice, if the market area is redefined by regulation to cover the entire county or MSA, the description of that area will also collapse into a single label: urban, suburban, or rural.
Real estate appraisers are bound either by LOCAL custom or by acts of legislatures in defining neighborhoods and competitive market areas; it varies from state to state. In truth, most buyers' perceptions are based on apparent character and area ambiance rather than on population statistics or the presence of sidewalks. In some markets buyers will prefer county services over city services in the belief that they are less intrusive.
Much of Eastern America has preserved both rural and suburban character, even in great urban regions like the Beltway area and environs of Washington, DC. In California we are largely city-to-city, with little or no open space between cities near our larger megalopolitan areas such as Los Angeles, San Diego and San Francisco. More and more this becomes a subjective market consideration.
During a not-too-long-ago trip to the GSA hub in the Kansas City area, the entire surrounding area appeared rural to me, though by most accepted definitions it was clearly suburban; and if one considered factors such as proximity to major employment centers and population, an argument could be made that the area was urban. For many decades we were accustomed to "urban" meaning only the center cores of large cities, typified by low-income residents living in high-density residential housing amid unattractive commercial or industrial uses. Now we have rural-like or urban-like areas rather than definitively rural or urban ones.
Georgetown in DC is clearly urban, as is Crystal City. Is Chevy Chase, or Ohio St. in NW? How about the area outside the Washington Navy Yard's main gate? The population and commercial businesses found from Vienna, VA through Tysons Corner, past Reston, to Dulles Airport outside Herndon could now be defined as urban based on population, though residents would be outraged at any description other than suburban.
Statisticians can define "urban" however they want, but what they cannot do is change the minds and perceptions of the people who live in an area as to what kind of area it is. As a result, any predictions or analyses based on definitions not in use by the residents or buyers of a given area will doom their models to failure.
Page five (5) of the Working Paper says that two primary datasets were used. The first contains every active appraisal record associated with loan applications submitted to FNMA and Freddie Mac from the 4th quarter of 2012 to the 1st quarter of 2016. The term "active appraisal" is not explained and is likely to mislead.
What is clear is that ALL data derived from the cited sources prior to the first quarter of 2015 (specifically January or February 2015) was tainted. "Tainted" or "contaminated" is too soft a description: for credible statistical purposes it was 100% unreliable!
This is not my opinion. It is the documented opinion and official position of FNMA itself, which said in Lender Letter 2015-02 that it was extremely concerned that appraisers had been adjusting so that net and gross adjustments fell within a limit range of less than 15% (net adjustments) and 25% (gross adjustments). FNMA removed the percentage policy in December 2014, but appraisers were still making the same less-than-25% (total) adjustments.
While it was never official FNMA policy, FNMA also recognized that most appraisers felt they were limited to 10% line-item adjustments and to selecting comparables only within a one-mile radius except in the most extreme cases, and then only if carefully explained. This means the entire FNMA database from 2012 (and earlier) through December 2014, and into an undetermined date in 2015, was completed with non-market adjustments applied. FNMA knew the database was corrupted, yet it still used it for CU and ongoing policy-development purposes.
Consider the "model" adjustments oft cited by FNMA/CU. All are based on bad report data and cannot be relied upon.
The "peer" adjustment range for individual adjustments? It, too, is based on artificial FNMA-imposed constraints, as well as on perceived non-market constraints regarding maximum line-item adjustments and allowable comparable-sale distances (and date-of-sale limits). The peer adjustment range is among the least accurate metrics within the patented FNMA Collateral Underwriter process.
Conclusion? The Working Paper analysis is based almost entirely on data known to be bad, plus roughly 1 to 1 1/2 years of suspect data that may or may not include rote adjustments made to old "guidelines" and to incorrectly interpreted (nonexistent) guidelines.
No matter how sophisticated or "robust" the math and regression analyses performed for the Working Paper, the results cannot be relied upon when the very database itself was compromised (skewed so badly as to be rendered useless and not credible) for more than 60% of the period drawn from; probably 80%, and potentially up to 100%.
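As a rough check on the "more than 60% ... probably 80%" figures, one can count the quarters in the paper's stated data window. The cut-off dates for the taint are this letter's assumptions, not FNMA's:

```python
# Quarters in the Working Paper's window: 2012 Q4 through 2016 Q1 inclusive.
total_quarters = 14

# If data is tainted through 2014 Q4 (percentage policy removed Dec 2014):
tainted_through_2014q4 = 9    # 2012 Q4 + all of 2013 + all of 2014
print(f"{tainted_through_2014q4 / total_quarters:.1%}")   # over 60%

# If rote adjustments persisted into mid-2015 (assumed cut-off):
tainted_through_2015q2 = 11
print(f"{tainted_through_2015q2 / total_quarters:.1%}")   # roughly 80%
```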
Another item I do not want to overlook (from page 5 of the WP, Data heading, 3rd paragraph): 30 million appraisal records were obtained and screened from the GSEs. For reasons not explained, about 12 million of these were duplicates. Why the data source would contain duplicates amounting to nearly 67% of the unique appraisal count (12 million duplicates against 18 million unique records) is inexplicable, and a critically important question that needs to be answered. In the end, only 2.1 million involved purchase-money mortgages. That is only 11.6% of the "singular" or unique appraisal total.
So roughly 9% of that 11.6% of the 18 million unique appraisals were deemed to have been biased upward? That is 0.09 × 0.116 = 0.01044, or only about 1% (rounded) of 18 million appraisals that the authors think is skewed upward. What was their own margin of error again?
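The sample-size arithmetic above can be restated as a quick check (record counts as quoted from the Working Paper; the 9% upward-bias share is this letter's reading of the paper's claim):

```python
total_records  = 30_000_000
duplicates     = 12_000_000
unique_records = total_records - duplicates      # 18 million unique appraisals
purchase_loans = 2_100_000                       # purchase-money mortgages

purchase_share = purchase_loans / unique_records
print(f"Purchase share of unique appraisals: {purchase_share:.1%}")   # ~11.7%

# ~9% of the purchase sample deemed biased upward, as a share of ALL uniques:
upward_biased_share = 0.09 * purchase_share
print(f"Share of all unique appraisals: {upward_biased_share:.2%}")   # ~1%
```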
[Chart: OREP/WorkingRE class screenshot, presumably copyrighted by WorkingRE; course instructor Richard Hagar.]
Please note the chart's percentage of variance and, on the third line from the bottom inside the chart, the ostensible data source: CoreLogic, Inc.
The first observation is that the percentage of allegedly "upward biased" appraisals, meaning those appraised above the sale price (2.7%), is statistically insignificant. Core accuracy of adjustments greater than 95% cannot be credibly claimed (according to The Appraisal of Real Estate, published by the Appraisal Institute, multiple editions). While final reconciled conclusions using the "art of appraising" may frequently produce results with only 2% to 3% variance, the science part of appraisal is limited to a normal variance of 5%, or only 95% reliability.
The trend among FNMA and appraisal regulators is to ridicule the art of appraising by dismissing experience-based observations and conclusions. Requiring that 100% of all observations be statistically, or otherwise irrefutably, supported degrades rather than improves the professional appraisal, which is a blend of artfully applied scientific principles rather than a science itself.
Accordingly, no system that produces less than 95% accuracy is as reliable as a professional appraisal. No automated system has proven itself to be even 90% accurate across all price ranges found in any representative heterogeneous market area. Most fall within a range of only 50% to 90% accuracy.
It has been MY personal experience as an appraiser in Southern California (primarily Los Angeles County) that CoreLogic's RealQuest™ and Realist™ property-profile sources have a greater-than-9% error rate on major items such as price per SF, GLA, and primary improvement descriptions, and a much higher failure rate on lot sizes. Their errors on items such as room count, bath count and type, garages, and even year built can collectively affect more than 50% of records, and at times may extend to 100% of potential comparable sales.
Periodically they completely miss the correct sale price because their staff misinterpret documentary transfer stamps and recorder filing fees in Los Angeles. They almost always fail to correlate their own notations that a sale transaction has multiple parcels (assessor parcel numbers) for a single property, with the result that the land-to-improvement ratios and assessments they report for a single address or ownership entity will often be grossly incorrect.
Realist and RealQuest are great services for local real estate agents or area appraisers who fully understand the quirks and their frequency. They are not credible or reliable sources for building complex databases to derive national statistical value correlations.
CoreLogic is a data aggregator: it only collects and assembles data into packages. Only in very recent years did it start marketing itself as having any degree of competency at interpreting the data it collects. Its growth, while impressive, has far outstripped its long-term in-house expertise at analyzing LOCAL data accurately. It buys successful companies. DataQuick, the predecessor of RealQuest, was a Canadian-based company with many years' experience compiling meaningful statistics in the markets it served. CoreLogic itself has very little gravitas as an expert at analysis. It marketed a product providing purported "valuations" that has yet to prove itself any different from, or even as good as, services such as Zillow, realtor.com and a host of other AVMs that routinely produce comparative results for the same property with ranges as great as $400K to $800K!
IF any of the Working Paper's statistical data is originally sourced through CoreLogic, then all of it must be dismissed as biased. CoreLogic offers a product that would or could directly benefit from a study that shows AVMs in a favorable light, as this one does.
At the bottom of page 5 the authors again introduce irrelevant, as well as unconfirmable, statistical data from the flawed database (were those two bedrooms really two and a den, or were the three bedrooms only two and a den?). It is a meaningless comparison by people unfamiliar with the importance and influence of major value-related property characteristics across America.
A median value or size (or a median "anything") for all of America is a meaningless statistic by itself, except to say that half the houses sold for more or were bigger, and the other half sold for less or were smaller. NO OTHER INFERENCE can be drawn from that median number!
Two years' medians can tell an analyst whether there has been an increase or a decline in the median. That is ALL. Everything that follows is speculation: "It went up because interest rates went down; it went down because fuel prices or the CPI went up; it went sideways because people were sitting on the fence wringing their hands while economists told them what to think." It is ALL speculation. Real estate activity in America takes place at the local level; so must analyses.
Pretending to be able to analyze or predict from national statistics applied to a clearly defined, much smaller market area is pure hubris. It is hard enough to do within different parts of a small town or village.
Also, using a phrase like "value added" in a Working Paper that purports to report on appraisal bias, while extolling the magic of statistical modeling and forecasting, betrays an even greater bias.
The concept is an accountant's theoretical treatment of otherwise unexplained (usually unsupportable) sought-for or claimed value arising from intangible aspects of an enterprise's operations. It truly has no place in discussions about real estate appraisal, where it is bound to be misconstrued.
AGA has several concerns regarding the various regression techniques utilized, along with the apparently intentional obfuscating language in the narrative explanations of those techniques. I will leave that, for the most part, for the regression experts to opine on.
The one area that is not complex is the selection of the recommended technique.
Page 8 of the Working Paper (WP), 1st paragraph, states: "While the random forest estimator increases the explanatory power by approximately 7.8% relative to our baseline model, this improved model fit comes at the cost of significant computational time. Required server CPU time is over 48 hours versus the minutes it takes to re-estimate a standard hedonic regression."
The statement above is a devastating admission (or confession).
It states, in effect, that a cost-benefit decision must be made before using any automated regression technique at all. If sloppy, unreliable results are all that is needed, then hedonic regression is fine and can be completed in minutes. If 7.8% greater accuracy (or, more precisely, results assumed to be less inaccurate) is desired, then it will take 48 hours of computational time.
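For readers unfamiliar with the term, a hedonic regression is, in its simplest form, an ordinary least squares fit of (log) price on property characteristics, which is why it re-estimates in minutes. A minimal sketch on synthetic data (the variable names and coefficients are illustrative only, not the paper's actual specification):

```python
import numpy as np

# Synthetic sample: 1,000 sales with made-up characteristics and a known
# "true" pricing rule plus noise.
rng = np.random.default_rng(0)
n = 1_000
sqft = rng.uniform(800, 3_500, n)        # gross living area
beds = rng.integers(1, 6, n)             # bedroom count
age  = rng.uniform(0, 80, n)             # property age in years
log_price = 11 + 0.0004 * sqft + 0.05 * beds - 0.003 * age + rng.normal(0, 0.1, n)

# The hedonic fit itself is one least-squares solve; it runs in milliseconds.
X = np.column_stack([np.ones(n), sqft, beds, age])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(beta)   # recovered coefficients, close to the true values above
```

The speed difference the WP describes is between this kind of closed-form solve and the 48 hours of server CPU time it reports for the random forest.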
NO REAL ESTATE APPRAISER IN THE COUNTRY IS ALLOWED TO MISS THE MARK BY 7.8% WITHOUT RISKING LOSS OF LICENSE!
The Working Paper demonstrates more than anything else, that given an 8-hour work day the computer will take 6 full working days (1.2 working weeks) to arrive at a result that is at best 7.8% error prone.
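The working-day arithmetic above, stated explicitly (the 8-hour day and 5-day week are the letter's assumptions):

```python
cpu_hours     = 48      # server CPU time quoted in the Working Paper
workday_hours = 8
workweek_days = 5

work_days  = cpu_hours / workday_hours
work_weeks = work_days / workweek_days
print(f"{work_days:.0f} working days = {work_weeks:.1f} working weeks")
```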
The lament most often touted by mortgage participants and FNMA is that AVMs are needed because appraisals take too long when performed by trained humans, who usually complete them within one to three working days after inspection; being permitted five days has become a luxury in the push for faster loan delivery.
Perhaps the real solution is to allow licensed and certified appraisers an error rate as high as the computers', and at least as much time to complete the work. Maybe eliminate the need to make credible adjustments at all? Certainly the "computer" is not basing its results on credible data.
We appreciate that this is a working paper and, as such, is intended to generate dialogue. In that sense it will be successful. There is little doubt this paper will be the topic of many a discussion among regulators, lobbyists, lawyers, marketing experts skilled at selling to government or to large bureaucracies such as the GSEs and FNMA, and appraisers alike.
Our hope is that when the dialogue is over, serious valuation professionals will be listened to in the future, rather than snake-oil software hucksters and overnight experts backed by a very large wallet rather than relevant experience.
The waste, the disruption to the integrity of the appraisal process itself by the GSEs, and the erosion of the public trust in appraisals need to stop. Present reckless financial practices among the GSEs that are tolerated or encouraged by federal regulators are placing our nation at extreme risk of an even worse financial failure than that of 2008-2009.
Instead of the preservation required under FIRREA Title XI, there is ongoing erosion of the public trust by The Appraisal Foundation. They have become lost and have forgotten the mandated purpose for which they were created.
TAF appears to operate purely for the benefit of commercial interests. That warrants a very high-level reality check.
FNMA has long gone from being a guardian of appraisal integrity to being one of the biggest causes of its failure.
Perhaps you could refocus the direction in which your agency subordinates appear to be heading before they become complicit in the activities described above. Thank you for your time.
Michael F. Ford
American Guild of Appraisers
Chairman NAPRC /V.P. Special Projects
- [i] Robert S. Witt, a certified residential appraiser, is listed in Stevensville, MD. It is unknown if this is the same Mr. Witt referenced on page 1 of the Working Paper.
- [ii] Page 2, 10th line, 1st paragraph; page 3, multiple instances; pages 4 and 5.
- [iii] Copyrighted chart from the OREP/WorkingRE FNMA CU-Fail related course; Richard Hagar; attributed to FNMA.