The Great Recession revealed that many of the models used by financial institutions (FIs) lacked adequate documentation. Following the passage of the Dodd-Frank Act and the emergence of various regulatory requirements (e.g., SR 11-7, OCC 2011-12, FIL-22-2017, NYS DFS Part 504), FIs must validate their models or face penalties. This means establishing an effective model risk management program (the 2nd Line of Defense) to screen for any discrepancies overlooked by model development teams (the 1st Line of Defense). Over the past 13 years, financial services have seen major growth in these roles. Yet despite the increase in validators, many original penalties remain unresolved even as new ones accumulate. One reason for this trend is that managers insist on hiring validation candidates based on their model development experience. This might not be the wisest move.
Developers and validators must understand that they have different responsibilities. The former build models by applying their technical expertise in quantitative analysis, while the latter write lucid reports that review those models. Although some validators create challenger models, this represents a small portion of validation work. Ideally, validators should have development experience, but the main emphasis is on preparing top-quality reports. In some instances, validators overstep their boundaries and assume development responsibilities: instructing developers on how to build their models, preparing model methodologies, or taking credit in model development documentation. However well intentioned, these actions blur the lines between the 1st and 2nd Lines of Defense and ultimately lead to penalties against the FI.
Hiring a former developer seems intuitive because that individual should feel comfortable explaining the modeling process and its results. However, FIs generally acknowledge that developers either lack the ability or the time to produce sufficient documentation. As a result, most development teams hire technical writers to fill this gap, either with internal staff or third-party consultants. In other words, many developers do not prepare the documentation for their own models; a developer-turned-validator will therefore probably lack sufficient writing skills.
Despite developers' inability to write solid reports, some hiring managers believe these skills can be acquired easily, even within a short period of time. This is wishful thinking at its worst, equivalent to expecting a sedentary individual to become an elite athlete overnight. For any candidate to succeed in a new field, that individual must have the opportunity to learn and practice new skills in a low-stress, non-live environment. Given the stringent deadlines for validation reports, hiring managers are setting up their inexperienced validators for failure.
Though rarely acknowledged publicly, the shortcomings of validators mirror the problems found in development. These include inconsistent standards (e.g., ambiguous definitions of a pass or fail), erratic reports (e.g., different templates used by the same group), and inefficient processes (e.g., 10 people needing 6 months to write an incoherent report). Until developers resolve these issues themselves, the 2nd Line of Defense inherits the very deficiencies it is supposed to eliminate.
To hire the best possible validators, hiring managers must prioritize writing and communication skills, followed closely by a strong understanding of the development process. One way to screen for this essential attribute is to request a writing sample. Although most prefer a technical sample, my expectation is simply that the candidate describe a situation in an organized manner. For example, I would hire a quantitatively trained candidate who submits a thoughtful analysis of Kurt Vonnegut's Breakfast of Champions. My rationale is that the candidate demonstrates versatility by writing on a topic that might be outside his or her comfort zone. In my experience, validators will at times review models they have no prior knowledge of. A candidate's ability to adapt to unfamiliar situations will translate into success as a validator.
For those situations where this ideal validator cannot be identified, I suggest pairing a strong technical writer with a savvy quantitative expert. The expectation of this approach is that one complements the other in producing a strong validation report. Hopefully, over time, this symbiotic relationship results in each being able to write a report separately because a transfer of knowledge occurs between the two.
Last but not least, the hiring manager must set the example by playing an active role in ensuring that validation reports represent the highest quality. This means thoroughly reviewing all reports and, if necessary, performing the work to ensure that all gaps are properly addressed. As a hiring manager, I hold myself responsible for all validation reports. There should be no disparity in quality based on whether these documents are prepared by subordinates or written by me.
It is a false assumption that a candidate with modeling experience will make a strong validator. The purpose of model validation is to replicate modeling processes, not their insufficiencies. To create an effective model risk management program, hiring managers must understand the difference between the two roles when recruiting validation talent.