
Eric Dunleavy, Ph.D.

Director of Personnel Selection and
Litigation Support Services
Eric M. Dunleavy, Ph.D., is the Director of the Personnel Selection and Litigation Support Services Group at DCI, where he is involved in a wide variety of employee selection and equal employment opportunity/affirmative action (EEO/AA) consulting services. He also serves on staff with both the Center for Corporate Equality (CCE), a national nonprofit research group, and The Institute for Workplace Equality, a national nonprofit employer association. Both focus on education and training related to EEO/AA issues. His primary areas of expertise are in employee selection, validation research, adverse impact analyses and other EEO analytics. His most recent work has focused on advanced quantitative analyses for assessing adverse impact and on selection procedure validation research in the context of EEOC/OFCCP enforcement and litigation support.

Eric received his M.A. (2002) and Ph.D. (2004) in Industrial/Organizational Psychology with a concentration in data analysis from the University of Houston. He received an Honors B.A. (2000) in Psychology from St. Anselm College. Eric has served as President, Vice President, and Legal Chair of the Personnel Testing Council of Metropolitan Washington, D.C. (PTC/MW), and was on the editorial board of The Industrial-Organizational Psychologist for 7 years as co-author of the “On the Legal Front” column. In 2011, Dr. Dunleavy received the first Distinguished Early Career Contributions – Practice Award from the Society for Industrial and Organizational Psychology, which is given to individuals who have developed, refined, and implemented practices, procedures, and methods that have had a major impact on both people in organizational settings and the profession of I-O psychology. In 2015, he was elected a Fellow of the Society for Industrial and Organizational Psychology, an honor bestowed upon I/O psychologists who have enriched or advanced the field on a scale well beyond that of being a good researcher, practitioner, teacher, or supervisor and whose impact is recognized broadly.

Eric has published articles in the International Journal of Selection and Assessment, the Journal of Business and Psychology, Psychology, Public Policy, and Law, and Industrial and Organizational Psychology: Perspectives on Science and Practice. He is currently an adjunct faculty member at George Mason University, where he has taught graduate courses in multivariate statistics and applied measurement, and at University of Maryland Baltimore County at Shady Grove, where he has taught a graduate course on legal issues in selection. He is currently involved in a SIOP task force responsible for developing a dialogue with EEOC on employee selection issues of mutual interest.

Eric Dunleavy’s Recent Posts

In another blog, Art Gutman provides an overview of the California Fair Pay Act (CFPA). The CFPA prohibits California employers from paying employees differently due to sex. This is not new, given existing law; however, some of the specifics outlined in the CFPA are unique.

One example relates to grouping employees for analysis based on “substantially similar” work. More specifically:

An employer shall not pay any of its employees at wage rates less than the rates paid to employees of the opposite sex for substantially similar work, when viewed as a composite of skill, effort, and responsibility, and performed under similar working conditions.

For federal contractors, this is likely not an entirely new concept. If you have interacted with OFCCP regarding pay analyses, you may have worked to identify similarly situated employee groups (SSEGs) for your pay analysis groups (PAGs). SSEG development is often a combination of subjective and objective considerations, but the CFPA adds more structure around dimensions of job similarity. So how do you determine whether jobs are substantially similar?

Industrial and Organizational (I/O) Psychologists are particularly well suited to address this question. I/O Psychologists often collect job-related information as part of a process called job analysis, which can be used to determine job similarity.

A job analysis is the systematic process of collecting and interpreting job-related information for a given purpose – such as assessment development, validation efforts, and/or determining job similarity. This systematic review is accomplished through a variety of data collection methods, such as job observations, documentation review, interviews, focus groups, and surveys. For additional details on job analysis, see this previous DCI blog.

Data collected from a correctly focused job analysis allow job similarity analyses to directly and empirically test how similar roles are in terms of skill, effort, responsibility, and working conditions. Given courts’ general acceptance of job analysis as a viable research methodology, a job analysis may be the most defensible approach to determining similarity of jobs along the mandated criteria.
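As a minimal sketch of what such a job similarity analysis might look like (the job titles, dimensions, and 1-to-5 ratings below are invented for illustration), mean job-analysis ratings can be compared with a simple profile-similarity index:

```python
import numpy as np

# Hypothetical mean job-analysis ratings (1-5 scale) on the four CFPA
# dimensions: [skill, effort, responsibility, working conditions].
jobs = {
    "Analyst I": np.array([3, 3, 2, 1]),
    "Analyst II": np.array([4, 3, 3, 1]),
    "Field Technician": np.array([2, 5, 2, 4]),
}

def cosine_similarity(a, b):
    # 1.0 means identical rating profiles; lower values mean less similar.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, profile in list(jobs.items())[1:]:
    sim = cosine_similarity(jobs["Analyst I"], profile)
    print(f"Analyst I vs {name}: {sim:.2f}")
```

In practice, the choice of similarity index and any cutoff for "substantially similar" are judgment calls that should be informed by an I/O Psychologist and legal counsel; this sketch only shows the mechanics.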

For those of you who will be dealing with the CFPA in 2016, we recommend that you examine job similarity along these dimensions, as it could have enormous consequences for the results of EEO pay analyses under the CFPA.

By Eric Dunleavy, Director and Emilee Tison, Consultant at DCI Consulting Group 


The 30th Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held April 22-25, 2015 in Philadelphia, PA. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and to share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations for individuals with disabilities. DCI Consulting Group staff members were well represented in a number of high-profile SIOP presentations and also attended a variety of other sessions worth sharing.

DCI highlights included, but were not limited to, President Dave Cohen presenting a pre-conference workshop with EEOC Chief Psychologist Dr. Rich Tonowski, Dr. Mike Aamodt presenting a master tutorial on background checks with Dr. Rich Tonowski, and Dr. Eric Dunleavy being awarded SIOP Fellow status at the plenary session. Additionally, DCI staff members Dr. Art Gutman, Dr. Kayo Sady, Joanna Colosimo, Keli Wilson, and Vinaya Sakpal all presented at the conference. Session summaries and highlights are organized under the six major themes listed below.

    1. Hot Topics
    2. Disability Disclosure
    3. Diversity and Inclusion
    4. EEO Analytics
    5. Performance Appraisals
    6. Testing and Selection


Hot Topics

OFCCP and EEOC Enforcement Trends: Practical Tips for Mitigating Risk

DCI’s David Cohen and Dr. Richard Tonowski, Chief Psychologist at EEOC, presented a workshop that reviewed aspects of both the OFCCP and EEOC’s regulatory and enforcement agenda. Several of the highlights are summarized below.

OFCCP Regulatory and Enforcement Agenda

  • Equal Pay Report – current proposal that will require the collection of contractor compensation data (status – NPRM).
  • Pay Transparency – EO 13665 prohibits retaliation against employees and applicants for disclosing, discussing, or asking about compensation information (status – NPRM).
  • LGBT Protections – EO 13672 prohibits federal contractors from discriminating against any employee or applicant because of sexual orientation or gender identity and requires contractors to take affirmative action to ensure applicants and employees are treated without regard to sexual orientation or gender identity (status – Final).

OFCCP Trends

  • When analyzing the last ten years of data, the majority of OFCCP findings of discrimination have related to a pattern or practice of intentional discrimination in hiring and placement (approximately 74%) and to compensation issues (approximately 17%).
  • OFCCP’s focus on large analytical units (or aggregation of data) will almost always yield statistically significant differences between groups of interest. In some cases, data aggregation may be improper.
  • Disparity analyses should compare subgroups of interest to the highest selected group. OFCCP has endorsed this approach and recent settlements are reflective of this.
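As a minimal sketch of a single-event disparity analysis of the kind described above (all counts below are hypothetical), Fisher's exact test can be paired with the four-fifths impact ratio, comparing a focal group to the highest-selected group:

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one selection event (illustrative only).
# Each tuple is (selected, not selected).
focal = (20, 180)       # focal group: 10% selection rate
reference = (60, 140)   # highest-selected group: 30% selection rate

table = [[focal[0], focal[1]], [reference[0], reference[1]]]
odds_ratio, p_value = fisher_exact(table)

# Four-fifths (80%) rule: an impact ratio below 0.8 flags potential
# adverse impact; the significance test asks whether the gap is
# plausibly due to chance.
focal_rate = focal[0] / sum(focal)
reference_rate = reference[0] / sum(reference)
impact_ratio = focal_rate / reference_rate

print(f"impact ratio = {impact_ratio:.2f}, Fisher p = {p_value:.4f}")
```

This is only the mechanics of one common test; which test is appropriate, and how events should or should not be aggregated, depends on the analysis design issues discussed elsewhere in this post.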

Current EEO Litigation Trends

  • Private-sector charges to EEOC are down.
  • EEOC-initiated litigation is down.
  • Few EEOC cases involve complex psychometric or statistical issues.
  • The current EEOC emphasis is on systemic cases, though most activity still involves single-claimant, disparate treatment issues.
  • The hottest EEOC litigation is over procedural matters (such as the adequacy of complaints and conciliation efforts).

EEOC Strategic Enforcement Plan (2013-2016)

  • Eliminating barriers in recruitment and hiring, including those involving religious discrimination, credit and criminal history, and social media.
  • Protecting immigrant, migrant, and other vulnerable workers. The take-away message was employers should be proactive when there is a ‘vulnerable workforce.’
  • Addressing emerging and developing issues, such as pregnancy accommodation, ADAAA, big data, and LGBT issues.
  • Enforcing equal pay laws, with an emphasis on sex discrimination.
  • Preserving access to the legal system by targeting policies and practices which discourage or prohibit individuals from exercising their rights or impede EEOC’s enforcement efforts.
  • Preventing harassment via systemic enforcement and targeted outreach. Note that there has been a high volume of recent harassment cases.

Disability Disclosure

Alliance Special Session: Working with Mental Health Issues

In light of new data collection requirements now in effect under Section 503 of the Rehabilitation Act, the self-identification process for individuals with disabilities (IWD) is a hot discussion topic. Many are particularly curious about the decision to disclose from the perspective of applicants and employees with disabilities. During a panel discussion on mental health issues in the workplace, counseling psychologist Susanne Bruyère, Ph.D., shared her research on the factors influencing the decision to disclose or not disclose disability status in an employment setting. Listed below are several highlights from Bruyère’s discussion of her research:

  • Factors identified as most influential in the decision to not disclose a disability (barriers to disclosure)
    • Fear of unfavorable employment outcomes (e.g., not being hired, being terminated)
    • Concern of shifting employer’s focus from employee performance to the disability
  • Factors identified as most influential in the decision to disclose a disability (facilitators to disclosure)
    • Need for a workplace accommodation
    • Positive employee-supervisor relationship
    • Perception of employer’s commitment to disability inclusion
  • Factors identified as significantly less important in the decision process by IWD who ultimately decided against disclosure (in comparison to those who decided to disclose)
    • Inclusion of disability-related statements in employer recruitment materials
    • Having an employee with a disability at a job fair

Implementing Diversity and Inclusion Practice

Panelists discussed a current shift in the way organizations are, and should be, approaching diversity and inclusion. Companies are moving away from just training women, for example, and moving toward training the managers who have the power to promote those women. One challenge that still remains is identifying any biases or stereotypes that may be present and learning how to overcome them.

Furthermore, it was emphasized that diversity is more than what you can see. It is not about “how do I manage or teach minorities?” but rather, “how do I tailor my teaching style to each individual, no matter their background?” With a decrease in external pressures on employers, motivation to improve diversity and inclusion programs must ultimately come from within the organization.

Attracting and Retaining Qualified Individuals with Disabilities: A Contemporary Update

DCI’s Joanna Colosimo moderated this session, which focused on a variety of issues regarding the recruitment, selection, and retention of individuals with disabilities in the context of the new reporting requirements that went into effect March 24, 2014. Employer, researcher, and practitioner panelists, including Keli Wilson and Arthur Gutman of DCI, covered a range of topics including the voluntary self-identification requirements pre- and post-offer, workforce metrics and the 7% utilization goal for individuals with disabilities, and potential legal considerations. Additionally, panelists addressed the challenges that employers continue to face in attempting to foster inclusive environments in which employees feel comfortable disclosing their disability status and shared best practices on outreach and selection.

In one example, Eyal Grauer, the Manager of Equal Opportunity Initiatives at Starbucks Coffee Company, shared that his employer has long been committed to recruiting, hiring and retaining people with disabilities and supporting inclusion and accessibility in the workplace. Although disabilities are often framed in a negative light, Starbucks has found just the opposite. Partners with a range of disabilities serve to enhance the company through their innovation, creativity and unique skillset and this philosophy should be at the forefront of all disability-inclusive programs and initiatives moving forward.

Diversity and Inclusion

Mending the Leaky Pipeline: Retention Interventions for Women in STEM

Presenters discussed the tendency for women to self-select out of STEM fields. For example, in several fields, women declare and begin studies in almost equivalent numbers; however, women are more likely either to not complete the program or to remove themselves from the field. Methods discussed to reduce the number of women who leave traditionally male-dominated professions included: minimizing alienating language and imagery (e.g., posting pictures of women draped over a NASCAR vehicle in the break room sends the wrong message), downplaying stereotypes in task performance, creating peer groups, encouraging self-affirmation, and identifying mentors. Finally, it’s important to retain women in STEM fields so there are future role models for women considering or entering STEM fields.

Uncharted Waters: Employees with Disabilities

Per Section 503 of the Rehabilitation Act, federal contractors are required to establish a utilization goal of 7% employment for qualified individuals with disabilities. However, recent findings among contractors and non-contractors show that only an average of 3% of employees have identified as having a disability, highlighting the challenge employers continue to face with employee self-disclosure. The panel discussed potential reasons for low disclosure rates:

  • It may not always be clear to people whether or not they have a disability.
  • Language in the self-identification form is framed negatively instead of positively.

Asking someone whether they have an apparent or non-apparent disability may change how they respond. Likewise, when requesting participation, continually using language such as “this will not hurt your chances” instead of “this could help your chances” could deter employees from disclosing their disability. It is more than checking a box – employees have to understand and accept their disability, and then make the choice to disclose.

AttenTION: Integrating Military Veterans into the Workforce

Several presentations focused on veterans in the civilian workforce, with most addressing problems and solutions in three main areas: recruiting veterans into the workforce, getting them through the hiring process, and retaining them in the workforce.

Recruiting/Outreach

There are a variety of both national and local level outreach resources available. Some best practices in this area involved emphasizing local level resources and involving veteran employees in the outreach effort. Some organizations with locations close to military installations initiated efforts to recruit transitioning military members before discharge is complete.

Hiring/Applicant Experience

It is often difficult for veterans and civilian recruiters and hiring managers to identify how military training and skills translate to company positions. Spending the time up front to define the knowledge, skills and abilities needed and the military skills and training that align will allow for more effective targeting efforts, resulting in a better job fit. Some areas where this has been successful included translating military leadership and supervisory skills, CDL transfer programs for drivers, and identifying that military skills translated well to competencies required for a sales position.


Retention

Veteran retention is receiving a lot of focus, as the turnover rate for 21-29 year old veterans is much higher than that of non-veterans, particularly during the first few years of employment. Female veterans leave at a higher rate than male veterans. Communication efforts can help with retention. Specifically, ensuring that top leadership is vocal about supporting veterans, communicating the value of the veteran’s position to achieving the organization’s mission, repeatedly providing information about available resources, and communicating with non-veteran employees to dispel “preference” myths have helped in some companies. Resources that can help reduce turnover include employee resource groups, mentoring, defining clear career paths, and offering a variety of resources to meet the diverse needs of veterans.

Lesbian, Gay, Bisexual, Transgender (LGBT) committee (ad hoc)

DCI staff attended the LGBT committee (ad hoc) meeting. A mission of this committee is to encourage research on LGBT issues. DCI will continue to share information that comes from continued involvement with this committee.

EEO Analytics

Current Issues in EEO Law

In this roundtable discussion, experts focused on four current issues in EEO law: recruitment, adverse impact, sexual harassment, and retaliation. After each topic was briefly introduced, the floor was opened up for an audience led discussion and question session. High-level discussion points included:

  • Although the Uniform Guidelines on Employee Selection Procedures (UGESP) do not indicate recruitment as a selection procedure, mishandling of recruitment can lead to selection violations such as adverse impact and a pattern or practice of discrimination.
    • As an example, an attempt to recruit a skilled laborer may lead you to recruit from local training schools. However, if those training schools do not have a diverse population of students, this pipeline of applicants may reduce diversity.
  • Background checks were discussed in light of legal considerations, including adverse impact and potential employer liability. The discussion stressed the importance of considering the specific requirements of the job as well as the nature of the business when assessing risk.
  • Employers are encouraged to take appropriate steps to prevent and correct unlawful harassment, including establishing a complaint or grievance process, providing anti-harassment training, and taking immediate action when complaints are reported.
  • Anti-discrimination laws prohibit harassment against individuals as retaliation for filing a discrimination charge, testifying, or participating in any investigation.
    • Harassment that is not severe or pervasive enough to interfere with terms and conditions of employment can still lead to retaliation violations, because the criteria for retaliation claims are less stringent than those for harassment claims.

Data Aggregation and EEO Analytics

This symposium provided an analysis of problems with data aggregation in three EEO scenarios: Adverse impact analysis, criterion-related validation analysis, and compensation analysis. In all three presentations, presenters demonstrated problems that aggregating unlike data can introduce in terms of arriving at correct conclusions in legal scenarios.

  • Using data from the Lopez v. City of Lawrence ruling, presenters demonstrated how conclusions regarding hiring/promotion discrimination can differ depending on whether hiring events are analyzed separately, analyzed in the aggregate without appropriate tests accounting for the multiple combined events, or analyzed in the aggregate contingent on Breslow-Day statistical results and using a Mantel-Haenszel estimator. A take-home conclusion from the presentation is that data should never be aggregated if there are conceptual reasons to keep the data separate (e.g., data from multiple locations representing distinct phenomena), but if there are no serious obstacles to aggregating, appropriate multi-event tests (such as the Mantel-Haenszel) should be used.
  • Criterion-related validation research involves collecting selection assessment scores (e.g., written test scores, simulation scores, interview scores, or some combination of scores) from a group of individuals applying for or performing a job and establishing the degree to which those scores correlate, at a statistically significant level, with job performance ratings for the same individuals. A statistically and practically significant correlation coefficient demonstrates that those who perform better on the assessment also tend to perform the job better. However, to the extent that different supervisors have particular rating tendencies (e.g., some tend to be lenient while others tend to be strict), the observed correlation coefficient between assessment scores and performance ratings will be artificially decreased. The presenters offered an application of cluster-centered regression in such scenarios to demonstrate that the technique is superior to ordinary least squares regression in criterion-related validation studies that involve supervisor ratings of performance.

Finally, presenters offered a clear demonstration of how the aggregation strategies offered in OFCCP’s Directive 307 become problematic in EEO pay analyses when they extend beyond the level of similarly situated data. Using simulated data from six known populations of similarly situated individuals (population parameters were established by the presenters as part of the simulation), the presenters shared results demonstrating definitively that false-positive indicators of discrimination increase dramatically when distinct similarly situated groups are aggregated in an EEO pay analysis.
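As a hedged sketch of the multi-event approach described above (a Breslow-Day-style homogeneity check, then a Mantel-Haenszel pooled estimate and test), the following uses statsmodels with invented 2x2 tables; a real analysis would involve many additional design considerations:

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables for three separate hiring events.
# Rows: focal group, reference group; columns: selected, not selected.
events = [
    np.array([[8, 92], [15, 85]]),
    np.array([[5, 45], [12, 38]]),
    np.array([[10, 110], [20, 100]]),
]

strat = StratifiedTable(events)

# Breslow-Day-style test: is the odds ratio homogeneous across events?
homogeneity = strat.test_equal_odds()

# If homogeneity is plausible, pool with the Mantel-Haenszel estimator
# and test the common odds ratio against 1 (no disparity).
pooled_or = strat.oddsratio_pooled
cmh = strat.test_null_odds()

print(f"homogeneity p = {homogeneity.pvalue:.3f}")
print(f"Mantel-Haenszel pooled OR = {pooled_or:.2f}, p = {cmh.pvalue:.4f}")
```

The key design point from the presentation survives in code form: the pooled estimate is only meaningful when there is no conceptual or statistical reason to keep the events separate.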


Performance Appraisals

Does Your Performance Appraisal System “Meet Expectations”?

There were several sessions discussing whether formal performance appraisal (PA) systems should be abandoned. Those in favor of jettisoning formal PA systems argued that such systems involve a lot of time and money but there is no evidence that they actually result in financial benefits to the organization.  Those in favor of keeping formal PA systems argued that, although most PA systems need improvement, they are important to motivating and developing employees.

Several of the organizations that no longer use performance ratings concentrate on goal accomplishment instead. One panelist pointed out that effective goals are related to improving the organization rather than to day-to-day routine work activities. Another panel member commented that any effective PA system should be about helping the employee get better.

It was interesting that several organizations said that they no longer use performance ratings, but in the descriptions of their new systems, it seems as if they still do. For example, one organization said that it no longer uses performance ratings but instead places employees into one of three categories: driving the business, performing, and not performing. Isn’t this a rating scale?

Regardless of the panelists’ view of performance ratings themselves, one point on which everyone agreed was that feedback should not be an annual event. Instead, an ongoing cycle of feedback is critical to making any impact on employee performance.

Based on the number of sessions and the big turnout for each of these sessions, this promises to be a hot topic in the coming years.

Testing and Selection

Mobile Assessment: The Horses Have Left the Barn…Now What? 

Many organizations are moving away from traditional paper-and-pencil assessments to high-tech software programs to test applicants. Key differences across technology platforms (e.g., tablets, laptops, and mobile phones) shared in a pre-conference workshop are listed below:

  • Small screen sizes may result in lower scores, primarily because of increased cognitive demands (e.g., smaller fonts and the page manipulation required to read sentences).
  • Younger applicants prefer taking tests on a mobile device, whereas older applicants typically prefer laptops or desktops.
  • Personality tests are easier to complete on a mobile phone than cognitive ability tests, which often contain diagrams.
  • Although a general reduction in scores is seen on smaller devices, subgroup differences stay the same across device platforms.

20 Years of Changes in Pre-Employment Testing: Experiences and Challenges

Additional information on the changes in pre-employment testing due to technological advances was shared in this session. Key points from this session to consider when assessing applicants or integrating assessments in the applicant tracking system (ATS) are listed below.

  • Determine whether assessments are or should be mobile-friendly (i.e., consider the applicant experience).
  • Face validity is important in the context of technologically driven applicant processes (e.g., ask at the end whether the applicant was able to share their skills).
  • Involve relevant parties in integrating assessments into an ATS (e.g., Industrial-Organizational psychologists, compliance, legal, HR, recruiters, programmers, the assessment developer, and the ATS vendor).

Advancing Test Development Practices: Modern Issues and Technological Advancements

Part of the session explored adding game-like aspects to traditional cognitive assessments. Introducing game aspects to computer-based cognitive ability tests did not significantly impact testing times, which is good. However, the study also found that providing game-like feedback could impact applicant reactions to and performance on the assessment.

Although more research is needed, it’s important to be aware of different technology platforms for assessments and inform applicants of potential drawbacks associated with mobile device testing.

Using Background Checks in the Employee Selection Process

Although prevalent in the employee selection process, the use of background checks as a tool for applicant screening continues to draw heavy scrutiny from both the EEOC and plaintiffs’ attorneys. In spite of the legal risk, approximately 86% of employers consider criminal history for at least some applicants (SHRM, 2012). During a SIOP tutorial session, Mike Aamodt of DCI Consulting and Richard Tonowski of the EEOC discussed legal implications and best practices for employers using background checks in employee selection.

Background checks, including credit and criminal history, have shown evidence of adverse impact against racial minorities when used to screen out candidates for employment. For this reason, employers should consider several factors when determining whether and how to use background checks. See the list below for several best practice recommendations for minimizing risk in the use of employee background checks:

  • Avoid blanket policies (e.g., policy that company will not hire applicants with past convictions without exception).
  • Demonstrate clear link between the purpose of conducting the check and specific requirements of the job.
  • Consider both the length of time since the conviction and the nature of the crime.
  • Notify any applicants who were rejected based on the background check and offer the opportunity to provide an explanation.
By Eric Dunleavy, Principal Consultant; Rachel Gabbard, Associate Consultant; Bryce Hansell, HR Analyst; Kristen Pryor, Consultant; Keli Wilson, Senior Consultant; Brittany Dian, HR Analyst; Emilee Tison, Consultant; Mike Aamodt, Principal Consultant; Kayo Sady, Senior Consultant; and Vinaya Sakpal, Consultant at DCI Consulting Group 

As other blogs have noted, the Notice of Proposed Rulemaking (NPRM) for the long-awaited revisions to the Sex Discrimination Guidelines (RIN 1250-AA05) included some very interesting ideas. Some of those relate to the role of performance measurement systems, which, when used to make employment decisions like promotion, merit increases, bonuses, and termination, can be considered a selection procedure under the Uniform Guidelines on Employee Selection Procedures (1978). The new regulations cite the Supreme Court ruling in Lewis v. City of Chicago to support this notion, but it is an intuitive one; performance ratings (or objective performance metrics, if available) used as part of a promotion or compensation decision process are no different from a test, interview, or experience/education screen used as part of a hiring process.

Interestingly, performance ratings are mentioned in the proposed Sex Discrimination Guidelines both in the context of disparate treatment and disparate impact. Related to disparate treatment, the proposed regulations prohibit:

“Distinguishing on the basis of sex in apprenticeship or other formal or informal training programs; in other opportunities such as networking, mentoring, sponsorship, individual development plans, rotational assignments, and succession planning programs; or in performance appraisals that may provide the basis of subsequent opportunities”

Related to disparate impact, the proposed regulations note that:

“Contractors may not implement compensation practices, including performance review systems that have an adverse impact on the basis of sex and are not shown to be job related and consistent with business necessity.”

Performance ratings are probably not a new topic for readers who have been conducting regression-based EEO pay analyses. In this situation, it is often the case that performance is related to compensation outcomes and as such should be included in an EEO pay analysis as a legitimate factor predicting pay. However, those of you who have been audited are also likely familiar with the allegation that performance ratings can be “tainted” by discrimination and as such may not be appropriate as a legitimate factor explaining pay in a regression equation. This is often a complicated issue requiring sophisticated statistical analyses.

We suggest that two additional points are worth noting:

  1. Performance appraisals themselves can be assessed for adverse impact. Similar to hiring data, there are various statistical significance tests and practical significance measures that can be used to determine (a) whether a difference in performance across EEO group (e.g., sex, race/ethnicity, age) is likely due to chance, and (b) if it isn’t likely due to chance, how large the difference is.  This is often a useful analysis to conduct, particularly when performance measures are used in part to make employment decisions like promotion, merit increases, bonuses, and terminations. In the last year, DCI has conducted adverse impact analyses on performance measures for a wide range of federal agency and private sector clients. It appears that employers are realizing the usefulness of such an analysis.
  2. Should statistically significant and practically meaningful disparities in performance measures exist between EEO protected groups, the next question is whether the performance measurement system is defensible. This is often a function of job-relatedness. Having an Industrial/Organizational Psychologist objectively review the performance appraisal system can help you understand the likelihood that your system would survive EEO scrutiny. In this context the following considerations are often important:
    • Are the performance dimensions supported by some type of job analysis?
    • Is the system structured such that dimensions are clearly defined, a quantitative and/or qualitative scale is used, and behavioral benchmarks are available to help the rater?
    • Are raters trained?
    • Is there a system in place for a high-level management review of performance ratings to determine if there are any patterns/inconsistencies that need to be reviewed?
    • Is there some structured guidance on how to use performance ratings in employment decisions (e.g., merit increases, promotions)?
    • Is there an appeal process for employees who believe their performance ratings are not accurate?
    • Is there a well-developed feedback system through which employees can receive information about their performance that will promote their future development and enable them to improve job performance?
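The statistical and practical significance checks described in point 1 above can be illustrated with a rough sketch. The applicant numbers below are hypothetical, and the two-proportion z-test shown is only one of several tests used in practice (exact tests are common for small samples):

```python
import math

def adverse_impact_summary(sel_a, n_a, sel_b, n_b):
    """Compare selection (or rating) rates for two groups.

    Returns the impact ratio (the 4/5 rule measure of practical
    significance), the z statistic, and a two-tailed p-value from a
    two-proportion z-test (normal approximation).
    """
    rate_a, rate_b = sel_a / n_a, sel_b / n_b
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    p_pool = (sel_a + sel_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return impact_ratio, z, p_value

# Hypothetical example: 60 of 100 men vs. 40 of 100 women rated "exceeds"
ratio, z, p = adverse_impact_summary(60, 100, 40, 100)
print(round(ratio, 2), round(z, 2), round(p, 4))   # 0.67 2.83 0.0047
```

An impact ratio below 0.80 flags a practically large difference under the 4/5 rule, while the p-value speaks to whether the difference is likely due to chance.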

If you have any questions about the above issues, please feel free to contact us. We have a feeling that EEO analyses of performance rating systems will become an important piece of your EEO/AA compliance puzzle … if they aren’t already.

By Emilee Tison, Ph.D., Consultant and Eric Dunleavy, Ph.D., Principal Consultant, DCI Consulting Group 

21 January 2015

DCI predicts that the OFCCP will continue to stay busy in 2015.  Here are the top stories and trends we expect to see this year from the agency.

1.) Sex Discrimination Guidelines
2.) Construction Regulations
3.) Equal Pay Report
4.) VETS 4212
5.) 503/VEVRAA Implementation
6.) Steering
7.) Itemized Listing
8.) FAQs (LGBT and More)
9.) EEO is the Law Poster

Sex Discrimination Guidelines

In November 2014, the OFCCP sent a Notice of Proposed Rulemaking (NPRM) to the OMB for review and approval of the revised Sex Discrimination Guidelines. We expect to see the proposed rule published for public notice and comment in early to mid-2015. As soon as the NPRM is released from OMB and made public, DCI will notify contractors immediately.

Construction Regulations

The OFCCP mentioned revising the construction regulations in the FY 2015 Congressional Budget Justification. It is anticipated that an NPRM will be issued in FY 2015 addressing the affirmative action plan requirements for federal and federally assisted construction contractors and subcontractors. DCI will keep the contractor community updated on any developments from the OFCCP.

Equal Pay Report

Following the August 8, 2014 publication of the NPRM for a new Equal Pay Report, a public comment period was open through January 5, 2015 (extended from the original November 6, 2014 deadline). In brief, the proposed rule suggests that the OFCCP will annually collect summary W-2 compensation data from contractors, utilizing the data to determine an industry standard for identifying potential discrimination. Publication of the finalized rule is anticipated in late August 2015. Based on prior implementation timeframes for new regulatory data obligations and given the need for systems updates, it can be reasonably expected that the first equal pay reporting requirement under the new regulation would not occur until the start of 2017.

VETS 4212

As we reported in a prior blog, the Veterans’ Employment and Training Service (VETS) of the U.S. Department of Labor issued a final rule in September 2014 updating the reporting requirements for federal contractors under the Vietnam Era Veterans’ Readjustment Assistance Act of 1974 (VEVRAA). While the rule became effective on October 27, 2014, contractors will actually see the impact of these changes in 2015.

The primary change will be that VETS-100 and VETS-100A reporting will be replaced with the VETS-4212. In what we anticipate will be a welcome change to most contractors, the VETS-4212 requires that contractors will report protected veterans in the aggregate rather than by protected veteran category. This should ease the administrative burden of completing the report as well as provide more meaningful and practical data to VETS.

As we announced this morning, the OFCCP has released two FAQs related to the self-identification requirements under VEVRAA. Because VEVRAA requires contractors to invite applicants at the post-offer stage to self-identify whether “he or she belongs to one or more of the specific categories of protected veteran for which the contractor is required to report pursuant to 41 CFR Part 61-300” (VETS-4212), contractors are now only required to solicit whether those offered a job wish to identify as a protected veteran, regardless of category. However, the FAQs do clarify that contractors may continue to solicit information regarding the four categories of protected veteran if they wish to continue doing so, as long as they are reporting this information in the aggregate on the VETS-4212.

While there is no requirement to collect and maintain data on specific protected veteran categories, contractors may wish to do so to help identify disabled veterans who may require an accommodation, as well as to more easily identify when “recently separated” veterans are no longer a member of this protected category.

503/VEVRAA Implementation

This year will see the full implementation of the revised Section 503 and VEVRAA regulations covering individuals with a disability and protected veterans, respectively. In addition to the changes that were effective as of March 2014, the remainder of the 503/VEVRAA provisions must be implemented as of each contractor’s next affirmative action plan date. The following are some big-picture NEW requirements that are being implemented for many January AAPs right now:

  • Voluntary self-identification of applicant disability (using the OFCCP form) and protected veteran status, both pre- and post-offer,
  • Voluntary self-identification of employee disability sometime during the plan year,
  • New external policy dissemination requirements,
  • EEO/AA policy statement changes,
  • Implementation of procedures to track outreach and recruitment efforts to assist with required annual evaluation requirements (if not already in place),
  • Formally documenting audit and reporting activities, and
  • Tracking data points for the “44k” analytics, including the number of job openings, jobs filled, total applicants and hires, as well as counts of disabled and protected veteran applicants and hires.

Please note that the new itemized listing requires contractors to submit documentation on some of these new requirements in addition to some of the existing 503/VEVRAA requirements. Contractors should verify these pieces are in place and up to date before an audit occurs.


Steering

The uptick in steering allegations by the OFCCP in 2014 stimulated many DCI blog posts on the subject, multiple ILG conference discussions, and too many client conversations to count. That is not going to change in 2015. Steering cases are low-hanging fruit for the OFCCP, and the fact pattern in steering cases is often very similar to that of the most common failure-to-hire settlements: placement into low-skilled, high-applicant-volume jobs using an unstructured, unstandardized system that is not evidently job-related. Thus, we expect steering allegations to increase in 2015 and encourage our clients to take stock of their selection and placement processes. In particular, we suggest that clients audit their selection and placement systems (especially for lower-level jobs) to ensure that they have appropriate standardization and validation evidence in place to defend against OFCCP steering claims.

Itemized Listing

For 2015 audit activity, be prepared to produce a lot more data at the desk audit stage and to endure a longer audit process. As we mentioned in a previous blog, the newly released scheduling letter and itemized listing contain twenty-two requirements, compared to the eleven-item listing that was previously used. In addition, enhanced data submission requirements were introduced for 503 (Items 7 – 10), VEVRAA (Items 11 – 14), and compensation (Item 19).

Although our December blog included information on the 503 and VEVRAA partial-year data request, we do want to note that this request may not apply to federal contractors who are now entering their transitional AAP year. If you have a January or February 2015 plan date and are just now implementing Subpart C, you would not be in a position to comply with this request. However, you should be prepared to produce your organization’s implementation plan.

For a quick comp review, please read our “compensation guidance summary” released in November.

FAQs (LGBT and More)

In an effort to help contractors navigate the sea of changes brought about in 2014, the OFCCP released a number of new FAQs near the end of the year. Several topics covered in the new FAQs included submission of partial year and compensation data under the new scheduling letter, options for storing self-identified disability data, and defining employer-employee relationships for AAP purposes. DCI predicts that the OFCCP will continue to issue new FAQs throughout 2015 to address questions from contractors as they work to translate written requirements into tangible strategies for implementation. One topic that DCI expects to see covered in upcoming FAQs is guidance on implementing the new LGBT regulations. The final rule, which adds “sexual orientation” and “gender identity” as protected classes under Executive Order 11246, was published in the Federal Register on December 5, 2014. Although the rule states that data collection/analysis will not be required on the basis of sexual orientation or gender identity, contractors should be on the lookout for FAQs outlining the required changes for LGBT under the new rule.

EEO is the Law Poster

During the 2014 NILG conference, Debra Carr, the OFCCP’s Director of the Division of Policy and Program Development, stated that the agency will be coordinating with the EEOC to update the EEO is the Law poster. The updates are necessary to reflect the new regulatory landscape, including the new LGBT regulations. However, no timeline has yet been announced for this change.

By Joanna Colosimo, Senior Consultant; Keli Wilson, M.A., Senior Consultant; Jana Garman, M.A., Consultant; Dave Sharrer, M.S., Consultant; Kristen Pryor, M.S., Associate Consultant; Kayo Sady, Ph.D., Senior Consultant; Yesenia Avila, HR Analyst; Yevonessa Hall, M.P.S., Consultant; Rachel Gabbard, M.A., HR Analyst; and Eric Dunleavy, Ph.D., Principal Consultant, DCI Consulting Group


Art Gutman recently summarized the ruling in Lopez v. City of Lawrence, a police promotion disparate impact case. In this case, a district court judge ruled in favor of the employer along multiple dimensions and rejected aggregation of test-taker data across multiple municipalities and years. The judge disregarded several violations of the 4/5 rule because of insufficient sample sizes, focusing instead on statistical significance tests, which showed no evidence of adverse impact. Even though combining data to achieve a larger sample may have produced a different result, the judge rejected aggregation both across municipalities (questioning the similarity of the applicant pools) and across years (questioning the possibility of duplicate records).

The issue of data aggregation may be very important to federal contractors conducting adverse impact analyses across different units (e.g., job, location) and time periods. Below we describe two ways in which data aggregation decisions can affect the results of contractor adverse impact analyses. Aggregating data from multiple groups is generally reasonable when the groups are “similarly situated,” meaning the individuals and the circumstances surrounding the selection process are reasonably similar. However, combining data from dissimilar groups can lead to what is known as “Simpson’s Paradox.” When groups are dissimilar (e.g., in applicant qualification levels, or in the state of the economy across years), combining data in a single analysis can lead to the opposite conclusion from separate group analyses. This could include scenarios where statistically significant disparities are masked by aggregation, and other scenarios where statistically significant disparities are inflated by aggregation.

For example, data from multiple contractor facilities could be combined into one table, resulting in statistically significant adverse impact against women compared to men. However, when analyses are conducted separately by facility, there may be no statistically significant adverse impact against anyone, and in fact females may be the higher selected group at some facilities. Such a finding would exemplify Simpson’s paradox, and in such cases considerable thought is needed to understand whether a “by facility” approach is more reasonable than an aggregate approach.
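The facility example can be made concrete with a short sketch. The counts below are entirely hypothetical, chosen only to produce the paradox:

```python
# Hypothetical applicant flow: (selected, applicants) per group per facility
data = {
    "Facility A": {"men": (80, 100), "women": (9, 10)},    # high selection rates
    "Facility B": {"men": (1, 10),   "women": (20, 100)},  # low selection rates
}

def rate(selected, applicants):
    return selected / applicants

# Separate analyses: women have the HIGHER selection rate at each facility
for facility, groups in data.items():
    print(facility, "men:", rate(*groups["men"]), "women:", rate(*groups["women"]))

# Aggregated analysis: women appear to have a far LOWER rate overall,
# because most female applicants applied to the low-selection facility
men_sel = sum(g["men"][0] for g in data.values())
men_n = sum(g["men"][1] for g in data.values())
women_sel = sum(g["women"][0] for g in data.values())
women_n = sum(g["women"][1] for g in data.values())
print("Aggregate men:", rate(men_sel, men_n), "women:", rate(women_sel, women_n))
```

Here women are selected at the higher rate at both facilities (0.90 vs. 0.80, and 0.20 vs. 0.10), yet the aggregated rates reverse the conclusion entirely.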

Data aggregation can also lead to trivial conclusions based on statistical significance tests when very large sample sizes are involved. When analyses involve groups that are extremely large, statistical significance will be triggered regardless of selection rates. As an example, even a 1% difference in selection rates will be statistically significant in very large pools. As such, federal contractors will likely find statistically significant results when pools include thousands of applicants. When data are aggregated across time or level, pools become larger and the results of adverse impact analyses are more likely to become statistically significant.
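The large-sample point lends itself to a quick back-of-the-envelope calculation. The counts are hypothetical, and the simple two-proportion z-test is used for illustration:

```python
import math

# A trivial one-point gap (51% vs. 50% selected) with 100,000 applicants per group
sel_a, n_a, sel_b, n_b = 51_000, 100_000, 50_000, 100_000

p_pool = (sel_a + sel_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (sel_a / n_a - sel_b / n_b) / se
impact_ratio = (sel_b / n_b) / (sel_a / n_a)

# z is about 4.5 (comfortably "statistically significant"), yet the impact
# ratio is about 0.98, far above the 4/5 rule threshold of 0.80
print(round(z, 2), round(impact_ratio, 3))
```

This is why practical significance measures matter alongside significance tests once pools reach thousands of applicants.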

It is important for contractors to think carefully about how best to group individuals when analyzing personnel processes. Analyses should consist of groups of similarly-situated employees (or applicants) and should reflect the contractor’s selection process. For example, a contractor who uses well-defined applicant requisitions may find that analyzing applicant data at the requisition level best mirrors reality. When submitting analyses for a compliance evaluation, contractors should use the group structure that best captures the selection process; the goal is to identify potential barriers to equal employment for protected groups, not to mask adverse impact through clever grouping. Additionally, for the reasons described above, contractors must be prepared to speak up when inappropriate data aggregation is used to make adverse impact claims during an audit.

By Rachel Gabbard, M.A., HR Analyst, and Eric Dunleavy, Ph.D., Principal Consultant, DCI Consulting Group


On September 12th, a General Electric subsidiary in Ohio agreed to pay $537,000 to settle a sex discrimination allegation with OFCCP. The agency alleged that the company used a set of employment tests that produced adverse impact against female applicants to attendant positions and had not been validated in accordance with the Uniform Guidelines (41 C.F.R. part 60-3). The employment tests, an off-the-shelf battery called WorkKeys, measure a series of cognitive abilities, including applied math, locating information, reading for information, and applied technology. According to OFCCP, test content did not adequately match job content and the test cut score was not related to performance differentiation; as such, the requirements of the Guidelines were not met.

The OFCCP press release also noted that the agency settled another case back in 2011 that focused on WorkKeys. That 2011 settlement was with Leprino Foods, where the set of tests was alleged to have adverse impact against minority applicants to laborer jobs, an allegation that is generally consistent with the personnel selection research literature on subgroup differences on cognitive tests. Once again, OFCCP alleged that the validation evidence did not meet the requirements of the Uniform Guidelines.

This settlement is a reminder that OFCCP can allege and litigate unintentional discrimination under a disparate impact theory. In this scenario, any facially neutral step in the selection process may be challenged. The Uniform Guidelines, which were developed in 1978 and are jointly enforced by OFCCP, EEOC, and DOJ, require that employers justify any selection procedure that produces adverse impact by demonstrating that it is “job-related and consistent with business necessity”. This is often accomplished via a validation study that uses scientifically rigorous research methods and shows persuasive results that the tool allows for meaningful inferences about candidates to be made.

This settlement is a reminder for federal contractors to monitor their employment testing programs. Employment tests and other professionally developed selection procedures can be an important competitive advantage to organizations, but they can be challenged. We suggest that contractors keep in mind the following:

  • The Guidelines identify a number of employee selection procedures that could be challenged under a disparate impact theory, including:
    • Job requirements (physical, education, experience)
    • Application forms
    • Interviews
    • Work samples/simulations
    • Paper and pencil tests
    • Performance in training programs or probationary periods.
  • Any facially neutral step in a selection process can be evaluated for adverse impact via differential “pass/fail” results.
  • Be aware of what tests and other selection procedures are being used in your organization;
    • Consider an independent test audit to help you understand what tests are being used, whether they were professionally developed, whether they are psychometrically sound, whether they have been validated for similar jobs, and whether they are likely to produce adverse impact.
      • If the answer to any of the above questions is “I don’t know”, then there is likely potential risk.
  • If you are thinking about identifying and implementing tests or selection procedures in your organization, consider conducting a job analysis to identify what work duties are performed in a job and what worker characteristics are needed to perform that job well. Hopefully there are available tests and other selection tools to simulate those duties and/or measure those characteristics.
  • Should any adverse impact be identified, consider formal validation research conducted by an Industrial/Organizational Psychologist or other measurement expert.


By: Eric Dunleavy, Ph.D., Principal Consultant, and Emilee Tison, Ph.D., Consultant at DCI Consulting Group


The 29th Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held May 15-17, 2014 in Honolulu, HI. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and to share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, and the new regulations for individuals with a disability. DCI Consulting Group staff members were involved in a number of SIOP presentations and attended a variety of sessions. Session summaries and highlights can be found below.

Fisher v. University of Texas: The Future of Affirmative Action

Making the Most of SMEs: Strategies for Managing SME Interactions

How Big of a Change Will Big Data Bring?

Recruitment of Individuals with Disabilities: Regulatory, Research, and Employer Perspectives

Meta-analysis Methods for Messy, Incomplete, and Complex Data

Predictive Analytics: Evolutionary Journey from Local Validation to Big Data

Using and Interpreting Statistical Corrections in High-Stakes Selection Contexts

Cruising the Validity Transportation Highway: Are We There Yet?

Within-Group Variability: Methodological and Statistical Advancements in the Legal Context

What Goes Unseen: Mental Disabilities in the Workplace

Fraud on Employment Tests Happens: Innovative Approaches to Protecting Tests

Employees with Disabilities Section 503 Changes: Implications and Recommendations

New Developments in Biodata Research and Practice


Fisher v. University of Texas: The Future of Affirmative Action

This panel discussion included several experts in the field of high-stakes selection and covered the recent Supreme Court ruling in Fisher v. University of Texas. The case focused on the use of race in college admissions. As discussed in a recent blog, the Supreme Court ruling essentially sent the case back to the lower courts to reevaluate whether the University of Texas at Austin’s inclusion of race in the admission process was narrowly tailored enough to pass the standard set by Grutter v. Bollinger (2003). The Court held that the standard previously used to grant summary judgment in favor of the university was incorrect.

During the panel, experts covered a number of questions largely focused on the future of affirmative action in academic settings, but also in employment settings. Much time was spent discussing the intent of universities in using race as a factor and the ultimate goal of broader diversity. Many felt that race is commonly used because it is an accessible factor, whereas other factors like worldliness, team performance, and self-confidence are more difficult to measure systematically. The session chair, Dr. James Outtz, commented that “race is a distraction from figuring out better variables,” and many panelists agreed that better selection measures should be developed in education to obtain the diverse classes that many institutions desire. Overall, panelists reached consensus that the current ruling will have no effect on the federal contractor community and likely won’t have much effect even if the lower courts rule differently. The larger effect will be felt by the education system, which may need to search for alternatives or better ways to measure the diversity it seeks in incoming classes.


Making the Most of SMEs: Strategies for Managing SME Interactions

Many organizations rely on subject matter experts (SMEs) to gather information for a variety of projects such as assessment development, training programs and job analyses. This panel of experts, including Dr. Dunleavy from DCI, discussed best practices and approaches to working with SMEs. Several issues were covered, such as the amount of information communicated to the SME, getting their buy-in and understanding time constraints. The panel agreed that organizations should only share information that will help SMEs understand their involvement in the project and the overall objectives and outcomes. Other recommendations from the panel included:


  • Setting expectations early so SMEs can convey the true dynamics of the organization
  • Recognizing indicators SMEs do not understand (e.g., instructions, language)
  • Being proactive in identifying and managing disengagement
  • Recognizing rating patterns
  • Providing feedback for performance ratings


How Big of a Change Will Big Data Bring?

I/O psychologists discussed many pertinent issues at SIOP this year, but one topic in particular seemed to surface again and again: “big data.” I/O practitioners and researchers alike raised a number of questions related to the growing availability of data and the seemingly limitless potential for analysis. The following questions, among others, were discussed by an esteemed panel of I/O practitioners during a debate on the impact of big data on I/O psychology.

What is big data? Big data has been described using four terms: variety, volume, velocity, and veracity. “Variety” refers to the diversity of data sources to pull from and data types to explore. “Volume” describes the vast quantity of data available for analysis. “Velocity” refers to the speed with which we are capable of finding patterns in the data. Finally, “veracity” is a reference to the general accuracy of the data and, as a result, the outcomes of big data analyses.

What are the implications for psychology and business at large? Big data analyses are commonplace in industries such as insurance and finance. However, where does I/O psychology come into play? Some psychologists argue that while the skills needed to conduct such analyses are evolving, I/O practitioners have been working with big data for some time now (e.g., large-scale validity studies). Others argue that the influx of big data at our fingertips will mean revolutionary changes for the field: new skills, new techniques, and a new set of ethical issues to be wary of. Regardless of stance, I/O psychologists tend to agree that we have something unique to offer: an understanding of human behavior that can shed light on why we are seeing a particular pattern, which goes a step beyond describing what the pattern is.

Is there potential for ethical dilemma? With big data, there comes the potential for big problems. The more data we have to analyze, the more likely we are to find some sort of pattern by chance alone. Dr. Scott Erker advised that, with the increased potential for Type I error (false positives), I/O practitioners must be the “informed skeptics” who raise the point that a finding may not be “as significant as you might think.”

In spite of concerns, big data analyses are becoming more and more prevalent in research across a variety of industries. With a dual role as experts in both data analytics and human behavior, I/O psychologists undoubtedly have a lot to offer.


Recruitment of Individuals with Disabilities: Regulatory, Research, and Employer Perspectives

This session focused on a variety of issues regarding the recruitment and selection of individuals with disabilities in the context of the new OFCCP regulations. The panel (including Dr. Dunleavy from DCI) included a diverse set of experts from academic, internal HR, test vendor, and HR risk management settings. The panelists covered topics including the new regulations, an update on available research regarding subgroup differences, and strategies for promoting and retaining individuals with disabilities. From a more practical perspective, the panelists also discussed the potential consequences of being under-utilized, whether adverse impact analyses of applicant flow data are worth pursuing, and how to conduct analyses in ways that mirror reality. Practitioners in federal contractor companies shared resources to help employers determine reasonable accommodations and discussed strategies for ensuring that processes are perceived as fair by individuals with disabilities. All of the panelists noted that the new regulations should promote a valuable contemporary research agenda, because applicant and employee data that had previously been illegal to collect will now be available. It will be interesting to follow this line of research as data become available in the upcoming years.


Meta-analysis: Methods for Messy, Incomplete, and Complex Data

Meta-analysis is a procedure for combining results over many different studies to obtain more stable research results than might be achieved in any one study. The general idea behind meta-analysis is that all studies suffer from idiosyncratic problems that affect statistical results, but averaging results across studies allows those problems to cancel each other out, such that the average effect provides a realistic picture of the true relationships between variables. In this symposium, methods for improving the accuracy of meta-analytic results were presented.
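A bare-bones sketch of this averaging idea is shown below, using made-up study results. (In practice, meta-analysts also correct for artifacts such as unreliability and range restriction; the sampling-error variance line is a common approximation, not a full treatment.)

```python
# (observed validity r, sample size N) for four hypothetical studies
studies = [(0.25, 100), (0.35, 250), (0.15, 60), (0.30, 400)]

total_n = sum(n for _, n in studies)
r_bar = sum(r * n for r, n in studies) / total_n        # N-weighted mean r

# Variance of r across studies vs. variance expected from sampling error alone;
# if the two are similar, most between-study variation is statistical noise
var_obs = sum(n * (r - r_bar) ** 2 for r, n in studies) / total_n
var_err = (1 - r_bar ** 2) ** 2 * len(studies) / total_n   # approximation
print(round(r_bar, 3), round(var_obs, 5), round(var_err, 5))
```

Here the weighted mean validity is about .30, and the observed variance across studies is no larger than what sampling error alone would produce, which is exactly the "idiosyncratic problems cancel out" logic described above.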

Meta-analysis is an important methodology in personnel selection research, and the presenters highlighted important considerations for those conducting meta-analyses. Of particular relevance was the importance of (1) aggregating comparable statistics, (2) appropriately contending with study outliers that may corrupt meta-analytic estimates, and (3) including the comprehensive set of research on the topic and recognizing the limitations of the meta-analysis if only a subset of research is included.


Predictive Analytics: Evolutionary Journey from Local Validation to Big Data

A panel of experts discussed the advantages and challenges for the two different approaches of local validation and utilizing “big data.” Several of the highlights are summarized below.

Local Validation

  • Local validation done poorly is worse than not doing a study at all
  • The quality of the study will largely depend on the construct validity of the measures
  • Often a good criterion measure isn’t available, and the study team will need to supplement (e.g., use a vendor performance appraisal and (re)survey managers)
  • Limited resources can substantially limit the quality of the study
  • Local validation can increase buy-in (i.e., it is perceived well locally)
  • Many of the experts use validity generalization in addition to content validity, or use it to test theories (a more academic application)

Big Data

  • With the increase in technology, many organizations have access to “big data,” which can be used to answer important business questions and guide strategic planning if harnessed properly
  • Timeliness is very important; consider using longer periods of data while placing more weight on recent data
  • Challenges mostly center on integrating systems and/or using non-traditional sources. This often requires a lot of manual input if the “smoke stacks don’t talk” (i.e., there is no unique identifier across systems). Work with IT to address this, but start small: begin with a specific organizational issue and work backwards (there are many lessons learned to carry over)
  • If you can get big data right and clean, it can prove better than relying on meta-analyses or publications. Published research often suffers from “publication bias,” whereas an internal database will surface all findings, including those that are null
  • Keep in mind the importance of reliability and validity when conducting analytics, an area where I/O psychologists are a great asset
  • Also keep in mind the influence of statistical power with big data: statistical significance does not necessarily mean importance (or practical significance)
  • A panelist recommended the website flowingdata.com for ideas on using big data


Using and Interpreting Statistical Corrections in High-Stakes Selection Contexts

In criterion-related validity studies, validity coefficients provide evidence of the job relatedness of selection procedures, as they represent the degree to which scores on a selection procedure are related to performance on the job. The higher the validity coefficient, the stronger the evidence that the assessment is job related. In practice, statistical corrections are often applied to validity coefficients to account for the fact that observed validity coefficients often underestimate the relationship between the selection procedure and performance. In a well-attended symposium chaired by Dr. Kayo Sady, four presenters highlighted four lines of research exploring issues of applying statistical corrections.
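Two of the most common corrections, for criterion unreliability (attenuation) and for direct range restriction, can be sketched as below. The study values are hypothetical, and in practice the choice and order of corrections (and which reliability estimate to use) require careful judgment, which is exactly the theme of this symposium:

```python
import math

def correct_for_attenuation(r_xy, r_yy):
    """Correct an observed validity coefficient for criterion unreliability."""
    return r_xy / math.sqrt(r_yy)

def correct_for_range_restriction(r_xy, u):
    """Thorndike Case II correction for direct range restriction.

    u = SD of predictor scores in the applicant pool divided by the SD in
    the restricted (selected) sample; u > 1 when the sample is restricted.
    """
    return (u * r_xy) / math.sqrt(1 + r_xy ** 2 * (u ** 2 - 1))

# Hypothetical study: observed r = .25, criterion reliability = .60, u = 1.5
r_obs = 0.25
r_rr = correct_for_range_restriction(r_obs, 1.5)
r_full = correct_for_attenuation(r_rr, 0.60)
print(round(r_rr, 3), round(r_full, 3))   # 0.361 0.466
```

An observed coefficient of .25 nearly doubles after both corrections, which illustrates why the assumptions behind each correction deserve scrutiny in high-stakes settings.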

Dr. Dana Dunleavy and colleagues highlighted medical school admissions research that illustrated the importance of accurately defining applicant populations and their characteristics in order to derive accurate validity coefficient estimates that have been corrected based on presumed population characteristics. Dr. Jeff Johnson presented cutting-edge research that extends statistical correction formulas to synthetic validity coefficients, thus enhancing the available tools for evaluating validity based on non-traditional validation strategies. Dr. Lorin Mueller presented test construction research that underscored how final test quality is influenced by the particular correction procedure applied to item-level statistics. Finally, Dr. Kayo Sady and David Morgan from DCI Consulting presented a legal risk matrix that introduced a method for determining selection procedure legal risk based on concurrent evaluation of uncorrected and corrected coefficient characteristics. A common theme across the four presentations was that the assumptions underlying correction procedures should be carefully evaluated to help ensure accurate calculation and interpretation of corrected validity coefficients.


Cruising the Validity Transportation Highway: Are We There Yet?

This session of noted experts from academia and practice discussed selection procedure validation strategies that borrow validity evidence from other sources. There are many scenarios where selection tools may provide substantial value to organizations, but there is no opportunity to conduct local validation research. A lack of research may prevent the organization from demonstrating that the tools predict important work outcomes, which can expose the organization to EEO risk. In these scenarios, borrowing validity evidence from other sources may be a worthwhile strategy. There are a variety of strategies, yet there are few clear standards for understanding the appropriateness and persuasiveness of each. Toward that end, this panel evaluated contemporary strategies including validity transportability, synthetic validation methods, and meta-analysis as a validity generalization strategy. One common theme centered on the disconnect between practices accepted by the broad I/O psychology community and those that are regularly endorsed in EEO enforcement settings, where it appears that local research is favored. The session closed with some practical considerations regarding strengths and weaknesses of each approach.


Within-Group Variability: Methodological and Statistical Advancements in the Legal Context

When businesses fall under legal scrutiny, the question that is oftentimes raised is: Was one group of employees or applicants treated differently than another group? But what constitutes a group in these situations? It is critical to determine that the individuals in question share certain characteristics that justify grouping them together for purposes such as pay analysis and class certification. In this forum, I/O psychologists discussed a number of advancements in answering the question: Are these individuals similar enough to be treated as a group?

Drs. Kayo Sady and Mike Aamodt of DCI Consulting presented a research method for determining pay equity analysis groupings. The method aims to balance research rigor with practicality, such that data might be collected to inform the grouping process without requiring an exorbitant amount of time and effort. That said, the method may require more time and effort than is reasonable for a typical proactive analysis, but under audit circumstances in which OFCCP is pursuing aggregations beyond the job title level, the method may be used to evaluate the appropriateness of groupings. In the presented method, the lowest level of meaningful aggregation (e.g., job title) is determined first, followed by two steps to justify aggregation. In Step 1, subject matter experts (SMEs) rate all job title pairs on similarity of duties, skills, qualifications, and level of responsibility. To the extent that groups of jobs are rated as similar on those four characteristics, the groups of jobs are evaluated at Step 2. Step 2 involves determining whether pay is influenced by the same factors (e.g., merit variables such as education, experience, or time in company), and in the same way, across the groups. Two methods for completing Step 2 were presented. The first involves evaluating the similarity of regression coefficients for a simple set of predictors across the groups. To the extent that the coefficients are similar across groups, there may be justification for aggregation. (Note that this method does not necessarily involve statistical tests of equivalence, such as the Chow test, as parsimony may be valued over analytic rigor in some circumstances.) The second option presented for Step 2 is to have SMEs rate, using a structured rating scale, the projected influence of the common factors on compensation.
Once such data are collected, common similarity indices such as the squared Euclidean distance between ratings for different jobs can be used to evaluate the similarity of influence ratings across job titles, thus informing whether a common pay model exists across groups.
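As a concrete illustration of that similarity index, the squared Euclidean distance between two jobs' influence-rating profiles can be computed directly; the job titles and 1-to-5 ratings below are hypothetical:

```python
def squared_euclidean(ratings_a, ratings_b):
    """Squared Euclidean distance between two jobs' SME rating profiles;
    0 means identical profiles, larger values mean greater dissimilarity."""
    return sum((a - b) ** 2 for a, b in zip(ratings_a, ratings_b))

# SME ratings of how strongly each factor (education, experience,
# time in company) is expected to influence pay for each job title
analyst = [4, 3, 2]
associate = [4, 3, 3]
manager = [2, 5, 4]

d_close = squared_euclidean(analyst, associate)  # 1: similar pay models
d_far = squared_euclidean(analyst, manager)      # 12: likely different pay models
```

Small distances suggest a common pay model and support aggregation; where to draw the cutoff remains a professional judgment under the presented method.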

Other presenters discussed new methods for determining the appropriateness of class certification. For instance, Dr. Chester Hanvey explored class certification through the technique of time-and-motion observational methodology, which considers whether employees in a group allocate different amounts of time to different tasks, even within the same job. Hanvey proposed that variability in time spent on individual tasks demonstrates that employees are in fact not necessarily doing the same job. Similarly, Dr. David Dubin discussed the use of cluster analysis to show that time spent by employees on tasks can vary, which can serve as evidence against class certification. Finally, Dr. Kevin Murphy discussed a useful statistic for making the degree of group variability easy to understand: the coefficient of variation (CV). For groups with a normal amount of variability, CV is around .33. The CV increases as groups become more variable and decreases as groups become less variable. In comparison to significance tests that simply convey whether or not variability is greater than zero, the CV gives descriptive information on the degree of variability.
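Murphy's statistic is simple to compute: the standard deviation of a measure divided by its mean. A short sketch with hypothetical time-allocation data:

```python
import statistics

def coefficient_of_variation(values):
    """CV = standard deviation / mean, a unitless index of
    relative variability within a group."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical minutes per shift each group member spends on one task
time_on_task = [55, 60, 62, 58, 65, 50, 61, 59]
cv = coefficient_of_variation(time_on_task)  # roughly .07
```

Against the ~.33 benchmark for typical variability noted above, this hypothetical group (CV of about .07) would look quite homogeneous.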


What Goes Unseen: Mental Disabilities in the Workplace

With the release of the new regulations under Section 503 of the Rehabilitation Act, discussions of disability in the workplace have become increasingly prevalent. The impact that these regulations will have on individuals with invisible disabilities is of particular concern among researchers and practitioners alike. The new regulations require federal contractors to begin soliciting disability status on a voluntary basis at the pre-offer phase of employment. Due to the oftentimes concealable nature of invisible disability, many are questioning whether applicants will disclose at the risk of potential stigmatization. In a forum dedicated to the topic of mental disability, Dr. Adrienne Colella, who has conducted extensive research on disability in the workplace, advised that I/O psychologists begin learning more about disability. Several researchers explored the topic of invisible disability in the workplace, shedding light on some of the recent progress being made in this area.

Presenters included researchers Anna Hullet and Christine Nittrouer and Dr. Sam Hunter. Specifically, research addressed issues surrounding adult attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and severe disability in an employment setting. The researchers discussed potential barriers for employees with disabilities as well as methods for producing positive work outcomes. For example, Nittrouer discussed her research on the use of goal-setting and self-monitoring as techniques for allowing employees with disabilities to better stay on task and complete tasks at work. She discussed the particular success of self-monitoring as a means for improving work performance within her study. Dr. Hunter raised an interesting question regarding selection barriers for applicants with ASD: Because ASD is a social disorder, will selection processes with social aspects (e.g., interviews) disadvantage applicants who are otherwise qualified for a position?

As a whole, the forum called attention to the reality that, from an I/O perspective, there is a lack of understanding of disability in the workplace. It is becoming more and more apparent that our knowledge of this topic needs to extend beyond the clinical setting.


Fraud on Employment Tests Happens: Innovative Approaches to Protecting Tests

Representatives from Microsoft, Caveon Test Security, CEB, and Select International participated in a panel discussion about protecting the integrity of tests and testing programs. The experts discussed several questions surrounding the prevalence of cheating and test piracy and approaches to preventing and addressing security breaches. The major take-home message was that practitioners need to be prepared for, and aware of, the security risks for the specific test(s) they use; those risks are constantly changing and likely to grow as technology changes. All felt that cheating wasn't a "big" issue in the aggregate; however, there is little research in the area from which to truly know its prevalence or effects. It was noted that if cheating would have a large impact on the test (or selection decision), then unproctored testing is not an option.

Several of the key points from the session are summarized below:

  • Security tends to fall on a continuum, ranging from a fixed form with no item bank (less secure) to a computer adaptive test (CAT) with a large item bank (more secure). Certain item types, such as multiple-choice questions, are more vulnerable to cheating; a performance-based item would be much less vulnerable.
  • As technology changes (e.g., mobile testing), reliance on less-secure item types may increase.
  • Tips for unproctored internet testing (and testing in general):
    • Build dynamic tests (no standard form given to every candidate)
    • Use single-use links
    • Have a plan for information monitoring (e.g., patrolling the web)
    • Utilize data forensics to identify anomalies (e.g., via a data warehouse)
    • Remember that cheating is more of a validity issue than a behavior issue
      • It takes a large amount of score inflation by a large number of test-takers to affect validity
      • Cheating has a greater impact on scoring that uses cut scores rather than rank ordering
    • Include internal consistency measures and items designed to detect misbehavior, and increase the length of the test
    • If using a CAT, use a fixed-length form (rather than variable length); candidate reactions tend to be more favorable for fixed-length forms
    • Be creative: assume your test or content will be stolen
    • Have an action plan for security breaches (e.g., quickly correcting a compromised item, form, or content; dealing with the candidate)
    • Budget ahead of time for security and prioritize areas of security (e.g., different forms, authenticating candidates via ID checks, test monitors); a risk analysis ahead of time is recommended


Employees with Disabilities Section 503 Changes: Implications and Recommendations

This panel focused on the new Section 503 regulations, which became effective on March 24, 2014. This rule prohibits federal contractors and subcontractors from engaging in employment discrimination against individuals with disabilities, and requires various affirmative action practices in the recruitment, hiring, and retention of protected individuals. Panelists discussed the impact of the new rule, ways organizations can implement the law, and why it is important. Peter Rutigliano, Senior Consultant at Sirota Consulting, discussed ongoing research related to individuals with disabilities and referred to this group as the forgotten diversity segment. His research showed that some of the larger differences between individuals with disabilities (IWD) and individuals without disabilities (IWOD) were found in perceived treatment of the employee by the company, satisfaction with physical working conditions, job support, and job achievement. Smaller differences were found in compensation, management feedback, and local environment (e.g., team). The session presenters discussed the following best practices and recommendations an organization can focus on as it comes into compliance with the regulation.


Advice individuals with disabilities give employers:

  • Make sure job description is representative
  • Focus on ability and not disability
  • Consider flexible working conditions
  • Treat candidates equally
  • Allow candidates to prove themselves (trial period)
  • Prepare when interviewing candidates with disabilities
  • Invest in sensitivity and interview training
  • C-level involvement


Where to invest:

  • Training
  • Marketing
  • Real estate
  • Structured interviews
  • Job descriptions (no limiting or unnecessary qualifications)
  • Performance management


New Developments in Biodata Research and Practice

Several presenters summarized research and practical recommendations surrounding the use of biodata as a selection assessment. Biodata is biographical information, typically collected through a questionnaire that asks about life and work experiences and may also ask about opinions, values, attitudes, and the like. The session discussant argued that biodata items should focus on a life event and include a past-tense verb; otherwise, what is being assessed may be personality rather than biodata. The presenters all agreed that there is strong empirical support for biodata assessments; however, biodata remains an undervalued technique. Two areas that are typically of concern for employers using or considering biodata assessments are applicant faking and test security. Presenters recommended having a large item bank and also exploring different item types as a method of content generation (which would also increase the pool). One example was to utilize response elaboration, which requires the applicant to provide open-ended responses to a previous multiple-choice question. To further decrease the likelihood of faking, it was recommended that the employer follow up on certain score groups; word will get out that there is follow-up, and the open-ended responses will be taken more seriously. For employers considering the use of a biodata assessment in selection, it was recommended to rely not on validity generalization but rather on transportability, as biodata is a process tapping different latent constructs and not a construct itself. Overall, the presenters encouraged the use of biodata in making employment decisions and hoped to see the field of research grow.


By Yesenia Avila, M.P.S., HR Analyst; Eric Dunleavy, Ph.D., Principal Consultant; Rachel Gabbard, M.A., HR Analyst; Kayo Sady, Ph.D., Senior Consultant; and Amanda Shapiro, M.S., Consultant, DCI Consulting Group


One interesting and unanswered question, related to the new 503 and VEVRAA regulations, concerns what information would be required for submission to OFCCP as part of the desk audit. Because OFCCP’s scheduling letter defines what is required as part of the initial submission, it is unclear how the new regulations will change such submissions. For example, the current scheduling letter requests the following:

For the desk audit, please submit the following information (1) a copy of your Executive Order Affirmative Action Program (AAP) prepared in accordance with the requirements of 41 CFR 60-1.40 and 60-2.1 through 2.17*; (2) a copy of your Section 503/38 U.S.C. 4212 AAP(s) prepared according to the requirements of 41 CFR Parts 60-741 and 60-250 and/or 60-300, respectively; and (3) the support data specified in the enclosed Itemized Listing.

Currently, contractors submit just the written narrative in response to #2 above. After March 24, 2014 (or the next AAP compliance date), are contractors required to submit the new 503 and VEVRAA analytics (e.g., disability utilization goals, hiring benchmark and 44k data analytics)? Note: Those items are not listed in the itemized listing referenced in #3 above.

In its FAQs on the new 503 and VEVRAA regulations, OFCCP has noted that it has no intention of revising the current scheduling letter:

1. Must OFCCP amend the Scheduling Letter in order to obtain from contractors the data and information required in the Final Rule? 

No, OFCCP does not anticipate needing to amend the Scheduling Letter to obtain this new data. The current Scheduling Letter is clear that, when selected for a compliance evaluation, the contractor must provide OFCCP with their VEVRAA Affirmative Action Program (AAP) “prepared according to the requirements of …41 CFR Part 60-300.” The VEVRAA AAP requirements are contained in the Final Rule in Subpart C of 41 CFR Part 60-300. Accordingly, any new data and information required by Subpart C of the Final Rule must be included in the documents provided to OFCCP in response to the Scheduling Letter.

Interested in how the federal contractor community would interpret this FAQ concerning desk audit submissions related to 503 and VEVRAA, the OFCCP Institute recently administered a survey to a subset of the contractor community. The survey asked questions regarding what federal contractors were and were not planning to include in initial desk audit submissions. One hundred and twenty-two federal contractors participated. The following results were particularly interesting:

  • 72% would submit neither the disability utilization analysis nor the VEVRAA hiring benchmark analysis;
  • 67% would not submit the data analytics report (e.g., applicants, job openings, jobs filled, hires) for protected veterans or individuals with disabilities; and
  • 75% would submit the Section 503 and VEVRAA narratives.

In addition, sixty-seven percent of contractors estimated that it would take more than 12 hours to compile and submit the materials noted above.

As your organization prepares to come into compliance with the new regulations, have you discussed what would be submitted in the event of an audit? If not, this is a great topic to add to your "must-discuss" list before the regulations become live next week.

It will be interesting to see what, if anything, OFCCP expects you to submit during the desk audit. Stay tuned because it is going to be an interesting year ahead for both OFCCP and the contractor community.

by Yevonessa Hall, M.P.S., Associate Consultant, Eric Dunleavy, Ph.D., Principal Consultant, and David Cohen, President, DCI Consulting Group


In our prior posts, we have addressed the (a) definition of selection procedures, (b) combinations of different measures into compensatory or non-compensatory systems, (c) specific validation pitfalls associated with criminal history and background checks, (d) important aspects of selection system implementation, and (e) issues concerning reasonable efforts to identify suitable alternative selection measures. In this installment of the UGESP series, we take a step back and evaluate the foundation for almost all thorough and legally defensible validation efforts: job analysis.

Job analysis refers to the systematic process of collecting and interpreting job-related information for a given purpose and is the cornerstone of effective validation efforts. To effectively determine whether a selection procedure is appropriate for a job, it is crucial to understand (a) what is done on the job (i.e., what are the critical job behaviors), and (b) what individual characteristics are required to effectively perform the job (i.e., what are the critical Knowledge, Skills, Abilities, and Other Personal Characteristics, or KSAOs). Once such job information is well understood, validation research can be conducted to determine whether selection procedure scores provide an indication of either the critical KSAOs or actual job performance.

The UGESP address job analysis research appropriate for three validity “types” (content, criterion-related, and construct). Although it is widely acknowledged that content, criterion-related, and construct validation do not represent different types of validity, but rather different bases of evidence for supporting decisions based on selection procedure scores, evaluating differences in job analysis guidance across the three strategies is illuminating.

The primary difference between the sets of guidance is that the job analysis requirements are lighter for criterion-related studies, in certain circumstances, than for content or construct studies. Namely, if the outcome/criterion in a criterion-related study is obviously important to the particular employment context, a review of extant job information may suffice to meet the job analysis burden under the UGESP. Such criteria are typically objective business outcomes such as production rate, error rate, absenteeism, and tenure. Although the UGESP do not specifically require a full job analysis in such situations, thorough job analysis information may be required to meet the burden of searching for suitable alternatives. That is, without detailed job analysis information, it may be difficult to search for and evaluate selection procedures considered suitable alternatives.

There are a number of job analysis research features addressed in the UGESP that represent best practices:

  • Determining the important/critical work behavior(s) required for successful job performance
  • Defining the measures used to determine importance and/or criticality, including the basis on which work behaviors were categorized (e.g., difficulty level, frequency performed, time spent, and consequences of error)
  • Evaluating the relative importance of the defined work behaviors
  • Determining the knowledge, skills, abilities, and other work characteristics used to perform the critical work behaviors
  • Differentiating between KSAOs required upon entry versus those learned on the job
  • Indicating the relationships between the critical KSAOs and the critical/important work behaviors (most important for content strategies, but useful, given enough time and resources, for the other strategies)

Additional Thoughts

We would be remiss if we did not briefly address two additional points about the UGESP and job analysis research.

First, the UGESP are clear that there is no one correct way to conduct a job analysis and that the specific information collected and procedures employed may vary as a function of the particular research or business context. As stated in the UGESP, “Any method of job analysis may be used if it provides the information required for the specific validation strategy used.” Further, professional judgment is always involved to determine the most appropriate and feasible job analysis method.

Second, job analytic purposes and methods have changed dramatically in the approximately 35 years since the UGESP were published. Increased reliance on HR not simply as a support function but as a key strategic function has resulted, in part, in increased consideration of job analysis methods for purposes other than validation. Many practitioners have moved away from "job-based" techniques, such as task analysis, toward "organization-based" or "function-based" techniques that are more generalizable across the organization. Movement in many companies toward the use of competency modeling reflects such a change. Although a macro-level perspective may be appropriate for achieving business objectives, it may not be the best approach for achieving compliance objectives.

by Kayo Sady, Ph.D., Consultant, Eric Dunleavy, Ph.D., Principal Consultant, and Mike Aamodt, Ph.D., Principal Consultant, DCI Consulting Group


We are back! After a brief hiatus, we continue our UGESP series by exploring the concept of suitable alternative selection procedures. This piece explores the difficulty in determining (a) whether two selection procedures are truly alternatives to one another and (b) what level of effort to identify a suitable alternative is considered reasonable.

To the extent that a selection procedure results in adverse impact against a protected class (e.g., women, men), it is incumbent on the employer to (a) ensure that decisions based on selection procedure scores are valid and (b) evaluate the availability of alternative selection procedures that are equally valid for the intended purpose and result in lower adverse impact. As the UGESP state:

Where two or more selection procedures are available which serve the user’s legitimate interest in efficient and trustworthy workmanship, and which are substantially equally valid for a given purpose, the user should use the procedure which has been demonstrated to have the lesser adverse impact… Whenever the user is shown an alternative selection procedure with evidence of less adverse impact and substantial evidence of validity for the same job in similar circumstances, the user should investigate it to determine the appropriateness of using or validating it in accord with these guidelines.

In short, if two selection procedures have similar levels of validity evidence but different levels of adverse impact, the procedure associated with lower adverse impact should be used. Thus, consideration of multiple selection procedures is an important aspect of quality validation studies. As stated in the UGESP:

…whenever a validity study is called for by these guidelines, the user should include, as a part of the validity study, an investigation of suitable alternative selection procedures and suitable alternative methods of using the selection procedure which have as little adverse impact as possible, to determine the appropriateness of using or validating them in accord with these guidelines.

Such guidance is sensible and should result in the use of selection procedures that maximize the validity of selection decisions while minimizing adverse impact against protected classes. However, two primary questions must be adequately answered to ensure that (a) selection procedures considered alternatives are truly alternatives and (b) adequate effort is devoted to identifying suitable alternatives. Those questions are:

  • What constitutes an alternative?
  • What level of effort is reasonable?

Is Procedure B truly an alternative to Procedure A?

A suitable alternative is defined as a selection procedure that is substantially equally valid to a different selection procedure. Unfortunately, the UGESP and case law do not clearly address what constitutes substantially equally valid. As addressed in an earlier post, questions of validity refer to whether the decisions made based on a set of selection procedure scores are valid. That is, validity refers to whether the conclusions made based on a set of scores are justified. Thus, it is reasonable to infer that substantially equally valid refers to whether the validity research on each of the procedures is equally robust and indicates that:

  • The two procedures measure the same or very similar characteristics
  • The two procedures predict the same work-related behaviors equally well

Unfortunately, the idea of suitable alternatives is often over-simplified and thought to mean that two selection procedures have similar validity coefficients. Such a perspective ignores that the two procedures may be measuring substantially different characteristics and predicting substantially different aspects of work performance. For example, cognitive ability tests are typically associated with higher racial subgroup differences than are personality inventories, but in a particular situation, the validity coefficients associated with each may be similar. If one were to consider a personality inventory a suitable alternative for a cognitive ability test, one would be ignoring that the two selection procedures predict different on-the-job behaviors. In our view, such a perspective is inconsistent both with UGESP guidance and with contemporary selection research and guidance.

Take for example a police department that is using a reading comprehension test to ensure that its newly hired officers can understand the material presented in the academy (e.g., constitutional law, agency policies and procedures). The validation study shows that scores on the reading test predict performance in the academy as well as on-the-job performance in such dimensions as report writing, court testimony, and decision making. Because the test results in high levels of adverse impact, a civil rights group suggests that the department use a personality inventory instead of the cognitive ability test. The group's argument is that both correlate .30 with academy performance but the personality inventory has less adverse impact. Is the personality inventory a reasonable alternative? No, because the two tests are tapping completely different characteristics.
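The adverse impact at issue in an example like this is conventionally screened with the UGESP four-fifths (80%) rule, which compares subgroup selection rates. A minimal sketch with hypothetical pass counts:

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Under the UGESP four-fifths rule, a ratio below 0.80 is generally
    regarded as evidence of adverse impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical pass counts on a reading comprehension test
ratio = adverse_impact_ratio(selected_a=48, total_a=80,   # 60% pass rate
                             selected_b=30, total_b=75)   # 40% pass rate
flagged = ratio < 0.80  # True: 0.67 falls below the four-fifths threshold
```

A flagged ratio is only a screen; in practice, statistical significance tests and measures of practical significance are typically examined alongside it.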

If a particular selection procedure measuring cognitive ability is proposed to be used, other selection procedures measuring cognitive ability, but with smaller subgroup differences, should be considered as suitable alternatives. For example, if a traditional academic-based test measuring verbal and mathematical reasoning is being considered for selection purposes, a suitable alternative might be a test that measures reasoning via questions that ask about patterns in sets of symbols or diagrams. Each test is intended to measure reasoning ability, and the latter may be equally valid for selection into the target job yet may have smaller subgroup differences. This assumes that verbal and mathematical reasoning ability, per se, are not the characteristics critical to job performance, but rather the broader characteristic of reasoning ability is what really matters.

Essentially, a suitable alternative to a selection procedure under consideration is one that taps the same job-relevant characteristics but taps fewer job-irrelevant characteristics that exacerbate subgroup differences. Questions that require answers in the search for suitable alternatives include:

  • Is there an alternative procedure that is intended to measure the same or a highly similar characteristic(s)?
  • Is there sufficient evidence that the procedure effectively measures such characteristics?
  • Is the job-related validity evidence for that procedure approximately equal to that for the initial procedure?
  • Are sub-group differences in scores obtained from that procedure meaningfully smaller than those for the procedure I am considering?

What is the burden to seek and identify an alternative?

The UGESP note that use of a selection procedure producing adverse impact is lawful if "a user has made a reasonable effort to become aware of such alternative procedures and validity has been demonstrated in accord with these guidelines…" They further note that:

…preliminary determination of the suitability of the alternative selection procedure for the user and job in question may have to be made on the basis of incomplete information…the investigation should continue until the user has reasonably concluded that the alternative is not useful or not suitable, or until a study of its validity has been completed.

But what is a reasonable effort? Thankfully, the UGESP Q&As provide explicit guidance concerning the limits of a user’s effort and outline a series of steps to take to meet the burden.

  • “A reasonable investigation of alternatives would begin with a search of the published literature (test manuals and journal articles) to develop a list of currently available selection procedures that have in the past been found to be valid for the job in question or for similar jobs.” 
  • “A further review would then be required of all selection procedures at least as valid as the proposed procedure to determine if any offer the probability of lesser adverse impact.” 
    • Note the earlier discussion concerning the definition of “equally valid.”
  • “Where the information on the proposed selection procedure indicates a low degree of validity and high adverse impact, and where the published literature does not suggest a better alternative, investigation of other sources (for example, professionally-available, unpublished research studies) may also be necessary before continuing use of the proposed procedure can be justified.” 
  • “In any event, a survey of the enforcement agencies alone does not constitute a reasonable investigation of alternatives.” 
  • “Professional reporting of studies of validity and adverse impact is encouraged within the constraints of practicality.” 

The composite caveat

One question that remains unanswered by the UGESP and case law, to our knowledge, is whether a composite selection procedure, in which scores from different tests of different job-related characteristics are combined, presents a suitable alternative to a single test of one job-related characteristic. For example, it is unclear whether a composite selection procedure combining scores from a job-related cognitive ability test and a job-related personality inventory presents a suitable alternative to the cognitive ability test alone.

If the composite has stronger evidence of validity and smaller subgroup differences than the cognitive ability test alone, is the composite a suitable alternative? Perhaps it is because, unlike a situation in which a personality inventory is proposed as a suitable alternative to a cognitive ability test, the composite predicts both work behaviors associated with cognitive ability and work behaviors associated with personality.

A more complex scenario is one in which the composite has stronger evidence of validity but higher subgroup differences than the cognitive ability test alone. Such a scenario could arise if the personality test has subgroup differences, however small, in the same direction as the cognitive ability test. Because the two selection procedures are indicators of different characteristics, the subgroup differences may compound rather than average out. The question is whether the composite should be used because it has stronger validity evidence or whether the cognitive ability test should be used alone because it has smaller subgroup differences. Without clear guidance from the UGESP or case law, such a choice may come down to the values and objectives of the organization considering the selection procedures.
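The compounding described above can be made concrete with the standard psychometric result for a unit-weighted composite of two standardized predictors: if the two tests have standardized subgroup differences d1 and d2 and intercorrelation r, the composite difference is (d1 + d2) / sqrt(2 + 2r). The sketch below is ours, not from the UGESP, and the specific values (cognitive d = 0.8, personality d = 0.2, r = 0.2) are hypothetical illustrations only.

```python
import math

def composite_d(d1: float, d2: float, r: float) -> float:
    """Standardized subgroup difference of a unit-weighted composite
    of two standardized predictors with intercorrelation r."""
    return (d1 + d2) / math.sqrt(2 + 2 * r)

# Hypothetical values: cognitive ability test d = 0.8, personality
# inventory d = 0.2 (same direction), intercorrelation r = 0.2.
d_comp = composite_d(0.8, 0.2, 0.2)
naive_average = (0.8 + 0.2) / 2

print(round(d_comp, 3))        # composite subgroup difference
print(round(naive_average, 3)) # what simple averaging would suggest
```

Here the composite difference (about 0.645) exceeds the naive average of the two differences (0.5): because the two tests measure different characteristics and are only modestly correlated, same-direction subgroup differences dilute less than simple averaging would suggest. Whether the composite's difference exceeds that of the cognitive test alone depends on the magnitudes of d2 and r.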

Of course, considerations of cost, timing, and other practical realities are relevant to whether investigations of available composites exceed the threshold of reasonable effort. Additionally, we have not addressed questions of how different components might be weighted, how to determine appropriate cut scores, and whether different configurations of elements in a composite reflect suitable alternatives to one another. The answers to such questions are context dependent and are not readily explored in a single blog post. What is clear from the UGESP, however, is that combining different selection procedures into composite measures is not prohibited by the suitable alternatives search requirement. The UGESP state:

Whenever the user is shown an alternative selection procedure with evidence of less adverse impact and substantial evidence of validity for the same job in similar circumstances, the user should investigate it to determine the appropriateness of using or validating it in accord with these guidelines. This subsection is not intended to preclude the combination of procedures into a significantly more valid procedure, if the use of such a combination has been shown to be in compliance with the guidelines (emphasis added).

Conclusion

The requirement to seek and evaluate suitable alternative selection procedures can be a complex issue in practice. Unfortunately, guidance on the topic is relatively scarce, and there is widespread misinterpretation of what constitutes a suitable alternative and what efforts are required to seek one out. DCI’s Kayo Sady is co-editor of an upcoming handbook (to be published in 2014) that will provide concrete, practitioner-oriented guidance on a number of HR legal issues. The volume will include a thorough treatment of the suitable alternatives requirement, including evaluation of the UGESP, case law, the research literature, case studies, best practice guidance, and attorney commentary.

by Kayo Sady, Ph.D., Consultant, Eric Dunleavy, Ph.D., Principal Consultant, and Mike Aamodt, Ph.D., Principal Consultant, DCI Consulting Group

