
Kristen Pryor, M.S.

Contributing Author

Kristen Pryor’s Recent Posts

On May 11, OFCCP announced that Gordon Food Service (GFS) settled with the agency, agreeing to pay $1.85 million to female applicants to resolve allegations of hiring discrimination for laborer positions at four separate locations: two in Michigan, one in Kentucky, and one in Wisconsin. In total, 926 female applicants were affected across the locations, and GFS agreed to hire 37 of those applicants as positions become available.

The allegations centered on findings of adverse impact at the job group level and identified a strength test that had not been validated in accordance with UGESP as the source of the adverse impact. OFCCP evaluated three separate time periods across the four facilities, the shortest covering approximately 19 months of data at two facilities (Kenosha, Wisconsin, and Grand Rapids, Michigan) and the longest covering approximately 23 months of data at the Kentucky facility.

Each conciliation agreement indicated that the strength test adversely impacted female applicants and was not validated in accordance with the Uniform Guidelines on Employee Selection Procedures (UGESP). The conciliation agreements noted that an interim validation study had been conducted, but that it failed to include “an investigation of suitable alternative selection procedures and suitable alternative methods of using the selection procedures which have as little adverse impact as possible.” In other words, the agency determined that the interim validation study did not provide sufficient information about potential reasonable alternatives to the strength test. GFS has agreed to cease use of the strength test until it has been validated in accordance with UGESP or a comparable, UGESP-validated assessment is identified.

This case serves as a reminder to research the underlying cause of statistical disparities. Per UGESP, this research should take the form of breaking the hiring process down into the steps or points where decisions are made. If a step is identified as driving the disparity, sufficient validation evidence is then required to support the continued use of that step in spite of the statistical disparity. The personnel psychology research literature generally reports that tests of physical strength and/or endurance are likely to produce adverse impact against women. As such, federal contractors using these types of assessments should seriously consider formal validation research in accordance with UGESP.
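
As a rough sketch of what this step-level research can look like, the snippet below applies UGESP's familiar four-fifths rule to a single selection step. All counts are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical step-level adverse impact check. The four-fifths rule flags a
# selection step when one group's selection rate is less than 80% of the most
# favored group's rate. All counts below are illustrative only.

def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to comparison group B's rate."""
    return (selected_a / total_a) / (selected_b / total_b)

# Illustrative applicant flow for a single step (e.g., a strength test):
# 200 female applicants with 50 passing; 300 male applicants with 150 passing.
ratio = impact_ratio(50, 200, 150, 300)
print(f"Impact ratio: {ratio:.2f}")                 # 0.25 / 0.50 = 0.50
print("Flag step for review" if ratio < 0.8 else "No flag")
```

Running the same check at each decision point in the process helps isolate which step, if any, is driving an overall disparity.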

By Kristen Pryor, Consultant, and Jeff Henderson, Associate Consultant, DCI Consulting Group


The 31st Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held April 14-16, 2016, in Anaheim, California. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and to share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations concerning individuals with disabilities. DCI Consulting Group staff members were well represented in a number of high-profile SIOP presentations and also attended a variety of other sessions worth sharing. Notable session summaries and highlights can be found below.


Beyond Frequentist Paradigms in Legal Scenarios: Consideration of Bayesian Approaches

High-stakes employment scenarios with legal ramifications have historically relied on a frequentist statistical approach, which assesses the likelihood of the data assuming a certain state of affairs in the population. This, however, is not the question that is usually of interest, which is the likelihood of a certain state of affairs in the population given the data. This session explored the use of a Bayesian statistical approach, which answers the latter question, across different high-stakes employment scenarios. In each of the presented studies, data were simulated and analyzed, and the results of the Bayesian and frequentist approaches were compared:

  • David F. Dubin, Ph.D., and Anthony S. Boyce, Ph.D., illustrated the application of Bayesian statistics for identifying selection test cheaters and fakers.
  • Chester Hanvey, Ph.D., applied a Bayesian approach for establishing whether jobs are correctly classified as exempt in wage and hour questions.
  • Kayo Sady, Ph.D., and Samantha Holland, Ph.D., demonstrated the advantages of a Bayesian analysis in compensation scenarios with difficult-to-detect subgroup differences.

In each of the studies, the results suggested the utility of a Bayesian analysis in some specific circumstances. Overall, the presenters agreed that the Bayesian analysis should supplement more traditional frequentist analyses and noted specific issues to consider when designing these analyses. Given the lack of legal precedent and difficulties introducing a new set of statistical interpretations into the courtroom, the takeaway was that the best current value-add for Bayesian approaches is in proactive, non-litigation applications.
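
As a minimal illustration of the distinction the session drew, the sketch below compares a frequentist two-proportion test with a simulated Bayesian posterior for pass rates in two applicant groups. The counts, uniform priors, and simulation approach are assumptions for illustration only, not a reconstruction of any presenter's method:

```python
import math
import random

random.seed(0)

# Hypothetical pass counts for two applicant groups (illustration only).
pass_a, n_a = 50, 200     # group A: 25% pass rate
pass_b, n_b = 150, 300    # group B: 50% pass rate

# Frequentist view: a two-proportion z-test gives the probability of data at
# least this extreme, assuming no true difference between the groups.
p_pool = (pass_a + pass_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (pass_a / n_a - pass_b / n_b) / se
p_value = math.erfc(abs(z) / math.sqrt(2))    # two-tailed p-value

# Bayesian view: the probability that group A's true rate is below group B's,
# given the data, simulated from Beta posteriors under uniform Beta(1, 1) priors.
draws = 10_000
posterior_prob = sum(
    random.betavariate(pass_a + 1, n_a - pass_a + 1)
    < random.betavariate(pass_b + 1, n_b - pass_b + 1)
    for _ in range(draws)
) / draws

print(f"frequentist p-value: {p_value:.2e}")
print(f"P(rate_A < rate_B | data): {posterior_prob:.3f}")
```

The posterior probability answers the question directly ("how likely is a real difference, given the data?"), whereas the p-value answers the inverse question about the data under an assumed null.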


Contemporary Issues in Occupational Credentialing

The opportunity for credentialing or micro-credentialing is ever increasing, with credentials popping up in many professional fields that previously had none. What it takes to develop and maintain these credentialing exams, however, is something that many people know little about. In this session led by Samantha Holland (DCI), panelists from both private and public sector credentialing programs shared their experiences with issues such as maintaining test security, developing test content, and establishing validation evidence for their exams. Some highlights are noted below:

  • John Weiner, from PSI, noted the many security aspects to consider when administering exams online, a situation that requires additional measures beyond those described by other panelists.
  • Rebecca Fraser, from the Office of Personnel Management, shared her experience using methods beyond practice analysis to establish the content domain for specialized, low sample size domains.
  • Lorin Mueller, from the Federation of State Boards of Physical Therapy (FSBPT), discussed the need for clearer boundaries when it comes to regulation of certification boards: the line between what is good for a profession and what is good for business can sometimes become blurred.
  • Alex Alonso, from the Society for Human Resource Management (SHRM), shared his experience building his organization’s newly minted HR certification program from the ground up.


A View from the Trenches: EEOC/OFCCP Practitioner Update

DCI’s Joanna Colosimo moderated this panel, featuring DCI’s Mike Aamodt, Michelle Duncan of Jackson Lewis, Eyal Grauer of Starbucks, and David Schmidt of DDI, providing an update on recent regulatory changes, enforcement trends, and other topics related to compliance.

In fiscal year 2015, the OFCCP completed fewer compliance evaluations, but the duration of audits has increased as a result of the revised scheduling letter and more in-depth follow-up requests, particularly related to compensation. The panel also discussed the increase in steering allegations and settlements where whites and/or males were the alleged victims of systemic hiring discrimination.

Dr. Aamodt spoke about two hot topics: the EEOC’s proposed pay data collection tool and the use of criminal background checks for employment decisions. With regard to the EEO-1 pay data collection tool, he highlighted the burden of reporting pay data for 10 EEO-1 categories, 12 pay bands, 7 race/ethnicity categories, and 2 sex categories, as well as some of the limitations of using W-2 data. Additionally, he discussed how difficult it would be for the EEOC to use the resulting data to identify pay issues. For employers using criminal background checks, Dr. Aamodt recommended that contractors adopt narrowly-tailored policies that consider the nature of the offense, the duration of time since the offense, and the nature of the job being sought.
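
A rough, back-of-the-envelope multiplication of the category counts described above conveys the scale of the proposed reporting burden:

```python
# Back-of-the-envelope count of data cells implied by the proposed EEO-1 pay
# data collection, using the category counts described in the session.
eeo1_categories = 10
pay_bands = 12
race_ethnicity_groups = 7
sex_categories = 2

cells = eeo1_categories * pay_bands * race_ethnicity_groups * sex_categories
print(f"{cells} pay data cells per report")
```

That works out to well over a thousand cells per report, before accounting for employers with multiple establishments.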


Strategically Evaluating Outreach for Individuals with Disabilities and Veterans

This session presented research conducted by DCI’s Kristen Pryor, Rachel Gabbard, and Joanna Colosimo investigating best practices among federal contractors in complying with the Section 503 and VEVRAA formal evaluation of outreach and recruitment obligations. Representatives from 77 federal contractor organizations provided survey feedback on current methods and prospective strategies for evaluation. Results identified strategies such as tracking resource-specific metrics on qualified referrals and hires, as well as ROI analysis, for evaluating the success of outreach efforts. Results also suggest general frustration among federal contractors due to insufficient and ambiguous regulatory guidance on this requirement. The full white paper is available here. In addition, DCI will be conducting follow-up research in the near future to determine whether further progress has been made in this area, now that the regulations have been in effect for over two years.


No Longer an Afterthought? Reasonable Alternatives and Title VII Litigation

DCI’s Emilee Tison moderated this session, in which panelists discussed their perspectives and experiences related to identifying and evaluating reasonable alternatives. Panelists included Winfred Arthur, Jr. (Texas A&M University), Theodore Hayes (FBI), James Kuthy (Biddle Consulting Group, Inc.), and Ryan O’Leary (PDRI, a CEB Company).

Discussion topics included:

  • The Uniform Guidelines text related to the “reasonable effort” in identifying alternatives with “equal” validity and “lesser” adverse impact
  • Strategies for identifying and considering alternatives, including the impact this will have on two selection goals: validity and diversity
  • The potential impact of recent case law on discussions of reasonable alternatives
    • Lopez v. City of Lawrence, 2014
    • Johnson v. City of Memphis, 2014
  • Documenting a consideration of alternative selection procedures

Panelists ended the session with a few parting words, including:

  • Clearly identify what you are considering as an alternative
  • Note that not all alternatives are created equal
  • Put in the effort to identify and document your search for alternatives
  • When documenting alternatives, steer clear of ‘stock language’ by providing justification for your choice(s)


Competencies and Content Expertise for I/O Psychology Expert Witnesses

In light of recent developments in case law and updated regulatory guidance, panelists discussed competencies and strategies for expert witness testimony, focusing on three main topics: social framework analysis (SFA), new measures for test validation, and wage and hour concerns related to revised FLSA regulations on exempt status employees. Panelists included DCI’s Eric Dunleavy and Arthur Gutman, in addition to Margaret Stockdale of IUPUI, Cristina Banks of Lamorinda Consulting, Caren Goldberg of Bowie State University, and David Ross of Seyfarth Shaw.

The goal of SFA as it relates to expert witnesses is to educate the court and jury on the processes underlying cognitive bias and other socially constructed concepts like gender inequality. Panelists cited the 2011 Supreme Court case of Wal-Mart v. Dukes as a prime example of applying SFA methodology to diagnose discrimination in personnel practices. Although SFA has been met with some criticism, many employment processes do involve a degree of subjectivity that has the potential to lead to discrimination. For this reason, experts are encouraged to examine seemingly neutral factors that may have a disproportionate impact on members of a protected group.

Shifting focus to standards regarding test validation, panelists commented on the outdated nature of the Uniform Guidelines on Employee Selection Procedures (UGESP), which have not been updated in nearly 40 years. Although the panel was not aware of any initiatives to update the guidelines, it was noted that several SIOP representatives have met with the Equal Employment Opportunity Commission (EEOC) regarding the guidelines and other topics of mutual interest. Panelists also advised the audience to rely on both the SIOP Principles and the APA Standards as supplemental, more contemporary resources on test validation. Additionally, SIOP will be publishing a white paper on minimum qualifications and adverse impact analyses that addresses data aggregation concerns and other testing considerations.

The final topic discussed focused on wage and hour issues concerning the revised FLSA regulations. The panel discussed the difficulties that many employers face in accurately classifying jobs as exempt or non-exempt, and also when determining whether independent contractors should be considered employees. It was recommended that job analyses be done for individual positions, rather than general ones, to help determine exempt status and how much time is spent doing each type of work. Employers should also be aware of any differences regarding state law.


Opening the “Black Box”: Legal Implications of Big Data Algorithms

The subject of “big data” has become a hot topic as access to increasingly large amounts of data provides employers with new opportunities to make informed decisions related to recruitment, selection, retention, and other personnel decisions. However, “data scientists” often overlook the legal implications of using big data algorithms within an employment context, especially when it comes to employee selection. Panelists discussed several issues emerging from the use of big data algorithms, including the potential for discrimination, Title VII consequences, and strategies for mitigating risk.

As suggested by DCI’s Eric Dunleavy, many of the “big data” models really do not differ from empirically keyed biodata, which is not a new concept. What is new are methods of collecting larger amounts of data from new sources. Like empirically keyed biodata, big data can be very effective in predicting work-related outcomes. However, if the employer cannot explain how the algorithm works or illustrate that it is job-related, it may be difficult to justify use of the algorithm if facing a legal challenge.

In addition to traditional adverse impact concerns related to women and minorities, some big data techniques may have the potential to discriminate against other protected groups. For example, one panelist mentioned a computer program that can automatically score an applicant’s body movements and analyze vocal attributes from a video recording of an interview. Several other panelists noted that certain body movements or vocal attributes may be related to protected class status, in particular individuals with disabilities. The main takeaway here is that if an employer is using data algorithms, it is imperative that they not only validate the model, but also understand how it is making decisions.


Big Data Analytics and Employment Decisions: Opportunities and Challenges

In this session, speakers highlighted the increasingly popular use of big data techniques (e.g., machine learning) within organizations to predict work outcomes, pointing out both benefits and challenges inherent to these approaches.

As one example of a big data “win”, Facebook’s David Morgan described how data collected on the current workforce can be used to identify employees at risk of turnover. More caution is required, however, when using big data to inform selection decisions. Many big data algorithms are essentially “black boxes”: data goes in and results come out with little transparency of the how or the why. Not being able to explain the “why” makes these approaches very difficult to defend in court. Rich Tonowski, representing the EEOC, advised that companies be knowledgeable and comfortable with the process being used as the agency will obtain access to the algorithm. Similarly, companies should be able to explain how the information being used is job-related, especially when data have been mined from social media or other Internet sources.

A final caveat was that machine learning tools may use data that are correlated with protected-class status in some way. Dave Schmitt of DDI suggested that one way to test for this is to determine whether the model can predict the race or sex of applicants. If it can, the model may be functioning as a proxy for protected-class status. The problem may be compounded by the “digital divide,” whereby minorities may be less likely to have regular access to the Internet due to lower socio-economic status.
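
A minimal sketch of this kind of proxy check, using entirely synthetic data and a deliberately simple threshold classifier, might look like the following. The feature, its group-dependent distribution, and the threshold are all assumptions made purely to illustrate the hazard:

```python
import random

random.seed(1)

# Entirely synthetic data: one model input whose distribution differs by sex,
# e.g. a variable scraped from social media. The separation is built in here
# only to demonstrate the check.
applicants = []
for _ in range(1000):
    sex = random.choice(["F", "M"])
    feature = random.gauss(0.0 if sex == "F" else 1.0, 1.0)
    applicants.append((feature, sex))

# Proxy check: can a trivial threshold classifier recover sex from the input?
threshold = 0.5
correct = sum(
    ("M" if feature > threshold else "F") == sex
    for feature, sex in applicants
)
accuracy = correct / len(applicants)
print(f"accuracy predicting sex from the feature: {accuracy:.2f}")
# Accuracy well above the ~0.50 expected by chance suggests the input could
# act as a proxy for protected-class status and warrants scrutiny before use.
```

In practice, the same idea scales up: train a model to predict protected-class membership from the selection algorithm's inputs or scores, and treat above-chance accuracy as a red flag.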


Applied Criterion-Related Validation Challenges: What We Weren’t Taught in Textbooks

This panel, which included DCI’s Art Gutman, discussed a variety of challenges faced when conducting criterion-related validation studies for client organizations, including study design issues, data collection problems, determinations regarding appropriate analyses, and meeting reporting requirements. Specifically, presenters discussed the criterion problem (obtaining appropriate and accurate measures of job performance), problems with predicting low base rate events, and issues of range restriction and the appropriateness of applying corrections, among others. The panelists hypothesized that upcoming issues in criterion-related validation will include dealing with big data (“messy predictors”), processes for validating non-psychometric assessments, addressing validity equivalence (or lack thereof) in multi-platform or mobile assessments, and the eventuality of court cases evaluating validity generalization.


Implications of Revisions to FLSA Exemptions for Organizations and Employees

In this session, a panel of experts provided insights on the proposed changes to the FLSA exemption criteria. The panel discussed the salary test for exemption, which would increase from $455 a week to the 40th percentile of weekly earnings for full-time salaried workers (estimated at $970 for 2016), and the implied potential changes to the job duties test. Regarding the salary test, panelists agreed that a change is overdue. However, they argued that a phased approach would be more appropriate and that the threshold should not be set at a fixed dollar value, but instead tied to a measure that keeps it in line with inflation. The NPRM’s discussion of the job duties test did not propose a change, but asked for feedback on whether a quantitative threshold, like the 50% “primarily engaged” test in California, should be implemented. The DOL estimated that approximately 20% of exempt employees would be affected by the salary changes alone.

Implications for employers are staggering, especially in light of the potential for a 60-day implementation window. First, employers must assess how comfortable they are with their exempt/non-exempt classifications and the reasoning behind them, and plan to re-evaluate where needed. Second, budgeting and cost scenarios for moving exempt positions to non-exempt, realigning duties, or increasing pay should be evaluated. Finally, internal messaging and communication plans should be in place to explain the changes, the reasoning behind them, and any new procedures.
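
For a sense of scale, the weekly figures reported in the session annualize as follows (simple arithmetic on the numbers above):

```python
# Annualizing the weekly salary-test thresholds reported in the session.
current_weekly = 455      # current FLSA salary test, per week
proposed_weekly = 970     # estimated 40th percentile for 2016, per week

print(f"current:  ${current_weekly * 52:,} per year")
print(f"proposed: ${proposed_weekly * 52:,} per year")
```

The proposed threshold would thus more than double the annualized salary floor for exemption.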


Novel Approaches for Enhancing Diversity Training Effectiveness in the Workplace

In this session, four presenters provided insights on diversity training. Three presented findings from academic research, and one provided information from an organizational context. A full 67% of organizations provide some form of diversity training, though research findings on the on-the-job impact of that training are mixed. One series of studies found that individuals who are high in social dominance orientation (i.e., a strong preference for hierarchy in a social system and for dominance over lower-status groups) tend to be more resistant to diversity training, but that this resistance can be mitigated when the training is endorsed by an executive leader. Another series of studies found that men are more likely to place importance on gender issues when those issues are raised by other men, and that this holds in both written and in-person contexts. A Google employee presented on the implicit (or unconscious) bias training Google has implemented as part of new-hire onboarding. The training focuses first on increasing awareness and understanding of the topic, providing a common language and initial suggestions for mitigation. Follow-up training has focused more on role-playing scenarios to cement the behavior-change and mitigation aspects, increasing employees’ comfort with calling out biases when and where they are observed.


Why Survey Data Fail – and What to Do About It

Panelists discussed their experiences conducting surveys, times when things went wrong, and recommendations for a successful survey. Anyone can use and develop a survey, but issues can arise when multiple stakeholders are involved, each with a different opinion. For this reason, it is important to communicate the purpose of the survey and how the results will be used. Branding can be beneficial to help develop awareness, generate interest, and increase participation. Positive changes implemented based on survey results can also lead to increased participation the following year. Additionally, it is important to research any null or opposite findings between survey iterations to give you a better understanding of any issues that may be present within your organization.

Panelists also addressed problems they have encountered when implementing results, including trying to do too much with the findings, or slicing the data so many ways that your results become less reliable. It was also emphasized that results should be presented in a way that leaves little room for subjective interpretation to avoid making conclusions that are not supported by the data.

Finally, the panel provided a few recommendations for a successful survey:

  • Make responding easy
  • Get people excited about data by telling a good story
  • Provide insights and summaries when reporting results
  • Make an effort to understand your audience in order to keep participants engaged year after year


Can Technology Like Deep Learning Eliminate Adverse Impact Forever?

This debate-style session posed the question of whether or not big data techniques (specifically deep learning or machine learning) could/should be used to eliminate adverse impact during selection. The panel included data scientists and I/O psychologists to present their perspectives. The I/O psychologists opposing this technique – including DCI’s Emilee Tison – presented the following high-level points:

  • The identification of adverse impact alone is not synonymous with illegal discrimination
    • The blind elimination of it may eliminate meaningful differences that exist due to legitimate job-related factors – impacting the validity of the selection procedure
    • Adverse impact is the prima facie standard for a disparate impact case; however, procedures that produce adverse impact have two additional considerations:
      • The job relatedness or business necessity of the procedure
      • The consideration of reasonable alternatives
  • Making selection decisions based on protected class status is illegal under the Civil Rights Act of 1991 and, as supported in recent case law, selection decisions should not be based on adverse impact alone (Ricci v. DeStefano, 2009)
  • Data scraping techniques – that learn and pull in factors to use in predicting important outcomes (such as information from Facebook) – call into question the job-relatedness of the selection procedure

In summary, the panelists came from very different perspectives and foundational knowledge bases; however, it was the start of what hopefully becomes meaningful cross-discipline dialogue.



By: Kayo Sady, Senior Consultant; Samantha Holland, Consultant; Brittany Dian, Associate Consultant; Dave Sharrer, Consultant; Kristen Pryor, Consultant; Rachel Gabbard, Associate Consultant; Joanna Colosimo, Senior Consultant; Emilee Tison, Senior Consultant; and Bryce Hansell, Associate Consultant at DCI Consulting Group 



A recently updated OFCCP infographic illustrates the jurisdictional thresholds triggering the requirement for contractors to comply with nondiscrimination and affirmative action requirements. Every five years, the Federal Acquisition Regulatory Council (FAR Council) is required to review the dollar threshold amounts in certain federal procurement-related laws to determine whether adjustments need to be made for inflation. The FAR Council does not review EO 11246 thresholds, but does determine thresholds for Section 503 and VEVRAA.

Effective October 1, 2010, the FAR Council implemented inflationary adjustments for Section 503, changing the supply and service contractor threshold for a written affirmative action plan (AAP) from $10,000 in contracts to $15,000. Effective October 1, 2015, inflationary adjustments were implemented for VEVRAA, changing the corresponding threshold from $100,000 in contracts to $150,000. Notably, this increase in the VEVRAA threshold applies to VETS-4212 filing requirements as well. Additional information on the effect of the FAR adjustment on these filing requirements can be found here.

By Jana Garman, Consultant, and Kristen Pryor, Consultant at DCI Consulting Group 


On March 16, 2016, the EEOC held a public hearing on its proposed revisions to the Employer Information Report (EEO-1). The hearing opened with statements from each of the EEOC commissioners. Following the opening statements, OFCCP Director Patricia Shiu and OFCCP’s Director of Policy and Program Development, Debra Carr, provided testimony on the collaboration between the agencies in preparing the proposal, and on how the proposed revisions to the EEO-1 will serve the purposes of OFCCP’s previously proposed Equal Pay Report without a separate reporting requirement.

The hearing then proceeded through three separate panels of five individuals each. Each panel began with brief introductions and summaries of the panelists’ written testimony, which had been provided in advance of the hearing and is available online. EEOC commissioners then took turns asking questions.

The invited panelists were almost evenly split between those strongly in favor of the proposed EEO-1 revisions (e.g., the NAACP, the US Women’s Chamber of Commerce, and academic researchers) and those with significant reservations (e.g., National Federation of Independent Business, SHRM, EEAC and US Chamber of Commerce). The reservations can be bucketed into four broad categories: issues with the EEOC’s burden estimate, issues with the type of data being collected, issues with the proposed use of the data, and confidentiality concerns.

As mentioned above, panelists voiced concerns to the EEOC commissioners about the burden, especially as it pertains to smaller employers (e.g., those with just over 100 employees). The EEOC commissioners seemed particularly open to considering calendar-year reporting, to reduce the burden of pulling W-2 data off-cycle. Regarding the type of data being collected, some panelists suggested that annualized base pay would be a better measure, as it would both reduce the burden of pulling data from multiple systems and decrease the error that W-2 data can introduce based on time with the company (i.e., if two employees earn exactly the same salary, but one started 2 months before the pull date and the other 10 months before, they would be erroneously reported in different pay bands using W-2 earnings, but not using annualized base pay).

DCI predicts this proposal will move forward and that the EEOC will likely try to finalize it by September. It seems increasingly likely that the agency may be willing to move to calendar-year reporting for EEO-1 reports to reduce burden, but it is still unclear how many of the other alternatives proposed in comments will be adopted. Remember, the EARLIEST this requirement would become effective is September 2017. Stay tuned!

By Kristen Pryor, Consultant, and Bryce Hansell, Associate Consultant at DCI Consulting Group 


Most scientific studies report “statistically significant” results in support of whatever effect they are examining, typically citing p-values less than .05 as sufficient evidence. This is true in adverse impact analyses as well. A previous blog in the statistical significance series outlined what the term means and mentioned an issue that we will focus on in this part of the series: identifying when chance may be at play.

In the most commonly used statistical models, a p-value indicates the probability of observing data at least as extreme as the data actually observed, assuming the absence of whatever effect is specified in a given hypothesis (for example, assuming no mean differences exist between protected classes). This is called null hypothesis testing. So, a p-value of .05 indicates that, if the null hypothesis were true (i.e., no mean differences between protected classes), there would be only a 5% chance of observing data at least this extreme.

Those are pretty good odds, and we typically take them: when p < .05, we conclude the observed pattern of data is “significant” evidence against the null hypothesis and, therefore, infer support for the hypothesized effect. Note that this “significant” evidence does not tell us why there are differences, only that we are fairly confident, based on the data, that differences exist.

Things get trickier when running multiple statistical tests on your data. The 5% margin of error on each test (i.e., using a p-value of .05) means that if you were to run 100 analyses, you’d expect to incorrectly conclude, based on the data, that the hypothesized effect exists about five times. In other words, some results may be false alarms (i.e., the pattern in the data may seem to be related to the hypothesized effect but is in fact due to random variation).  If you’re only running a few adverse impact analyses, this possibility is generally ignored. It’s a different story, however, when running proactive AAP analyses, where the number of individual analyses can easily run into the thousands!

For example, assume a company has 100 AAPs and 30 job groups. Running male/female adverse impact analyses on applicants alone could mean up to 3,000 analyses, and by the time analyses are also run on other employment actions such as promotions and terminations, the total can easily reach 15,000 analyses. Statistically speaking, we’d expect around 150 applicant analyses (3,000 x 5%) and 750 overall analyses (15,000 x 5%) to show potentially false positive results at a p-value of .05. A recent statement from the American Statistical Association (ASA) cautions that when multiple analyses are run, reporting only the p-values that surpass a significance threshold, without reporting the number, types, and hypotheses behind all statistical analyses run, makes those “reported p-values essentially uninterpretable.” So, how do we know whether significant results reflect real disparities or just the luck of the draw?
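
The arithmetic behind these expectations, plus the related family-wise error rate (the chance of seeing at least one false positive across a batch of tests), can be sketched as follows:

```python
# Arithmetic behind the expected false positives in the AAP example above.
alpha = 0.05
applicant_tests = 100 * 30          # 100 AAPs x 30 job groups = 3,000 tests
total_tests = 15_000                # all employment actions combined

expected_fp_applicants = applicant_tests * alpha
expected_fp_total = total_tests * alpha
print(f"{expected_fp_applicants:.0f} applicant-level and "
      f"{expected_fp_total:.0f} overall false positives expected")

# Family-wise error rate: the chance of at least one false positive across
# independent tests at alpha = .05 climbs quickly with the number of tests.
fwer_20 = 1 - (1 - alpha) ** 20
print(f"FWER with just 20 tests: {fwer_20:.2f}")
```

Even at a modest 20 tests, the odds of at least one false alarm are roughly two in three, which is why raw p-values become hard to interpret at AAP scale.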

One option, which OFCCP previously recommended in a 1979 Technical Advisory Committee (TAC) manual, is to apply a statistical correction that accounts for the Type I error rate (i.e., false alarms) when running repeated analyses. The Bonferroni correction is one of the most widely used, though there are others as well. These corrections work by adjusting the required significance level based on the total number of analyses run. For example, if 20 tests are run at a p-value of 0.05, the Bonferroni correction means significance would only be asserted when a p-value is less than or equal to 0.0025 (0.05 divided by 20).
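
A minimal sketch of the Bonferroni adjustment, using the 20-test example and a batch of hypothetical p-values:

```python
# Minimal Bonferroni adjustment: divide the overall significance level by the
# number of tests run (the example from the text: 20 tests at alpha = .05).

def bonferroni_threshold(alpha, num_tests):
    return alpha / num_tests

threshold = bonferroni_threshold(0.05, 20)
print(f"adjusted threshold: {threshold:.4f}")       # 0.0025

# Applying it to a batch of hypothetical p-values:
p_values = [0.001, 0.004, 0.03, 0.20]
flagged = [p for p in p_values if p <= threshold]
print(f"significant after correction: {flagged}")   # only 0.001 survives
```

Note that results significant at the conventional .05 level (here, 0.004 and 0.03) no longer clear the corrected bar, which is exactly the protection against false alarms the correction provides.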

The field has yet to converge on clear guidance as to which correction is best or when one must be used. However, knowing what “statistically significant” actually means, coupled with an appreciation of the practical realities of running hundreds or thousands of tests, will put you in a good position to ask the right questions when working through proactive AAPs.

By Kristen Pryor, Consultant, and Sam Holland, Consultant at DCI Consulting Group 


Nearly two full years have passed since the release of the revised Section 503 and VEVRAA regulations, which introduced a number of new requirements for federal contractors involving affirmative action for protected veterans and individuals with disabilities (IwDs). Although March 24, 2014 will go down in history as the date marking the start of the new requirements, many federal contractors began preparing for the changes well before then. Of the many changes underway, contractors voiced particular concern about the new requirement to formally evaluate efforts taken to attract and retain qualified IwDs and protected veterans, and questions about implementing the new requirement were largely unanswered by the regulatory text. For this reason, DCI Consulting Group developed a survey to engage the contractor community in a discussion of current practices and future plans for effectively evaluating outreach and recruitment for IwDs and protected veterans. DCI staff reviewed the contractor responses and compiled a comprehensive resource on contractor best practices for conducting an effective evaluation of outreach and recruitment of IwDs and protected veterans, released in this white paper.

By Joanna Colosimo, Senior Consultant; Rachel Gabbard, Associate Consultant; and Kristen Pryor, Consultant at DCI Consulting Group


OFCCP has released a new voluntary poster: Opening Doors of Opportunity for ALL Workers. The poster emphasizes the agency’s goals of diversity and equal opportunity, as well as expectations that federal contractors “must treat workers fairly and without discrimination” and “pay all workers fairly.”

Although the intention of this poster may be good and bring awareness to OFCCP’s mission, the use of fairness language may be misleading in comparison to actual regulatory requirements. For us here at DCI and other experts in selection and compensation practices, fairness could mean a wide variety of things depending on context. For example, fairness may speak to people’s perceptions of decisions, or processes used to arrive at decisions, in terms of how just they seem. This may be independent of the actual objective equal treatment of different individuals. Importantly, OFCCP’s regulations only require the latter, not the former.

Though this may seem like a matter of semantics, the distinction between fairness and non-discrimination is important to keep in mind. While what constitutes discrimination is very well-defined within the body of OFCCP regulations, finding a way to ensure the perception of fairness would be very difficult indeed.

By Samantha Holland, Consultant and Kristen Pryor, Consultant at DCI Consulting Group


The new California Fair Pay Act (CFPA) is summarized in depth in a previous blog in this series. The CFPA includes strict stipulations for employing “[a] bona fide factor other than sex, such as education, training, or experience” to explain wage differentials. If we look at the key components of the requirements, excerpted below, we see the CFPA is really just calling for a validation process for compensation factors.

“This factor shall apply only if the employer demonstrates that the factor is not based on or derived from a sex-based differential in compensation, is job related with respect to the position in question, and is consistent with a business necessity… This defense shall not apply if the employee demonstrates that an alternative business practice exists that would serve the same business purpose without producing the wage differential.”

Demonstrating validity for a selection procedure is a core tenet of defensible employer selection decisions (see here and here for more on that). But how does the concept of selection procedure validation translate to the compensation arena, in light of the CFPA?

Let’s consider educational background, a commonly used pay factor. Education may be a clear “bona fide factor” for jobs where a degree is required to practice (e.g., MD for doctors), but what about for other jobs for which degrees are simply considered beneficial (e.g., MBAs for managers)? For the latter, the CFPA may require a rigorous analysis linking education to core job duties or outcomes to justify its use as a valid pay factor. Furthermore, it is unclear if establishing job-relatedness is enough to also satisfy the business necessity component, or if an employee could put forth a reasonable alternatives argument that certain experience or certifications would be comparable to the educational background, meeting the same business purpose.
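To make the job-relatedness question concrete, here is a hypothetical toy illustration (made-up numbers, not the statute's prescribed analysis): compare the raw male/female pay gap to the gap that remains within education levels. If the gap largely disappears within levels, education rather than sex is driving the differential; if it persists, education alone cannot explain it.

```python
from collections import defaultdict

# Hypothetical toy data: (sex, education, pay). Numbers are invented
# purely to illustrate the comparison, not drawn from any real case.
employees = [
    ("F", "BA", 50_000), ("M", "BA", 51_000),
    ("F", "MBA", 60_000), ("M", "MBA", 61_000),
    ("F", "BA", 49_000), ("M", "MBA", 62_000),
]

def mean(values):
    return sum(values) / len(values)

def pay_gap(rows):
    """Mean male pay minus mean female pay for the given rows."""
    by_sex = defaultdict(list)
    for sex, _edu, pay in rows:
        by_sex[sex].append(pay)
    return mean(by_sex["M"]) - mean(by_sex["F"])

def within_education_gap(rows):
    """Average of the male/female gaps computed within each education level."""
    by_edu = defaultdict(list)
    for row in rows:
        by_edu[row[1]].append(row)
    return mean([pay_gap(group) for group in by_edu.values()])

print(pay_gap(employees))               # raw gap: 5000.0
print(within_education_gap(employees))  # within-level gap: 1500.0
```

In this made-up example, most of the raw gap is attributable to education, but a residual within-level gap remains, which is exactly the kind of pattern that would invite scrutiny under the CFPA.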

Demonstrating the validity of pay factors has come up before in the EEO context. As an example, see Jock et al. v. Sterling Jewelers (Part 1 and Part 2), where an arbitrator recently certified a class for a disparate impact claim alleging that the job experience factors used to set starting salary were not job-related. The merits of the claim have yet to be decided. It appears that the CFPA will increase the challenges for California employers. We are all still working to understand how exactly the courts will interpret the CFPA, so stay tuned.

By Samantha Holland, Consultant and Kristen Pryor, Consultant at DCI Consulting Group


In 2013, when the VEVRAA implementing regulations were revised, many contractors were confused by the “active duty wartime or campaign badge” category. The regulations define an “active duty wartime or campaign badge veteran” as a veteran who served on active duty in the U.S. military, ground, naval, or air service during a war, or in a campaign or expedition for which a campaign badge has been authorized under the laws administered by the Department of Defense. The confusion stems from the fact that Congress has not declared “war” since World War II, though there have been several “periods of war” since that time. Absent a clear definition, different federal agencies apply different definitions depending on the purpose (e.g., OPM’s definition for veterans’ preference vs. the Department of Veterans Affairs’ definitions for certain benefits). In its new infographic, OFCCP revised the language of the active duty wartime definition to “Did you serve on active duty during one or more of the periods of war outlined in 38 U.S.C. 101?”

Our friends at the Equal Employment Advisory Council (EEAC) recently pointed out that OFCCP’s infographic for determining if you are a “protected veteran” has adopted the “period of war” phraseology versus the more narrowly defined “during a war”. Under this guidance, the infographic states that individuals who served on active duty during the following “periods of war” are covered:

  • Korean Conflict (June 27, 1950-January 31, 1955),
  • Vietnam Era (February 28, 1961-May 7, 1975 for veterans serving in the Republic of Vietnam or August 5, 1964-May 7, 1975 for all others), and
  • Persian Gulf War (August 2, 1990-present).

The other campaigns and expeditions for which a campaign badge was authorized also still apply to this category, but are likely redundant for most veterans seeking employment. The practical impacts of this change include: 1) based upon this broader reading, the “recently separated” category has been moot for almost 25 years; 2) “Vietnam Era” veterans are still covered (despite the rescission of 41 CFR 60-250); and 3) protected veteran representation has likely been under-reported by contractors on the VETS-4212 and in VEVRAA 44k analytics. The third point stems from the likelihood that applicants and employees applied the “active duty wartime or campaign badge” category more strictly than OFCCP apparently intended.

The next question is whether contractors will want to revise their self-identification forms yet again to clarify the broader definition.

By Dave Cohen, President, and Kristen Pryor, Consultant at DCI Consulting Group 


In previous years, DCI has noted that the end of the federal government’s fiscal year tends to be a busy time for OFCCP settlements. In our 2014 blog, we identified six settlements with press releases in the month of September alone. It is interesting, then, that there were only seven settlements announced via press release in the entire January–September 2015 timeframe. The table below outlines the 2015 settlements published in press releases on the OFCCP website through September.

Along with the reduction in settlements published in press releases, there was also a reduction in the number of new audit scheduling letters received in 2015. Though the number of new audits is down, we are seeing more active and intense audits. This is likely due both to the increased volume of information required for the initial audit submission, per the revised scheduling letter, and to the increase in robust and detailed follow-up data requests.


Contractor and OFCCP Allegation(s) (Impacted Group):

  • United Mail Services: Failure to Hire (African Americans)
  • Savannah River Nuclear Solutions: Pay Discrimination (Women and African Americans)
  • Lac Pac Manufacturing: Failure to Hire (Whites and African Americans)
  • Oral Arts Laboratory, Inc.: Failure to Hire (Women, Men, and African Americans)
  • Steering (Women) and Failure to Hire (African Americans, Asians, and Hispanics)
  • Lahey Clinic: Pay Discrimination (Women)
  • Johns Hopkins University: Harassment/Retaliation (African American Women – complaint based) and Pay Discrimination (Women)


By Kristen Pryor, Consultant and Brittany Dian, Analyst at DCI Consulting Group



Really, I Come Here for the Food: Sex as a BFOQ for Restaurant Servers

Michael Aamodt, Principal Consultant at DCI Consulting Group, wrote an article featured in SIOP’s TIP publication, January 2017.