
Emilee B. Tison, Ph.D.

Senior Consultant
Emilee B. Tison is a Senior Consultant at DCI Consulting Group, where she is involved in employee selection and equal employment consulting. Emilee's primary areas of expertise are in employment testing, job analysis and validation strategies, and quantitative methods in the equal employment context.

Prior to joining DCI Consulting Group, Emilee worked at the U.S. Office of Personnel Management (OPM) as a Personnel Research Psychologist in the Selection and Promotion Assessment Group. At OPM, Emilee led job analysis/competency modeling, gap analysis, and assessment development projects; delivered trainings, including assessor/interviewer training; and conducted adverse impact analyses.

Emilee received her M.S. and Ph.D. in Industrial/Organizational Psychology from Virginia Polytechnic Institute and State University (Virginia Tech).

Emilee B. Tison’s Recent Posts

The 32nd Annual Conference for the Society of Industrial and Organizational Psychology (SIOP) was held April 26-29, 2017 in Orlando, Florida. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and share information.

Many sessions cover topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations concerning individuals with disabilities.

DCI Consulting Group staff members were well represented in a number of high-profile SIOP presentations and also attended a variety of other sessions worth sharing. Notable session summaries and highlights can be found below; you may use the list below to navigate to a particular summary.

Analytics has a Seat at the Table: Now What?

A panel of I/O practitioners discussed considerations and challenges when building workforce analytics functions, including critical partnerships, cultural change, understanding stakeholders, and maintaining a business focus. These experts shared tips and tricks for implementing workforce analytics in their organizations, including:

  • Build an understanding of the systems, stakeholders, and IT infrastructure in order to shape data inputs and pulls and improve data output
  • If organizations are siloed, it’s important to make connections across groups to ensure you can make the case that workforce analytics are needed
  • The role of the HR Business Partner is changing – analytics will be a part of their role, if not already. Companies can provide skill-building workshops to ensure knowledge and skills are appropriate, and online statistics courses can also be utilized.
  • If certain HR groups are opposed to utilizing analytics, advertise the benefit to HR stakeholders
    • Frontline HR: analytics can streamline work, provide better results, give better insight into workforce, candidates, etc.
    • Strategic HR: analytics can help balance a culture of constant deliverables, define the relationship between leadership and analytics, and translate vision into action.
    • HR leadership: analytics can build the business case, define ROI, provide a common language
  • If rolling out a new process or program, use pilots to your advantage to make the business case, design research for short- and long-term results, and build two-way feedback loops with stakeholders. Leverage interviews to round out research and implementation – know your users and stay up-to-date on what’s important.

Burden of Proof: Can I-Os and Employment Counsel Successfully Collaborate?

A panel of labor attorneys and I/O psychologists, both practitioners and educators, spoke on the complexities of working on employment law issues and challenges in organizations. Listed below are several recommendations, mostly focused on considerations when developing, validating, or selecting an (off-the-shelf) test.

  • Federal agencies are focused on the search for less adverse alternatives for tests. I/O panelists shared that addressing this search fully in the technical report has been a challenge, especially with vendors who offer only one or a few tests and have no alternatives available.
  • Panelists cautioned about using automated resume screens and machine learning. Without perfect correlations, there’s a high likelihood that at least one poor resume will make it through – this can limit the defensibility of the process. The tools could also be weighting words like “church” or “Africa,” which would create risk. And how do you validate a test that is constantly changing? A panelist from PepsiCo noted that they opted not to do this – they don’t want to be the first in the courtroom.
  • Panelists also cautioned about working with vendors who specialize in gaming assessments – these vendors often don’t have I/O or validation knowledge and skill set.
  • Panelists recommended that organizations get legal involved early when selecting a vendor and/or new test – involve them in the RFP process.
  • It was also noted not to forget about international considerations, especially data privacy. It is recommended that the organization involve counsel from each country.

Making Better Business Decisions? Risks and Rewards in Big Data      

This session was moderated by DCI’s Dr. Emilee Tison and highlighted both the risks and rewards of using big data techniques to inform employment decisions. As data analytic techniques continue to evolve and incorporate increasingly sophisticated methodologies, employers are cautioned against using such techniques with little to no transparency into how or why results are calculated. Although big data approaches to employment decision making can offer great benefits in terms of overall cost, time constraints, more positive candidate experiences, and better statistics, it is imperative for employers to weigh these benefits against legal and practical considerations. Such considerations include privacy/confidentiality concerns, as well as the increased potential for such techniques to lead to adverse impact against protected groups if variables are not fully validated or researched at the outset.

Big data techniques are only increasing in popularity and will only continue to evolve at a rapid pace moving forward. Although these techniques can seem very appealing to an employer in informing decisions on the front-end, they can prove very difficult to defend in court at a later date. For this reason, companies are advised to be cautious in implementing any new and improved techniques, as they will be asked to explain how the information being used is job-related. The major takeaway here is for employers to ensure they have the right justification for what they are doing before they incorporate any big data approach into business decisions.
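The job-relatedness concern above usually starts with a simple adverse impact screen. As a rough, hypothetical illustration (the applicant counts below are invented, and the four-fifths rule is only a rule of thumb, not a definitive legal test), such a check might look like:

```python
# Hypothetical four-fifths (80%) rule screen; the applicant counts are
# invented for illustration and do not come from any real employer.

def impact_ratio(rate_focal, rate_reference):
    """Ratio of the focal group's selection rate to the reference group's."""
    return rate_focal / rate_reference

rate_ref = 60 / 120    # reference group: 60 of 120 applicants selected
rate_focal = 30 / 80   # focal group: 30 of 80 applicants selected

ratio = impact_ratio(rate_focal, rate_ref)

# A ratio below 0.80 is commonly treated as a flag for potential adverse impact
flagged = ratio < 0.80
print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")  # impact ratio = 0.75, flagged = True
```

A flag from a screen like this is only the starting point; as the panelists stressed, the harder work is documenting why the technique producing the rates is job-related.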

Solving the Law Enforcement Staffing Crisis

DCI’s Dr. Michael Aamodt, together with representatives from both the U.S. Secret Service and U.S. Customs and Border Protection, led an open discussion on the challenges that many law enforcement agencies are experiencing today in both attracting qualified candidates to their organizations and meeting demanding staffing needs. Oftentimes, demand is high, but agencies struggle to fill positions as a result of small applicant pools and applicants who do not pass the background check stage in the process.

Discussions focused primarily on causes of the applicant shortage (e.g., working conditions, job location, and strict policies/requirements), strategies for assessment and recruitment, and changes that may be warranted with regard to the background check process. It was also suggested that agencies review their current policies and procedures, and where possible update strict policies that appear to exclude otherwise qualified applicants (e.g., relaxing a strict tattoo or piercing policy).

I/O’s Role in Advancing HR in the Big Data Charge

Panelists in this session included DCI’s Dr. Eric Dunleavy and others from diverse backgrounds in both applied research and analytics departments who discussed recommendations with regard to how the I/O community can advance the current state of human resources management.

As big data continues to become increasingly prevalent in the world of HR, I/O psychologists find themselves in a position to lead the big data charge and contribute their knowledge and expertise in this realm. Companies are tasked with balancing a great deal of risk associated with the use of big data techniques in the employment decision making process, and establishing meaningful and legally defensible models requires a lot of human touch and input. New tools may make compiling and analyzing big data simple, but the tools themselves don’t tell us why something occurred and what we should do based on those results. This fact represents an opportunity for I/O psychologists to assist HR professionals in turning results into actionable information.

Optimizing Validity/Diversity Tradeoffs in Employee Selection

The session entitled “Optimizing Validity/Diversity Tradeoffs in Employee Selection” included three presentations that discussed alternative ways to handle the tradeoff between selection procedure validity and adverse impact. Typically, high validity is associated with high adverse impact, and this session focused on ways to maximize validity while keeping adverse impact at a tolerable level. Methods included algorithms to identify biodata scoring systems that balance this tradeoff, Pareto-optimal methods for weighting the selection components in a composite, and methods to estimate the extent to which Pareto-optimal weights established in one sample generalize to other samples.
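The weighting tradeoff described above can be sketched numerically. In the toy example below, every number (the criterion validities, subgroup differences, and predictor intercorrelation) is a hypothetical round value chosen for illustration, not an estimate from the session:

```python
import math

# Toy sketch of the validity/adverse-impact tradeoff when weighting two
# predictors into a composite. All numbers are hypothetical illustrations.
r1y, r2y = 0.50, 0.30   # criterion validities: e.g., cognitive test, personality test
d1, d2 = 1.0, 0.2       # standardized subgroup differences (d) on each predictor
r12 = 0.10              # intercorrelation between the two predictors

def composite_stats(w):
    """Validity and subgroup d of the composite w*X1 + (1-w)*X2 (unit-variance predictors)."""
    w1, w2 = w, 1.0 - w
    sd = math.sqrt(w1**2 + w2**2 + 2 * w1 * w2 * r12)  # SD of the weighted composite
    validity = (w1 * r1y + w2 * r2y) / sd
    d = (w1 * d1 + w2 * d2) / sd
    return validity, d

# Trace the curve: the composite subgroup difference grows steadily with the
# cognitive weight, while validity peaks partway along, so some weightings
# dominate others -- the non-dominated set is the Pareto-optimal frontier.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    v, d = composite_stats(w)
    print(f"w = {w:.2f}  validity = {v:.3f}  subgroup d = {d:.3f}")
```

With these particular numbers, putting all weight on the high-validity predictor is itself dominated: a mixed weighting achieves higher validity with a lower subgroup difference, which is exactly the kind of point Pareto-optimal weighting methods search for.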

This session highlighted the challenges inherent in developing a selection system that is both highly predictive and free of subgroup differences. Art Gutman, the discussant for this session and a frequent contributor to DCI’s blog, commented on the approaches presented within the context of legal precedent. He noted that while these approaches may seem reasonable from an academic/research perspective, there may be sizeable hurdles to overcome in the courtroom. DCI will be on the lookout for the utilization of these approaches.

O*NET Based Research: Leading Edge or Wasted Opportunity?

This symposium entitled “O*NET Based Research: Leading Edge or Wasted Opportunity?” showcased novel uses of O*NET data. A presentation by DCI’s Dr. Kayo Sady examined the importance of various personality characteristics in predicting salary across different industries. For example, he found that extraversion is highly valued in the tech industry. Dr. Sam Holland presented a tool that explored O*NET data from a network perspective to help job seekers find jobs that offer many potential career options. Using this tool, one can identify both the most advantageously situated jobs and the patterns of characteristics associated with those jobs.

Leading the Charge: IGNITING Veteran–Workforce Integration Solutions

A diverse panel consisting of representatives from academia, the SHRM Foundation, the military, consulting, and employer perspectives led a discussion on a specific point in a veteran’s transition into civilian work life. Challenges facing veteran transitions are broad in nature, and the panel discussed the lack of available data to assist with researching veteran outcomes as they transition to civilian life. Resources to help recruit and retain veterans have been published by the SHRM Foundation.  Future research is underway to help understand retention challenges for veterans in organizations.

Annual EEOC/OFCCP Practitioner Update

DCI’s Mike Aamodt and Joanna Colosimo were joined by colleagues from Fortney Scott, LLC and Capital One to update the SIOP community on current EEOC and OFCCP enforcement trends and implications from the presidential election. The panel focused on current pay equity enforcement trends, strategic outreach and recruitment for protected groups, and selection issues from an EEO perspective.  Best practice takeaways from the session highlighted the importance of collaborating with legal counsel, conducting proactive pay equity studies, and proactively monitoring the effectiveness of selection, recruitment and outreach programs on protected groups.

Mentoring for Women in I/O: Career Changes, Interruptions, and Transitions

In a moderated panel session, the presenters discussed issues for women in I/O that arise due to non-linear career trajectories. For example, job changes often are seen as resulting from indecision rather than strategy. Panelists and moderators were:

  • Silvia Bonaccio – University of Ottawa
  • Irini Kokkinou – SCAD
  • Kea Kerich – Marriott International
  • Alison L. O’Malley – Deere & Company World Headquarters
  • Tatana M. Olson – United States Navy
  • Kristen M. Shockley – University of Georgia
  • Jane Wu – IBM
  • Lynda Zugec – The Workforce Consultants

The primary focus of the session was on anecdotes of the panelists’ own career trajectories. Additionally, the panelists were asked to respond to the following questions:

  1. What factors led you to a non-linear career path?
  2. What challenges did you face in pursuing a non-linear career path? How did you handle these challenges?
  3. What opportunities resulted from your non-linear career path?
  4. What skills did you develop from implementing your career change(s), interruption(s), or transition(s)?
  5. What advice do you have for graduate students going on the job market or for more experienced professionals considering interrupting/changing careers?

In the final portion of this session, panelists met with groups of audience participants to discuss in more detail their experiences and advice.

Innovative Adverse Impact Analysis

This expert panel covered a variety of topics related to complex adverse impact analyses. The panelists shared innovative approaches to constructing analytics when responding to intricate personnel decision-making situations. Moderators of the panel were Scott B. Morris (Illinois Institute of Technology) and Eric M. Dunleavy (DCI Consulting Group). Panelist topics included the following:

  • Donald R. Deere (Welch Consulting) discussed options for handling non-neutral analyses of cases in which multiple applicant records exist for a single candidate.
  • Daniel C. Kuang (Biddle Consulting Group) discussed the use of composition analysis through binomial statistics to measure the difference between the percentage selected and the percentage expected.
  • Fred Oswald (Rice University) discussed alternatives to the impact ratio, including the odds ratio, Phi, the absolute selection rate difference, Cohen’s h, and shortfall.
  • Richard F. Tonowski (U.S. Equal Employment Opportunity Commission) discussed the utility of measuring practical significance, sharing examples of when the addition of practical significance is critical to cases seen by the EEOC.
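To make the alternative measures concrete, here is a small, hypothetical illustration; the applicant counts are invented, and the formulas are the standard textbook versions rather than anything specific the panelists presented:

```python
import math

# Hypothetical applicant-flow counts; invented purely for illustration.
hired_ref, total_ref = 30, 100   # reference group
hired_foc, total_foc = 18, 90    # focal group

p_ref = hired_ref / total_ref    # 0.30 selection rate
p_foc = hired_foc / total_foc    # 0.20 selection rate

impact_ratio = p_foc / p_ref                                  # classic 4/5ths statistic
rate_diff = p_ref - p_foc                                     # absolute selection rate difference
odds_ratio = (p_foc / (1 - p_foc)) / (p_ref / (1 - p_ref))    # odds of selection, focal vs. reference
cohens_h = 2 * math.asin(math.sqrt(p_foc)) - 2 * math.asin(math.sqrt(p_ref))

# Phi coefficient from the full 2x2 table (group x selected)
phi = (hired_foc * (total_ref - hired_ref) - (total_foc - hired_foc) * hired_ref) / math.sqrt(
    total_foc * total_ref * (hired_foc + hired_ref)
    * (total_foc - hired_foc + total_ref - hired_ref))

# Shortfall: additional focal-group hires needed to reach the overall selection rate
overall_rate = (hired_ref + hired_foc) / (total_ref + total_foc)
shortfall = overall_rate * total_foc - hired_foc

print(f"impact ratio = {impact_ratio:.3f}")   # 0.667
print(f"rate difference = {rate_diff:.3f}")   # 0.100
print(f"odds ratio = {odds_ratio:.3f}")       # 0.583
print(f"Cohen's h = {cohens_h:.3f}")          # -0.232
print(f"phi = {phi:.3f}")                     # -0.115
print(f"shortfall = {shortfall:.2f}")         # 4.74
```

As the panel noted, these measures can lead to different conclusions on the same data, which is one reason statistical and practical significance are usually examined together.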

Alternative Session:  New Directions:  Enhancing Diversity and Inclusion Research and Practice

In a thought-provoking alternative session entitled “New Directions: Enhancing Diversity and Inclusion Research and Practice,” five scholars and practitioners took the stage to discuss the current state of diversity and inclusion research and how to better align research with practice. The following quotation, cited twice during the session, really resonated: “Despite a few new bells and whistles, courtesy of big data, companies are basically doubling down on the same diversity and inclusion approaches they’ve used since the 1960s” (Frank Dobbin, Harvard University; Alexandra Kalev, Tel Aviv University). Dr. Alice Eagly, Northwestern University, argued that there is currently a large gap between what research shows and the generalizations policy makers and practitioners are using. She challenged researchers and practitioners to be better, more honest stewards of diversity and inclusion knowledge so that “fake news”-esque generalizations propagated by advocacy groups, in particular, do not hinder forward progress in this field.

Up next was Julie Nugent, Vice President of Research at Catalyst, who led a discussion on what inclusion and exclusion feels like for employees in organizations. The Catalyst study found that employees feel included when they are valued for their specific contributions (uniqueness) and are welcomed among their peers (belongingness). In contrast, feelings of exclusion in employees arise if they are devalued/dismissed for their unique characteristics, especially their gender, race/ethnicity, nationality, age, religion, and sexual orientation.

Last, Dr. Gabriela Burlacu, SAP SuccessFactors, wrapped up the session by explaining how each step of the employee life cycle and the personnel decisions made therein, from applying to being hired, paid, trained, promoted, and terminated, can be better tracked and managed by technology – technology that when used appropriately can mitigate the threat of unconscious bias.  She spoke of “technology nudges,” such as defaulting bonuses and pay increases to absolute values rather than percentage increases based on base salary.

At DCI, mitigating unconscious bias through the creation and administration of structured personnel systems is something we assist our clients with every day, especially as it relates to EEO risk. We will continue to follow developments related to this type of technology, as well as other forms of “big data,” used to make personnel decisions and keep you posted with our recommendations.

Symposium/Forum:  Novel Workplace Diversity Interventions:  Field Experiments with Promising Results

In this well-attended session entitled “Novel Workplace Diversity Interventions:  Field Experiments with Promising Results,” five researchers and practitioners presented on the effectiveness of four field experiments in promoting positive diversity-related outcomes and improving diversity management in organizations. Dr. Alex Lindsey’s research focused on diversity interventions such as perspective taking (to produce empathy), goal setting (to increase internal motivation) and reflection (to produce guilt) and their effects on pro-diversity attitudes and behaviors. He found that the reflection intervention was most effective in increasing internal motivation and pro-diversity behaviors, but it also promoted anger and frustration a week later. Dr. Lindsey admitted future research (perhaps into more hybrid reflection and goal-setting activities) might be necessary to reach resistant diversity trainees in organizations.

In another example, Jose David and Carolyn Fotouhi from Merck presented on their company-wide women’s sponsorship program. Jose explained that Merck had recently come out with key healthcare products in oncology, HPV, and insomnia, but sales were lagging, so the company decided to revamp its operating model. After some internal research, Merck found that women comprised less than one-third of incumbents in critical roles and only slightly over one-third of incumbents in director-level roles, yet women make 90% of healthcare expenditure decisions and make up more than 50% of the patient base. Thus, Merck developed an advancement-focused women’s sponsorship program that allowed women protégés of all ranks to interface with women in leadership positions through one-on-one virtual sessions and networking circles.  They found that women protégés experienced a higher rate of internal movement (9.5% more women in critical roles and 3.5% more women in director-level roles) and greater representation in succession planning slates. It will be interesting to see whether increases in female representation in critical/director-level roles translate into increased key product sales. Perhaps only time will tell.

Caught on Video: Best Practices in One-Way Interviewing

The “Caught on Video: Best Practices in One-Way Interviewing” session kicked off with a definition of one-way interviewing and how it differs from traditional two-way interviewing.  In a nutshell, one-way interviewing is the practice of using video recording to capture applicants’ responses to interview questions so that they can be scored at a later time.

One-way interviewing is still relatively new and not widespread in practice.  Therefore, the panelists recommended some best practices based on their experiences, including:

  • Filming actual recruiters (as opposed to actors playing recruiters) asking the interview questions
  • Creating a behavioral indicators checklist for recruiters to quantitatively and systematically rate applicants
  • Developing questions through a rigorous process which may include a job analysis
  • Continually rotating questions to prevent question-sharing among applicants

According to the panelists, initial reactions from applicants have been positive.  For example, they like the flexibility of one-way interviewing.  Recruiters also enjoy the method for its flexibility (some recruiters watch the videos while on the treadmill!) and like that the interviews have a clear scoring rubric.

What is Machine Learning? Foundations and Introduction to Useful Methods

The session entitled “What is Machine Learning? Foundations and Introduction to Useful Methods” was targeted at individuals with a basic to intermediate understanding of machine learning.  Supervised vs. unsupervised machine learning models were discussed.  As a best practice, the panelists recommended cross-validation to estimate R² shrinkage.  Optimally, the data would be broken into training, validation, and test sets so that the researcher may develop, train, and then test the final model.

There are several important concerns that may impact machine learning models.  For example, overfitting occurs when the model predicts data too well for the sample and does not generalize.  Another concern with machine learning is the bias/variance tradeoff.  This was likened to reliability/validity in that high reliability is akin to low variance and high validity is akin to low bias.  Finally, the curse of dimensionality refers to the fact that more predictors require a bigger sample.  As tempting as it may be to add many predictors, it’s prudent to keep in mind what your sample size is when building a machine learning model.
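The cross-validation and shrinkage ideas above can be sketched in a few lines. This toy example uses synthetic data and plain least squares (it is not anything presented in the session); the point is simply that the held-out R² is the honest estimate, and the drop from the training R² is the shrinkage:

```python
import random
import statistics

# Synthetic data: y = 2x + noise. Nothing here comes from the talk itself.
random.seed(42)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2 * x + random.gauss(0, 4) for x in xs]

# Split: first 100 points to train on, remaining 100 held out
train_x, test_x = xs[:100], xs[100:]
train_y, test_y = ys[:100], ys[100:]

def fit_line(x, y):
    """Ordinary least squares slope and intercept."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Proportion of variance in y explained by the fitted line."""
    my = statistics.fmean(y)
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

slope, intercept = fit_line(train_x, train_y)
r2_train = r_squared(train_x, train_y, slope, intercept)
r2_test = r_squared(test_x, test_y, slope, intercept)

# The drop from r2_train to r2_test is the shrinkage the panelists described;
# with many predictors and a small sample, the drop gets much larger.
print(f"train R^2 = {r2_train:.3f}   holdout R^2 = {r2_test:.3f}")
```

With one predictor and 100 training points the shrinkage here is small; the curse of dimensionality means it grows quickly as predictors are added without more data.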

Applicant Reactions During Selection: Overview and Prelude to a Review

With the rise in technology-based selection tools, the panelists in the “Applicant Reactions During Selection: Overview and Prelude to a Review” session claim that research on applicant reactions (ARs) has not kept pace with the advances in the types of selection tools used in practice.  Research results suggest that applicants favor technology-based testing over traditional media (e.g., Potosky & Bobko, 2004).  However, when it comes to face-to-face interactions (e.g., interviews), individuals still prefer they be in-person without a technology interface (Straus, Miles, & Levesque, 2001).

Another topic centered on why employers should be concerned with ARs.  The argument is that because ARs are not directly tied to measurable performance on the job for hired employees, it may not matter to employers how applicants perceive the selection process.  However, the panelists argue that ARs do matter.  Further, AR research has been limited in that there are several moderating variables, including selection context variables (e.g., hiring expectations and selection ratio, job desirability), organizational context variables (e.g., organizational size), and individual-level variables (e.g., personality) that have not been examined but that may significantly impact ARs in differing ways.

The Pre-conference Master’s Consortium: Advantages & Strategies for Building Your Business Acumen

A pre-conference session on “Advantages & Strategies for Building Your Business Acumen” was presented by Keli Wilson of DCI Consulting Group. The aim of this talk was to help I-O psychologists entering the workforce understand the following three concepts: (1) the bottom line; (2) business development; and (3) the request for proposal process.

Specifically for the bottom-line discussion, information was shared regarding how to communicate and tie organizational outcomes to savings (e.g., minimizing turnover, mitigating risks such as discrimination claims by understanding the mission of EEOC and OFCCP, and leveraging press releases of other companies to demonstrate purpose and savings for embracing I-O practices).

As I-O psychologists advance in their careers and understand the organization they work for or consult with, there may be opportunities to identify potential gaps and communicate opportunities for efficiencies and business growth. A high-level overview of what this process entails was covered during the session (e.g., conduct market research, recognize stakeholders, have a strategic vision, prepare and implement a business plan, market and create buy-in). The guidance provided was to hone presentation skills and learn how to pitch and sell ideas.

Finally, given that I-O psychologists commonly go into consulting careers, there is a need to understand the request for proposal process in order to win and partner with clients on projects. An overview of the typical stages of a proposal process was shared with the graduating students (e.g., sales call, proposal, interview, question/answer, sales demo, pricing negotiation, and signed contract). It was discussed that the scope of the project within the proposal should clearly state the identified problem and provide a proposed solution. Additionally, a tutorial on proposal pricing was provided in this session (e.g., pros/cons of hourly, daily, or project-based pricing).

In summary, students graduating with an I-O Master’s degree were exposed to business aspects that may not be covered in I-O graduate programs.

Industry Differences in Talent Acquisition

This SIOP conference session was led by a panel of speakers from various organizations: Jenna C. Cox, IBM; Amanda Klabzub, IBM; Mary Amundson, Land O’Lakes; Jennifer M. Dembowski, The Home Depot; Nicole Ennen, Google; Hailey A. Herleman, IBM; and Lisa Malley, DDI. The focus of the discussion was on the similarities and differences of talent acquisition across industries. The similarity across industries lies in attracting and selecting the talent that supports business initiatives, but differences were noted in the scarcity of talent and the need to grow specific talent (e.g., through educational programs), as well as the geography in which the company operates (e.g., the talent mix in different markets). Also, low unemployment rates may be great for job candidates but not for retailers, because they shrink qualified applicant pools. It was shared that manufacturing jobs that pay very well can be difficult to fill due to unappealing schedules, unpredictable work, and often dirty working conditions. Furthermore, it was communicated that agriculture is very hard to staff, particularly middle management. As for the tech industry, it was stated that you may need to source candidates more than in other industries, because some of the most qualified and best people for the job already have jobs.

The panelists mentioned that a key differentiator in attracting talent is organizational culture and branding. In addition, the panelists touched on the candidate experience and how they strive to bring down the median days to selection. Some examples of how this was achieved were cutting out layers of approval, using selection tools, and trimming down the number of interviews as long as fewer interviews were just as predictive. With regard to the candidate experience, companies want candidates to come back to them, and to understand what went well and what can be improved, which is gathered through a candidate experience survey.

Finally, a way to make your company more appealing to candidates seeking employment is to focus on the career site (e.g., social life at company and organizational culture). The goal would be to make it easy to use the career site and to allow candidates to find the jobs that they want to apply for. When it comes to the selection process, it was shared that having a hiring committee can help mitigate individual unconscious bias in the hiring process (e.g., come in and review all the materials of the entire process and feedback/scores from structured interviews).

“That Company is Great!” Best Practices for Improving Candidate Experience

This SIOP conference session was panel style with the following guest speakers: Brittany J. Marcus-Blank, University of Minnesota; Sarah A. Brock, Johnson & Johnson; Pamela Congemi, Medtronic; Jim Matchen, Target Corporation; Marina Pearce, Ford Motor Company; and Amy Powell Yost, Capital One. The topic of discussion was on how to create a positive candidate experience.

Taking action to improve candidate experiences not only helps to secure top talent, but also benefits brand loyalty. The following are some of the practices the panelists shared for improving the candidate experience:

  • setting candidate expectations (i.e., transparency of the selection process and communication of their status at each step);
  • calling each declined candidate (i.e., receive a personal call from talent acquisition);
  • being aware of the physical space in which interviewees will spend time (e.g., have an identified space where candidates will not be distracted but will be impressed by what they can see of the company);
  • picking the candidate up from the airport;
  • planning a welcome session and/or tour of facility;
  • assigning an onsite coordinator to welcome the candidate being interviewed;
  • providing the candidate with information about the interviewer(s) beforehand;
  • empowering recruiters to use discretion in sending gifts on behalf of the company (e.g., send a small gift to someone who just graduated with an MA degree or who just dropped out of the selection process due to a devastating life event);
  • implementing a candidate reaction survey;
  • monitoring Glassdoor and similar websites to discover feedback; and
  • training employees on good behavior (e.g., actions that will be appreciated by the candidates, such as a recruiter remembering their name).

Physical Abilities Testing: Lessons Learned in Test Development and Validation

This panel, which included DCI’s Emilee Tison, discussed unique challenges associated with physical abilities testing. Panelists identified challenges encountered in the field and shared lessons learned in this area of work. Specifically, the presenters addressed the following topics:

  • Test Development – how developing a physical abilities test is different from other types of selection tests
  • Adverse Impact – typical assumptions of existing sub-group differences and methodologies to reduce adverse impact
  • Validation – strategies typically used for physical abilities tests and how this differs from other types of selection tests
  • Criteria for Validation – typical criteria used for criterion-related validation evidence and challenges faced during these analyses

Panelists cautioned organizations to ensure a full understanding of the physical requirements of the job as well as the types of physical abilities tests available for implementation. Physical abilities testing is not a ‘one-size-fits-all’ process; considering it as such increases risk of a mismatch between the physical requirements of the job and the test being implemented, and increases legal risks.

Master Tutorial: R Shiny Apps in I/O

In this session spearheaded by DCI’s Sam Holland, the use of R’s Shiny package was showcased to demonstrate its usefulness in sharing and visualizing analytic results. It gives R users with no web-development background the ability to deploy web-ready applications to showcase results. After walking participants through the basic concepts and principles needed to leverage the package, the presenters demonstrated how quickly basic R scripts can be transformed into interactive dashboards.

Everything UGESP Forgot to Tell You About Content Validity

This panel, moderated by Emilee Tison, Ph.D. (DCI Consulting Group), discussed the importance, usefulness, and practicality of content-oriented validation methodologies, which assess the extent to which the content of the selection procedure reflects important performance domains of the job. This methodology, however, is often criticized for having limited application, and it is frequently questioned whether it increases the likelihood of actually predicting job performance.

Each panel member spoke about a different topic and the role played by content-oriented validation methodologies:

  • James Sharf, Ph.D. (Sharf and Associates) provided a history of the development of EEOC’s Uniform Guidelines on Employee Selection Procedures (UGESP). Of particular interest was validity generalization and whether its use was appropriate.
  • Mike Aamodt, Ph.D. (DCI Consulting Group) discussed background checks and the various aspects to address when linking risk areas to specific tasks performed.
  • Damian Stelly, Ph.D. (Flowserve Corporation) addressed using content validation as a potential alternative to criterion validation for personality assessments. The applicability will often depend on the specific context of the situation.
  • Deborah Gebhardt, Ph.D. (HumRRO) discussed a number of important considerations when using content validation for physical ability tests. Some of these include: a reflection of essential job tasks and work behaviors, feasibility of the simulation, using only basic skills and not those learned on the job or in training, safety, ability to standardize test components, using a meaningful scoring metric, and the reliability of test components.

By Amanda Shapiro, Senior Consultant; Brittany Dian, Associate Consultant; Samantha Holland, Consultant; Joanna Colosimo, Director of EEO Compliance; Jana Garman, Senior Consultant; Jeff Henderson, Associate Consultant; Julia Walsh, Consultant; Keli Wilson, Senior Manager of EEO Compliance, D&I; Cliff Haimann, Consultant; Bryce Hansell, Associate Consultant; and Emilee Tison, Associate Principal Consultant, at DCI Consulting Group

CONTINUE READING

Two recent OFCCP settlements emphasize the importance of monitoring adverse impact throughout all steps of the hiring process: Gordon Food Service (GFS) and The Aqualon Company.

GFS, a food distribution company, implemented a hiring process that included a strength test that resulted in adverse impact against women seeking entry-level laborer positions. Lacking validation evidence in accordance with the Uniform Guidelines on Employee Selection Procedures (UGESP), GFS settled with OFCCP. As part of the conciliation agreement, GFS will hire 37 women, pay $1.85 million to impacted applicants, and stop using the strength test until it can be properly validated. For more details on this settlement, see our previous blog.

The next settlement focuses on The Aqualon Company, a subsidiary of Ashland Inc. OFCCP alleged discrimination against 660 African Americans who applied for entry-level positions and noted that “Aqualon used a discriminatory test” that “was not job-related” and “did not meet the requirements of UGESP.” As part of the conciliation agreement, Aqualon will pay $175,000 in back pay and interest to the impacted applicants and will hire four of the African American applicants. The organization will also stop using the test and will revise its hiring procedures. See OFCCP’s press release here.

These settlements serve as a reminder that OFCCP will focus on testing cases. As such, it is critical that contractors monitor their selection systems for adverse impact and research the underlying causes of statistical disparities. According to the UGESP, if a statistical disparity is identified for the overall selection process, contractors should evaluate each individual step or point in the hiring process where decisions are made. An individual step that shows statistically significant disparities must have sufficient validation evidence to support the continued use of that step.
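To make the UGESP guidance above concrete, the sketch below uses hypothetical applicant counts (not figures from either settlement) to compute the two numbers typically examined at each step: the impact ratio (the four-fifths rule of thumb) and a pooled two-proportion z-test of statistical significance.

```python
from statistics import NormalDist

def adverse_impact(sel_f, n_f, sel_ref, n_ref):
    """Impact ratio (four-fifths rule of thumb) and pooled two-proportion
    z-test for one step of a selection process.
    sel_f / n_f:     selections and applicants for the focal group
    sel_ref / n_ref: selections and applicants for the reference group
    """
    rate_f, rate_ref = sel_f / n_f, sel_ref / n_ref
    impact_ratio = rate_f / rate_ref          # < 0.8 triggers a closer look
    pooled = (sel_f + sel_ref) / (n_f + n_ref)
    se = (pooled * (1 - pooled) * (1 / n_f + 1 / n_ref)) ** 0.5
    z = (rate_f - rate_ref) / se
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    return impact_ratio, z, p_two_sided

# Hypothetical counts: 30 of 200 women vs. 90 of 300 men pass one step
ratio, z, p = adverse_impact(30, 200, 90, 300)
```

Here the impact ratio of 0.50 falls well below the four-fifths benchmark and the disparity is statistically significant, so under UGESP this step would need validation evidence to support its continued use.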

By Cliff Haimann, Consultant; Emilee Tison, Senior Consultant; and Kayo Sady, Senior Consultant at DCI Consulting Group. 

CONTINUE READING
SHARE
Facebook Twitter Linkedin

The 31st Annual Conference for the Society of Industrial and Organizational Psychology (SIOP) was held April 14-16, 2016 in Anaheim, California. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations for individuals with a disability. DCI Consulting Group staff members were well represented in a number of high profile SIOP presentations and also attended a variety of other sessions worth sharing. Notable session summaries and highlights can be found below.

 

Beyond Frequentist Paradigms in Legal Scenarios: Consideration of Bayesian Approaches

High-stakes employment scenarios with legal ramifications have historically relied on a frequentist statistical approach, which assesses the likelihood of the data assuming a certain state of affairs in the population. This, however, is not the question usually of interest, which is the likelihood of a certain state of affairs in the population given the data. This session explored the use of a Bayesian statistical approach, which answers the latter question, across different high-stakes employment scenarios. In each of the presented studies, data were simulated and analyzed, and the results of the Bayesian and frequentist approaches were compared:

  • David F. Dubin, Ph.D., and Anthony S. Boyce, Ph.D., illustrated the application of Bayesian statistics for identifying selection test cheaters and fakers.
  • Chester Hanvey, Ph.D., applied a Bayesian approach for establishing whether jobs are correctly classified as exempt in wage and hour questions.
  • Kayo Sady, Ph.D., and Samantha Holland, Ph.D., demonstrated the advantages of a Bayesian analysis in compensation scenarios with difficult-to-detect subgroup differences.

In each of the studies, the results suggested the utility of a Bayesian analysis in some specific circumstances. Overall, the presenters agreed that the Bayesian analysis should supplement more traditional frequentist analyses and noted specific issues to consider when designing these analyses. Given the lack of legal precedent and difficulties introducing a new set of statistical interpretations into the courtroom, the takeaway was that the best current value-add for Bayesian approaches is in proactive, non-litigation applications.
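As a rough illustration of the distinction the presenters drew, the sketch below answers the Bayesian question directly: given the data, how likely is it that one group's true selection rate is lower than the other's? It uses a simple conjugate Beta-Binomial model with uniform priors; the counts and the model are hypothetical and are not drawn from any of the presented studies.

```python
import random

def posterior_prob_lower(sel_f, n_f, sel_ref, n_ref, draws=50_000):
    """P(focal selection rate < reference rate | data), using independent
    Beta(1, 1) (uniform) priors on each group's true selection rate.
    Posteriors are Beta(1 + selections, 1 + rejections); the probability
    is estimated by Monte Carlo draws from the two posteriors."""
    f = [random.betavariate(1 + sel_f, 1 + n_f - sel_f) for _ in range(draws)]
    r = [random.betavariate(1 + sel_ref, 1 + n_ref - sel_ref) for _ in range(draws)]
    return sum(a < b for a, b in zip(f, r)) / draws

random.seed(0)  # reproducible draws
# Hypothetical counts: 30 of 200 women vs. 90 of 300 men selected
prob = posterior_prob_lower(30, 200, 90, 300)
```

A frequentist test reports how surprising the data would be if the two rates were equal; the posterior probability here quantifies how likely it is that the focal group's rate really is lower, which is closer to the question actually at issue in these scenarios.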

 

Contemporary Issues in Occupational Credentialing

The opportunity for credentialing or micro-credentialing is ever increasing, with credentials popping up in many professional fields that previously had none. What it takes to develop and maintain these credentialing exams, however, is something that many people know little about. In this session led by Samantha Holland (DCI), panelists from both private and public sector credentialing programs shared their experiences with issues such as maintaining test security, developing test content, and establishing validation evidence for their exams. Some highlights are noted below:

  • John Weiner, from PSI, noted the many security aspects to consider when administering exams online, a situation that requires additional measures beyond those described by other panelists.
  • Rebecca Fraser, from the Office of Personnel Management, shared her experience using methods beyond practice analysis to establish the content domain for specialized, low sample size domains.
  • Lorin Mueller, from the Federation of State Boards of Physical Therapy (FSBPT), discussed the need for clearer boundaries when it comes to regulation of certification boards: the line between what is good for a profession, and what is good for business, can sometimes become blurred.
  • Alex Alonso, from the Society for Human Resource Management (SHRM), shared his experience building a certification program from the ground up for his organization’s newly minted HR certification.

 

A View from the Trenches: EEOC/OFCCP Practitioner Update

DCI’s Joanna Colosimo moderated this panel, featuring DCI’s Mike Aamodt, Michelle Duncan of Jackson Lewis, Eyal Grauer of Starbucks, and David Schmidt of DDI, providing an update on recent regulatory changes, enforcement trends, and other topics related to compliance.

In fiscal year 2015, the OFCCP completed fewer compliance evaluations, but the duration of audits has increased as a result of the revised scheduling letter and more in-depth follow-up requests, particularly related to compensation. The panel also discussed the increase in steering allegations and settlements where whites and/or males were the alleged victims of systemic hiring discrimination.

Dr. Aamodt spoke about two hot topics: the EEOC’s proposed pay data collection tool and the use of criminal background checks for employment decisions. With regard to the EEO-1 pay data collection tool, he highlighted the burden of reporting pay data for 10 EEO-1 categories, 12 pay bands, 7 race/ethnicity categories, and 2 sex categories, as well as some of the limitations of using W-2 data. Additionally, he discussed how difficult it would be for the EEOC to use the resulting data to identify pay issues. For employers using criminal background checks, Dr. Aamodt recommended that contractors adopt narrowly-tailored policies that consider the nature of the offense, the duration of time since the offense, and the nature of the job being sought.

 

Strategically Evaluating Outreach for Individuals with Disabilities and Veterans

This session presented research conducted by DCI’s Kristen Pryor, Rachel Gabbard, and Joanna Colosimo investigating best practices among federal contractors in complying with the Section 503/VEVRAA formal evaluation of outreach and recruitment obligations. Representatives from 77 federal contractor organizations provided survey feedback on current methods and prospective strategies for evaluation. Results identified strategies such as tracking resource-specific metrics on qualified referrals and hires, as well as ROI analysis, for evaluating the success of outreach efforts. Results also suggest general frustration among federal contractors due to insufficient and ambiguous regulatory guidance on this requirement. The full white paper is available here. In addition, DCI will be conducting follow-up research in the near future to determine whether further progress has been made in this area, now that the regulations have been in effect for over two years.

 

No Longer an Afterthought? Reasonable Alternatives and Title VII Litigation

DCI’s Emilee Tison moderated this session where panelists discussed their perspectives and experiences related to identifying and evaluating reasonable alternatives. Panelists included Winfred Arthur, Jr (Texas A&M Univ.), Theodore Hayes (FBI), James Kuthy (Biddle Consulting Group, Inc.), and Ryan O’Leary (PDRI, a CEB Company).

Discussion topics included:

  • The Uniform Guidelines text related to the “reasonable effort” to identify alternatives with “equal” validity and “lesser” adverse impact
  • Strategies for identifying and considering alternatives, including the impact this will have on two selection goals: validity and diversity
  • The potential impact of recent case law on discussions of reasonable alternatives
    • Lopez v. City of Lawrence, 2014
    • Johnson v. City of Memphis, 2014
  • Documenting a consideration of alternative selection procedures

Panelists ended the session with a few parting words, including:

  • Clearly identify what you are considering as an alternative
  • Note that not all alternatives are created equal
  • Put in the effort to identify and document your search for alternatives
  • When documenting alternatives, steer clear of ‘stock language’ by providing justification for your choice(s)

 

Competencies and Content Expertise for I/O Psychology Expert Witnesses

In light of recent developments in case law and updated regulatory guidance, panelists provided competencies and strategies for expert witness testimony, focusing on three main topics: social framework analysis (SFA), new measures for test validation, and wage and hour concerns related to revised FLSA regulations on exempt status employees. Panelists included DCI’s Eric Dunleavy and Arthur Gutman, in addition to Margaret Stockdale of IUPUI, Cristina Banks of Lamorinda Consulting, Caren Goldberg of Bowie State University, and David Ross of Seyfarth Shaw.

The goal of SFA as it relates to expert witnesses is to educate the court and jury on the processes underlying cognitive bias and other socially constructed concepts like gender inequality. Panelists cited the 2011 Supreme Court case of Walmart v. Dukes as a prime example of applying SFA methodology to diagnose discrimination in personnel practices. Although SFA has been met with some criticism, panelists noted that many employment processes involve a degree of subjectivity with the potential to lead to discrimination. For this reason, experts are encouraged to examine seemingly neutral factors that may have a disproportionate impact on members of a protected group.

Shifting focus to standards regarding test validation, panelists commented on the outdated nature of the Uniform Guidelines on Employee Selection Procedures (UGESP), which have not been updated in nearly 40 years.  Although the panel was not aware of any initiatives to update the guidelines, it was noted that several SIOP representatives have met with the Equal Employment Opportunity Commission (EEOC) regarding the guidelines and other topics of mutual interest. Panelists also advised the audience to rely on both the SIOP Principles and APA Standards as supplemental, more contemporary resources regarding test validation standards. Additionally, SIOP will be publishing a white paper on minimum qualifications and adverse impact analyses that addresses data aggregation concerns and other testing considerations.

The final topic discussed focused on wage and hour issues concerning the revised FLSA regulations. The panel discussed the difficulties that many employers face in accurately classifying jobs as exempt or non-exempt, and also when determining whether independent contractors should be considered employees. It was recommended that job analyses be done for individual positions, rather than general ones, to help determine exempt status and how much time is spent doing each type of work. Employers should also be aware of any differences regarding state law.

 

Opening the “Black Box”: Legal Implications of Big Data Algorithms

The subject of “big data” has become a hot topic as access to increasingly large amounts of data provides employers with new opportunities to make informed decisions related to recruitment, selection, retention, and other personnel decisions. However, “data scientists” often overlook the legal implications of using big data algorithms within an employment context, especially when it comes to employee selection. Panelists discussed several issues emerging from the use of big data algorithms, including the potential for discrimination, Title VII consequences, and strategies for mitigating risk.

As suggested by DCI’s Eric Dunleavy, many of the “big data” models really do not differ from empirically keyed biodata, which is not a new concept. What is new are methods of collecting larger amounts of data from new sources. Like empirically keyed biodata, big data can be very effective in predicting work-related outcomes. However, if the employer cannot explain how the algorithm works or illustrate that it is job-related, it may be difficult to justify use of the algorithm if facing a legal challenge.

In addition to traditional adverse impact concerns related to women and minorities, some big data techniques may have the potential to discriminate against other protected groups. For example, one panelist mentioned a computer program that can automatically score an applicant’s body movements and analyze vocal attributes from a video recording of an interview. Several other panelists noted that certain body movements or vocal attributes may be related to protected class status, in particular individuals with disabilities. The main takeaway here is that if an employer is using data algorithms, it is imperative that they not only validate the model, but also understand how it is making decisions.

 

Big Data Analytics and Employment Decisions: Opportunities and Challenges

In this session, speakers highlighted the increasing popularity of the use of big data techniques (e.g., machine learning) within organizations to predict work outcomes, pointing out both benefits and challenges inherent to these approaches.

As one example of a big data “win”, Facebook’s David Morgan described how data collected on the current workforce can be used to identify employees at risk of turnover. More caution is required, however, when using big data to inform selection decisions. Many big data algorithms are essentially “black boxes”: data goes in and results come out with little transparency of the how or the why. Not being able to explain the “why” makes these approaches very difficult to defend in court. Rich Tonowski, representing the EEOC, advised that companies be knowledgeable and comfortable with the process being used as the agency will obtain access to the algorithm. Similarly, companies should be able to explain how the information being used is job-related, especially when data have been mined from social media or other Internet sources.

A final caveat was that machine learning tools may use data that are correlated with protected-class status in some way. Dave Schmitt of DDI suggested that one way to test for this is to determine whether the model can predict the race or sex of applicants; if it can, the model may be acting as a proxy for protected-class status. This risk may be compounded by the “digital divide,” whereby minorities may be less likely to have regular access to the Internet due to lower socio-economic status.
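One lightweight way to run the kind of check suggested in this session, sketched below under the assumption that you have model scores already split by demographic group, is to compute the AUC for separating the groups using the scores alone. The data here are hypothetical.

```python
def proxy_auc(scores_group_a, scores_group_b):
    """AUC for separating two demographic groups using only the model's
    scores: the probability that a randomly chosen member of group A
    outscores a randomly chosen member of group B (ties count half).
    Values near 0.5 mean the scores carry little group information;
    values far from 0.5 are a red flag that the model may be acting
    as a proxy for protected-class status."""
    pairs = [(a, b) for a in scores_group_a for b in scores_group_b]
    wins = sum(a > b for a, b in pairs) + 0.5 * sum(a == b for a, b in pairs)
    return wins / len(pairs)

# Hypothetical model scores split by applicant sex
auc = proxy_auc([3, 4, 5], [1, 2, 3])  # well above 0.5: scores encode group
```

An AUC like this is only a screen, not a verdict: a score can separate groups for legitimate, job-related reasons, which is why the panel's advice to validate the model and understand how it makes decisions still applies.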

 

Applied Criterion-Related Validation Challenges: What We Weren’t Taught in Textbooks

This panel, which included DCI’s Art Gutman, discussed a variety of challenges faced when working to conduct criterion-related validation studies for client organizations. Challenges included study design issues, data collection problems, determinations regarding appropriate analysis, and meeting reporting requirements. Specifically, presenters discussed the criteria problem (obtaining appropriate and accurate measures of job performance), problems with predicting low base rate events, issues of range restriction and the appropriateness of applying corrections, among others. The panelists hypothesized that upcoming issues in criterion validation will include dealing with big data (“messy predictors”), processes for validating non-psychometric assessments, addressing validity equivalence (or lack thereof) in multi-platform or mobile assessments, and the eventuality of court cases evaluating validity generalization.

 

Implications of Revisions to FLSA Exemptions for Organizations and Employees

In this session, a panel of experts provided insights on the proposed changes to the FLSA exemption criteria. The panel discussed the salary test for exemption, which would increase from $455 a week to the 40th percentile of weekly earnings for full-time salaried workers (estimated at $970 for 2016), and the implied potential changes to the job duties test. Regarding the salary test, panelists agreed that a change is overdue. However, they argued that a phased approach would be more appropriate and that the threshold should not be set at a fixed dollar value, but instead pegged to a benchmark that keeps pace with inflation. The NPRM’s discussion of the job duties test did not propose a change, but asked for feedback on whether a quantitative threshold, like the 50% “primarily engaged” test in California, should be implemented. The DOL estimated that approximately 20% of exempt employees would be impacted by the salary changes alone.

Implications for employers are significant, especially in light of the potential for a 60-day implementation window. First, employers must assess the extent to which they are comfortable with their exempt/non-exempt classifications and reasoning, and plan to re-evaluate where needed. Second, budgeting and cost scenarios for moving exempt positions to non-exempt, realigning duties, or increasing pay should be evaluated. Finally, internal messaging and communication plans should be in place to outline the changes, reasoning, and any new procedures.
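To illustrate the difference between a fixed dollar threshold and a percentile-anchored one, here is a minimal sketch using a simplified nearest-rank percentile on hypothetical weekly earnings (DOL would compute the actual figure from BLS earnings data with its own methodology):

```python
import math

def salary_threshold(weekly_earnings, pct=0.40):
    """Percentile-anchored salary threshold: the pct-th percentile of
    weekly earnings, using a simplified nearest-rank definition."""
    ranked = sorted(weekly_earnings)
    k = max(math.ceil(pct * len(ranked)) - 1, 0)  # nearest rank, 0-indexed
    return ranked[k]

def reclassification_candidates(exempt_weekly_salaries, threshold):
    """Exempt employees paid below the threshold: candidates to
    reclassify as non-exempt, realign, or move above the cutoff."""
    return [s for s in exempt_weekly_salaries if s < threshold]
```

Re-running a threshold like this against updated earnings data each year is what keeps the cutoff aligned with wage growth, in contrast to a fixed figure like $455 that erodes with inflation.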

 

Novel Approaches for Enhancing Diversity Training Effectiveness in the Workplace

In this session, four presenters provided insights on diversity training: three presented academic research, and one presented from an organizational context. A full 67% of organizations provide some form of diversity training, though research on the impact of that training on the job is mixed.

One series of studies found that individuals high in social dominance orientation (i.e., a strong preference for hierarchy in a social system and dominance over lower-status groups) tend to be more resistant to diversity training, but that this resistance can be mitigated when the training is endorsed by an executive leader. Another series of studies found that men are more likely to place importance on gender issues when those issues are raised by other men, and that this holds in both written and in-person contexts.

A Google employee presented on the implicit (unconscious) bias training Google has implemented as part of new-hire onboarding. The training focuses first on increasing awareness and understanding of the topic, providing a common language and initial suggestions for mitigation. Follow-up training has focused more on role-playing scenarios to cement the behavior change and mitigation aspects, increasing employees’ comfort with calling out biases when and where they are observed.

 

Why Survey Data Fail – and What to Do About it

Panelists discussed their experiences conducting surveys, times when things went wrong, and recommendations for a successful survey. Anyone can use and develop a survey, but issues can arise when multiple stakeholders are involved, each with a different opinion. For this reason, it is important to communicate the purpose of the survey and how the results will be used. Branding can be beneficial to help develop awareness, generate interest, and increase participation. Positive changes implemented based on survey results can also lead to increased participation the following year. Additionally, it is important to research any null or opposite findings between survey iterations to give you a better understanding of any issues that may be present within your organization.

Panelists also addressed problems they have encountered when implementing results, including trying to do too much with the findings, or slicing the data so many ways that your results become less reliable. It was also emphasized that results should be presented in a way that leaves little room for subjective interpretation to avoid making conclusions that are not supported by the data.

Finally, the panel provided a few recommendations for a successful survey:

  • Make responding easy
  • Get people excited about data by telling a good story
  • Provide insights and summaries when reporting results
  • Make an effort to understand your audience in order to keep participants engaged year after year

 

Can Technology Like Deep Learning Eliminate Adverse Impact Forever?

This debate-style session posed the question of whether or not big data techniques (specifically deep learning or machine learning) could/should be used to eliminate adverse impact during selection. The panel included data scientists and I/O psychologists to present their perspectives. The I/O psychologists opposing this technique – including DCI’s Emilee Tison – presented the following high-level points:

  • The identification of adverse impact alone is not synonymous with illegal discrimination
    • The blind elimination of it may eliminate meaningful differences that exist due to legitimate job-related factors – impacting the validity of the selection procedure
    • Adverse impact is the prima facie standard for a disparate impact case; however, procedures that produce adverse impact have two additional considerations:
      • The job relatedness or business necessity of the procedure
      • The consideration of reasonable alternatives
  • Making selection decisions based on protected class status is illegal under the Civil Rights Act of 1991 and, as supported in recent case law, selection decisions should not be based on adverse impact alone (Ricci v. DeStefano, 2009)
  • Data scraping techniques – that learn and pull in factors to use in predicting important outcomes (such as information from Facebook) – call into question the job-relatedness of the selection procedure

In summary, the panelists came from very different perspectives and foundational knowledge bases; however, it was the start of what hopefully becomes meaningful cross-discipline dialogue.

 

 

By: Kayo Sady, Senior Consultant; Samantha Holland, Consultant; Brittany Dian, Associate Consultant; Dave Sharrer, Consultant; Kristen Pryor, Consultant; Rachel Gabbard, Associate Consultant; Joanna Colosimo, Senior Consultant; Emilee Tison, Senior Consultant; and Bryce Hansell, Associate Consultant at DCI Consulting Group 

 

CONTINUE READING

In another blog, Art Gutman provides an overview of the California Fair Pay Act (CFPA). The CFPA prohibits California employers from paying employees differently due to sex. This is not new, given existing law; however, some of the specifics outlined in the CFPA are unique.

One example relates to grouping employees for analysis based on “substantially similar” work. More specifically:

An employer shall not pay any of its employees at wage rates less than the rates paid to employees of the opposite sex for substantially similar work, when viewed as a composite of skill, effort, and responsibility, and performed under similar working conditions

For federal contractors, this is likely not a new concept in some ways. If you have interacted with OFCCP regarding pay analyses, you may have worked to identify similarly situated employee groups (SSEGs) for your pay analysis groups (PAGs). SSEG development is often a combination of subjective and objective considerations, but the CFPA adds more structure around dimensions of job similarity. As such, how do you determine whether or not jobs are substantially similar?

Industrial and Organizational (I/O) Psychologists are particularly well suited to address this question. I/O Psychologists often collect job-related information as part of a process called job analysis, which can be used to determine job similarity.

A job analysis is the systematic process of collecting and interpreting job-related information for a given purpose – such as assessment development, validation efforts, and/or determining job similarity. This systematic review is accomplished through a variety of data collection methods, such as job observations, documentation review, interviews, focus groups, and surveys. For additional details on job analysis, see this previous DCI blog.

Data collected from a correctly focused job analysis can allow for job similarity analyses to directly and empirically test how similar roles are in terms of skills, effort, responsibility, and working conditions. Given the court’s general acceptance of job analysis as a viable research methodology, a job analysis may be the most defensible approach to determining similarity of jobs along the mandated criteria.
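As one simple, hypothetical example of such an analysis, jobs rated by subject matter experts on the statute's four dimensions (skill, effort, responsibility, working conditions) can be compared with a profile similarity index. The cosine metric below is one common choice, and any cutoff for "substantially similar" remains a professional judgment call:

```python
import math

def profile_similarity(job_a, job_b):
    """Cosine similarity between two jobs' rating profiles, e.g. mean SME
    ratings on skill, effort, responsibility, and working conditions.
    1.0 = identical profiles; values near 0.0 = unrelated profiles."""
    dot = sum(x * y for x, y in zip(job_a, job_b))
    norms = (math.sqrt(sum(x * x for x in job_a))
             * math.sqrt(sum(y * y for y in job_b)))
    return dot / norms

# Hypothetical 1-5 ratings on skill, effort, responsibility, conditions
machinist = [4, 5, 3, 4]
assembler = [4, 4, 3, 4]
similarity = profile_similarity(machinist, assembler)
```

In practice the ratings themselves would come from the systematic job analysis methods described above (observations, interviews, surveys), which is what makes the resulting similarity index defensible.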

For those of you who will be dealing with the CFPA in 2016, we recommend contemplating job similarity along these dimensions, as it could have enormous consequences for the results of EEO pay analyses under the CFPA.

By Eric Dunleavy, Director and Emilee Tison, Consultant at DCI Consulting Group 

CONTINUE READING
SHARE
Facebook Twitter Linkedin

OFCCP recently revealed a new lawsuit in this press release. It is important to note that this case has not been decided; a complaint has been filed with the Office of Administrative Law Judges (ALJ). The allegations, which include harassment, assault, and abuse of Hispanic employees, are serious if substantiated. OFCCP also alleges disparities in pay and hours affecting other protected groups. However, some of the allegations in the press release are troubling from a regulatory enforcement perspective.

First, the press release states that the contractor discriminated against “non-Hispanic” applicants. This is an interesting point given the next logical question of whether or not “non-Hispanic” can be considered a protected group for evaluation of discrimination. There is not a clear-cut answer to this question and the issue is further complicated by the fact that the specific details of this case were not made available in the press release. In general terms, determination of whether or not “non-Hispanic” is a viable group for analysis purposes depends on a variety of factors, including (a) whether the “Hispanic” ethnic group is treated as a race or national origin group (see VF Jeanswear for existing precedent on this issue when dealing with racial categories), (b) the availability of anecdotal evidence that may support the designation of “Hispanic” and “non-Hispanic” comparator groups to mirror the realities of the selection process, and (c) whether or not the collection and retention of demographic data allows for a meaningful evaluation of ethnicity/national origin at all.

Second, the complaint alleges that “laborers” were the affected positions, but makes no reference to job titles. This implies that potentially different positions may have been rolled into the EEO-1 Category of laborer to analyze pay and hours. This potential aggregation may not reflect positions with similar content, opportunity, and wage.

Third, the contractor in question is a sub-contractor supplying staff to construction contractors. The press release does not make it clear whether this case is being put forth under the supply and service or the construction regulations.

Finally, it is important to flesh out the allegations of harassment, assault, and abuse of Hispanic employees in this lawsuit. The named contractor is being charged with knowing that Hispanic employees were being harassed, assaulted, and abused while working on construction contracts for other contractors, yet doing nothing about the situation. It is unclear whether the construction contractors responsible for the alleged harassment, assault, and abuse of the Hispanic employees are also being investigated by either the OFCCP or another appropriate agency.

Stay Tuned!

By Emilee Tison, Consultant and Kristen Pryor, Consultant at DCI Consulting Group 

CONTINUE READING
SHARE
Facebook Twitter Linkedin

The 30th Annual Conference for the Society of Industrial and Organizational Psychology (SIOP) was held April 22-25, 2015 in Philadelphia, PA. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations for individuals with a disability. DCI Consulting Group staff members were well represented in a number of high profile SIOP presentations and also attended a variety of other sessions worth sharing.

DCI highlights included, but were not limited to, President Dave Cohen presenting a pre-conference workshop with EEOC Chief Psychologist Dr. Rich Tonowski, Dr. Mike Aamodt presenting a master tutorial on background checks with Dr. Rich Tonowski, and Dr. Eric Dunleavy being awarded SIOP fellow status at the plenary session. Additionally, DCI staff members Dr. Art Gutman, Dr. Kayo Sady, Joanna Colosimo, Keli Wilson, and Vinaya Sakpal all presented at the conference. Session summaries and highlights can be found within six major themes as listed below.

    1. Hot Topics
    2. Disability Disclosure
    3. Diversity and Inclusion
    4. EEO Analytics
    5. Performance Appraisals
    6. Testing and Selection

 

Hot Topics

OFCCP and EEOC Enforcement Trends: Practical Tips for Mitigating Risk

DCI’s David Cohen and Dr. Richard Tonowski, Chief Psychologist at EEOC, presented a workshop that reviewed aspects of both the OFCCP and EEOC’s regulatory and enforcement agenda. Several of the highlights are summarized below.

OFCCP Regulatory and Enforcement Agenda

  • Equal Pay Report – current proposal that will require the collection of contractor compensation data (status – NPRM).
  • Pay Transparency – EO 13665 prohibits retaliation against employees and applicants for disclosing, discussing, or asking about compensation information (status – NPRM).
  • LGBT Protections – EO 13672 prohibits federal contractors from discriminating against any employee or applicant because of sexual orientation or gender identity and requires contractors to take affirmative action to ensure applicants and employees are treated without regard to sexual orientation or gender identity (status – Final).

OFCCP Trends

  • When analyzing the last ten years of data, the majority of OFCCP findings of discrimination have related to a pattern or practice of intentional discrimination in hiring and placement (approximately 74%) and to compensation issues (approximately 17%).
  • OFCCP’s focus on large analytical units (or aggregation of data) will almost always yield statistically significant differences between groups of interest. In some cases, data aggregation may be improper.
  • Disparity analyses should compare subgroups of interest to the highest selected group. OFCCP has endorsed this approach and recent settlements are reflective of this.

Current EEO Litigation Trends

  • Private-sector charges to EEOC are down.
  • EEOC-initiated litigation is down.
  • Few EEOC cases involve complex psychometric or statistical issues.
  • The current EEOC emphasis is on systemic cases, though most activity still involves single-claimant, disparate treatment issues.
  • The hottest EEOC litigation is over procedural matters (such as the adequacy of complaints and conciliation efforts).

EEOC Strategic Enforcement Plan (2013-2016)

  • Eliminating barriers in recruitment and hiring, including those involving religious discrimination, credit and criminal history, and social media.
  • Protecting immigrant, migrant, and other vulnerable workers. The take-away message was employers should be proactive when there is a ‘vulnerable workforce.’
  • Addressing emerging and developing issues, such as pregnancy accommodation, ADAAA, big data, and LGBT issues.
  • Enforcing equal pay laws, with an emphasis on sex discrimination.
  • Preserving access to the legal system by targeting policies and practices which discourage or prohibit individuals from exercising their rights or impede EEOC’s enforcement efforts.

  • Preventing harassment via systemic enforcement and targeted outreach. Note that there has been a high volume of recent harassment cases.

Disability Disclosure

Alliance Special Session: Working with Mental Health Issues

In light of new data collection requirements now in effect under Section 503 of the Rehabilitation Act, the self-identification process for individuals with disabilities (IWD) is a hot discussion topic. Many are particularly curious about the decision to disclose from the perspective of applicants and employees with disabilities. During a panel discussion on mental health issues in the workplace, counseling psychologist Susanne Bruyère, Ph.D., shared her research on the factors influencing the decision to disclose or not disclose disability status in an employment setting. Listed below are several highlights from Bruyère’s discussion of her research:

  • Factors identified as most influential in the decision to not disclose a disability (barriers to disclosure)
    • Fear of unfavorable employment outcomes (e.g., not being hired, being terminated)
    • Concern of shifting employer’s focus from employee performance to the disability
  • Factors identified as most influential in the decision to disclose a disability (facilitators to disclosure)
    • Need for a workplace accommodation
    • Positive employee-supervisor relationship
    • Perception of employer’s commitment to disability inclusion
  • Factors identified as significantly less important in the decision process by IWD who ultimately decided against disclosure (in comparison to those who decided to disclose)
    • The inclusion of statements in employer recruitment materials
    • Having an employee with a disability present at a job fair

Implementing Diversity and Inclusion Practice

Panelists discussed a current shift in the way organizations are, and should be, approaching diversity and inclusion. Companies are moving away from just training women, for example, and moving toward training the managers who have the power to promote those women. One challenge that still remains is identifying any biases or stereotypes that may be present and learning how to overcome them.

Furthermore, it was emphasized that diversity is more than what you can see. It is not about “how do I manage or teach minorities?” but rather “how do I tailor my teaching style to each individual, no matter their background?” With a decrease in external pressures on employers, motivation to improve diversity and inclusion programs must ultimately come from within the organization.

Attracting and Retaining Qualified Individuals with Disabilities: A Contemporary Update

DCI’s Joanna Colosimo moderated this session which focused on a variety of issues regarding the recruitment, selection, and retention of individuals with disabilities in the context of the new reporting requirements that went into effect March 24, 2014. Employer, researcher, and practitioner panelists including Keli Wilson and Arthur Gutman of DCI covered a range of topics including the voluntary self-identification requirements pre- and post-offer, workforce metrics and the 7% utilization goal for individuals with disabilities, and potential legal considerations. Additionally, panelists addressed the challenges that employers continue to face in attempting to foster inclusive environments in which employees feel comfortable disclosing their disability status and shared best practices on outreach and selection.

In one example, Eyal Grauer, the Manager of Equal Opportunity Initiatives at Starbucks Coffee Company, shared that his employer has long been committed to recruiting, hiring and retaining people with disabilities and supporting inclusion and accessibility in the workplace. Although disabilities are often framed in a negative light, Starbucks has found just the opposite. Partners with a range of disabilities serve to enhance the company through their innovation, creativity and unique skillset and this philosophy should be at the forefront of all disability-inclusive programs and initiatives moving forward.

Diversity and Inclusion

Mending the Leaky Pipeline: Retention Interventions for Women in STEM

Presenters discussed the tendency for women to self-select out of STEM fields. In several fields, for example, women declare and begin studies in nearly equivalent numbers; however, women are more likely either to not complete the program or to leave the field. Methods discussed for reducing the number of women who fall out of traditionally male-dominated professions included: minimizing alienating language and imagery (e.g., posting pictures of women draped over a NASCAR vehicle in the break room sends the wrong message), downplaying stereotypes in task performance, creating peer groups, encouraging self-affirmation, and identifying mentors. Finally, retaining women in STEM fields matters because it provides future role models for women considering or entering those fields.

Uncharted Waters: Employees with Disabilities

Per Section 503 of the Rehabilitation Act, federal contractors are required to establish a utilization goal of 7% employment for qualified individuals with disabilities. However, recent findings among contractors and non-contractors show that only an average of 3% of employees have identified as having a disability, highlighting the challenge employers continue to face with employee self-disclosure. The panel discussed potential reasons for low disclosure rates:

  • It may not always be clear to people whether or not they have a disability.
  • Language in the self-identification form is framed negatively instead of positively.

Asking someone whether they have an apparent or non-apparent disability may change how they respond. Likewise, when requesting participation, continually using language such as “this will not hurt your chances” instead of “this could help your chances” could deter employees from disclosing their disability. It is more than checking a box – employees have to understand and accept their disability, and then make the choice to disclose.

AttenTION: Integrating Military Veterans into the Workforce

Several presentations focused on veterans in the civilian workforce; most addressed problems and solutions in three main areas: recruiting veterans into the workforce, getting them through the hiring process, and retaining them once hired.

Recruitment/Outreach

There are a variety of both national and local level outreach resources available. Some best practices in this area involved emphasizing local level resources and involving veteran employees in the outreach effort. Some organizations with locations close to military installations initiated efforts to recruit transitioning military members before discharge is complete.

Hiring/Applicant Experience

It is often difficult for veterans, civilian recruiters, and hiring managers to identify how military training and skills translate to company positions. Spending time up front to define the knowledge, skills, and abilities needed, and the military skills and training that align with them, allows for more effective targeting efforts and better job fit. Successful examples included translating military leadership and supervisory experience, CDL transfer programs for drivers, and mapping military skills to the competencies required for sales positions.

Retention

Veteran retention is receiving a lot of focus, as the turnover rate for 21- to 29-year-old veterans is much higher than for non-veterans, particularly during the first few years of employment. Female veterans leave at a higher rate than male veterans. Communication efforts can help with retention. Specifically, ensuring that top leadership is vocal about supporting veterans, communicating the value of the veteran’s position to achieving the organization’s mission, repeatedly providing information about available resources, and communicating with non-veteran employees to dispel “preference” myths have all helped in some companies. Resources that can reduce turnover include employee resource groups, mentoring, clearly defined career paths, and a variety of offerings to meet the diverse needs of veterans.

Lesbian, Gay, Bisexual, Transgender (LGBT) committee (ad hoc)

DCI staff attended the LGBT committee (ad hoc) meeting. A mission of this committee is to encourage research on LGBT issues. DCI will continue to share information that comes from continued involvement with this committee.

EEO Analytics

Current Issues in EEO Law

In this roundtable discussion, experts focused on four current issues in EEO law: recruitment, adverse impact, sexual harassment, and retaliation. After each topic was briefly introduced, the floor was opened for an audience-led discussion and question session. High-level discussion points included:

  • Although the Uniform Guidelines on Employee Selection Procedures (UGESP) do not identify recruitment as a selection procedure, mishandled recruitment can lead to selection violations such as adverse impact and a pattern or practice of discrimination.
    • As an example, an attempt to recruit skilled laborers may lead you to recruit from local training schools. However, if those training schools do not enroll a diverse population of students, this pipeline of applicants may reduce diversity.
  • Background checks were discussed in light of legal considerations, including adverse impact and potential employer liability. The discussion stressed the importance of considering both the specific requirements of the job and the nature of the business when assessing risk.
  • Employers are encouraged to take appropriate steps to prevent and correct unlawful harassment, including establishing a complaint or grievance process, providing anti-harassment training, and taking immediate action when complaints are reported.
  • Anti-discrimination laws prohibit harassment of individuals in retaliation for filing a discrimination charge, testifying, or participating in any investigation.
    • Harassment that is not severe or pervasive enough to interfere with the terms and conditions of employment can still lead to retaliation violations, because the criteria for retaliation claims are lower than those for harassment claims.

Data Aggregation and EEO Analytics

This symposium provided an analysis of problems with data aggregation in three EEO scenarios: Adverse impact analysis, criterion-related validation analysis, and compensation analysis. In all three presentations, presenters demonstrated problems that aggregating unlike data can introduce in terms of arriving at correct conclusions in legal scenarios.

  • Using data from the Lopez v. City of Lawrence ruling, presenters demonstrated how conclusions regarding hiring/promotion discrimination can differ depending on whether hiring events are analyzed separately, analyzed in the aggregate without appropriate tests accounting for the multiple events combined, or analyzed in the aggregate contingent on Breslow-Day statistical results and using a Mantel-Haenszel estimator. A take-home conclusion from the presentation was that data should never be aggregated if there are conceptual reasons to keep them separate (e.g., data from multiple locations representing distinct phenomena), but if there are no serious obstacles to aggregating, appropriate multi-event tests (such as the Mantel-Haenszel) should be used.
  • Criterion-related validation research involves collecting selection assessment scores (e.g., written test scores, simulation scores, interview scores, or some combination of scores) from a group of individuals applying for or performing a job and establishing the degree to which those scores correlate, at a statistically significant level, with job performance ratings for the same individuals. A statistically and practically significant correlation coefficient demonstrates that those who perform better on the assessment also tend to perform the job better. However, to the extent that different supervisors have particular rating tendencies (e.g., some tend to be lenient while others tend to be strict), the observed correlation between assessment scores and performance ratings will be artificially decreased. The presenters offered an application of cluster-centered regression in such scenarios to demonstrate that the technique is superior to ordinary least squares regression in criterion-related validation studies that involve supervisor ratings of performance.
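The Mantel-Haenszel pooling described in the first bullet can be sketched in a few lines. The counts below are hypothetical hiring-event tables, not data from the case; each table holds (focal selected, focal rejected, comparison selected, comparison rejected):

```python
# Mantel-Haenszel common odds-ratio estimator across multiple hiring events.
# All counts are hypothetical illustration data.

def mantel_haenszel_or(tables):
    """Pool 2x2 tables (a, b, c, d) without naively summing the cells."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

events = [
    (10, 40, 20, 30),   # hiring event 1
    (5, 45, 15, 35),    # hiring event 2
]

print(round(mantel_haenszel_or(events), 3))  # → 0.322
```

Because each event contributes terms scaled by its own sample size, the estimator combines evidence across events without letting one large event silently dominate, which is the failure mode of simply summing cells before computing a single odds ratio.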

Finally, presenters offered a clear demonstration of how the aggregation strategies offered in OFCCP’s Directive 307 become problematic in EEO pay analyses when they extend beyond the level of similarly situated data. Using simulated data from six known populations of similarly situated individuals (population parameters were established by the presenters as part of the simulation), the presenters shared results demonstrating definitively that false-positive indicators of discrimination increase dramatically when similarly situated groups are aggregated in an EEO pay analysis.
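The aggregation problem can be illustrated with a toy example (hypothetical numbers, not the presenters’ simulation): two similarly situated groups with identical pay for men and women within each group can still show a large aggregate “gap” when pooled, simply because group membership differs by sex.

```python
from statistics import mean

# Hypothetical workforce: pay is identical for men and women WITHIN each
# similarly situated group; the sex mix differs across groups.
employees = (
    [("eng", "M", 90_000)] * 30 + [("eng", "F", 90_000)] * 10 +
    [("tech", "M", 50_000)] * 10 + [("tech", "F", 50_000)] * 30
)

def pay_gap(rows):
    """Mean male pay minus mean female pay for the rows given."""
    men = [pay for _, sex, pay in rows if sex == "M"]
    women = [pay for _, sex, pay in rows if sex == "F"]
    return mean(men) - mean(women)

for group in ("eng", "tech"):
    print(group, pay_gap([r for r in employees if r[0] == group]))  # 0 in both

print("aggregate", pay_gap(employees))  # 20000: an artifact of pooling
```

The within-group gaps are exactly zero, yet the pooled comparison shows men earning $20,000 more on average; a significance test run on the pooled data would flag a disparity that no similarly situated comparison supports.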

 

Performance Appraisals

Does Your Performance Appraisal System “Meet Expectations”?

There were several sessions discussing whether formal performance appraisal (PA) systems should be abandoned. Those in favor of jettisoning formal PA systems argued that such systems involve a lot of time and money but there is no evidence that they actually result in financial benefits to the organization.  Those in favor of keeping formal PA systems argued that, although most PA systems need improvement, they are important to motivating and developing employees.

Several of the organizations that no longer use performance ratings concentrate on goal accomplishment instead. One panelist pointed out that effective goals relate to improving the organization rather than to day-to-day routine work activities. Another panel member commented that any effective PA system should be about helping the employee get better.

It was interesting that several organizations said that they no longer use performance ratings but in the descriptions of their new systems, it seems as if they still do.  For example, one organization said that it no longer uses performance ratings but instead, places employees into one of three categories: Driving the business, performing, and not performing. Isn’t this a rating scale?

Regardless of the panelists’ view of performance ratings themselves, one point on which everyone agreed was that feedback should not be an annual event. Instead, an ongoing cycle of feedback is critical to making any impact on employee performance.

Based on the number of sessions and the big turnout for each of these sessions, this promises to be a hot topic in the coming years.

Testing and Selection

Mobile Assessment: The Horses Have Left the Barn…Now What? 

Many organizations are moving away from traditional paper-and-pencil assessments toward high-tech software programs to test applicants. Key differences across technology platforms (e.g., tablets, laptops, and mobile phones) shared in a pre-conference workshop are listed below:

  • Small screen sizes may result in lower scores, primarily because of increased cognitive demands (e.g., smaller fonts and the page manipulations required to read sentences).
  • Younger applicants prefer taking tests on a mobile device, whereas older applicants typically prefer laptops or desktops.
  • Personality tests are easier to complete on a mobile phone than cognitive ability tests, which often contain diagrams.
  • Although a general reduction in scores is seen on smaller devices, subgroup differences stay the same across device platforms.

20 Years of Changes in Pre-Employment Testing: Experiences and Challenges

Additional information on the changes in pre-employment testing due to technological advances was shared in this session. Key points from this session to consider when assessing applicants or integrating assessments in the applicant tracking system (ATS) are listed below.

  • Determine whether assessments are or should be mobile-friendly (i.e., consider the applicant experience).
  • Face validity is important in technology-driven applicant processes (e.g., ask applicants at the end whether they were able to demonstrate their skills).
  • Involve relevant parties in integrating assessments into an ATS (e.g., industrial-organizational psychologists, compliance, legal, HR, recruiters, programmers, the assessment developer, and the ATS vendor).

Advancing Test Development Practices: Modern Issues and Technological Advancements

Part of the session explored adding game-like aspects to traditional cognitive assessments. Introducing game aspects to computer-based cognitive ability tests did not significantly impact testing times, which is good. However, the study also found that providing game-like feedback could impact applicant reactions to and performance on the assessment.

Although more research is needed, it’s important to be aware of different technology platforms for assessments and inform applicants of potential drawbacks associated with mobile device testing.

Using Background Checks in the Employee Selection Process

Although prevalent in the employee selection process, the use of background checks as an applicant screening tool continues to draw heavy scrutiny from both the EEOC and plaintiffs’ attorneys. In spite of the legal risk, approximately 86% of employers consider criminal history for at least some applicants (SHRM, 2012). During a SIOP tutorial session, Mike Aamodt of DCI Consulting and Richard Tonowski of the EEOC discussed legal implications and best practices for employers using background checks in employee selection.

Background checks, including credit and criminal history, have shown evidence of adverse impact against racial minorities when used to screen out candidates for employment. For this reason, employers should consider several factors when determining whether and how to use background checks. See the list below for several best practice recommendations for minimizing risk in the use of employee background checks:

  • Avoid blanket policies (e.g., policy that company will not hire applicants with past convictions without exception).
  • Demonstrate a clear link between the purpose of conducting the check and the specific requirements of the job.
  • Consider both the length of time since the conviction and the nature of the crime.
  • Notify any applicants who were rejected based on the background check and offer the opportunity to provide an explanation.
By Eric Dunleavy, Principal Consultant; Rachel Gabbard, Associate Consultant; Bryce Hansell, HR Analyst; Kristen Pryor, Consultant; Keli Wilson, Senior Consultant; Brittany Dian, HR Analyst; Emilee Tison, Consultant; Mike Aamodt, Principal Consultant; Kayo Sady, Senior Consultant; and Vinaya Sakpal, Consultant at DCI Consulting Group 

As other blogs have noted, the Notice of Proposed Rulemaking (NPRM) for the long-awaited revisions to the Sex Discrimination Guidelines (RIN 1250-AA05) included some very interesting ideas. Some of those relate to the role of performance measurement systems, which, when used to make employment decisions like promotion, merit increases, bonuses, and termination, can be considered a selection procedure under the Uniform Guidelines on Employee Selection Procedures (1978). The new regulations cite the Supreme Court ruling in Lewis v. City of Chicago to support this notion, but it is an intuitive one; performance ratings (or objective performance metrics, if available) used as part of a promotion or compensation decision process are no different from a test, interview, or experience/education screen used as part of a hiring process.

Interestingly, performance ratings are mentioned in the proposed Sex Discrimination Guidelines both in the context of disparate treatment and disparate impact. Related to disparate treatment, the proposed regulations prohibit:

“Distinguishing on the basis of sex in apprenticeship or other formal or informal training programs; in other opportunities such as networking, mentoring, sponsorship, individual development plans, rotational assignments, and succession planning programs; or in performance appraisals that may provide the basis of subsequent opportunities”

Related to disparate impact, the proposed regulations note that:

“Contractors may not implement compensation practices, including performance review systems that have an adverse impact on the basis of sex and are not shown to be job related and consistent with business necessity.”

Performance ratings are probably not a new topic for readers that have been conducting regression-based EEO pay analyses. In this situation, it is often the case that performance is related to compensation outcomes, and as such should be included in an EEO pay analysis as a legitimate factor predicting pay. However, those of you that have been audited are also likely familiar with the allegation that performance ratings can be “tainted” by discrimination, and as such may not be appropriate as a legitimate factor explaining pay in a regression equation. This is often a complicated issue requiring sophisticated statistical analyses.
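A stripped-down illustration of why performance matters as a legitimate factor in a pay regression. The data are invented, and the sketch uses the Frisch-Waugh residualization trick with simple slopes rather than a full multiple-regression package, so it shows the logic only, not a production pay analysis:

```python
from statistics import mean

def slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residuals(x, y):
    """Residuals of y after a simple regression on x."""
    b, mx, my = slope(x, y), mean(x), mean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

# Hypothetical data: pay is set entirely by performance rating,
# but the men in this sample happen to hold higher ratings.
perf = [4, 5, 5, 3, 3, 4]
sex  = [1, 1, 1, 0, 0, 0]          # 1 = male, 0 = female
pay  = [60, 70, 70, 50, 50, 60]    # $K; pay = 10 * perf + 20 exactly

raw_gap = mean(pay[:3]) - mean(pay[3:])                      # ignores performance
adj_gap = slope(residuals(perf, sex), residuals(perf, pay))  # controls for it

print(round(raw_gap, 2), round(adj_gap, 6))  # → 13.33 0.0
```

The raw male-female gap disappears once performance is controlled for, which is the argument for including it as a legitimate factor. The flip side is the “tainted variable” concern the paragraph above describes: if the ratings themselves reflect discrimination, controlling for them removes a real disparity along with the legitimate one.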

We suggest that two additional points are worth noting:

  1. Performance appraisals themselves can be assessed for adverse impact. As with hiring data, various statistical significance tests and practical significance measures can be used to determine (a) whether a difference in performance across EEO groups (e.g., sex, race/ethnicity, age) is likely due to chance, and (b) if it isn’t likely due to chance, how large the difference is. This is often a useful analysis to conduct, particularly when performance measures are used in part to make employment decisions such as promotions, merit increases, bonuses, and terminations. In the last year, DCI has conducted adverse impact analyses on performance measures for a wide range of federal agency and private sector clients. It appears that employers are realizing the usefulness of such an analysis.
  2. Should statistically significant and practically meaningful disparities in performance measures exist between EEO protected groups, the next question is whether the performance measurement system is defensible. This is often a function of job-relatedness. Having an industrial/organizational psychologist objectively review the performance appraisal system can help you understand the likelihood that your system would survive EEO scrutiny. In this context, the following considerations are often important:
    • Are the performance dimensions supported by some type of job analysis?
    • Is the system structured such that dimensions are clearly defined, a quantitative and/or qualitative scale is used, and behavioral benchmarks are available to help the rater?
    • Are raters trained?
    • Is there a system in place for a high-level management review of performance ratings to determine if there are any patterns/inconsistencies that need to be reviewed?
    • Is there some structured guidance on how to use performance ratings in employment decisions (e.g., merit increases, promotions)?
    • Is there an appeal process for employees who believe their performance ratings are not accurate?
    • Is there a well-developed feedback system through which employees can receive information about their performance that will promote their future development and enable them to improve job performance?
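The adverse impact check described in point 1 can be sketched as a two-sample test on rating rates plus the four-fifths practical significance ratio. The counts below are hypothetical, and a real analysis would involve more careful test selection:

```python
import math

# Hypothetical counts: employees rated "exceeds expectations", by group.
men_high, men_total = 40, 100
women_high, women_total = 24, 100

p1, p2 = men_high / men_total, women_high / women_total
impact_ratio = p2 / p1          # four-fifths rule: flag if below 0.80

# Two-sample z-test for the difference in rating rates (pooled proportion)
p = (men_high + women_high) / (men_total + women_total)
se = math.sqrt(p * (1 - p) * (1 / men_total + 1 / women_total))
z = (p1 - p2) / se

print(round(impact_ratio, 2), round(z, 2))
```

Here the impact ratio (0.60) falls below the 0.80 threshold and z (about 2.43) exceeds the conventional 1.96 cutoff, so this hypothetical disparity would be flagged as both practically and statistically significant, triggering the defensibility questions in point 2.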

If you have any questions about the above issues, please feel free to contact us. We have a feeling that EEO analyses of performance rating systems will become an important piece of your EEO/AA compliance puzzle, if they aren’t already.

By Emilee Tison, Ph.D., Consultant and Eric Dunleavy, Ph.D., Principal Consultant, DCI Consulting Group 


The EEOC is moving towards the collection of employee compensation data – albeit very quietly. Although no public announcements have been made, the EEOC is conducting a pilot study to investigate issues related to the collection and analysis of this data.

To some, this pilot study may seem unexpected – especially given the ongoing efforts of the OFCCP to move forward with the collection of pay data for Federal Contractors. That said, let’s look at some recent activity in the pay collection arena before discussing the (limited) details of the EEOC’s pilot efforts.

Historical Context

Movement toward the collection of pay data has been several years in the making, going back to the Paycheck Fairness Act (PFA). As you may recall, one provision of the PFA would have required employers to provide the EEOC with information on pay by the race, gender, and national origin of their employees. The PFA, however, died in the Senate and was not enacted. This spurred the White House to create the 2010 National Equal Pay Enforcement Task Force to address key pay-related issues. One notable recommendation from this task force was for the EEOC and OFCCP to identify ways to collect pay data from employers. Described below are the (seemingly siloed) efforts of the OFCCP and EEOC.

Initial EEOC Efforts

After receiving the White House taskforce recommendation, the EEOC asked the National Academies’ National Research Council to convene a panel to review methods for measuring and collecting pay information by gender, race, and national origin from U.S. employers. The panel was asked to consider “suitable data collection instruments, procedures for reducing reporting burdens on employers, and confidentiality, disclosure, and data access issues.” In 2012, the National Academies released a report, Collecting Compensation Data from Employers, which included the following recommendations:

  • The OFCCP, DOL, DOJ, and EEOC should prepare a comprehensive plan for use of earnings data before initiating any data collection.
  • After completing the comprehensive plan for use of earnings data, the agencies should initiate a pilot study to test the collection instrument and the plan for the use of the data.
  • The EEOC should enhance its capacity to summarize, analyze, and protect earnings data.
  • The EEOC should collect data on rates of pay, not actual earnings or pay bands, in a manner that permits the calculation of measures of both central tendency and dispersion.
  • The EEOC should consider implementing appropriate data protection techniques, such as data perturbation and the generation of synthetic data to protect the confidentiality of the data.
  • The EEOC should seek legislation that would increase the ability of the agency to protect confidential data.

 

OFCCP Efforts

After receiving the White House task force recommendation, the OFCCP launched its own initiative, separate from the EEOC’s work. In August 2011, the OFCCP released an Advance Notice of Proposed Rulemaking (ANPRM) for a new compensation data collection tool. After public comment and review, this was followed by a Notice of Proposed Rulemaking (NPRM) to Collect Summary Compensation Data from Contractors in New Equal Pay Report (“Equal Pay Report”), released in August 2014 (recently discussed on this blog). The proposed rule would require federal contractors who have more than 100 employees, and who hold federal contracts or subcontracts worth $50,000 or more for at least 30 days, to submit summary employee pay and demographic data to the OFCCP. DCI, in conjunction with the OFCCP Institute, submitted public comments in response to the proposed rule highlighting issues and considerations related to the collection and analysis of pay data. Publication of the final rule is anticipated in late August 2015, with a likely effective date in 2017.

Current EEOC Pilot Study

Now, back to the EEOC pilot study. Although the EEOC is following the National Academies’ recommendation to conduct a pilot study, little is known about the current effort. The EEOC has not publicly discussed this contract or its plans for collecting employee compensation data. What we do know is that in September 2014, the EEOC obtained contractor services from Sage Computing Inc., a social science research and information technology company headquartered in Reston, VA. The contract summarized the work as “conduct a Pilot Pay Study for how compensation earning data could be collected from employers for the EEOC.” Perhaps completion of the pilot study contract (currently estimated for September 2015) will shed some light on the situation.

What does this all mean?

In conclusion, both OFCCP and the EEOC appear to be pursuing efforts to identify ways to collect pay data from employers. However, given the different stages of these efforts – the EEOC conducting a pilot study and OFCCP reviewing public comments (with the assumed outcome of a final rule) – it will be interesting to see how they play out. Without clear communication between the two agencies, it is unlikely their efforts will be complementary. In fact, it is more likely that they will result in two different data collection requirements, further burdening federal contractors.

We will provide additional updates as information becomes available. Stay tuned.

In the meantime, we recommend reviewing Law360 for another discussion on the EEOC’s pilot pay study.

By Amanda Shapiro, M.S., Senior Consultant and Emilee Tison, Ph.D., Consultant, DCI Consulting Group


On September 12th, a General Electric subsidiary in Ohio agreed to pay $537,000 to settle a sex discrimination allegation with OFCCP. The agency alleged that the company used a set of employment tests that produced adverse impact against female applicants to attendant positions and were not validated in accordance with the Uniform Guidelines (41 C.F.R. part 60-3). The employment tests, an off-the-shelf battery called WorkKeys, measure a series of cognitive abilities, including applied math, locating information, reading for information, and applied technology. According to OFCCP, the test content did not adequately match the job content and the test cut score was not related to performance differentiation; as such, the requirements of the Guidelines were not met.

The OFCCP press release also noted that the agency settled another case involving WorkKeys back in 2011. That settlement was with Leprino Foods, and the test battery was alleged to have adverse impact against minority applicants to laborer jobs, an allegation that is generally consistent with the personnel selection research literature on subgroup differences on cognitive tests. Once again, OFCCP alleged that the validation evidence did not meet the requirements of the Uniform Guidelines.

This settlement is a reminder that OFCCP can allege and litigate unintentional discrimination under a disparate impact theory. In this scenario, any facially neutral step in the selection process may be challenged. The Uniform Guidelines, which were developed in 1978 and are jointly enforced by OFCCP, the EEOC, and the DOJ, require that employers justify any selection procedure that produces adverse impact by demonstrating that it is “job-related and consistent with business necessity.” This is often accomplished via a validation study that uses scientifically rigorous research methods and shows persuasive evidence that the tool supports meaningful inferences about candidates.

The settlement is also a reminder for federal contractors to monitor their employment testing programs. Employment tests and other professionally developed selection procedures can be an important competitive advantage for organizations, but they can be challenged. We suggest that contractors keep in mind the following:

  • The Guidelines identify a number of employee selection procedures that could be challenged under a disparate impact theory, including:
    • Job requirements (physical, education, experience)
    • Application forms
    • Interviews
    • Work samples/simulations
    • Paper and pencil tests
    • Performance in training programs or probationary periods.
  • Any facially neutral step in a selection process can be evaluated for adverse impact via differential “pass/fail” results.
  • Be aware of what tests and other selection procedures are being used in your organization;
    • Consider an independent test audit to help you understand what tests are being used, whether they were professionally developed, whether they are psychometrically sound, whether they have been validated for similar jobs, and whether they are likely to produce adverse impact.
      • If the answer to any of the above questions is “I don’t know”, then there is likely potential risk.
  • If you are thinking about identifying and implementing tests or selection procedures in your organization, consider conducting a job analysis to identify what work duties are performed in a job and what worker characteristics are needed to perform that job well. Ideally, available tests and other selection tools will simulate those duties and/or measure those characteristics.
  • Should any adverse impact be identified, consider formal validation research conducted by an Industrial/Organizational psychologist or other measurement expert.
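The “differential pass/fail” check mentioned above is commonly operationalized with the Guidelines’ four-fifths (80%) rule of thumb, which compares subgroup selection rates. Below is a minimal sketch; the applicant-flow counts are hypothetical and are not figures from the settlements discussed in this post.

```python
# Hypothetical applicant-flow counts (illustrative only).
men_applied, men_passed = 200, 120
women_applied, women_passed = 100, 45

men_rate = men_passed / men_applied        # 0.60
women_rate = women_passed / women_applied  # 0.45

# Impact ratio: the lower selection rate divided by the higher one.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)

# The four-fifths rule of thumb flags ratios below 0.80.
flagged = impact_ratio < 0.80

print(f"Impact ratio: {impact_ratio:.2f}")   # 0.75
print(f"Flagged under four-fifths rule: {flagged}")  # True
```

Note that a ratio below 0.80 is a screening heuristic, not a finding of discrimination; in practice, agencies and experts typically pair it with statistical significance tests before concluding that adverse impact exists.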


By: Eric Dunleavy, Ph.D., Principal Consultant, and Emilee Tison, Ph.D., Consultant at DCI Consulting Group


News

Really, I Come Here for the Food: Sex as a BFOQ for Restaurant Servers

Michael Aamodt, Principal Consultant at DCI Consulting Group, wrote an article featured in SIOP’s TIP publication, January 2017.

Recent Blog Posts

Fiscal Year 2018 Budget Proposes Merger of OFCCP and EEOC

The Department of Labor’s Fiscal Year 2018 (FY2018) budget proposal was released today, May 23, 2017.  The budget outlines the initiatives and priorities of the new administration, and as predicted by DCI, recommends merging the Office of Federal Contract Compliance Programs (OFCCP) and Equal Employment Opportunity Commission (EEOC) by the end of FY2018.

The proposed budget indicates that the consolidation will provide efficiencies and oversight. Additionally, the proposed budget allots $88 million for OFCCP, a decrease of $17.3 million from Fiscal Year 2017. The main cut appears to be headcount, with a proposed 440 full-time equivalent (FTE) positions, a reduction from 571 FTEs. Some other interesting items that have

Read More