AI and Bias in University Admissions
- Written by Sara Stivers
New technologies are rapidly transforming the landscape of higher education across disciplines. Classrooms are harnessing the power of virtual reality as an innovative approach to learning, and MOOCs and open online learning platforms have expanded access to knowledge globally. Institutions are using predictive analytics to identify struggling students in order to provide early interventions and improve retention.
In the world of university admissions, artificial intelligence is making waves as a new tool to sift through the applicant pool, uncover the best talent, and predict which applicants will be the most successful. New software can measure criteria such as test scores and grades and make recommendations for acceptance based on predefined conditions set by the school. College admissions remains highly competitive, with schools vying for top students and higher rankings. This automated process helps schools work through huge numbers of applications, saving staff time while providing deeper insight into candidate profiles.
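To make the mechanics concrete, here is a minimal sketch of how such rule-based screening might work. The field names, thresholds, and recommendation labels are hypothetical illustrations, not drawn from any particular vendor's software:

```python
from dataclasses import dataclass

# Hypothetical applicant record; real systems track many more fields.
@dataclass
class Applicant:
    name: str
    test_score: int   # e.g., SAT, out of 1600
    gpa: float        # out of 4.0

def recommend(applicant: Applicant,
              min_score: int = 1300,
              min_gpa: float = 3.5) -> str:
    """Return a recommendation based on school-defined cutoffs.

    The thresholds are placeholders for the "predefined conditions
    set by the school" described above.
    """
    if applicant.test_score >= min_score and applicant.gpa >= min_gpa:
        return "recommend for review"
    return "flag for manual review"

applicants = [
    Applicant("A", 1450, 3.8),
    Applicant("B", 1250, 3.9),
]
for a in applicants:
    print(a.name, "->", recommend(a))
```

Even this toy version shows where the school's choices enter the system: every cutoff is a human decision encoded as a rule.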
In many ways, AI is being touted as a way to take individual bias out of the admissions process by removing the possibility of human-influenced discrimination against candidates. However, AI in areas such as admissions or hiring is often trained on historical data sets that may unintentionally favor a specific candidate profile, thereby perpetuating existing inequalities.[1] “Bad inputs can mean biased outputs, which led to repercussions for women, the disabled and ethnic minorities,” states Wendy Hall, a UK-based professor of computer science who has studied AI in hiring processes.[2] This could pose significant concerns for university admissions as AI becomes increasingly prevalent. For example, a school may design an algorithm based on the data of its students with the highest test scores, a group that could disproportionately represent one demographic. If success standards are defined by the data of a majority group, the AI will inherit an implicit bias and default to excluding minority groups.
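A deliberately simplified sketch can show how this happens even without any explicit demographic input. Here, entirely synthetic numbers stand in for historical records, and the "model" is just a nearest-profile comparison; no real admissions system works exactly this way, but the failure mode is the same:

```python
import statistics

# Hypothetical historical records: (test_score, extracurricular_count).
# Suppose one group historically had better access to test prep, so the
# "successful student" data is dominated by high-test-score profiles.
historical_successes = [
    (1500, 2), (1480, 1), (1520, 2), (1490, 3), (1510, 1),
]

# The model "learns" a success profile: the average of past successes.
profile = tuple(statistics.mean(col) for col in zip(*historical_successes))

def score(applicant: tuple[float, float]) -> float:
    """Lower is better: distance from the learned success profile."""
    return sum((a - p) ** 2 for a, p in zip(applicant, profile)) ** 0.5

# Two strong applicants whose records are expressed differently:
prepped   = (1500, 2)   # matches the majority pattern in the data
unprepped = (1350, 6)   # strong record, different shape
print(score(prepped), score(unprepped))
# The second applicant scores far worse purely because the training
# data encoded one group's pattern as the definition of success.
```

The point of the sketch is that the bias lives in the data, not in any line of code that mentions a group: the algorithm simply rewards resemblance to whoever succeeded before.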
This is why experts stress the importance of having a diverse group of stakeholders throughout the design, development, and operation of the software.[3] Diverse input can help offset some of the biases that may be built into algorithms. Many admissions professionals also stress that AI should not replace the human element of the application process. There will still be a need for holistic admissions approaches that evaluate extracurricular activities, personal qualities, and creative talent to find students who are the right fit for the school. Researchers are beginning to study in more depth both the positive and negative effects AI can have on diversity and inclusion efforts. The hope is that AI will work together with these holistic methods to continue to build diverse classes and provide access to higher education to underrepresented populations.
[1] Osoba, O. A., & Welser, W., IV. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. Santa Monica, CA: RAND Corporation. doi: https://www.rand.org/pubs/research_reports/RR1744.html
[2] Ram, A. (2018, May 31). AI risks replicating tech’s ethnic minority bias across business. Financial Times.
[3] Guillory, D., & Scherer, M. (2017). Building an unbiased AI: End-to-end diversity and inclusion in AI development. Lecture presented at Artificial Intelligence Conference in CA, San Francisco. Retrieved from https://conferences.oreilly.com/artificial-intelligence/aica-2017/public/schedule/detail/60444
This article originally appeared in the fall 2018 issue of Perspectives.