- Fairness Through Unawareness: Excluding sensitive attributes to prevent bias is known as “anti-classification” and is enforced in EU law through data-protection regulations. However, seemingly innocuous attributes can still be used to produce discriminatory decisions.
- Chatty Proxies: Non-sensitive attributes may be strongly correlated with sensitive attributes and thus serve as substitutes, or proxies, for them. This makes discrimination hard to detect and prevent without actual knowledge of the sensitive attributes.
- Enter Big Data: Decisions in AI systems are based on hundreds or even thousands of attributes whose correlations are strong and often not apparent to humans. Even after sensitive attributes are removed, these complex correlations may still encode the protected information.
- Can You Anonymize a Resume? AI systems can discriminate indirectly through seemingly innocuous proxy variables that correlate with sensitive attributes, so it is difficult to establish a degree of “unawareness” sufficient to guarantee discrimination-free decisions.
- Active Fairness: Instead of relying on “fairness through unawareness,” active fairness takes a proactive approach: it measures discrimination and compensates for it, reducing its impact rather than ignoring it. Active fairness can improve decision-making and increase trust in AI.
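
The proxy effect described above can be sketched with a small, entirely hypothetical synthetic dataset: even after the sensitive attribute is dropped, a single correlated proxy feature (think of a zip-code-derived feature) is enough to recover group membership.

```python
# Minimal sketch (hypothetical synthetic data): removing the sensitive
# attribute does not remove the information if a correlated proxy remains.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g., group membership), which we will "remove".
group = rng.integers(0, 2, size=n)

# A seemingly innocuous proxy that is strongly correlated with the group.
proxy = group + rng.normal(0.0, 0.3, size=n)

# A model trained only on the proxy can recover group membership;
# here a simple threshold acts as a one-feature classifier.
recovered = (proxy > 0.5).astype(int)
accuracy = (recovered == group).mean()
print(f"group recovered from proxy alone, accuracy: {accuracy:.2f}")
```

With this noise level the proxy alone recovers the group for the vast majority of instances, illustrating why “unawareness” of the sensitive attribute is not a guarantee of fairness.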
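
One concrete instance of such a proactive approach is reweighing (Kamiran &amp; Calders, 2012): training instances are assigned weights so that, under the weighted distribution, the sensitive attribute and the label become statistically independent. The sketch below uses hypothetical synthetic data with a deliberately biased label.

```python
# Minimal sketch of reweighing, one "active fairness" technique:
# weight each (group, label) cell by expected / observed probability
# so the weighted positive rate is equalized across groups.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
group = rng.integers(0, 2, size=n)  # sensitive attribute
# Biased labels: the positive rate depends on the group (0.3 vs 0.7).
label = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)

weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        p_expected = (group == g).mean() * (label == y).mean()
        p_observed = mask.mean()
        weights[mask] = p_expected / p_observed

# After reweighing, the weighted positive rate is the same in both groups.
for g in (0, 1):
    m = group == g
    rate = np.average(label[m], weights=weights[m])
    print(f"group {g}: weighted positive rate {rate:.2f}")
```

Feeding these weights into a weighted learner trains a model on a distribution in which the measured discrimination has been compensated for, rather than hidden by dropping columns.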