Active fairness is better than unawareness (blindness)

  1. Fairness Through Unawareness: The idea of excluding sensitive attributes to prevent bias is known as “anti-classification” and is enforced in EU law through data protection regulations. However, seemingly innocuous attributes can still be used to produce discriminatory decisions.
  2. Chatty Proxies: Non-sensitive attributes are sometimes so strongly linked to sensitive attributes that they act as substitutes, or proxies, for them. This makes discrimination hard to detect and prevent without actual knowledge of the sensitive attributes.
  3. Enter Big Data: Decisions in AI systems are based on hundreds or even thousands of attributes whose correlations are not always apparent to humans. Therefore, even after removing sensitive attributes, complex correlations in the data may continue to provide links to protected information.
  4. Can You Anonymize a Resume? AI systems can discriminate indirectly through unsuspicious proxy variables that correlate with sensitive attributes. It is difficult to establish a sufficient degree of “unawareness” to guarantee discrimination-free decisions.
  5. Active Fairness: Instead of “fairness through unawareness,” active fairness is a better solution. It is a proactive approach that attempts to compensate for discrimination, reducing its impact rather than ignoring it. Active fairness can improve decision-making and increase trust in AI.
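The contrast between a “blind” decision rule and an actively corrected one can be sketched with synthetic data. In this sketch, the population, the 90% proxy correlation, and all names (`member`, `zip_code`) are illustrative assumptions, not details from the original text:

```python
import random

random.seed(0)

# Synthetic applicants: the sensitive attribute `member` is withheld from
# the "blind" rule, but a proxy feature `zip_code` correlates with it.
applicants = []
for _ in range(10_000):
    member = random.random() < 0.5
    # Illustrative assumption: 90% of protected-group members live in
    # zip 1, while 90% of everyone else lives in zip 0.
    zip_code = 1 if (random.random() < 0.9) == member else 0
    applicants.append((member, zip_code))

def approve_blind(zip_code):
    """Fairness through unawareness: the rule never sees `member`."""
    return zip_code == 0

def approve_active(member, zip_code):
    """Active fairness (sketch): knowing the attribute, apply a
    group-specific rule so both groups face comparable approval odds."""
    return zip_code == (1 if member else 0)

def rate(rule, group):
    subset = [(m, z) for m, z in applicants if m == group]
    return sum(rule(m, z) for m, z in subset) / len(subset)

rate_member = rate(lambda m, z: approve_blind(z), True)
rate_nonmember = rate(lambda m, z: approve_blind(z), False)
active_member = rate(approve_active, True)
active_nonmember = rate(approve_active, False)

print(f"blind rule  - members: {rate_member:.2f}, others: {rate_nonmember:.2f}")
print(f"active rule - members: {active_member:.2f}, others: {active_nonmember:.2f}")
```

Even though the blind rule never touches the sensitive attribute, it approves the two groups at very different rates, because the proxy carries the protected information; the group-aware rule closes that gap.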
