As businesses continue to embrace automated tools to make operations easier and more efficient, employers and employees must remain vigilant against biases that are embedded in these tools. AI biases may be particularly concerning for employees who work for companies that use so-called bossware.
AI in bossware
More of today’s workers perform their jobs remotely and without direct physical oversight than ever before. Because of this, some employers are turning to bossware to monitor their employees’ productivity by tracking their location, internet activity, and even keystrokes.
Employees may already feel that bossware is invasive, but these tools can be even more problematic when they are biased.
But how can AI be biased? However complex these programs may be, they generally rely on algorithms built by humans and trained on data that humans selected. When that data reflects existing biases – for example, past performance ratings that undervalued certain groups – the system learns and reproduces those same patterns. In other words, imperfect people build AI on imperfect data, and the resulting tools can facilitate discrimination in the workplace.
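To make the mechanism concrete, here is a minimal, purely hypothetical sketch (the data, group labels, and scoring logic are invented for illustration, not taken from any real bossware product). A toy "productivity model" learns from historical manager ratings; because those ratings already under-rated one group despite identical measured activity, the model reproduces the gap.

```python
# Hypothetical illustration: a toy productivity model trained on
# historical ratings that already encode bias. All data is invented.
from statistics import mean

# Past records: (group, keystrokes_per_hour, manager_rating).
# Group B produced the same measured output but received lower ratings.
history = [
    ("A", 5000, 4.5), ("A", 4800, 4.4), ("A", 5100, 4.6),
    ("B", 5000, 3.5), ("B", 4900, 3.4), ("B", 5200, 3.6),
]

def train(records):
    """'Learn' a per-group baseline from past ratings -- a stand-in
    for a real model fitted on biased labels."""
    by_group = {}
    for group, _, rating in records:
        by_group.setdefault(group, []).append(rating)
    return {g: mean(ratings) for g, ratings in by_group.items()}

model = train(history)

def predict(group, keystrokes_per_hour):
    # The prediction inherits the historical rating gap rather than
    # reflecting the worker's actual measured activity.
    return model[group]

# Identical activity, different scores -- the bias survives training.
print(predict("A", 5000))
print(predict("B", 5000))
```

Nothing in the model examines actual output; it simply echoes the biased labels it was trained on, which is exactly how a monitoring tool can disadvantage certain workers without anyone intending it to.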
Tackling real-world consequences of AI bias
When AI tools are faulty, it can be the employer’s responsibility to address the problem. Employers may need to adjust the algorithm or stop using the tool altogether.
Agencies like the Department of Justice and the Equal Employment Opportunity Commission have issued warnings and guidance regarding AI use, but issues continue – and will continue – to arise. In some cases, employers who misuse AI or fail to take proper precautions against bias can face fines, penalties and even lawsuits brought by victims of discrimination.
The fact is that some employers misuse tools like artificial intelligence because they either don’t understand them well enough or don’t expect there to be consequences. Whatever the case may be, an employer that uses tools that discriminate against certain workers can be held liable for any rights violations or damages that result.