AI for Business: Adoption challenges - legal, societal and ethical considerations

This post is part of the AI for Business series

Shedding light on AI from a non-technical business/product/design perspective.

  1. A history of Artificial Intelligence
  2. How to harness AI
  3. AI and automation
  4. AI and cognitive insight
  5. AI and cognitive engagement
  6. Evolution V revolution
  7. Adoption Challenges - People
  8. Adoption Challenges - Legal, Societal & Ethical considerations
  9. Implementation strategy

This is post #8 in our AI for Business series.

After reading last week’s blog, you should hopefully now be in a position to get the people in your organisation onside with your AI-powered transformation. While that is a major hurdle to have overcome, the journey is not yet complete. Like Frodo, you have left the Shire, formed the fellowship and trudged through the Dead Marshes. But you still need to scale Mount Doom.

In our case, Mount Doom represents the abundance of legal, societal and ethical implications surrounding the adoption of AI technology. Let’s take a look at a few of those in this article.


Regulation

AI regulation is in a constant state of flux, and rules can differ greatly from one geographical region to another. This creates an uncertain and shifting legal landscape.

Staying on top of this will incur costs for your organisation, both for legal counsel and, potentially, for development if your systems need to be changed or withdrawn when more stringent regulations are introduced in a given geography. You will need to set aside resources (time, money and people) to account for this.


Bias and Discrimination

AI systems are trained on massive amounts of data, and embedded in that data are societal biases. Consequently, these biases can become ingrained in AI algorithms, perpetuating and amplifying unfair or discriminatory outcomes.

Let’s take an example from thoughtburgers (our delectable, AI-powered and entirely fictitious side-business, home to the tastiest and most thought-provoking burgers!). In blog #3 of this series, we looked at how Cognitive RPA could be used to automate delivery driver scheduling. But if we allow this system to schedule drivers without human oversight, we could be exposing ourselves to some large-scale problems down the line.

For example, a delivery driver may have religious commitments that make them unavailable to work on certain days, especially busy weekends or holidays. Our scheduling algorithm might notice this pattern and start taking it into account, reducing the number of shifts the driver is given. Cutting a driver’s shifts on religious grounds is a serious breach of ethics and is illegal in many countries.

Similarly, a model might favour experienced drivers over new ones because experienced drivers have more historical data to their name. But new drivers need shifts to gain that experience, creating a feedback loop that a biased algorithm only reinforces.

It is even possible to overcompensate for bias in a model. It is therefore crucial that systems are monitored, regularly tested and updated to ensure that harmful biases have not crept in and that they are performing as expected.
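As a purely illustrative sketch (the data, column names and the 0.8 threshold below are all hypothetical, not from a real thoughtburgers system), a periodic fairness check might compare shift allocation across driver groups:

```python
import pandas as pd

# Hypothetical shift-assignment log: one row per driver.
# All names and figures here are invented for illustration.
assignments = pd.DataFrame({
    "driver_id": [1, 2, 3, 4, 5, 6],
    "group": ["A", "A", "A", "B", "B", "B"],  # e.g. a protected characteristic
    "shifts_offered": [10, 9, 11, 6, 5, 7],
})

# Compare the average number of shifts offered to each group.
by_group = assignments.groupby("group")["shifts_offered"].mean()
ratio = by_group.min() / by_group.max()

print(by_group)
print(f"Disparate impact ratio: {ratio:.2f}")

# The "four-fifths rule" is a common rule of thumb: a ratio below 0.8
# is a signal worth investigating, not proof of discrimination.
if ratio < 0.8:
    print("Potential bias detected: review the scheduling model.")
```

A check like this belongs in your regular monitoring pipeline rather than in a one-off audit, so that drift is caught early.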


Transparency and Accountability

AI systems often operate as a “black box”, offering little insight into how they work or how they arrive at a given decision. Transparency is therefore vital for ascertaining how decisions are made.

Transparency is also essential for establishing who bears responsibility when AI systems make errors or cause harm, so that appropriate corrective action can be taken. For example, let’s say our thoughtburgers order triaging system, which we spoke about in blog #4 of this series, makes an error: it incorrectly reduces the number of delivery orders it sends to a particular branch. With this reduction in sales, the branch is eventually forced to close. Before we realise the error, the branch is gone and those folks have lost their jobs. Who is culpable for this closure?

Incorporating “explainable AI” to help characterise a model’s fairness, accuracy and potential bias will go a long way towards avoiding such questions, as will continuously monitoring and updating your systems.
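To make this more concrete, here is a minimal sketch of what explainability tooling can look like in practice, using the open-source SHAP library on a toy stand-in for our (entirely fictitious) order triaging model. The feature names and data are invented for illustration:

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented features for a toy order-triaging model.
X = pd.DataFrame({
    "branch_distance_km": [1.2, 4.5, 0.8, 3.1, 2.0, 5.2],
    "branch_backlog":     [3, 10, 1, 7, 4, 12],
    "order_size":         [2, 1, 4, 3, 2, 1],
})
y = [1, 0, 1, 0, 1, 0]  # 1 = route the order to this branch

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, so you can
# see *why* an order was (or was not) routed to a particular branch.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values)
```

Per-prediction attributions like these help you answer “why did the system cut orders to this branch?” before the damage is done.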


Privacy, Security, and Surveillance

The effectiveness of AI often hinges on the availability of large volumes of personal data. As AI usage expands, concerns arise regarding how this information is collected, stored, and utilised.

Ideally, you should give your customers control over whether their data can be used to train models, for example by providing an opt-out option.
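In practice, this can be as simple as recording a consent flag and honouring it wherever training data is assembled. A minimal sketch, with hypothetical field names:

```python
# Hypothetical customer records, each carrying a consent flag.
customers = [
    {"id": 1, "order_history": ["burger", "fries"], "training_opt_out": False},
    {"id": 2, "order_history": ["shake"], "training_opt_out": True},
    {"id": 3, "order_history": ["burger"], "training_opt_out": False},
]

# Only customers who have not opted out contribute to the training set.
training_set = [c for c in customers if not c["training_opt_out"]]
print(f"Training on {len(training_set)} of {len(customers)} customers")
```

However you store consent, the important thing is that the flag is checked at the point where training data is assembled, not merely recorded.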

Robust safeguards against data breaches and unauthorised access to sensitive information are also crucial. These may mean higher cybersecurity costs for your company, which should be factored in during the research phase of your AI project.

If privacy is particularly important to your organisation, check out this article about running an open-source LLM locally (or self-hosting an LLM). Generally, this approach raises fewer privacy concerns than using ChatGPT.


Job Displacement

While the optimistic view is that AI will augment humans, it is inevitable that some job displacement will occur.

For example, in blog #6 of this series we looked at the potential to use drones to supplement our food delivery offering at thoughtburgers. If thoughtburgers were to pursue this strategy, we would need fewer regular delivery drivers on the ground.

While there will be opportunities to retrain some staff as drone handlers who complete pre-flight checks and attach deliveries to drones, or as a “Director of Drones” responsible for managing the fleet, we will likely still need fewer delivery drivers overall. Automation like this tends to disproportionately impact those from poorer socio-economic backgrounds, further exacerbating existing economic inequalities.

Addressing the impacts of job displacement requires proactive measures. In thoughtburgers’ case, this could include retraining programs for displaced drivers and advocacy for policies that facilitate far-reaching social and economic support systems.

Consider what steps your organisation needs to take to mitigate the negative societal implications of your adoption of AI. Can you roll this technology out in the least harmful way possible? Doing so would likely help your team get behind the initiative.


Bottom line:

There are some weighty legal, societal and ethical considerations to adopting AI for your organisation.

AI, security and privacy regulations are in a constant state of flux, so be prepared for the goalposts to shift in these areas and budget resources accordingly. Make sure to regularly test and check your algorithms to ensure they are performing as expected and that no harmful biases have crept in. The more transparent and explainable your models are, the better, both for your peace of mind and for accountability if something goes wrong.

Consider also the wider societal impact of your decision to adopt AI and whether you can mitigate some of the negative impacts it can bring. A fragmented and broken society, after all, is good for no business in the long run.

If the ethical dimensions of AI adoption are of particular interest to you but it all feels a little overwhelming, check out this article about thoughtbot’s AI ethics guide as a starting point.


💡 If you’re ready to start using AI to transform your business, thoughtbot would love to work with you. Let’s talk about making your AI initiative a success!

This blog post is part of a series based on the course Artificial Intelligence (AI) for Business run by University College Dublin (UCD). I took this course and found it so helpful that I’m recapping my top insights. thoughtbot has no affiliation with UCD.