When Transform to Thrive host (and our CEO) Lisa Bragg sat down with WeirFoulds LLP Partner James Kosa to talk about artificial intelligence on LinkedIn Live, there was little fearmongering and a lot of great discussion about bias, proprietary data, and the future of employment. Catch some highlights of their talk below, and view the full episode here.
The transformation is rapid – but there’s a way to assess success
The fourth and fifth industrial revolutions are happening back-to-back, remarks Lisa, and with them come sweeping changes to our way of life – including our way of work. James notes that when WeirFoulds looks to incorporate machine-learning technology, or recommends it to clients, they need to settle on the right combination of people, data, talent, and technology. If any of those is missing, the adoption of that AI tool will likely fail.
Privacy in multiple respects
Lisa raises the issue of personal privacy when you're supplying an AI program with data. James adds that if personal information is involved, it introduces a whole other layer of complexity. That said, data is also valuable and proprietary in its own right, because what you teach the AI is likely unique to your organization – and you don't want that "brain" you've built to be sold to a competitor.
What errors – human and machine – to watch for
While artificial intelligence might not be as "intelligent" as people yet, it does avoid the human errors of fatigue, distraction, and boredom. Plenty of work in the legal profession – like litigation or due-diligence reading – can invoke those human reactions. Where AI can fail is when its algorithm isn't calibrated to produce the results you want, so human judgement is still needed to check whether the AI is making any systemic or one-off errors.
Bias is the enemy of being the company you want to be
Building an AI algorithm carries the risk of introducing personal or systemic biases. When instructing a machine, you're teaching it to produce the results you want to have, not just the ones you happen to have right now. James uses the example of immigration software: if it learns that people with red hair were categorically rejected over time, they'll continue to be rejected unless you teach the machine differently. Lisa adds that the echo chamber needs to be avoided, because we need different perspectives.
Thanks to James for lending his AI insights on our show!
If you'd like to tune in to the next episode, follow Lisa on LinkedIn to be notified as soon as it airs.