By Joanne Taaffe


Be cautious with AI, urge 4YFN panellists

Danae Vara, product director at Red Points, questioned whether there should be a limit on the use of AI, arguing the technology should only be given small tasks to make life easier.

Speaking on a panel about why trust is a major reason to push AI ethics, experts noted a key issue is whether the data AI is trained on leads it to make decisions that reinforce society's racial and gender biases.


Vara said: “Right now, we should trust the technology the way you’d trust a kid. You can trust a child to choose a yellow crayon, but not to make decisions about other people’s futures.”


Vara noted Microsoft ran into problems in 2016 when it launched a conversational chatbot on Twitter. "It took [the AI tool] less than a day to become a horrible bully because of the data it fed on," she said.


Ethical issues


Also speaking on the panel at the 4YFN event, Christian Guttmann, VP and global head of AI and data science at TIETO, noted the ethical use of AI is set to become a big question for businesses and governments.


“AI is the driving force of this industrial revolution. It will change a lot of things and implies a shift of power and [changes] in the economy. It’s not a big bang, it will be slow and iterative.”


AI reconfigures itself based on what it has learned, which makes an AI tool’s future decisions unpredictable.

“If it’s a question of guaranteeing AI won’t do things, that’s not possible,” said Guttmann. “But you can put the means in place to evaluate its decisions.”


Another potential source of unintended bias is the developers who build AI systems.

“Who is doing AI? It’s a very small group of people who understand the mathematics of machine learning,” said Guttmann, although he added that professionals from other disciplines, including psychologists, are active in the field.