Facial recognition artificial intelligence can be deeply flawed, but it is improving all the time, according to Israeli company AnyVision, which creates products and algorithms used to control entry to airports, hospitals and casinos, and to facilitate state border crossings.
Active in 45 countries, AnyVision recently presented its data-collection technology at the European Conference on Computer Vision (ECCV) 2020, which ended last week.
The company ran a Fair Face Recognition Workshop to see whether it was possible to reduce the bias that AI systems exhibit. Among the 10 winning teams, the measured bias was so low as to be almost negligible.
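Bias in face recognition is typically quantified by comparing error rates, such as the false non-match rate (FNMR) and false match rate (FMR), across demographic groups; the "almost negligible" bias of the winning teams refers to gaps of this kind. Below is a minimal sketch of such a comparison, assuming a verification system that outputs similarity scores; the group labels, scores and decision threshold are hypothetical illustrations, not the workshop's actual protocol.

```python
# Sketch: measuring demographic bias in a face-verification system by
# comparing error rates across groups. All data below is illustrative.
from collections import defaultdict

def error_rates_by_group(pairs, threshold):
    """pairs: iterable of (group, similarity_score, is_same_person)."""
    stats = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for group, score, same in pairs:
        s = stats[group]
        if same:
            s["gen"] += 1
            if score < threshold:   # genuine pair wrongly rejected
                s["fnm"] += 1
        else:
            s["imp"] += 1
            if score >= threshold:  # impostor pair wrongly accepted
                s["fm"] += 1
    return {
        g: {"FNMR": s["fnm"] / s["gen"] if s["gen"] else 0.0,
            "FMR": s["fm"] / s["imp"] if s["imp"] else 0.0}
        for g, s in stats.items()
    }

# Toy verification results: (group, score, is_same_person)
pairs = [
    ("group_a", 0.91, True), ("group_a", 0.35, False),
    ("group_a", 0.55, True), ("group_b", 0.88, True),
    ("group_b", 0.62, False), ("group_b", 0.40, True),
]
rates = error_rates_by_group(pairs, threshold=0.6)
# A simple bias measure: the largest FNMR gap between any two groups.
fnmrs = [r["FNMR"] for r in rates.values()]
print(rates)
print("FNMR gap across groups:", max(fnmrs) - min(fnmrs))
```

Driving the largest inter-group gap toward zero is, operationally, what it means for a system's bias to become "almost negligible."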
“In the early days of facial recognition, companies would use bad data,” AnyVision researcher Dr. Eduard Vazquez told The Jerusalem Post. “You can’t use bad data and get good results. An AI should never be used to make decisions such as whom to arrest or whether someone is guilty of a crime.”
Such tools can be properly used only in a society with a functioning legal system and basic human rights, he told the Post.
Vazquez was careful to point out that just as it is wrong to say machines are biased, it is wrong to say machines are free of bias. Even so, he argued, AI systems are less biased than people.
“It’s a tool that we are attempting to improve all the time,” he said. “It is not meant, and should never be used, to make decisions on its own.”
In dystopian science fiction, there are stories about people being unable to prove they are who they claim to be. In the real world, Vazquez said, there are other ways to prove one’s identity beyond a scan of one’s face, such as calling the office and asking to be let in.
“If an AI system is biased, it is a bad system,” he concluded. “You can’t put a system that was not tested well into the real world.”
“If you are biased yourself,” he added, “you will also have a poor system,” no matter the technology you use.