We need to open artificial intelligence's "black box" before it is too late


For several years in the 1980s, students at St George's Hospital Medical School in London were selected using a high-tech method. A computer program, one of the first of its kind, screened résumés, making an initial cut from the roughly 2,000 applications received each year. The program analyzed admission records, learning the characteristics of successful applicants, and was tuned until its decisions matched those of the admissions committee.

However, the program learned to look for more than good grades and academic achievement. Four years after it was put in place, two doctors at the hospital discovered that the program tended to reject female candidates and applicants with non-European names, regardless of their academic merit. They found that about 60 applicants every year were denied an interview simply because of their sex or ethnicity. The program had absorbed the gender and racial biases present in the data used to train it — in effect, it had learned that women and foreigners were not good candidates to become doctors.

Thirty years later we face a similar problem, but now the internally biased programs are far more widespread and make decisions with even higher stakes. Artificial intelligence algorithms based on machine learning are used everywhere, from government institutions to healthcare, making decisions and predictions based on historical data. In learning the patterns in that data, they also absorb the biases in it. Google, for example, shows ads for highly paid jobs to men more often than to women; Amazon's one-day delivery bypasses predominantly black neighborhoods; and digital cameras struggle to recognize faces that are not white.

It is difficult to tell whether an algorithm is biased or fair — even computer experts find it so. One reason is that the details of an algorithm are often treated as proprietary information and closely guarded by their owners. In more complex cases, the algorithms are so intricate that even their creators do not know exactly how they work. This is the problem of the AI "black box" — our inability to see inside an algorithm and understand how it reaches a decision. If it is left locked, our society could be seriously harmed: digital systems could entrench the historical discrimination we have fought against for many years, from slavery and serfdom to discrimination against women.

These concerns, previously voiced only in small computer-science communities, are now gaining serious momentum. Over the past two years the field has produced a substantial body of work on transparency in artificial intelligence. With that growing awareness comes a growing sense of responsibility. "Are there some things that we simply should not build?" asks Kate Crawford, a researcher at Microsoft and co-founder of the AI Now Institute in New York.

"Machine learning has finally come to the forefront. Now we are trying to use it for hundreds of different tasks in the real world," says Rich Caruana, a senior researcher at Microsoft. "It is possible that people will deploy harmful algorithms that significantly affect society in the long term. Now, it seems, everyone has suddenly realized that this is an important new chapter in our field."

Unregulated algorithms

We have been using algorithms for a long time, but the black-box problem is unprecedented. The first algorithms were simple and transparent. Many of them we still use — for example, to assess creditworthiness. And with each new use came regulation.

"People have used algorithms to assess creditworthiness for decades, but in those areas there was fairly strong regulation, which grew up in parallel with the use of predictive algorithms," Caruana says. The rules ensure that prediction algorithms come with an explanation for every score: you were denied because your debt is too large, or your income is too low.

In other areas, such as the legal system and advertising, there are no rules prohibiting the use of deliberately inscrutable algorithms. You may never know why you were denied a loan or did not get the job, because no one forces the owner of the algorithm to explain how it works. "But we know that because the algorithms are trained on real-world data, they must be biased — because the real world is biased," says Caruana.

Consider language, for example — one of the most obvious sources of bias. When algorithms are trained on written text, they form associations between words that frequently appear together. They learn, for instance, that "man is to computer programmer as woman is to homemaker." When such an algorithm is then asked to find a good résumé for a programming job, it is likely to favor male candidates.
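To make this concrete, here is a minimal sketch of probing a pretrained word embedding for exactly this kind of association. It assumes the gensim library and its downloadable glove-wiki-gigaword-100 vectors; any pretrained embedding would do, and the exact completions depend on the corpus the vectors were trained on.

```python
# A minimal sketch of probing word-embedding analogies for gender bias.
# Assumes gensim is installed; the model name is one of gensim's standard
# downloadable vector sets (downloaded on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Solve the analogy "man is to programmer as woman is to ???" by vector
# arithmetic: programmer - man + woman, then look at the nearest words.
for word, score in vectors.most_similar(
        positive=["woman", "programmer"], negative=["man"], topn=5):
    print(f"{word:15s} {score:.3f}")

# Embeddings trained on real-world text tend to surface stereotyped
# completions here; the exact words vary with the corpus and model.
```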

Such problems are relatively easy to fix, but many companies simply do not bother. Instead, they hide these flaws behind the shield of proprietary information. Without access to the details of an algorithm, experts in many cases cannot determine whether it is biased or not.

Because these algorithms are secret and remain beyond the jurisdiction of regulators, it is almost impossible for citizens to sue their creators. In 2016, the Wisconsin Supreme Court rejected a request to examine the inner workings of COMPAS. One man, Eric Loomis, had been sentenced to six years in prison partly because COMPAS rated him "high risk." Loomis argued that his right to due process was violated by the judge's reliance on an opaque algorithm. A final petition for a hearing before the United States Supreme Court failed in June 2017.

But secretive companies will not enjoy this freedom indefinitely. By March, the EU will adopt laws requiring companies to be able to explain to interested customers how their algorithms work and how they reach decisions. The US has no such legislation in development.

Black-box forensics

Whether or not regulators get involved, a cultural shift in how algorithms are developed and deployed could reduce the prevalence of biased algorithms. As more companies and programmers commit to making their algorithms transparent and understandable, some hope that those who do not will lose their good reputation in the eyes of the public.

The growth of computing power has made it possible to create algorithms that are both accurate and interpretable — a technical challenge that developers historically could not overcome. Recent research shows that it is possible to build interpretable models that predict criminal recidivism just as accurately as black boxes such as COMPAS.
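As an illustration of the idea — not the models from the cited research — the following sketch trains a plain logistic regression, whose coefficients can be read off directly, alongside a black-box gradient-boosting model on the same hypothetical recidivism data and compares their accuracy. The file name and feature columns are placeholders.

```python
# Sketch: compare an interpretable model with a black box on the same task.
# Dataset and feature names are hypothetical; the point is only that the
# transparent model's accuracy can be checked side by side with the opaque one.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("recidivism.csv")                     # hypothetical dataset
X = df[["age", "priors_count", "juvenile_count"]]      # assumed numeric features
y = df["reoffended_within_two_years"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Transparent model: every coefficient is inspectable.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box baseline for comparison.
opaque = GradientBoostingClassifier().fit(X_train, y_train)

for name, model in [("logistic regression", simple), ("gradient boosting", opaque)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")

# The interpretable model exposes its reasoning directly:
print(dict(zip(X.columns, simple.coef_[0])))
```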

"Everything is ready — we know how to build models without black boxes," says Cynthia Rudin, associate professor of computer science and electrical engineering at Duke University. "But it is not so easy to draw people's attention to this work. If government agencies stopped paying for black-box models, that would help. If judges refused to use black-box models for sentencing, that would help too."

Others are working on ways to verify the validity of algorithms, creating a system of checks and balances before an algorithm is released into the world, much as every new drug is tested.

"Models are now built and deployed too quickly. There is no proper vetting before an algorithm is released into the world," says Sarah Tan of Cornell University.

Ideally, developers should strip out known sources of bias — such as gender, age, and race — and run internal simulations to stress-test their algorithms for other problems.
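One such internal check might look like the sketch below: train the model without the protected attributes, then compare predicted outcomes across those groups anyway. The dataset and column names are hypothetical, and dropping the columns alone is not enough in practice, since proxy variables can leak the same information.

```python
# Sketch of one internal audit: train without protected attributes, then
# compare predicted "positive" rates across groups (a demographic-parity check).
# The DataFrame and column names are hypothetical; remaining features are
# assumed to be numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("applications.csv")        # hypothetical hiring dataset
protected = ["gender", "age", "race"]

X = df.drop(columns=protected + ["hired"])  # features the model may use
y = df["hired"]

model = LogisticRegression(max_iter=1000).fit(X, y)
df["predicted"] = model.predict(X)

# Compare selection rates per protected group.
for col in protected:
    print(df.groupby(col)["predicted"].mean())
```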

Meanwhile, until we reach the point where every algorithm is thoroughly tested before release, there are already ways to determine which of them suffer from bias.

In their latest work, Tan, Caruana, and their colleagues described a new way to understand what might be happening under the hood of black-box algorithms. The researchers built a model that mimics the black box, training it on the risk-of-recidivism scores produced by COMPAS. They also built a second model, trained on real-world data showing whether the predicted recidivism actually occurred. Comparing the two models allowed the researchers to assess the accuracy of the predicted scores without dissecting the algorithm itself. Differences between the results of the two models can reveal which variables, such as race or age, matter more in a particular model. Their results suggested that COMPAS is biased against black defendants.
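A rough sketch of that two-model comparison, using hypothetical data and scikit-learn stand-ins for the transparent models used in the research, might look like this:

```python
# Sketch of the two-model comparison described above, on hypothetical data
# containing case features, the black box's risk score, and the observed
# outcome. Both stand-in models here are transparent logistic regressions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("compas_audit.csv")   # hypothetical audit dataset
X = pd.get_dummies(df[["age", "priors_count", "race", "sex"]], drop_first=True)

# Model 1: mimic the black box. Treating score >= 5 as "high risk" is a
# hypothetical cutoff chosen only for this sketch.
mimic = LogisticRegression(max_iter=1000).fit(X, df["compas_score"] >= 5)

# Model 2: the same features, trained on what actually happened.
outcome = LogisticRegression(max_iter=1000).fit(X, df["reoffended"])

# Variables whose weights diverge between the two models are the ones the
# black box appears to treat differently from real-world outcomes.
for name, w_mimic, w_real in zip(X.columns, mimic.coef_[0], outcome.coef_[0]):
    print(f"{name:25s} mimic={w_mimic:+.3f}  outcome={w_real:+.3f}")
```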

Properly constructed algorithms could help dismantle long-established prejudices in criminal justice, policing, and many other areas of society.