Saturday, 18 January 2020

Week 7 [20-26.01.20] Does AI have to care about human values?

There is a long way to go before we will have to consider this question seriously in its full meaning, in terms of thinking machines with their own morality.

But even now, we can't forget about biases in data sets that AI models are being trained on.
AI models are more and more often used in hiring, medicine, scientific analysis, public policy, the judiciary and banking.
G. Irving and A. Askell in https://distill.pub/2019/safety-needs-social-scientists/ warn that the problem of bias
and the importance of value alignment will grow together with advances in AI systems.

Please read the article and answer the following questions:
1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.
2. Have you heard about any cases where people became victims of automated decision-making systems?
3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

20 comments:

  1. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    Statistically, a man and a dog have three legs between them. So much for statistics. When it comes to people's lives, I don't think that, at least for the moment, making statistical decisions about their lives or functioning is a good idea. It's enough that if, statistically, 30 chickens in a voivodeship have avian flu, 60,000 animals get culled. There are no two identical people and no two identical crimes. So deciding whether to convict or acquit someone on the basis of statistical evidence is strange. Even ordinary clues generate problems, so what about statistics?

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    Each of us can point to cases of wrong decisions made by a machine. A playlist on YouTube? Does the automatically generated playlist always contain songs you like? Not necessarily! They usually just follow the same tag as what you played. The same applies to the selection of content on social media: not all of it interests you, and you skip it. Do you get the right job offers on pracuj.pl based on your own description of your preferences and character traits? Of course not. When you look for a car on otomoto, are you shown only the ones you had in mind? Hmm... That's enough.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    I don't know what the possibilities of such systems are. However, we will certainly use them more and more often. You just have to be aware that they can make mistakes, whether due to incorrect algorithms or due to intentional actions.

    ReplyDelete
    Replies
    1. I agree that users of AI/statistical decision-support systems should be aware that these systems make mistakes, perhaps even more often than expected.

      Funny thing, I've recently bought a Xiaomi "intelligent" scale. It measures a lot of things and provides some advice. While I believe the measurements are accurate, the recommendations the associated application makes based on them are suspicious at best, not to say nonsense. According to my common sense, I'd be seriously malnourished if I followed them.

      You also mentioned another important topic. Due to all these recommendation mechanisms on social media and websites, we really live in very tight bubbles. Maybe it's not that harmful if we're introduced to only a very small range of songs on YouTube, but I think the same goes for news and other content that shapes our beliefs and opinions.

      Delete
  2. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.
    Thank you for the interesting article. For me it is a very difficult topic; I have read the article and it is not easy to understand. I think that in some situations a statistical approach to making decisions regarding human lives is justified. Human knowledge is limited and we don't know everything; sometimes our prejudices can blind us. In some cases a statistical approach can help make the right decision. We have to think about an AI's decision and decide whether we trust it or not. In some situations an AI's decision could be better than ours.

    2. Have you heard about any cases where people became victims of automated decision-making systems?
    No, I haven't heard about any cases where people became victims of automated decision-making systems. I searched the internet and found an interesting article about bad decisions made by AI. Link below:
    https://www.eetasia.com/news/article/When-AI-Goes-Wrong

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?
    I think in this case we can use such a system as support for human decisions, or as a chatbot. There are fields where statistical systems could help, e.g. making the decision about the right colour for a room.

    ReplyDelete
    Replies
    1. Thanks for the link! Some examples, like facial recognition systems used by law enforcement that may unfairly bias officers, look really serious. I heard about one court in the US using AI to assist judges with verdicts. It turned out to be racially biased, as it found Black Americans guilty statistically more often than other suspects.

      Indeed, all use cases where AI supports human decisions and the consequences are short-term or innocuous may be considered safe.

      Delete
  3. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    In my opinion a statistical approach should be only a part of the decision-making process. Statistics, ML and AI are great tools for analyzing large amounts of data. It is not possible for a human to know all the data and all the sources that go into statistical algorithms, and thanks to the technology they can be included in the decision process, e.g. in a court or in a hospital. The second factor in the decision should be the human aspect. Statistics show common paths, but there are always exceptions and special circumstances that may slightly correct the results of an algorithm, so people should always have the final word.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    I did not, but frankly I wasn't much interested in the topic, so I've probably missed something. The only thing that comes to my mind right now is the 'Minority Report' movie, where technology predicts who will become a criminal in the future. You've probably seen it, because it was a very popular one.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    Maybe engineering? In all technical fields calculations and precision are important (AI/ML/statistical systems are great at them), and possible errors can be detected during tests.

    ReplyDelete
    Replies
    1. Yes, Minority Report is real food for thought. On one hand it seems like a perfect system for preventing crimes, but on the other you put human lives in the hands of an unknown mechanism that cannot be verified. It was also shown that the precognitives may be wrong. It is a very good example of using complex, unverifiable systems in socially crucial institutions.

      Delete
  4. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    I think that the statistical approach can work in some applications. Banks can use it to calculate the creditworthiness and risk of a loan. However, I don't think it should be used in such a serious case as investigating someone's guilt in a courtroom. Statistics are often deceptive, and in most cases, humans will behave differently from what AI predicts.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    I haven't heard of such cases yet. However, I think that many people may feel affected by the systems used by banks or insurance agencies.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    It's a very difficult question. We may be able to partially teach AI our motivations and feelings, but this will depend very much on the context in which we use it. What AI learns in one situation may be completely unsuitable for another. Only getting real intelligence could solve the problem, but that's probably beyond our reach.

    ReplyDelete
  5. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    We are still at the beginning of using AI capabilities, and in my opinion it is a little too early to consider entrusting life decisions to it. But if we are talking about the general idea of using AI to make human decision-making more objective, I'm in favor of this kind of solution. I think we should take into consideration how many errors people make nowadays in justice, administrative permits, disease diagnostics and other very important decisions. We should work on better AI models, learn from our mistakes and improve them. I hope that in the near future we reach the level where AI proposes a decision and a human only gives the final approval.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    I heard about a few of them. In all of the cases the problem lay in poorly diverse training data sets, which means it was the mistake of the analyst / data scientist who created the algorithms, not of how AI itself works. The first case took place during the testing of autonomous cars: the data set lacked a good representation of dark-skinned pedestrians, and because of this the cars weren't trained to treat these pedestrians with the needed caution. The second case concerned an automated recruitment process where, besides the (gender-blind) CV, audio data from the interview was taken into account. The algorithm's task was to recognize emotions, commitment and other candidate features from voice timbre; unfortunately it was biased with respect to the difference in voice pitch between women and men. This happened because women's voices had poor representation in the training data set, so their reactions were incorrectly tagged by the algorithm.
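    The aggregate-metric trap behind such cases can be shown with one line of arithmetic (all numbers made up for illustration): a model that fails on a small group can still look excellent overall.

```python
# Hypothetical evaluation set: 950 majority-group samples, 50 minority-group
# samples. Assume the model is right 98% of the time on the majority but
# only 40% of the time on the under-represented minority.
overall = (0.98 * 950 + 0.40 * 50) / 1000
print(round(overall, 3))  # 0.951: the headline accuracy hides a 60% minority error rate
```

    This is exactly why evaluating per group matters: the headline number alone never shows who the model fails.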

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    There are plenty of applications for AI where understanding of human values is not crucial. For example, any kind of alert system based on a threshold value: air quality, production parameters, stock prices. Another branch of algorithms that don't need to be human-like are optimization tasks, where the analyst only needs to set the goal of the process, and voilà. And last but not least, clustering algorithms that look for differences between groups; they are used, for example, in cancer diagnostics based on X-ray images.
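    A threshold alert of the kind mentioned above can be sketched in a few lines (the limit and the sensor readings are hypothetical, for illustration only):

```python
# Minimal sketch of a value-free threshold alert. The limit and the sensor
# readings below are made up for illustration.
AIR_QUALITY_LIMIT = 100  # assumed alert threshold

def check_readings(readings, limit=AIR_QUALITY_LIMIT):
    """Return only the readings that should trigger an alert."""
    return [r for r in readings if r > limit]

print(check_readings([42, 87, 133, 95, 210]))  # [133, 210]
```

    No human values are involved anywhere: the system only compares numbers against a limit someone chose up front.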

    ReplyDelete
    Replies
    1. You stated that problems related to poorly diverse data sets are the responsibility of the engineer who used them to create the AI system.

      I would agree with other commenters that the mentioned article isn't very clear, but for me the important point is that there are indeed biases in data sets, and at the same time not all of them are so obvious.

      The other problem is that sometimes biases in data sets reflect biases from the real world. Continuing the recruitment example: maybe there is a company favoring women. If a data scientist were hired there to support the recruitment process with historical data, should it be his responsibility to end this bias, or should he deliver a solution fitted to the data he has that follows the preferences of the end users? I think it's quite a deep problem.

      Delete
  6. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    In my opinion a statistical approach to making decisions regarding human lives should be well considered. Nowadays artificial intelligence is gradually becoming an essential part of life; it helps humans and should make their lives easier. I have recently joined a Thailand forum where people discuss and plan their trips to Thailand. It happens that somebody asks about vaccines: "Which vaccines do you recommend, and when should I get them before my trip?". I know that getting vaccinated or not is a personal decision, but some people answer along the lines of "I spent 3 months in Thailand, was bitten by mosquitoes many times and nothing bad happened, so I think vaccination is not necessary". With regard to the statistical approach, the fact that somebody has not experienced something does not mean that you won't either. To sum up, I think that a statistical approach should be only a part of the decision-making process. A hybrid approach, statistics plus the human aspect, seems promising. That's why I think AI could not replace a human being 100%.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    In this case, the example that immediately comes to my mind is the tragedy that occurred in the US in March 2018, when a pedestrian walking a bicycle across the road outside a crossing at night was hit by an Uber self-driving vehicle. Here's an article on it: https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html.
    As the investigation showed, the system recognized the person first as an obstacle, then as a pedestrian, then as a bicyclist, then as a car, but no measures were taken, because the engineers had set the detection threshold too high in order to avoid false positives, which had been occurring too often. The operator sitting in the car was busy with his phone at the moment and did not manage to do anything.
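    The trade-off those engineers faced can be sketched numerically (hypothetical confidence scores, not Uber's actual data): raising the alert threshold suppresses false alarms but also suppresses true detections.

```python
# Hypothetical detector confidence scores: the first list holds real hazards,
# the second holds noise. A single threshold trades false alarms for misses.
hazard_scores = [0.9, 0.8, 0.7, 0.6, 0.5]
noise_scores = [0.6, 0.5, 0.4, 0.3, 0.2]

def alarm_counts(threshold):
    """Return (real hazards detected, false alarms) at a given threshold."""
    detected = sum(s >= threshold for s in hazard_scores)
    false_alarms = sum(s >= threshold for s in noise_scores)
    return detected, false_alarms

print(alarm_counts(0.4))  # (5, 3): every hazard caught, but noisy
print(alarm_counts(0.7))  # (3, 0): no false alarms, but two hazards missed
```

    Picking the threshold is itself a value judgment about which kind of error is worse, which is the heart of the problem.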

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    As I wrote above, the hybrid approach combining the statistical approach with the human aspect seems promising for making decisions regarding human lives. I think that AI or statistical systems can be safely applied in recommendation systems, in root finding, and in science, for example where scientists compare genomes and calculate which are similar to each other.

    ReplyDelete
    Replies
    1. Thanks for the example from the Thailand forum. Are you going to Thailand? Krabi was recommended to me recently and I'm also considering going there sometime ;)

      Indeed, people are not wired to believe statistics or to think in Bayesian terms, but there are cases where it's helpful. I too think that every intelligent system should be used as a supportive mechanism, with a human always present in the loop.
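      A classic example of how counter-intuitive "thinking in Bayes" is (rates made up for illustration): a 99%-accurate test for a rare condition still produces mostly false alarms.

```python
# Bayes' theorem with hypothetical rates: a rare condition (0.1% prevalence)
# and a test that is 99% sensitive with a 1% false-positive rate.
p_cond = 0.001
sensitivity = 0.99       # P(positive | condition)
false_positive = 0.01    # P(positive | no condition)

p_positive = sensitivity * p_cond + false_positive * (1 - p_cond)
p_cond_given_positive = sensitivity * p_cond / p_positive
print(round(p_cond_given_positive, 2))  # 0.09: most positives are false alarms
```

      This base-rate effect is exactly the kind of reasoning people get wrong by intuition, and where a supportive system can genuinely help.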

      Delete
  7. 1. Do you think that a statistical approach to making decisions regarding human lives is justified?
    Statistics about an individual are not accurate. From sciences such as social psychology, economics or sociology we can therefore draw conclusions about whole groups of people, not about individuals. Contrary to appearances, individual differences are too great to classify individual behaviour. That is also why in statistics we never draw conclusions with 100% probability: the normal distribution has tails reaching to infinity. That is too weak a basis for a court to pass judgment. The situation is completely different when biological samples are tested, for example DNA, where the accuracy is 99%.

    2. Have you heard about any cases where people became victims of automated decision-making systems?
    Nowadays even court judgments are issued practically automatically, because judges do not even read them but sign everything. This is especially true when judgments are passed in absentia. So there is no need for expert systems to create such victims. One can say that overzealous application of the law creates such victims, and we all know them.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?
    Of course, statistics and statistical learning systems can and must be used to predict or classify the behaviour of groups of people or populations. Statistics were invented to describe behaviour, and to find out when it is abnormal. In this sense statistics work: they describe the whole population. The more differentiated the target group, the lower the level of accuracy.

    ReplyDelete
  8. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    Yes, I think the use of such a system is justified. But the final result should be checked by a person, and the final decision should also rest with a person.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    Yes, I heard that there was a scandal at Amazon related to this. In recruitment, the system did not accept women, because it considered them less productive than other candidates.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    Personally, I trust AI more than the subjective opinion of a person, provided of course that the AI is based on the correct parameters. I would give AI control of our entire world and call it SkyNet =)

    ReplyDelete
  9. 1. I think that a statistical approach to any matter related to people is not good. Each of us is different, and statistics are only a certain view of a given group. I liked Michał's comparison that statistically a man and a dog have three legs; it says a lot about statistics. In my opinion statistics do not fulfill their role in court cases, because people commit various crimes and offenses for various reasons. You can certainly find relationships there, but I doubt that other criminals can be judged on that basis.

    2. I think I have heard one such story somewhere. It concerned the recruitment of students or employees, and the algorithm favored men because they met the criteria better. The algorithm dismissed all applications from women.

    3. I think that AI works great in optimization and logistics problems, in searching large data sets, and in early warning systems. On the other hand, in statistical systems, decisions made by AI should be verified by people, for example in medical issues.

    ReplyDelete
  10. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.
    I think it happens all the time, and it is largely justified. It might seem unfair in single cases, but en masse it is unavoidable: like when setting requirements for car safety, where decision makers have to trade the safety of drivers, passengers and pedestrians against car prices, or when administering mandatory vaccinations, where, while saving many lives, we also inevitably cause health complications in some small number of patients. Right now, a good example might be the quarantine of cities affected by the new coronavirus strain. In individual cases it is surely unjust, as those particular people are perfectly healthy. On the other hand, we lack the resources to judge each case individually.

    2. Have you heard about any cases where people became victims of automated decision-making systems?
    I guess SkyNet would be a good example, luckily for us a fictional one.
    Also the case where skewed training data was causing rejections of loan applications that seemed to be based on skin color. The interesting thing is that I followed up with a nice paper explaining that it's possible to do it right, without discriminating against any group, whether defined by ethnicity or by income level.
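    One simple check from that line of work (a sketch with made-up numbers, not the paper's actual method) is to compare approval rates across groups, sometimes called demographic parity:

```python
def approval_rates(decisions, groups):
    """Approval rate per group; a large gap between groups hints at
    disparate impact even if the model never sees group membership."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    return rates

# Hypothetical loan decisions (1 = approved) for two groups of applicants.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(approval_rates(decisions, groups))  # {'A': 0.75, 'B': 0.25}
```

    A gap like this doesn't prove discrimination by itself, but it is the kind of signal that should trigger a closer look at the training data.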

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?
    I think it's exclusive: either a field is an important one that in the end touches human lives, and should therefore be aligned with our values, or it's meaningless and the AI used there is but a toy.

    ReplyDelete
  11. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.
    The court environment is a very tricky example. The process of judgment performed by a human is biased for many different reasons; there was even a study showing that the time of day can influence the verdict. Adding to that a statistical approach, which does not cover the nuances of the particular case, is in my opinion not a good idea.
    2. Have you heard about any cases where people became victims of automated decision-making systems?
    Yes, the popularity of autonomous vehicles and their systems on the roads has led to a series of crashes, sometimes fatal. Also, the Boeing 737 MAX had a system that made some very bad automated decisions.
    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?
    I think there is a very good place for this kind of system in every environment that requires very fast and accurate decision-making based on numerical data that is properly cleaned, prepared and adequate to the ground truth on which the model is based.

    ReplyDelete
  12. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.
    In my opinion it is not justified, and decisions regarding human lives should be made with more of a human factor. It should be taken into consideration that there are a lot of factors which can be crucial to making an important decision. As for determining whether someone is guilty or not, I am sure that AI couldn't replace an actual judge. There are many examples of unexpected motivations for committing a crime which can't be predicted by algorithms. For example, recently there was a case in Lublin of a person who destroyed many cars by arson. As was established during the investigation, the sole purpose of the culprit was to get revenge on people in general: he was homeless, and sometimes when he tried to sleep in a staircase the occupants told him to go away or they would call the police. Because of that, he decided to set some cars on fire. I don't think it would be possible for AI to predict something like that.

    2. Have you heard about any cases where people became victims of automated decision-making systems?
    Maybe it's not exactly a case of someone becoming a victim of an automated decision-making system, but in China there is a points system based on face recognition. Face recognition lets the authorities see what everyone is doing and, as a result, give everyone positive or negative points. Such points are valuable, because they determine, for example, whether you can travel by train or by bus. And there was a case where a man who had starred in a commercial appeared on the side of a bus, so the face recognition system saw him everywhere the bus went and gave him a lot of negative points. Because of that he was unable to use public transport. So I think we can say that he became a victim of an automated system.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?
    In my opinion one of the fields where AI can be used without relying on human values is medicine, where it can support doctors. There could be some sort of system into which we input data from the patient's interview (age, sex, previous illnesses, illnesses in the family, etc.), test results and the symptoms currently present, and the AI, based on experience, previous cases and other factors, gives a preliminary diagnosis. I don't think human values are important in this process.

    ReplyDelete
  13. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    I believe that in relation to human life and behavior towards people, statistics can only be a tool supporting such a process. Even the best artificial intelligence has no empathic abilities and cannot understand how a person feels. Many people hide their behavior and feelings. Statistically, only 16% of doctoral students will complete their degree. So would artificial intelligence be able to determine, right after graduation, who will obtain the title immediately, who will postpone it, and who will resign? I honestly doubt it.


    2. Have you heard about any cases where people became victims of automated decision-making systems?

    Yes, I have heard about cases where automated systems treated people completely differently from how things actually stood. I am thinking, among others, of the eWUŚ system, responsible for integration with many patient databases. According to the system, a person close to me was not entitled to medical services due to a lack of insurance, although the documents presented a different reality. Unfortunately, systems are still learning, and I believe there should be ways to verify that a system functions correctly.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    At present, artificial intelligence systems cannot predict human behavior from statistics, as they are not yet well prepared for it. Statistics in such systems should be used to help people. However, the final decision should always rest with a person. Who will bear the consequences of a wrong decision: artificial intelligence, or a human?

    ReplyDelete
  14. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    Using statistics has already been involved in a few misrulings, and this happened even before the "ML revolution". I can't find the references right now, but I would point you towards the works of Nassim Taleb, Daniel Kahneman and Amos Tversky, and towards titles like "Innumeracy". Something statistically likely is not the same thing as proven or factual.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    We've had some discussions, even on this blog, which touched on the topic of bias in recidivism prediction: there was racial bias which overestimated the likelihood for minorities. Also, from a very different angle, the traffic victims of autonomous driving would count here too.

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    Many non-critical things can be optimized with AI. I would argue that this predates computers: bureaucracy is to some extent a mindless machine, and we've lived with bureaucracies since Egypt and Babylon.

    ReplyDelete
  15. 1. Do you think that a statistical approach to making decisions regarding human lives is justified? For example, in a court, a system could, based on some set of data, statistically more accurately determine whether an accused person is guilty or not. At the same time, it could make very unjust decisions in individual cases.

    That's a really interesting topic. To be honest, I have never thought about it.
    First I want to say that nowadays a court where a human makes decisions based on experience is nothing special. It is common, and everyone makes decisions based on their morals, religion, experience and so on. No one is fully independent and objective, so a computer making decisions based on statistical data could achieve similar results. Of course we believe that a judge is well prepared and only once in a blue moon makes a wrong decision, but that's not really true.

    2. Have you heard about any cases where people became victims of automated decision-making systems?

    Yes... though "victims" isn't really the right word. I heard about a system making loan decisions in the Bank of India. The system had a bug and did not grant loans to people with a middle salary. That wasn't a really big problem, but if someone really needed the money (e.g. for treatment), then we could call that person a "victim".

    3. If it weren't possible to align AI with human values, what are the safe fields in which to use AI/statistical systems?

    Unfortunately, I think it is impossible. On the other hand, how could we really say that a "system has human values"? I mean, on every continent the "right" decision would be different. We all share some values, but the law and points of view are completely different, so a similar system with "human values" would first have to be specified separately for every country...

    ReplyDelete