Monday, 23 October 2017

Week 1 [23-29.10.17] Can we teach robots ethics?

Read the article at http://www.bbc.com/news/magazine-41504285 and discuss it here.

24 comments:

  1. In my opinion, we can teach robots ethics only to a certain extent. Of course, we can program them to recognise and follow some basic rules, but we cannot make them flexible enough to apply more sophisticated rules or bend them when needed. There are situations in which basic rules are not sufficient, situations in which we have to be flexible, make changes and adjust the rules to unusual circumstances. Robots cannot do that. A great example is the Will Smith movie "I, Robot". If you haven't seen it yet, I highly recommend it.

    ReplyDelete
  2. I assume robots can act neither ethically nor unethically. Whether standard algorithms or machine learning are used during robot production, the responsibility rests with the product sellers: they decide to put the robot on the market, and they decide on the safety/profit ratio. In the case of a car, two obstacles will never appear in the same microsecond, so in my opinion the robot should avoid them in sequence. I think the real problem is whether regulators will set a very high safety bar for this new technology.

    ReplyDelete
  3. After reading the article "Can we teach robots ethics?", I came to the conclusion that it is impossible to teach a machine morals. Why?

    What is morality in general? Morality is a set of norms for some local group of people, or each person's own morality. Japan and the countries of the Arab world may share some features of morality with other countries, and differ in others.

    Therefore, if we can teach a machine morality at all, it can only be the common features of morality.

    In the first example, the car has a choice: hit the children who ran out onto the road, or swerve into the oncoming lane where there are other cars. The correct output is to hit the children. Why? If we consider the situation from the car's side, the car does not know how many passengers it is carrying, and the lives of its passengers are its priority. It does not know how many passengers are in the car in the oncoming lane, and it cannot recognise the age of the people who might be killed. So all lives are equal, and the least dangerous option for the car and its passengers is to continue in its own lane. Therefore, the machine should not be taught morality, but rather the ability to brake quickly and to detect obstacles at longer distances.

    In the second example, the moral question is whether to kill one person on the railway tracks or five, or to do nothing and kill by inaction. Because we do not know the people who might be killed, all lives are equal, so it is more correct to kill one than five.

    And in the third example, about the possibility of pushing a person onto the railroad tracks to avoid the death of five people: the nature of morality is that it has no single answer. Some people would push one person to save five; others would not want to kill anyone and would remain idle. Both are morally defensible. So how can we teach a machine morality if morality has no concrete rules? The machine will be guided by the smallest number of possible deaths, and that is right.
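    To make that rule concrete, a minimal hypothetical sketch (in Python) of "choose the option with the fewest expected deaths, all lives equal" could look like this; the option names and casualty estimates are made up:

```python
# Hypothetical sketch: pick the manoeuvre with the fewest expected deaths,
# treating every life as equal (no ages, identities or passenger counts known).
def choose_manoeuvre(options):
    # options: list of (name, expected_deaths) pairs estimated by the perception system
    return min(options, key=lambda option: option[1])[0]

# Example: stay in lane vs. swerve into the oncoming lane
print(choose_manoeuvre([("stay_in_lane", 2), ("swerve_oncoming", 3)]))  # -> stay_in_lane
```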

    ReplyDelete
  4. Honestly speaking, robots will work the way they were programmed, so it depends on what a human being decides. That decision should probably be made not by a programmer but by experts. Still, whatever the decision is, I would keep some people supervising the machines or robots, because, as with some self-driving Tesla cars, the system may break and people may die if they have no possibility of taking over the vehicle. As the article says, if the machine evolves through a learning process, we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions. That is the scariest thing and the biggest problem for me. Many problems are caused by the desire to automate everything that can be automated. Artificial intelligence is not yet developed enough to be left alone: you cannot predict all self-learning algorithms, and the effects of errors can be tragic.

    ReplyDelete
  5. The article gives a lot of attention to autonomous cars and the trolley problem. I don't consider them relevant. Autonomous cars don't have to decide what is ethically right or wrong; they have to correctly recognise the situation on the road and adjust the steering accordingly. If, in a very unfortunate situation, it were impossible to avoid a collision with another car, a human or any obstacle, and the devices received conflicting signals, the behaviour of an autonomous car probably wouldn't differ much from the behaviour of a panicked driver. It's more important to focus on general road safety to minimise the probability of such situations.

    As for teaching robots ethics in general, the article presents the idea of using a neural network to solve this as a classification task. I don't think it's possible, or at least not possible to the degree that the solution could be used in a court, etc. Supervised machine learning algorithms can be biased, fooled, and unable to generalise concepts to unseen data to the extent a human can (see the sketch at the end of this comment).

    Teaching robots ethics or philosophy would only be possible if we managed to create Artificial General Intelligence. Nowadays, even a very accurate model with a low classification loss shouldn't be treated as a success in teaching a machine ethics.
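    To illustrate what treating ethics as a supervised classification task would mean in practice, here is a minimal, hypothetical sketch (scikit-learn is used only as an example; the scenarios and labels are made up, which is exactly the point: whoever labels the data encodes their own bias into the model):

```python
# Hypothetical sketch of "ethics as a classification task": a text classifier
# trained on hand-labelled scenarios. The labels are the annotator's judgement,
# not ground truth, so the model inherits the annotator's bias.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "take the medicine away from a patient who refused it",
    "remind the patient and inform the doctor",
    "push a person onto the tracks to stop a trolley",
    "pull a lever to divert the trolley to an empty track",
]
labels = ["unethical", "ethical", "unethical", "ethical"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# The model will output a label even for scenarios it has never seen,
# which is exactly the generalisation problem mentioned above.
print(model.predict(["swerve into the oncoming lane to avoid the children"]))
```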

    ReplyDelete
  6. I partially agree with Sebastian, but in my opinion the article is about ethics, not morality. Of course, morality depends on your ideology and beliefs, but ethics has only some elements in common with morality. Ethics, as a philosophical discipline, embraces a broader range of humanity and defines concepts for wider groups of people than single countries.
    In my opinion, it is impossible to teach a machine the distinction between good and evil in some cases (I mean "the enemy combatant wielding a knife to kill and the surgeon holding a knife he's using to save the injured"), but it is possible to program it to behave ethically in general.
    With regard to the choices of autonomous vehicles in crisis situations, the responsibility should be borne by the manufacturers, because they were the ones who set them up and tested them. On the other hand, Google's autonomous cars have been on the road for several years and there have been no such problems with them…

    ReplyDelete
  7. Humans acquire an intuitive sense of what's ethically acceptable by watching how others behave. Trying to pre-program every situation an ethical machine may encounter is not trivial, but it is also not impossible. An AI that reads a hundred stories about stealing versus not stealing can examine the consequences of those stories, understand the rules and outcomes, and begin to formulate a moral framework based on the wisdom of crowds. Let's do our best and try to mitigate the risks.

    ReplyDelete
  8. In my opinion, we cannot teach robots ethics. Robots are robots; they are programmed. The situations given in the article, such as the trolley problem, are philosophical. Robot developers should focus on writing the best possible software.

    ReplyDelete

  9. My best bet would be that nobody will attempt to implement ethics in a publicly available self-driving car, in the same way that drivers do not learn ethics during their driving course; instead, they are taught to follow driving regulations. In my opinion, the same will happen with self-driving cars: each country will pass regulations specifying what should happen during an emergency.

    ReplyDelete
  10. The article says: “One big advantage of robots is that they will behave consistently. They will operate in the same way in similar situations.” Following that, I think we should program robots only according to the most ethical regulations possible, which for me means that programmers should be psychologically prepared for their work. I am very curious whether Susan Anderson, the philosopher mentioned in the article, would agree with my conclusion. That would mean limiting access to the programming profession to those who are properly ethically educated.
    I came to this idea inspired especially by the following part of the article:
    A yet more fundamental challenge is that if the machine evolves through a learning process we may be unable to predict how it will behave in the future; we may not even understand how it reaches its decisions.
    Generally, I don't think it is really possible for such a process to get out of human control, and I believe that something well prepared at the beginning can only develop in a good way. So "well-programming" education is my proposal for achieving robots' ethical acting. Instead of "teaching robots ethics", I suggest "teaching programmers ethics" ;-)

    ReplyDelete
  11. The article deals with an interesting problem which divides not only philosophers but also us as individuals when it comes to choosing the "lesser evil". I agree with Damian that it could be a non-trivial (nearly impossible) task to teach a machine the distinction between good and evil in every possible case. I found MIT's approach, called the "Moral Machine" (moralmachine.mit.edu), interesting. The assumption is to move the responsibility for the decision from a single dev team to the view of society in general. Maybe if we all agreed on the conclusions of such dilemmas, the technology would get a better social response. On the other hand, what about situations that are not in the set of edge cases: should we average the results of similar situations and proceed accordingly?

    ReplyDelete
  12. In my opinion, the problem of robots making ethical decisions is similar to the ones people have struggled with for ages. There are still many points in history that are difficult to assess from a moral point of view, and different people have different opinions about them. I can agree with the last point in the article that robots may be even better at making decisions than people, because they do not have feelings, they do not panic, etc. They can decide how to drive in a dangerous situation based on the computed probability of survival and can do it faster than humans. I believe that well-programmed self-driving cars may be very safe.

    ReplyDelete
  13. This is a very old discussion. Should we believe that a robot can be better at some ethical decisions, or not? Should we implement enough logic to let them hurt other people in order to protect us? What can we do if a robot misinterprets a situation? Do we have an emergency shut-down switch? Can we teach our robots? AI is the future; it's a matter of years. However, should we be afraid of robots enslaving humans for our own good?
    I really liked the article linked below:
    Stephen Hawking - will AI kill or save humankind?
    http://www.bbc.com/news/technology-37713629

    ReplyDelete
  14. What if the motorcyclist on the left is the only parent of those playing children? I think the answer to many moral questions lies in probability calculations. Depending on many parameters, the damage outcome becomes more understandable, and, as with the law, this code and these regulations will unfortunately be written in blood.

    ReplyDelete
  15. This is a very tricky question. We are talking about teaching, so we must talk about intelligence, in this case artificial intelligence. If someone (or something) is called an intelligent being, then we must remember that it can draw its own conclusions. So I am sure that we can teach the machine OUR POINT OF VIEW on ethics; we can pass on our conclusions and our thoughts. We can teach the machine our understanding of ethics: what we would do and when, how we would react, what is good and what is bad for us. BUT we must remember that the machine can draw its own conclusions. For example, we can teach the machine that killing is bad because life is priceless, but the machine may conclude that killing 1,000 people to save 1,000,000 is a very good idea, and I am sure that not all of us would agree.
    So the answer isn't so straightforward. For sure we can teach machines our understanding of ethics, but a machine can draw its own conclusions and react much differently than we would in the same situation.

    ReplyDelete
  16. If we asked people from different civilisations for optimal solutions to the presented problems, we would get different results. It is very hard to weigh the achievements of different cultural circles against each other. The development of automation will certainly have a positive impact on safety, but in some situations part of the population will be appalled by the robot's choices. I agree with my predecessors that the most sensible criterion is the minimisation of victims.

    ReplyDelete
  17. As Dawid wrote, a simple answer to this question does not exist. I read an article about problems with artificial intelligence. The goal was a bot able to hold discussions with other people: the bot analysed discussions posted on Twitter and tried to learn proper statements to use in its own conversations. The funny and scary part was that Microsoft's Twitter chatbot turned racist after one day of this "learning process". Microsoft made a couple of changes to fix this, but at the cost of reducing its learning abilities.

    Another interesting project I know of is an artificial intelligence that writes new programs solving complicated image recognition problems. What if such a program can write a better version of itself in the future?

    A program or robot will respect the rules included in its source code.

    ReplyDelete
  18. I don't see any point in discussing the "ethics" of a computer program (because that is what is actually called a "robot" in your question). A computer program will remain a tool, even if the algorithm is very sophisticated. I have a lot of respect for the people who came up with the ideas behind various artificial intelligence algorithms, but we are still very, very far from the moment when we could discuss an abstraction like ethics when speaking about any technology. Frankly, I doubt we will ever face that moment.

    ReplyDelete
  19. It is a really tough subject. Firstly, because ethics itself can be understood differently: two people can decide quite differently in the same situation, because they have different views on what is ethical. So talking about decisions made by robots is even more complicated. For sure we can implement certain algorithms on the basis of which, for example, an autonomous car will make a decision in a dangerous situation, but some people may argue that the decision (and therefore the algorithm) was wrong, and in my opinion they will have the right to criticise it.

    ReplyDelete
  20. Hello again, I am back.

    Can we teach robots ethics? I think my answer will be yes and no. On the "no" side, we need to remember that AI is still very young. I know this part of computer science is one of the oldest, but we still don't know how the human brain works; we know how to create neural networks, but we still don't know what is happening inside them, so there is a long road ahead of us. On the "yes" side, I think we do know how to teach a robot, or in this case an AI, which decision is the best one. In AI there is a field called "reinforcement learning", a part of machine learning inspired by behaviourist psychology. It means that the network (or robot) receives a reward after its action (the reward is given by a human), so after many iterations with a human the computer should know what the best decision would be in a particular situation (something similar to the carrot-and-stick approach). This is how Google programs its self-driving cars (the approach was also used for AlphaGo). There is still a human factor in this solution, and we have to remember that even after very intensive training a robot can make a bad decision. So there is a long way ahead of us.
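    A minimal, hypothetical sketch of that reward-driven idea (my own simplification: a one-step, bandit-style update rather than full reinforcement learning; the states, actions and reward values are made up and stand in for human feedback):

```python
# Hypothetical sketch of learning from rewards ("carrot and stick").
# The reward table plays the role of a human scoring each action.
import random

states = ["obstacle_ahead", "clear_road"]
actions = ["brake", "swerve", "keep_going"]

rewards = {  # assumed values, purely illustrative
    ("obstacle_ahead", "brake"): 1.0,
    ("obstacle_ahead", "swerve"): -0.5,
    ("obstacle_ahead", "keep_going"): -1.0,
    ("clear_road", "brake"): -0.2,
    ("clear_road", "swerve"): -0.2,
    ("clear_road", "keep_going"): 1.0,
}

q = {(s, a): 0.0 for s in states for a in actions}  # learned value of each action
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for _ in range(5000):  # many iterations, as described above
    state = random.choice(states)
    if random.random() < epsilon:
        action = random.choice(actions)                     # explore
    else:
        action = max(actions, key=lambda a: q[(state, a)])  # exploit what was learned
    # Move the estimate toward the reward the "human" gave for this action
    q[(state, action)] += alpha * (rewards[(state, action)] - q[(state, action)])

for s in states:
    print(s, "->", max(actions, key=lambda a: q[(s, a)]))
```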

    ReplyDelete
  21. Yes, this is another pretty well-known problem.
    But what can we do? The article showed that even people don't have the right answers to such dilemmas. What would you do as a driver having to choose between the children and a motorcyclist?

    But the good thing about technology is that we can make it consistent and keep improving it. First of all, by 2027 we may have much better devices, or some enormous improvements in car brakes or whatever. Secondly, a machine can make decisions much, much faster than a person, and they will be more accurate. It may sound brutal, but in an Internet of Things and Big Data world the car may "know", for example, that the motorcyclist will die soon anyway: this is his last trip, because he is very sick and wanted to feel free for the last time.

    ReplyDelete
  22. The article points out one crucial thing: we demand more when the decision-maker is a robot than when it is a human. We have a lot of unrealistic ethical problems without solutions; nonetheless, in the past nobody tried to solve them before getting into a car for the first time. The knowledge problems will go away; the wisdom problems will stay. We will be talking about ethical issues long after the first driverless vehicles are on the highways. That is why I agree with Marta that we should program robots only according to the most ethical regulations possible.

    ReplyDelete
  23. A very interesting article. It does, however, present a moral problem. Which one? I think the philosophical question of whether a car will ever have to decide between killing a child and killing a motorcyclist should be considered in another category.

    When you are driving and see an accident, don't you wonder what would have happened if you had set off 10 minutes earlier, or if a traffic jam had not stopped you? I always picture myself in such a situation and wonder what would have happened.
    A car is not a human, and an algorithm is not a cup of tea that you simply have to drink. It has been known for a long time that if people do not plan for a given situation, they will run into problems.
    On the other hand, we cannot escape technological development and must face such difficult questions!

    ReplyDelete
  24. This topic is almost as hot as the reproductive process. However, the invented dilemmas are a manipulation on the part of the "ethics scientists" in order to grant themselves the power of defining right and wrong. They put a scientific veneer onto it, but deep inside it is the good old hypocrisy again (which is illustrated by the fact that when the "science" contradicts their feelings, it must be "biased": e.g. a "biased AI" rates white women as more attractive, so we must "fight those biases").

    I generally refuse to give my answers to such questions. And I don't think that polls on this topic can give "better morality" than the one we already have.

    There is wishful thinking: "the driverless cars will have less emissions" - citation needed.

    If I can save a life without killing another, this is the way to go. If it is picking victims, it's none of my business.

    We have a moral obligation to make our technology safe and to handle situations in the best possible manner. However, this does not necessarily mean that "professional AI moralists" have all the answers.

    ReplyDelete