Monday, 30 November 2015

Week 4 [30.11-06.12.2015] Has science gone too far?

The topic of artificial intelligence has fascinated me since I watched 'The Matrix' back in 2000.

Watching that plot unfold, like every sci-fi story I had read, became the moment when, as a teen, I started questioning the capabilities of perceived realities and asking myself questions like: would it be possible in the near future, who would govern and control computers and machines, and would machines be able to control humans - almost materializing the movie's 'prophecy'?

Since then I've always curiously observed the latest findings and achievements in the area of AI and neural networks.

Over the past few years, all of us have experienced the evolution of devices (e.g. from brick-sized feature phones to thin, palm-sized smartphones with touch screens) and the miniaturization of technology, from the first self-controlled yet programmable home appliances to remote-controlled drones. All of them still look innocent and are considered really helpful devices, as long as they are controlled by us.

A few months ago, I came across an article in which some of the greatest visionaries of our times (Hawking, Musk and Gates) voiced their concerns about artificial intelligence.

I felt that it was just a matter of time before someone merged all of these concepts and created a leapfrog achievement - like the Atlas robot (the one you can watch in the video embedded in the article), but equipped with the latest results in image recognition, engaging neural networks and learning algorithms.

Last week, while reading my daily portion of news, I watched one of the latest achievements in this area, and I must admit it amazed me and creeped me out at the same time. Most probably I had an "a man is eating a hot dog in the crowd" face while watching it... realizing the time is now.

It looks like humanity is one step away from its greatest achievement, but no one is able to address the most intriguing and inevitable question of whether...
“(..) the artificial intelligence will evolve to a point at which humanity will not be able to control its own creations, leading to the demise of our entire civilisation."

Has science gone too far? Do you see AI as “our greatest existential threat”? Or the opposite?

51 comments:

  1. This comment has been removed by the author.

    ReplyDelete
  2. A very interesting video. It shows a situation where curiosity and invention are finding their way out in the form of a great innovation. On the one hand, it is exciting to see how intelligent people uncover fundamental truths about our world - in this case, human perception uncovered and recreated by AI. As a designer, I am thrilled to see the boundaries of this human creative potential.

    On the other hand I am scared, just like you I imagine, because I have a feeling that it may lead to a situation where Saturn devours his son - the invention devours its inventor. How do we know where the thin line is between a useful idea and a creation that turns against us? A kind of Frankenstein. In my opinion, it is very important to ask ourselves about the threats and opportunities coming along with every new idea.

    In that context, I think science has not gone too far; I think it is our conscience that needs to advance equally.

    ReplyDelete
    Replies
    1. Hi Dominika, I do admit it's very important to accompany science with conscious thinking about potential threats. However, I'm not so sure that all scientists assess their potential discoveries against possible unwanted use cases.

      Delete
    2. I think that AI can become dangerous. Since the beginning of the world, all the tools that man created were built so that they could be controlled. We were able to predict their behavior because we knew their mode of action. This was true for everything from a hammer to a space rocket. AI is different: its feature and advantage is unpredictability. We do not necessarily have to let it control the world through a defence system. It can be enough that a poorly trained AI selects the wrong dose or wrongly analyzes the results of clinical trials. AI does not have to rebel or turn against us. Just imagine for a moment that it will make decisions which we are unable to verify on a regular basis.

      Delete
  3. In my honest opinion, AI can overthrow humans. Of course we are going to implement Asimov's laws (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) to protect ourselves, but someone could still hack into the network and change the algorithms. You never know; there are plenty of hackers and terrorists who could use it for their own purposes.
    We are going in the right direction, but it's hard to predict whether it will make our lives better or worse.

    ReplyDelete
    Replies
    1. Hi Mateusz, thanks a lot for mentioning Asimov - I was wondering who would be the first to call him into the discussion. I'd like to think that anybody working on AI is designing it with a safety (or should I say sanity?) fuse. However, this would be just one of thousands of instructions in the end... and as with every piece of software, something may simply go wrong.

      Delete
  4. I don't think that science has gone too far. AI development has been huge in the last few years thanks to the new computational capabilities of graphics cards. Unfortunately, we still don't know exactly how the brain works, how emotions are created, or how a murderer's or rapist's brain differs from a "normal" one. I think we should research AI and neuroscience, but we have to be careful about what kinds of problems we solve with AI algorithms. For example, we can use AI for image processing, object recognition, etc., but leave decision making and planning to humans.
    And by the way, if "a man eating a hot dog" creeps you out, what do you think about the robot sentry guarding the Korean demilitarized zone, able to locate, identify and potentially eliminate an enemy from over 3 km away?
    Arming robots with bombs and machine guns is not a matter of science.

    ReplyDelete
    Replies
    1. Mikołaj, thank you for your comment. I believe we're not there yet on creating emotions (vide: one of last week's articles and the comments posted there). Speaking about the brains of murderers, notorious killers or rapists, there was a researcher - Michael Stone - who analyzed 'top' murderers and psychopaths for over 30 years and placed them on a 22-grade 'scale of evil'. The most interesting finding was that most of them had suffered head/brain injuries or an enormous amount of pain in their childhood.
      But frankly speaking, I don't perceive it as the most problematic threat, for a simple reason - machines raping? You won't be able to implement this kind of thinking, as machines ultimately use algorithms to compute. Machines' problems would probably be more aligned with securing scarce and required resources... don't you think?
      p.s. "a man eating a hot dog" didn't creep me out; I most probably had the same facial expression while watching this Dutch guy's latest work.

      Delete
    2. And one more thing: I do agree with you that arming drones with bombs is not a matter of science, but given these capabilities, shouldn't we then conclude that in fact science has allowed too much?! I don't even want to start the nuclear weapon topic here... is it science? Yes. Was it worth creating? Definitely not.

      Delete
  5. "Has science gone too far?". Very interesting question.
    Do you know that our state-of-the-art systems are actually unable to precisely hear, see and understand their surroundings? These are just algorithms. We have taken only baby steps so far.
    A fantastic service like Google Now is completely useless if you want to use the Polish language - or, even worse, mix Polish with English. Or take their so-called autonomous vehicle, which achieved the unthinkable landmark of over 100 thousand miles driven. The only problem is, not a single one of those miles was driven in an unknown environment (that is, on a road that hadn't previously been completely scanned and measured).

    There are many examples of artificial intelligence that are quite impressive, like the cognitive system that "reads" a manual and learns on its own how to use programs, or Microsoft's automatic translator that translates spoken English to spoken Chinese and vice versa. However, these systems are still far from being... no, not just perfect - useful. Or even usable.

    I am sure that it will take decades of research to create AI systems that will be able to hold a conversation. Nowadays, all of these are nothing but toys. Some of them are impressive, some even remarkable, but... toys.

    As far as safety is your concern... I have to remind you that it is us who develop the algorithms and train artificial neural networks. We're in control. And with all due respect to Hawking, Gates, Musk, and other contemporary geniuses, these guys are completely ignorant when it comes to building AI systems.
    It takes just a simple exercise in training an ANN to understand that all these fairy tales about AI dominance in the foreseeable future are extremely unlikely.
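    To make that "simple exercise" concrete, here is a minimal, hypothetical sketch (plain Python, not any particular framework or system mentioned in this discussion) of training a single perceptron - the simplest possible neural network - on the logical AND function:

```python
# Hypothetical illustration: a single perceptron learning logical AND
# with the classic perceptron rule. Even this toy pattern needs an
# explicit training loop and several passes over the data.

def step(x):
    # threshold activation: fire (1) when the weighted sum is >= 0
    return 1 if x >= 0 else 0

# training data: inputs and expected AND outputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # weights, one per input
b = 0       # bias

for epoch in range(20):
    errors = 0
    for (x1, x2), target in data:
        pred = step(w[0] * x1 + w[1] * x2 + b)
        err = target - pred
        if err != 0:  # wrong prediction: nudge weights toward the target
            errors += 1
            w[0] += err * x1
            w[1] += err * x2
            b += err
    if errors == 0:  # a full pass with no mistakes: converged
        break

print("learned AND after", epoch + 1, "epochs")  # prints: learned AND after 6 epochs
```

    Four examples, one neuron, and it still takes six passes to get a pattern as basic as AND right; anything resembling real perception multiplies that effort by many orders of magnitude, which is the commenter's point.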

    ReplyDelete
    Replies
    1. Thanks for the comment.

      Regarding the different toys you listed... autonomous vehicle competitions are being organized, with pretty amazing results, in randomly selected places (no chance to scan and measure them like Google would). So it does not look that far off to me.

      Same for speech translators. One milestone has already been achieved; the next ones are just a matter of time. Though no harm from ANNs is foreseen here - pure benefits.
      To be honest, if the main goal of developing AI were to hold a conversation, I'd be pretty disappointed. So much effort spent just to be able to talk to a machine?!

      Coming back to timing... I have to agree with you that it is uncertain when all these technologies will be integrated. However, even now you can identify the particular capabilities and functions available. So again, it looks like a matter of time before someone puts these together and makes it work. Like Apple with its 'innovations'.

      That's a pretty bold statement about Hawking and the rest of the guys. I'm not so sure, knowing they have really great insight into the latest achievements.

      Delete
    2. You can count on me when it comes to bold statements ;)
      Having insight into the latest research and having actually done something on the subject are different things. Sorry.

      For example, I know there are some clever, smart software architects who have great insight into the latest programming frameworks and languages but have never written a line of code in them. Do you think their fellow programmers like them? I hope you get the analogy.

      Delete
    3. What does liking have to do with it?

      On one side you admit there are smart software architects; on the other, you don't consider them capable of assessing the consequences or threats of particular software (especially if it was designed by them)?
      Please don't tell me that only low-level software developers are the true sanity keepers... ;)

      Delete
    4. Sadly, it means you don't get my analogy.
      There are good architects out there, but you can't be really good at programming if you're not actually doing it. That's what I meant.

      What I meant by that is that Gates, Musk and Hawking may be incredibly smart people, but they have never actually tried to use machine learning algorithms themselves. They just don't know what kind of effort is required to teach a computer to recognize even basic patterns. To me, statements like "AI could be a threat to mankind" are laughable, to say the least. Not in this century. And probably never.

      By the way, it always amazes me that out of all the threats to mankind - global warming, our dependence on electricity in general and the electric grid in particular, some unidentified cosmic object hitting this planet, nuclear war, et cetera - people choose to worry about the least probable one.
      Talking about electricity: just pull the plug. No electricity, no AI.
      And on the other side of the equation: we are so dependent on electricity that all it takes to destroy civilization as we know it is one magnetic storm lasting long enough (say, one week). There would be no satellites, no communication, no power = no water, no oil, no transport, no food distribution, mass confusion and panic.

      There are certain events that are at least plausible; AI dominance is not one of them.

      Delete
    5. No worries :) I got your analogy, but it looks like you haven't got mine, or my explanation.

      These architects are usually good for a reason. Most probably they have a serious background, in this case in programming; otherwise nobody would be interested in their work. To be able to design something, in the vast majority of cases, you need to know how it will work 'under the hood', what the constraints are, and so on...

      Speaking about Musk and his companies, I would be very careful about underestimating their skills and intelligence. Things do not happen without reason. These guys aren't where they are by coincidence...

      Btw, I'm really glad to see you having a very strong opinion in this area - this drives discussion forward ;)

      Regarding electricity, I believe you're overestimating our dependence on it. There are different fuels which wouldn't be compromised by a lack of electricity. Even in a blackout, you could still supply people with drinking water, so it's not as drastic as you're sketching it. Same for satellites or communication - there were times when these were not available, and yet people happily lived their lives.

      Speaking about probability :) I think it's not a matter of the probability of occurrence, but of the probability of extinction...
      No electricity (more probable) won't kill you. Yet a cosmic object (very unlikely!) would be pretty effective here (of course, size matters).

      Look at transportation: statistically speaking, travelling by plane is much, much safer (fewer accidents on average) than travelling by car (thousands of accidents daily!). Yet plane crashes are discussed far more often...

      Delete
  6. This comment has been removed by the author.

    ReplyDelete
  7. Honestly? The first time I saw the Atlas robot in action, I was a little scared - especially when I realized that this is reality. But for the reasons mentioned by Nicholas, I really believe we should not be afraid of AI so far. I don't think we know enough to create a real monster. But I can imagine that future robots could potentially become extremely dangerous.

    ReplyDelete
    Replies
    1. Przemek, I do agree with you. Though by 'greatest existential threat' I don't mean a real monstrous machine, but rather a use case that threatens humanity... Obviously, I'm amazed by the number of beneficial purposes we could come up with. Which would be the most appealing and most beneficial from your perspective?

      Delete
  8. I don’t see AI yet as “our greatest existential threat”, but I think the risks should be taken into account at every step of its development. As with every technology – if the right people control it, it can be very beneficial, but you can never guarantee that it will not get into the wrong hands. As the risks are potentially big, I think it would be a very good idea to create some kind of international regulatory body to keep an eye on it (although I find that not very realistic at this point).

    ReplyDelete
    Replies
    1. Like Mateusz mentioning Asimov, we all, I guess, believe that because we're in control, no misuse is planned or foreseen.
      Speaking about a regulatory body, how do you see its monitoring and 'sanity' policy enforcement being introduced and implemented?

      Delete
  9. In my opinion, science has gone too far. We have very large technological development - plenty of graphics cards and processors for specific cases - but we must be careful with AI research. We use it to improve life, but in other cases it could be used, for example, for war: we could create a monster or a robot soldier, and if it were damaged in battle and the software failed, we would not be able to control the robot.

    ReplyDelete
  10. Science hasn't gone too far yet, but of course it may. The future will show.
    AI is a potential threat and we need to be very careful with it. The difference between an AI that understands us and one that evolves and becomes something greater is very small. Scientists have great power, and the worst thing is that they are often not aware of it. They should always act responsibly and always think about the effects they may cause.

    ReplyDelete
    Replies
    1. Hi Dawid, it's a valid comment. Speaking about awareness of threats... do you have any ideas for, let's say, a policy or procedure that could be designed and enforced?

      Delete
  11. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. I do not think that science has gone too far, but we have to be careful when doing science. Research can be a source of very dangerous problems. I heard about scientists who are working to protect us against viruses: they want to prepare a new virus which will be able to find dangerous viruses and "destroy" them. In my opinion, it is a little bit murky and dangerous. Can we control it after implementation?

      My question is: what should be the purpose of research? Results can be used in war, but the target of such research is (or should be) a different use case. Scientists should focus on what can be done to improve quality of life. A CPR machine can save a life without a paramedic; a special algorithm helps with that.

      AI is very powerful and interesting part of science.

      Delete
  12. This comment has been removed by the author.

    ReplyDelete
  13. What a discussion :D. I must say this is the most intense discussion ever. But let's get to the point. I don't think that science has gone too far. Our generation, and I think future generations too, will have to work very hard to build robots that could destroy human generations. I know that every device is getting smaller, but our knowledge has its limits. I think that in the near future your car will take you home, but the car won't be autonomous; this will be achieved by networks - every car will be connected to the others and they will share data. Yes, I know there is the Google Car and the "autonomous" car race, but we have to wait for our objects to be intelligent enough to think like a human.
    Speaking of drones, there is a very good sci-fi episode - Masters of Science Fiction: Watchbird - which is a very good example of independent machines.

    https://www.youtube.com/watch?v=F41ar4w703w

    My conclusion is that we have to wait to get robots with very sophisticated AI (like, for example, the Cylons from Battlestar Galactica or the T-1000 from Terminator). Also, we can be sure that humans will use these machines to wage wars, but this won't happen in our lifetime.

    ReplyDelete
    Replies
    1. I’m glad you like the intensity of our discussion - I guess that was the whole point of it ;)

      I’m not so keen on autonomous cars, to be honest - what would happen to the joy of driving?! However, inevitably, each car will be connected to network(s) providing real-time data. Imagine real-time traffic intensity data, so you could pick a faster detour; or, in case of an accident, a notification triggered automatically to call for emergency services, and so on. There is an EU resolution requiring all car manufacturers selling cars in Europe to equip every new car with M2M modems and board computers connected to the internet - this should happen by 2020 (if I recall correctly). But to achieve this we don't need AI, just simple algorithms.

      Good episode - thanks.

      Regarding the horizon you sketched, I'd probably be a bit more aggressive ;) Perhaps not 5 to 10 years, but seeing the latest pace of technological evolution, I would be surprised if it took us a few decades.
      Ten years ago nobody (apart from a few!) would have dreamt of smartphones with touch screens.

      Delete
  14. I love watching SF movies and reading books, and I frequently read books about people who are under machines’ control or who will fight against machines in the future. For me this is a very interesting topic: I have watched the “Terminator” series several times, the “I, Robot” movie and the “Star Wars” series. I read the article “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence” with great interest, where the author describes different aspects of AI in the future. Generally I agree with these men who warn us against using robots everywhere, especially in the military. But I think we should view this problem from two different standpoints. First, I don’t believe in a situation where machines will fight against humans to take control of the Earth in the way people fight against other people. I think the willingness to rule is a typically human feature, because we have a free mind and soul. As long as machines lack human feelings and a soul, they will be only machines. I hope machines will never achieve the human ability to feel anger or love. Secondly, I do believe in a situation where an army of robots will fight against people as a modern weapon in human hands. Military equipment is still developing: one hundred years ago people used rifles, cannons and horses; now soldiers use tanks, aircraft, UAVs, etc. In the future, people will use AI to fight. But I hope they will still be able to control it. In my opinion, we must think about AI as a new kind of weapon, not as our successor.

    ReplyDelete
    Replies
    1. I must admit that I like your perspective. That's probably the threat these guys meant - not to mention nuclear weapons as an analogy.

      However, observing today's world, I can imagine a situation where there will be a group of interests willing to use AI as a weapon. Without any remorse.

      Regarding successors... I'm with you on this one as well. But even if we're still considered to be in control of machines, we could start wondering what will happen to humans. The population is growing. We're living in 'cost savings' times. Robot/machine labour is already cheaper than human labour. With ANN/AI added, barely a few of us will be truly irreplaceable. Should we start thinking about how to leverage such huge human potential? Or should we start looking at ideas like basic income?! What do you think?

      Delete
  15. This comment has been removed by the author.

    ReplyDelete
  16. You know, I like new and advancing technology more than most people, but this is going too far in the wrong direction. I am a nerd who sits at his desk and is shy, and frankly it is boring. That does not mean I need a virtual reality where I feel comfortable talking to "someone" and feel comfortable dealing with "life". Seriously, I am for escaping from reality with things like games and movies, but this could easily become too powerful. By that I mean people spending more time in a "fake" world than in the real one. I very much relate to very shy people, but the way out is not virtual reality; it is TRUE reality. I think we need to learn how to adapt to our surroundings, not adapt our surroundings to fit ourselves. Doesn't changing our surroundings instead of ourselves seem a little arrogant?

    ReplyDelete
    Replies
    1. Hi Monem, I'm one of the old-school types, preferring the real world over a fake 'social' life in a virtual one. I definitely agree with you on arrogance, hence I always treat all technology with a certain distance. But as I responded to Paweł's comments above, I would be really disappointed if the whole point of AI were to enable it to hold a conversation with a human. Ironically, wouldn't that mean the end of humanity (a lack of human-to-human relations)? Don't you think?

      Delete
  17. I wouldn’t say that science has gone too far. I feel that this is just the beginning, and it’s fascinating to watch. You cannot stop technological expansion, and people will constantly push the boundaries of science.

    The whole concept of the Singularity and the threats related to it sounds like science fiction; still, I do believe that we will see huge changes in this area in our lifetime. I think that AI/robotics research will have a positive impact on our lives, but we should keep the possible threats in mind. So in principle I agree with Elon Musk that we need some regulations and supervision in these areas.

    ReplyDelete
    Replies
    1. Speaking about regulations and supervision, how do you see that happening? In the world of science, I don't think researchers would be keen on sharing their progress if they considered it too valuable. How would this supervision work then?

      Delete
  18. Well, I can’t say anything more than I have written before. I think I mentioned it before, but I will repeat it here: AI has many pros and cons. My opinion on this subject is similar to Stephen Hawking’s, that “the development of full artificial intelligence could spell the end of the human race”. Here is one more article on this subject:
    http://www.independent.co.uk/news/science/stephen-hawking-ai-could-be-the-end-of-humanity-9898320.html

    Btw, I hoped that nobody would publish this subject, because I was planning to do it; now I must find another one.

    ReplyDelete
    Replies
    1. Hi Kinga, perhaps nothing is lost yet; I'm sure you can find a nice continuation of this topic for your presentation.
      So you're one of the cautious ones, thinking AI's cons outweigh its pros, right? Do you think that 'almost' full artificial intelligence would be commonly considered the ultimate end of AI research? Or will curiosity still prevail?

      Delete
  19. AI is not something that we should be afraid of (yet). The reality presented in films like Terminator is, in my opinion, strongly exaggerated. We should rather focus on staying up to date with new technologies in such a rapidly changing world. On the other hand, breakthrough technology like this quickly finds military applications. Judging by what is going on around us, the offensive use of artificial intelligence is just a matter of time.

    ReplyDelete
    Replies
    1. Marcin, so how would you respond to the question? Has it all gone too far, without control?

      Delete
  20. The video you linked is pretty outstanding. Doing all the processing required to recognize objects and produce results in real time is not an easy task, especially on a mobile device with limited resources (although I am not aware of the specifics - maybe the application was just streaming the video and the processing was done on a remote server which then sent back the results?). Returning to the topic, I see technological advancement in this area as necessary, even at the cost of sacrifices. You know, maybe the whole intention of the universe is to create a method of setting order in chaos, and we (people) are just one of the stages required for this outcome. I know it sounds gloomy, but sometimes I try to view things from a humanly unbiased perspective.

    ReplyDelete
    Replies
    1. I had exactly the same impression regarding the video... definitely outstanding. Why do you think further advancement is necessary, especially when you assume some sacrifices?

      Delete
  21. Hi!
    I don't think that science has gone too far. Although AI development has made great progress during the last 10 years, we are very far from the Terminator vision. The video you presented (with NeuralTalk) shows an amazing application, but from this point to decisive robots there is a pretty big gap. The Atlas robot is also impressive, but people have used robots for decades (e.g. in factories); they mostly do not move and don't have a human posture, so they don't make that big an impression.
    I think that, for now, we don't have to be worried about AI. This field is developing quite fast and can already impress, but existing solutions are still not even close to human decisiveness and creativity.
    In the future AI may become a threat, but that depends on the human approach to it - like with nuclear energy, which can be used as a power source (a nuclear power plant) or as an atom bomb.

    ReplyDelete
    Replies
    1. Emilia, speaking about decisiveness... researchers were pretty successful when they had a task-focused machine to teach. Have you heard about Garry Kasparov losing to the Deep Blue computer in a chess match?
      Could you imagine combining the level of image recognition shown by the application in the video with the level of decisiveness and algorithmic thinking observed while playing chess? Like a Deep-er Blue? :)
      Would you be horrified then? Or amazed by the number of opportunities?

      Delete
  22. This video was pretty amazing! I have seen many of the mentioned sci-fi movies, and I think they may have inspired many people in their work. I think that science has not gone too far. There is so much to do in the field of AI technology, and I am looking forward to the moment when that technology is part of daily life. Of course, AI is our greatest opportunity as well as our greatest challenge. New security issues will have to be resolved and addressed.
    We have to be careful, because some decisions may be irreversible.

    ReplyDelete
    Replies
    1. Katarzyna, a good point - some decisions are irreversible. What kind of security issues do you see materialising first as AI inevitably evolves?

      Delete
  23. Scientific progress really fascinates me. I am a supporter of progress; in my view, it is not dangerous.

    ReplyDelete
    Replies
    1. I must admit I admire the conciseness of your response :) If AI is not dangerous, what benefits do you foresee? Which would be the most fascinating?

      Delete
  24. AI can be “our greatest existential threat" and it can be the opposite, depending on the context in which it is used. It's like dynamite. I am personally amazed, and a little bit afraid, of Boston Dynamics robots. I agree with Elon Musk that there should be control (maybe governmental) over AI implementations - and he knows the subject (and the potential threats) very well; look at Tesla's latest software update! AI can be a perfect tool and a perfect weapon, but from my point of view we are not there yet. Science hasn't gone too far, but it might go there pretty soon.

    ReplyDelete
  26. Thank you all for the comments and vivid discussion :)

    ReplyDelete