Hello everybody,
I would like to present something interesting. It is not connected with my research area, but in my opinion it may be quite interesting.
The article is about how a Google AI invented its own cryptographic algorithm and no one knows how it works.
After reading these articles, answer the following questions:
1. Can AI learn how to encrypt itself?
2. Do you think it's good that Google scientists don't know how this algorithm works?
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
Main article:
http://arstechnica.co.uk/information-technology/2016/10/google-ai-neural-network-cryptography/
Additional info:
https://www.inverse.com/article/22928-google-ai-created-own-encryption
Link to the research paper:
https://arxiv.org/pdf/1610.06918v1.pdf
Hi,
The neural network (NN) structure of Alice and Bob is simply an autoencoder, a basic unsupervised NN that reconstructs its input. Moreover, the training of Bob and Alice was conducted jointly (that means Bob's errors were propagated to Alice, and Alice tuned its parameters based on Bob's errors), so we may consider Bob and Alice one NN. On the other hand, Eve has no information about P (the network's input); it tries to reconstruct P based on some internal representation of the second network. Its errors do not propagate to Alice, therefore it's harder for Eve to reconstruct the correct input, but in my opinion it is possible to build a network that will do the job.
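To make the joint-training idea concrete, here is a toy numpy sketch of the adversarial objective. This is my own simplified illustration, not Google's code: counting wrong bits and targeting the chance level n/2 for Eve is a simplification of the L1-distance loss used in the arXiv paper, and all function and variable names are mine.

```python
import numpy as np

def wrong_bits(p, p_hat):
    # Number of bit positions where a reconstruction disagrees with plaintext P.
    return float(np.sum(np.sign(p) != np.sign(p_hat)))

def alice_bob_loss(p, p_bob, p_eve):
    # Simplified joint objective: minimize Bob's reconstruction errors while
    # pushing Eve toward chance level (half the bits wrong), so that Eve
    # learns nothing beyond random guessing.
    n = len(p)
    bob_term = wrong_bits(p, p_bob)
    eve_term = ((n / 2 - wrong_bits(p, p_eve)) ** 2) / (n / 2) ** 2
    return bob_term + eve_term

n = 16
p = np.random.choice([-1.0, 1.0], size=n)       # a random plaintext
p_bob = p.copy()                                 # a perfect Bob
p_eve = p.copy()
p_eve[: n // 2] *= -1                            # an Eve at exactly chance level

print(alice_bob_loss(p, p_bob, p_eve))           # 0.0: the ideal outcome for Alice/Bob
```

Note how Eve's loss enters only through Alice and Bob's objective; Eve herself is trained separately to minimize her own reconstruction error, which is what makes the setup adversarial.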
I disagree with Hawking; humanity will kill itself first.
So the question in your case should be "Can artificial intelligence be faster in ending mankind than humans?" Thanks for the answer.
Tomek, thanks for your article. When I read it, I froze for a moment. It is amazing how people always try to invent and improve things which can destroy the human race. I am a really great fan of SF literature and movies, but I generally don't believe in the future created in such books or movies. With one exception: I believe that if we develop AI, it will finally kill us. It doesn't matter in which way: a total war of humans against AI, or an ordinary accident at home in which we are killed by a fridge; in the end it will be the same, we will be dead. Therefore every piece of news about new, positive results in developing AI technology is, for me, a nail in the human coffin. I agree with Stephen Hawking that artificial intelligence could end mankind, and in my opinion we should not pursue this line of research.
All this smells of SKYNET!
1. Can AI learn how to encrypt itself?
Yes, I think it can. I think humans will develop an AI that not only encrypts itself but kills everyone...
2. Do you think it's good that Google scientists don't know how this algorithm works?
No! If you don't know how it works, you have a small chance of defeating it when the time comes.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
Very dangerous ground. I agree with Stephen Hawking that artificial intelligence could end mankind.
Indeed, it looks like the beginning of Skynet. I am still looking through the window, waiting for the T-1000 to come. :-) I must say that I used to agree with Professor Hawking about this statement, but looking at the world today, I must say that AI won't finish our civilisation.
1. Can AI learn how to encrypt itself?
The article presents a very interesting example of an application of artificial intelligence. It shows that neural networks can learn how to encrypt communication. If so, I think it can encrypt itself.
2. Do you think it's good that Google scientists don't know how this algorithm works?
In my opinion it's not that they don't know how the algorithm works; rather, they found another way of encrypting. They know the architecture of the neural network, how the network works, and they are able to evaluate the results. They have just shown that we don't need a special algorithm to encrypt messages. I would rather be afraid of the opposite: what if they ran a neural network which could decipher any message?
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I am really skeptical about AI ending mankind. Some artificial intelligence algorithms do things that were impossible for deterministic algorithms and may look like black boxes, but so far they always need human intervention.
Yes, I agree with you. Google scientists agree that they need more time to get some valuable information from this experiment.
1. Can AI learn how to encrypt itself?
I was going to say yes and no.
However, would you be so kind as to define what AI is?
Clearly, these ANNs have not learned to encrypt anything by themselves. They were guided to do it. What they did learn was how to encrypt the communication. They developed their own algorithm.
And by the way, a "simple" ANN is not AI by my reckoning.
2. Do you think it's good that Google scientists don't know how this algorithm works?
It is possible that they know now; it takes some debugging. I can't tell you whether it is good or not, but what I can tell you from my own experience with ANNs is that there is no easy way to tell why they work. There are usually no clear answers. And this property results in an algorithm that is not easy to guess or understand for a human.
Simply put, these ANNs learned a multivariate function that generates (a set of?) encryption function(s?) for secure communication. There might be an infinite number of such functions, therefore it's not so easy to guess whether it is good or not. But it can be "debugged", or in fact reverse-engineered, from the weights in those networks.
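As a toy illustration of that "reverse-engineered from the weights" idea (my own numpy sketch, not from the article): if Alice's learned transformation were purely linear, the weight matrix could be read out directly and inverted to recover Bob's decryption map. Real ANNs are nonlinear, which is exactly why this inspection is so much harder in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend Alice's trained "encryption" is a single invertible linear layer.
W_alice = rng.normal(size=(8, 8))
W_bob = np.linalg.inv(W_alice)   # Bob's ideal decryption weights

p = rng.normal(size=8)           # a plaintext vector
c = W_alice @ p                  # "ciphertext"
p_rec = W_bob @ c                # Bob's reconstruction

# Inspecting the weights reverse-engineers the algorithm: the composed map
# W_bob @ W_alice is (numerically) the identity.
print(np.allclose(p_rec, p))                     # True
print(np.allclose(W_bob @ W_alice, np.eye(8)))   # True
```

With nonlinear activations there is no single matrix to invert, so the same "debugging" becomes a question of interpreting millions of interacting weights.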
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I think Machine Learning is an extremely vibrant area with a lot of clever ideas and interesting research problems that may lead to very useful applications. Sometimes, though, people tend to work on problems that force me to really question my sanity, and I believe you gave the best example. What's the point?
To answer another part of your question... I tend to wonder... What is Peter Norvig's view on gravitational waves? Or Stuart Russell's view on the electric car business? Those are very interesting questions. These guys are really smart, well, maybe not as prominent as Hawking or Musk, but still... Why not go ask them? Gravitational waves could be a serious threat to mankind! And the electric car disruption, especially combined with self-driving ability, could seriously undermine our economy!
Go ask them!
Sorry for this "reductio ad absurdum", but you really should not ask a guy who has no experience in ML anything related to it. They have no clue, and I am really fed up with hearing these fears over and over again.
So far these bright state-of-the-art AI systems can't even read (so-called) offline handwritten text, and it will take decades of research and enormous computing power to reach human-level perception, but we should care about some ignorant fears. Please stop.
Strictly to the point in answer three: I think a lot of people build their business model on this kind of academic agonizing.
Are you sure that Hawking doesn't have any experience in AI? Why do you make such an assumption?
90% of our advisors don't have the slightest idea about our research areas and still they judge us.
And I think Norvig is much smarter than Musk :D.
And also I totally agree with Piotr Wójcik.
This is a very interesting case. We need to remember that we may think this is encryption, but it could be just an error :) Of course, if they understand how it is encrypted, then OK, but if not, we can't be 100% sure about that.
AI is still in a very early phase. It is one thing to teach a machine to learn, and a different thing to teach a machine to understand and feel. Intelligence is one thing, but to create true AI it should feel and do something more than going from A to B.
Yes, that is true.
1. Can AI learn how to encrypt itself?
It is hard to say. Yes and no; according to this article, it is possible at some stages.
2. Do you think it's good that Google scientists don't know how this algorithm works?
It has its pros and cons. We can be proud of the invented technology; however, how can we be sure that it will not bring any difficulties and problems in other areas? We lose control over AI then.
3. What do you think about the current research in the AI field?
Do you agree with Stephen Hawking that artificial intelligence could end mankind?
It is a very interesting issue. It depends on the scope in which these AI systems operate. However, if these systems are willing to break any security system, it seems possible. I think that including the human factor in this process may provide some safeguards.
I think that after some time of developing this kind of AI program, it will be a matter of days for this kind of intelligence to break security firewalls etc.
And I totally agree that we as mankind should be proud of this technology.
1. Perhaps very soon, but for what reason?
2. I don't know how to use a parachute, but is it a problem not to use this kind of equipment? As far as I know, people hardly ever die jumping from airplanes.
3. Thanks to Katarzyna's answer I became aware that this research could be used for the automatic breaching of any cryptographic algorithm. But soon we will be discussing quantum computers, which probably have similar capabilities.
So I wonder if in the future AI will have the capability to break any type of defense constructed by a human architect.
Sometimes people do die from parachute failure (but it's very rare; the same can be true in this area). And I think, the same as Katarzyna, that the biggest issue connected with AI will be the breaching of security protocols.
1. Can AI learn how to encrypt itself?
Well, I am not so into AI, but from a programmer's point of view, anything that encrypts itself is pretty weird. Of course it is some kind of protection, and until now we have only considered people trying to break into systems that were encrypted by other people. This seems to be a war of computer systems fighting each other. When it comes to humans, it is simple: the smarter will win; but when it comes to computers with AI, the result is not that obvious.
2. Do you think it's good that Google scientists don't know how this algorithm works?
As I said, I don't know that much about AI. In IT there are situations when a programmer has no clue why his program works when it should not, so it is possible that they don't know everything about this algorithm. But I think it is a matter of time, since Google's scientists are extremely smart and they will figure it out.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
Again, it is hard for me to have an opinion about something I don't know much about. I think that we are still at the beginner stage when it comes to AI; there is so much that we have to learn and test until we are sure how it works. But Stephen Hawking is probably right, and artificial intelligence could end mankind one day.
Interesting point of view, thanks for your answer.
1. Can AI learn how to encrypt itself?
Based on my AI knowledge, I think that neural networks can learn how to encrypt communications. It is only a matter of time until it is publicly available.
2. Do you think it's good that Google scientists don't know how this algorithm works?
If you do not know how something works, it is very difficult to control it. Such a thing can cause a disaster.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I fully agree with Katarzyna. I am also skeptical about AI ending mankind. I am sure that sooner or later people will develop a more stupid way to destroy themselves.
I agree with you on all points. :-)
1. Can AI learn how to encrypt itself?
In my opinion it is possible. If we consider Alice and Bob as one AI (neural network), we can see that this statement is a fact. However, at the end of this structure there is a human, and he decides about the AI. Therefore a self-encrypting AI is possible, but it depends on humans.
2. Do you think it's good that Google scientists don't know how this algorithm works?
It doesn't matter whether they know or not. I'm not sure this knowledge is important in such a situation. Maybe it would be better to know how it works, but the most important thing is the effects.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I disagree with Hawking's opinion. Maybe AI looks dangerous, but it still depends on humans. Without people, AI can't exist. On the other hand, I would like to see in AI a kind of help for mankind, especially where human life is endangered or can be supported by AI.
This article definitely reminded me of Skynet and the Terminator. And I guess I wasn't the only one who thought of that. :) I'm also very afraid of the day when somebody finally develops AI. I am deeply convinced that this breakthrough will truly be the beginning of the end of the human race. I really hope that I will be long dead by then. We human beings have this really disturbing characteristic: we strive for self-destruction. I really do hope that scientists will finally focus on more useful inventions than the ones that can destroy us in the end.
Hi, thanks for presenting this topic to us. I have to say that it is a very interesting subject which might raise some concerns about our future. Let me answer your questions:
1. It certainly can; however, I guess the question concerns the invention of a totally innovative cryptographic method which would be incomprehensible to humans. This is also possible; in fact, I think that given enough data, resources and some time, an AI can evolve and do anything.
2. If that is really true, maybe they should come up with an AI that will analyse the algorithm, find out how it works and present it to the scientists. ;) But what I wrote in the first answer also applies to the human brain, so with time I am certain they will finally understand how it works.
3. It could, but maybe "organic" intelligence or simply natural events will do it at some point. Everything is possible.
Everything is possible these days :) Knowing the power of AI, we need to take precautions and have isolated systems to prevent an ecosystem takeover.
1. Can AI learn how to encrypt itself?
It would be awesome if it could, improving its algorithms and so on, making a living encryption algorithm :)
2. Do you think it's good that Google scientists don't know how this algorithm works?
If they don't know, maybe it's easy to crack and they have no clue about it. It's weird and a bit hard to believe. Moreover, I think that a lot of people across the world are trying to crack it for money. So, there is a high chance that in the near future someone will show them how it works and enlighten them.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I agree with Hawking. There is a pessimistic scenario where AI decides to get rid of all humans. Let's face it, human beings are fragile; we have emotions which can drive us to illogical things, and so on. We aren't perfect, we can get sick, we can't replicate ourselves. A lot of the problems we face every day don't exist in a silicon world.
1. Can AI learn how to encrypt itself?
The article describes how two neural networks can encrypt, send and decrypt a given message. They can do it without the message being decrypted by a third NN, so for sure AI can learn how to encrypt a message; but it was actually the purpose of this experiment to make the NNs learn to do it, so no surprises.
2. Do you think it's good that Google scientists don't know how this algorithm works?
Isn't this the case we face from time to time when we talk about neural networks and AI? A similar case is, e.g., face recognition, where AI picks some special points on the face picture to recognize it, but we don't know what they are. I think it is part of NN science: not having an algorithm, but performing the task anyway.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I don't agree with Stephen Hawking, at least not for the nearest future. Contemporary artificial intelligence is not intelligent at all and does not have self-consciousness. It just does what programmers want it to do, so I don't think it is a threat to humanity. But maybe that will change in the future.
1) Yes, I think it can.
2) It is not a good situation; only with knowledge of the algorithm can one realistically evaluate its efficiency.
3) If I know the problem, I think that we have nothing to fear. But every system should be easy for a human to stop.
1) I don't think AI will ever know how to encrypt and decipher information. Even the main article states that it is improbable that AI will ever be good at cryptanalysis.
2) It's bad, definitely for the Google scientists, because they won't earn money on that.
3) I do not consider AI a threat in itself at all. Still, it's a tool for particular problems, and it will remain one. As you can see, Google develops AI for just one case: encryption of data. Other companies probably develop AI for other things, data analysis for example. I think today's world, especially many IT geeks, is too optimistic about the AI revolution. Thinking about humanoid robots which will produce laser guns to destroy the human race is just an illustration of our thoughts, which are based on pop culture (thinking about robots which use human bodies as a power source, or robots which create some conspiracy against humanity, is the same pop-culture illustration).
But... I agree with Stephen Hawking, who says AI can put an end to the human race. I don't know what he says in detail, but Professor Andrzej Zybertowicz, for example, states that AI can put an end to the human race in a sociological sense. AI, and machines in general, satisfy demands that they themselves created. It leads to many possibilities about which we don't know right now, but humanity can become a community which virtualizes its life: a life which exists in the human mind, not in reality, because people will want to satisfy their virtual needs, which are not important in the real world. A life lived in a post-truth reality, where you cannot say what is true and what is not. And in this sense, AI is more dangerous to us (and it is more likely to happen during our lives). Instead of watching The Matrix or Terminator, I recommend reading the book "Samobójstwo Oświecenia". But, of course, the Matrix and Terminator scenarios could happen someday; remember, though, that it took 200,000 years for us to become who we are right now, so I would not be so optimistic that we will invent humanoid machines which think abstractly in less than thousands of years.
1. Can AI learn how to encrypt itself?
After reading this article I have rather mixed feelings. An NN is one of the tools of AI, and in this particular example we have nothing more than a demonstration of backpropagation learning with a fitness function. So the answer to your question is: yes, we can use the tools of AI to encrypt a message.
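To make the "learning with a fitness (loss) function" point concrete, here is a minimal numpy sketch, my own toy example rather than the paper's code: a single linear layer trained by gradient descent to reconstruct its input, which is the autoencoder idea stripped to the bone.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 4
W = rng.normal(scale=0.1, size=(n, n))   # randomly initialized weights
X = rng.normal(size=(64, n))             # training inputs
lr = 0.1

for _ in range(500):
    Y = X @ W.T                      # forward pass: linear "autoencoder"
    err = Y - X                      # reconstruction error
    loss = np.mean(err ** 2)         # the loss ("fitness") being minimized
    grad = 2 * err.T @ X / len(X)    # (scaled) gradient of the loss w.r.t. W
    W -= lr * grad                   # gradient-descent update

print(loss)  # a tiny value: the network has learned the identity map
```

In the article's experiment the same mechanism is at work, only with deeper nonlinear networks and a loss that also rewards keeping Eve at chance level.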
2. Do you think it's good that Google scientists don't know how this algorithm works?
I would say that as long as it works, they have made their point.
3. What do you think about the current research in the AI field? Do you agree with Stephen Hawking that artificial intelligence could end mankind?
I think we should be careful with the design of autonomous systems and the measures we take to control them. Let's imagine we design an autonomous system to oversee the safety of a flight. As long as we have the possibility to manually override its decisions we are safe, but if we skip the manual override functionality, it is very probable that one day something will go wrong. To sum up, I think we might get to the point where some day something goes wrong, BUT it will be a human error not to have placed a "manual override" in the design of the autonomous system, so I pretty much agree with Stephen Hawking.