Wednesday, 6 January 2016

Week 6 [04-10.01.2016] OpenAI as a way to mitigate AI threats

Hello All! Earlier this week Kinga posted an interesting article about breakthroughs in the field of AI. I, on the other hand, would like to discuss the threats related to AI research and the OpenAI initiative/company, which is supposed to mitigate them.

We already had an interesting discussion about whether advanced AI research is a threat to human existence (Week 4, Has science gone too far?). I'm proposing this topic as a follow-up to that discussion.

OpenAI is a non-profit artificial intelligence research company. Its goal is to advance AI research in a way that benefits humanity as a whole. The interesting thing is that all OpenAI research and patents will be open to the public and its code will be open-sourced. The organisation was founded on December 11th, 2015, and among the founders are Elon Musk, Sam Altman (president of Y Combinator) and other well-known tech entrepreneurs. OpenAI has world-class research engineers and scientists on board and a $1 billion budget donated mostly by the founders.

I cannot recall any other research initiative which has such an interesting goal and is so well (privately) funded. I'm pretty excited about OpenAI and I plan to follow its research. You can read more about OpenAI on the official website. Please also read through this interesting interview with Musk and Altman in which they discuss their motivation for creating OpenAI: https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.5w9vvndtq.

A few questions to start the discussion:
  1. What do you think about the OpenAI initiative?
  2. Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?
  3. Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

34 comments:

  1. 1. What do you think about the OpenAI initiative?

    One of the aims of OpenAI is to share its work with the world (for example, it has committed to sharing any patents it might develop). A company with money can hire the best people and have them focus on research. The research they intend to do looks interesting, and I hope it will have a spectacular effect.
    I think that is a good initiative.

    2. Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?
    „The decision to make AI findings open source is a tradeoff between risks and benefits. The risk is letting the most careless person in the world determine the speed of AI research – because everyone will always have the option to exploit the full power of existing AI designs, and the most careless person in the world will always be the first one to take it. The benefit is that in a world where intelligence progresses very slowly and AIs are easily controlled, nobody will be able to use their sole possession of the only existing AI to garner too much power.”

  2. Initiatives such as OpenAI will serve science and development in this field. We know that publishing research helps the field develop; we have talked about it several times.
    The problems that arise around artificial intelligence are quite specific. I believe that every ethical way of building it is a good way, and the same approach may turn out to be either good or bad.
    There are already several "open" initiatives. I do not know to what extent this one differs from the others; perhaps the distinguishing feature here is the subject matter.

  4. Hello
    1. AI is a relatively young area, as is IT as a whole :). Most solutions and innovations are based on previous results, so I have a feeling that the OpenAI team will prepare a knowledge base of AI. OpenAI works on its own research, of course, but the main source of its content will come from other AI researchers.
    2. I think so. Let me summarize my answers from this week's previous posts: AI should improve our lives. Right now algorithms are trying to understand what customers want to do (or buy :)), or how we can replace people with machines to save money. AI can be something more useful: it can save lives in advanced systems in medicine or, for example, in vehicles.
    3. I think this initiative is based on the others :) it is not a breakthrough.


    Replies
    1. In terms of research continuity, this initiative definitely builds on the overall AI research done in academia and in the tech industry (at least the publicly available part). So in this respect it isn't a breakthrough - I agree with that.

      I still feel that the model in which it was founded and the motivation behind it are really interesting:
      1. Tech industry thinkers and researchers identified a potential threat to humanity which might be caused by our own research.
      2. Tech entrepreneurs created a non-profit company which aims to mitigate this threat by conducting open research in this area.
      3. The organisation is privately funded with a budget of around $1 billion.

  5. Honestly, I'm not so sure whether this initiative will help increase the probability that A.I. will be developed in an appropriate/beneficial way. It will certainly help A.I. researchers make progress and may be a way to better understand some of the risks associated with it. But it is hard for me to imagine that it could control how A.I. research will develop.

    In the comments to the article, the Machine Intelligence Research Institute was mentioned as a similar initiative. MIRI is a research nonprofit studying the mathematical underpinnings of intelligent behavior. As they say: "Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed."

    Replies
    1. I guess the only way to be sure that we won't see malicious AI would be to control or even block all research in this area, and I agree with you that this seems impossible. Still, I'm glad that OpenAI will be focused on human-friendly AI and that its goals aren't profit-oriented. This is a huge difference if you compare it to the big tech companies which are currently spearheading AI research (Google, FB).

      Thanks for pointing out MIRI, it looks interesting. I can see that Peter Thiel, who is also supporting OpenAI, has donated a large sum of money to MIRI.

  6. A non-profit research company covering artificial intelligence is an amazing idea. In my opinion all non-profit initiatives are fantastic and a big help for ordinary people. I expect that a lot of young researchers and engineers will develop their own AI projects based on OpenAI's ideas in the future. I think a non-profit company will promote AI among ordinary people and can help them understand and domesticate the idea. Moreover, a non-profit research company can develop new ideas under public scrutiny and can help protect us from the dangers connected with the use of AI in the future. It is possible that some "Dr. Evil" will take a groundbreaking idea and use it against other people, but if we know everything about the idea we will be able to build some defence. I reckon the OpenAI initiative is a little similar to the Linux initiative. I admire people who are willing to do something without a salary.

    Replies
    1. I definitely agree with your points about non-profit organisations and OpenAI. One clarification though: since OpenAI is really well funded, the researchers working with them will be really well paid. I feel that this is really important in order to conduct world-class research.

  7. Several personalities and companies came together to form an alliance around artificial intelligence. Named OpenAI, this initiative aims to fund research whose results will benefit everyone. Elon Musk, one of the founding members, dreams of a "general artificial intelligence" that could handle almost any type of computation.
    OpenAI is an alliance, an initiative and a non-profit research organization all at once. Its founding members are well-known industry figures or companies with a vested interest in seeing improved tools and new avenues of research in artificial intelligence. The two co-chairs are Elon Musk, head of SpaceX and Tesla, and Sam Altman, the Y Combinator boss. Ilya Sutskever, formerly a researcher at Google, will be the organization's research director, and there are other familiar faces, such as investor Peter Thiel, as well as companies such as Amazon (through its Web Services branch) and India's Infosys.

  8. Hi,

    1. What do you think about the OpenAI initiative?

    In my opinion the OpenAI initiative is a very good one.
    It can be useful for researchers and for the people who work within it. I agree with ZC that a non-profit initiative is amazing.

    2. Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    In my opinion every approach carries some risk, but we must ask a question: is the risk very big or not? If the answer is no, we can work on the project and only set some rules for handling the risk. But if the answer is yes, we must develop a plan for dealing with the risks.
    3. Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?
    I agree with Pawel Markowski: this initiative is based on the others :) it is not a breakthrough.

  9. The OpenAI idea seems quite reasonable. I hope they will succeed and I look forward to the results of their work. However, I have similar feelings to those expressed above: somehow I cannot imagine any initiative which could force the ethical development of AI.

    Replies
    1. Agreed, but this still feels like a good start and it raises awareness of potential problems.

    2. Krzysztof, in your opinion, what other efforts could be taken in order to ensure the development of an ethical AI?

  10. I'm a bit skeptical and don't get how OpenAI will ensure the proper usage of AI. AI is just a tool, and the more people know how to use it, the higher the probability that someone will use it for some unfair, bad purpose. Companies like Google, Apple or Facebook have corporate standards and policies that more or less ensure that technology is used for business purposes. If we already have problems with corporations scanning the Internet and looking for private information, what about cyber criminals who don't use AI yet?
    I think that releasing AI libraries, like Google's TensorFlow, better serves the purpose of accelerating AI research.
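
    To illustrate the point, here is a minimal sketch of what an openly released library like TensorFlow puts into anyone's hands (a hypothetical toy example using the current TensorFlow/Keras API with made-up random data, not OpenAI's or Google's actual code):

      # Train a tiny classifier with an openly released library (TensorFlow/Keras).
      import numpy as np
      import tensorflow as tf

      # Made-up toy data standing in for a real dataset.
      x_train = np.random.rand(256, 20).astype("float32")
      y_train = np.random.randint(0, 2, size=(256, 1)).astype("float32")

      # A small feed-forward network built from publicly available building blocks.
      model = tf.keras.Sequential([
          tf.keras.Input(shape=(20,)),
          tf.keras.layers.Dense(32, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

      # Anyone with the library installed can train and evaluate a model like this.
      model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)
      print(model.evaluate(x_train, y_train, verbose=0))

    A few public lines like these are all it takes to start experimenting once the library itself is open, which is exactly why open releases accelerate research.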

  11. So OpenAI will actually release their code as open source. Google open-sourced a much less powerful version of TensorFlow than the one they actually use in their own projects. This is understandable, since they don't want to lose their technological advantage over competitors. OpenAI doesn't have this problem and can do ALL of its research in public.

  12. 1. What do you think about the OpenAI initiative?

    That seems like something that makes sense. Unless it changes hands or its core values, I believe this is a step in the right direction.

    2. Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    I do believe that the risk they put an emphasis on, that is, greedy corporations trying to use AI for their own good, is correctly addressed. As for AI being smarter than humans and trying to steer humans into doing something they do not want, I don't see that as a risk at all. After all, I am fighting that kind of intelligence every single day.

    3. Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

    Define "similar". I recall other initiatives that aim to fight big companies and their profits; things like Social Enterprise. That make sense and I really want to see raise in such non-profit initiatives.

  13. 1. Everything that is open and whose code you can access is great. But (there are always some cons :-)) any kind of open initiative which has no support from a big company will fail. A great example is Red Hat and Fedora: whenever there is a serious bug in Fedora OS, Red Hat moves its developers to work on that particular area. So it's great when it's open, but this kind of research should be backed by some company which can hire smart people to work on the problem.
    2. I think that in my lifetime I have nothing to fear in the AI area.
    3. Sorry, I can't recall any similar initiative. Maybe yes; I think that it should not be replicated but rather forked.

  14. What do you think about the OpenAI initiative?
    I totally agree with my colleagues: every open-source initiative is good for the community, especially with a budget like this.
    Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?
    It's hard to predict. It's better than doing nothing.
    Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?
    Bringing .NET to open source? :) Another example is Toyota and their hydrogen fuel cell patents.
    http://cleantechnica.com/2015/01/08/toyota-making-5600-hydrogen-fuel-cell-patents-free-use-industry-companies/

  15. I'm not really a fan of Microsoft and .NET technologies, but I remember there was an open-source project called Mono which ported .NET to Linux. The Toyota initiative looks promising, thanks for pointing it out.

  16. Hi, thanks for sharing this interesting article and interview with us. I support "open" movements like these, since they enable a wider range of people to contribute to and also use the results. The field of AI especially will flourish from the introduction of such initiatives.
    The more peers reviewing such a project, the lower the chance of rogue activities that could pose a risk in the future. The project is pretty complex at the moment; however, in the future it might become easier to understand and manipulate, so contributions should be rigorously monitored and checked before being committed.
    The idea is actually pretty clever because it at least shares the responsibility of maintaining and using it among all contributors, and if something malfunctions everyone will be to blame and not just a specific organization (like Skynet). ;)
    There is the Blue Brain Project, which tries to simulate a fully functional brain; however, I think it is not open. The complexity is probably similar between these projects.

    Replies
    1. Thanks for the valuable insights, I'm glad that you also agree with the direction OpenAI is heading. I hadn't heard about the Blue Brain Project before; I can see that it is publicly funded (the Swiss government, EU grants). It also looks promising!

  17. What do you think about the OpenAI initiative?
    I think it is an approach which may lead to a beneficial way of developing AI. It may be a way of empowering humans by weakening the negative impact of AI.
    Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?
    In fact, it may be the first step taken in order to address the existential risk from advanced artificial intelligence. The more AI is developed, the more conscious of it we will be and the more involved in actions to limit its destructive influence.
    Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?
    This idea is actually based on other initiatives providing open-source code.

  18. What do you think about the OpenAI initiative?

    I think it is the right initiative and a step in the right direction that may lead to a new quality. The more people working together in a field, the broader the spectrum of experiences, ideas and solutions...

    Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    This issue is rather specific, but almost everything that exists can be used in both good and bad ways.

    Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

    If you think about it, we can find such examples, though I will not list them here.

  19. I think it is an interesting and needed initiative. As was said in previous discussions, it would be great if there were some kind of independent international authority which would supervise developments in AI, as the risks are potentially big, and maybe this is a good first step for raising security issues. I like that it is non-profit (if it really stays that way) and that it was co-founded by Musk, who is known to be careful about AI - maybe this will make the whole initiative more effective. I also like that it is open source - it may be easier to stop the negative development of AI if there are more people involved and aware of the developments.

  20. What do you think about the OpenAI initiative?

    A great thing. It can only help.

    Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    We don't know how to handle this kind of risk because we don't understand it fully. It is only a theoretical problem for now; that may change when we make a big step forward.

    Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

    In IT we have a lot of similar projects, for example a non-profit project created by NASA that allows you to help find life in the universe.

  21. I think this is a very interesting and right direction for all research programs. It is an alternative to corporate and state-owned technologies. Corporations won't share profit and knowledge with people, that's for sure. The same goes for states - most achievements are state-protected secrets to which citizens don't have access, which is funny, as it is said that the state belongs to the citizens.
    These kinds of initiatives might be the future of technological progress, and this should make us happy.

  23. I think that every initiative that popularizes and supports research is good. It gives an opportunity to use others' results and therefore to avoid the mistakes they made. Besides that, with an "open approach" the speed of progress in such an advanced field can be increased. On the other hand, we have to face the risks mentioned by Marcin in the first comment. At the moment I'm not able to find a better answer to the second question. Moving to the last question, isn't it related to every other open-source initiative?

  24. What do you think about the OpenAI initiative?

    I agree with the others that it is a great initiative. In my opinion it could speed up work on this subject. Moreover, it enables a lot of people who are interested in the subject to work with it and stay up to date, even if they are not engineers in big companies' AI projects.

    Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    I don't know if it is the right way, but at least it is a first step. I think it is important that such a subject is under consideration.

    Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

    Actually it looks kind of similar to existing and well-known open-source initiatives.

  25. First of all, thank you for the follow-up :)

    1. What do you think about the OpenAI initiative?

    You just can't say no to this kind of initiative. When smart people with money and ideas hire even smarter people to develop something and share it with the world for the greater good, it just has to go well.

    2. Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?

    It's definitely one of the possible ways. The good thing is that it was started by private individuals, not governments.

    3. Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?

    Yes, at least a few... starting with the open-source initiatives pointed out by Emilia.
    Yes, this model is easily implementable in other areas. Idea + money = an attempt to realise a set goal.

  26. On the one hand, I like the idea of empowering people with AI knowledge, as said in the last sentences of the article: “(…) the best defence against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any person or a small set of individuals who can have AI superpower.”
    But on the other hand, I think one of my predecessors raised a valid point saying that the more people know how to use it, the more probable the misuse of OpenAI's work becomes. Anyway, the whole idea is at an embryonic stage and I am looking forward to seeing it grow.

  27. It looks really promising but also scary. There are a lot of great businessmen involved and they have put in a lot of money ($1 billion!). They can do almost anything, and I'm not sure that with this kind of resources it's really safe to publish all the results. Of course, it's better that the idea is to keep the research open rather than to create a "secret evil organization" ;-).

    I'm not sure if it's the best way to address the existential risk from advanced AI; it might also increase the risk. For sure this organization will speed up progress in the field, which is a great thing for researchers.

    Model of "open source" is well known but this is more. It's not a network of highly skilled AI experts. It's well funded organization which can be good but with this level of funding and influencers involved it can go in many different directions - not always good. Open Source is a perfect balance because of a large community of independent experts.

  28. 1 - What do you think about the OpenAI initiative?
    > I saw Elon Musk saying that 'robots / AI' might command us one day. The first time I heard about OpenAI and Elon Musk supporting it, I couldn't understand the intention clearly; then came all that talk about risk, etc. However, there are many respected people in that initiative.

    2 - Do you feel that this is the right way to address the existential risk from advanced artificial intelligence?
    > No / Yes... (prefer not to answer in detail)

    3 - Do you recall any other similar initiatives? Do you think this model could be replicated in other areas?
    > This is not the first time a working group has been established to work on a specific subject. Unfortunately, many of them didn't receive this much support.

