Friday 14 June 2019

Traditional classes


Dear Students,
Please decide when you would like to deliver your presentation.
It should be on a topic from your field of study and should be 10-13 minutes long. Do remember to prepare a PPT presentation.

21 June 2019  Room 120
1.00 p.m.
1.20 p.m.
1.40 p.m.
2.00 p.m.
2.20 p.m.




24 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.


25 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.


26 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.

Monday 3 June 2019

Week 7 [03-09.06.2019] Can you tell which face is real?

Hi!
Recently, a paper entitled "A Style-Based Generator Architecture for Generative Adversarial Networks" made headlines. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.
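To give a rough feel for what "style-based" means here, below is a minimal, hypothetical Python/PyTorch sketch, not the authors' actual architecture: a mapping network turns the latent code z into a style vector w, every synthesis layer is scaled and shifted by w, and per-layer noise supplies the stochastic detail. All layer sizes and names (StyleLayer, mapping) are made up purely for illustration.

import torch
import torch.nn as nn

class StyleLayer(nn.Module):
    """One schematic synthesis layer: the style vector w scales/shifts the
    features, and per-pixel noise adds stochastic fine detail."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale = nn.Linear(w_dim, channels)
        self.to_shift = nn.Linear(w_dim, channels)
        self.noise_weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x, w):
        x = self.conv(x)
        # Stochastic variation: fresh noise for fine details (freckles, hair placement).
        x = x + self.noise_weight * torch.randn_like(x)
        # Style modulation: w controls the statistics of this layer's features.
        scale = self.to_scale(w).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(w).unsqueeze(-1).unsqueeze(-1)
        return x * (1 + scale) + shift

# Mapping network: latent z -> intermediate style vector w.
w_dim = 32
mapping = nn.Sequential(nn.Linear(w_dim, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))
layers = [StyleLayer(16, w_dim) for _ in range(3)]

z = torch.randn(1, w_dim)
w = mapping(z)
x = torch.randn(1, 16, 8, 8)   # stand-in for the learned constant input
for layer in layers:           # coarse -> fine; feeding a different w to different
    x = layer(x, w)            # layers is what gives scale-specific control

Because coarse layers set pose and identity while fine layers set texture-level details, swapping w only for some layers mixes "styles" from two latent codes.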

That research also introduced a new, highly varied and high-quality dataset of human faces (FFHQ).

The fake-face generation task was first tackled in 2014 by Ian Goodfellow, now director of machine learning at Apple's Special Projects Group and a leader in the field, when he proposed a new framework: Generative Adversarial Networks. The results back then were small, blurry, low-resolution images; in less than five years, all of that changed. Today's AI-generated faces are full-colour, detailed images.
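For anyone who has not met GANs before, here is a minimal, hypothetical Python/PyTorch sketch of the adversarial training idea behind that framework: a generator maps noise to samples, a discriminator learns to tell them from real data, and the generator is trained to fool it. The network sizes, data and hyperparameters below are illustrative only, not those used in the paper.

import torch
import torch.nn as nn

# Toy generator and discriminator (sizes are illustrative).
latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real samples (e.g. flattened image pixels).
    real = torch.randn(32, data_dim)
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator, i.e. push D(G(z)) towards 1.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

The two losses pull in opposite directions, which is exactly the "adversarial" game: as the discriminator gets better at spotting fakes, the generator is forced to produce more realistic samples.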

1. Have you ever heard of artificially generated images containing realistic human faces? Can you propose some possible real-world applications of such algorithms?

You may have seen a website named ThisPersonDoesNotExist.com doing the rounds, which uses AI to generate startlingly realistic fake faces. There is also a website called WhichFaceIsReal.com, which lets you test your ability to distinguish AI-generated fakes from the genuine article.

2. Can you spot, or are you aware of, the kinds of imperfections those generated faces have?

3. Do you see any public security or privacy issues arising from those methods?

Articles and sources:

https://arxiv.org/abs/1406.2661
https://arxiv.org/pdf/1812.04948.pdf
https://www.youtube.com/watch?v=-cOYwZ2XcAc
https://www.theverge.com/2019/3/3/18244984/ai-generated-fake-which-face-is-real-test-stylegan
https://medium.com/@kcimc/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842

Sunday 2 June 2019

Week 7 [03-09.06.2019] Modern medium for presenting research

Reading through a scientific paper is often arduous. As authors, we usually want to present our ideas as fast as possible to meet a conference deadline. As a consequence, we may introduce research debt and transfer all the labor to readers, who then have to struggle to understand our work.

To address this problem, scientists mainly from Google and OpenAI started a new journal in 2017 called Distill. The journal is focused on a clear and easy-to-understand presentation of ideas.

This is the article about research debt: https://distill.pub/2017/research-debt/
And the journal itself: https://distill.pub/

Suggested questions to discuss:
1. Do you think it is the responsibility of research-paper authors to present their ideas in an easy-to-understand manner? Or is presenting them accurately and precisely enough?
2. Are visualizations or interactive presentations helpful for explaining ideas in your field of research? Or do other means of communication work better?
3. Have you recently seen any work that helped you understand some concept better?
I like this article about Attention Mechanism if you're interested: https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3
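As a rough illustration of the idea that article walks through, here is a minimal, hypothetical NumPy sketch of one attention step: score each encoder state against the current decoder state, turn the scores into weights with a softmax, and take the weighted sum as the context vector. The dot-product scoring used here is just one of the scoring variants the article discusses, and the shapes are toy values.

import numpy as np

def attention(decoder_state, encoder_states):
    # Score each encoder state against the decoder state (dot-product scoring).
    scores = encoder_states @ decoder_state
    # Softmax turns the scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The context vector is the weighted sum of the encoder states.
    return weights @ encoder_states, weights

# Toy example: 5 encoder states and one decoder state, each of dimension 8.
enc = np.random.randn(5, 8)
dec = np.random.randn(8)
context, w = attention(dec, enc)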

Saturday 1 June 2019

Week 7 [03-09.06.2019] Automated identification of media bias in news articles



I want to share with you an article with an overview of news media bias detection methods. News media bias has been well studied in the social sciences and has now become an interest of computer scientists. There are plenty of text mining, computer vision and other types of algorithms that may be used to detect media bias (a toy text-mining sketch follows the link below). I want to ask you the questions below.

https://link.springer.com/article/10.1007/s00799-018-0261-y
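As one concrete example of the text-mining angle, here is a minimal, hypothetical Python/scikit-learn sketch of a bias classifier: TF-IDF bag-of-words features fed to logistic regression. The tiny labelled sentences are made up purely for illustration; the systems covered in the survey are considerably more sophisticated.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up toy sentences labelled as biased (1) or neutral (0), purely for illustration.
texts = [
    "The so-called experts once again pushed their radical agenda.",
    "Parliament passed the budget bill by a vote of 310 to 120.",
    "This disastrous policy will obviously ruin the entire country.",
    "The committee will publish its report on Thursday.",
]
labels = [1, 0, 1, 0]

# TF-IDF bag-of-words features + logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Officials announced the new schedule today."]))

Such a word-level classifier can only catch bias expressed through loaded wording; bias by selection or omission of facts needs very different methods, which is one reason the questions below are not trivial.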

1. What is media bias? What types of media bias do you know?
2. Which type of media bias has the greatest impact on the reader?
3. Do you think it is possible to create automatic tools to detect news media bias?