Tuesday 17 December 2019

WEEK 5 [16-22.12.19] Analysis of Decision-Making Process Using Methods of Quantitative Electroencephalography and Machine Learning Tools


Hi everyone, this is our latest article:
Long story short, it's about a pilot study of 41 healthy men and 30 men with mental disorders. They completed the Iowa Gambling Task, a psychological test thought to simulate the real-life process of decision making.
We found some differences in brain activity during this task between healthy participants and those with disorders.
For example, our experiment showed that participants with psychiatric disorders had a hyperactive amygdala more frequently than healthy participants from the control group.
The amygdala is an almond-shaped set of neurons located deep in the brain's medial temporal lobe.
Unfortunately, our sample was not large enough to build a classifier for recognizing specific mental illnesses.
Here are some questions for you:

1. What does the Iowa Gambling Task look like? Have you heard about this test before?

2. Do you think that with many more participants we would be able to build classifiers for recognizing specific diseases?
 
3. What do you think about using AI in the diagnosis of mental illnesses or other conditions?

4. Will it replace psychiatrists or other specialists in the future?

Monday 16 December 2019

Week 5 [16.12-22.12.2019] What can artificial intelligence tell us about unicorns?

Hello,
this week I would like to talk to you about automatic text generation by neural networks. Natural Language Processing is a rapidly growing field of science. It touches on problems like speech recognition, language understanding, and machine translation, but the one I personally find the most interesting is natural language generation. In recent years, neural network techniques have seemed the most promising in this area. In 2019 the OpenAI group released a very impressive language model called GPT-2, based on the Transformer architecture. An example of what it can create is this text about unicorns.

Articles that tell more about the model:
https://openai.com/blog/better-language-models/
https://arxiv.org/abs/1706.03762

If you would like to try it yourself, check out the TalkToTransformer or TabNine tools.
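If you prefer to experiment locally, below is a minimal sketch using the Hugging Face transformers package (this assumes you have it installed; the prompt is just an example, and the small public gpt2 checkpoint is downloaded on first run):

```python
# Minimal GPT-2 text generation sketch using the Hugging Face "transformers" package.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```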

What do you think about generating texts in natural language by AI?
Is it a chance or a threat?
What practical applications of this technology do you see?

Week 5 [16-22.12.19] Dark energy

Today I'd like to turn your attention to this interesting article:
https://www.livescience.com/34052-unsolved-mysteries-physics.html

It is conjectured that dark energy makes up about 74% of the universe (with dark matter a distant second at 22%, and only about 0.4% in solid celestial bodies). It's the energy thought to drive the accelerating expansion of the universe (which gravity would otherwise slow down).

1. Do you think we can use the dark energy to our benefit?
2. How could we go about tapping into this resource?
3. Can you propose any other energy that could be useful in long-haul space travel (beyond the solar system, far from the sunlight)?

Sunday 15 December 2019

WEEK 5 [16.12-22.12.2019] Blending Realities with the ARCore Depth API

Hello, most of you may have heard of ARCore from Google, an Android library that allows you to handle augmented reality. Today, looking for a presentation topic, I found a new feature of this library that was presented recently. The ARCore Depth API allows developers to use Google's depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone, in order to estimate the distance to every pixel. I think that a well-functioning depth recognition feature is another big step in the development of AR.
"Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today."
I tried to check whether this feature already works in my case with 3D models of animals, but each time the model was simply superimposed on the image without accounting for depth. If your phone supports ARCore, you can check it yourself as well.
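To make the occlusion idea more concrete, here is a tiny numpy sketch of the per-pixel test (my own illustration with made-up numbers; this is not the ARCore API):

```python
import numpy as np

# Hypothetical depth map: distance (in metres) from the camera to the real
# scene at every pixel, as estimated by a depth-from-motion algorithm.
scene_depth = np.array([[1.2, 1.3, 3.0],
                        [1.1, 2.8, 3.1],
                        [1.0, 2.9, 3.2]])

# Depth of the rendered virtual object at the same pixels
# (np.inf where the object does not cover the pixel).
object_depth = np.array([[2.0, 2.0, 2.0],
                         [2.0, 2.0, 2.0],
                         [np.inf, np.inf, np.inf]])

# A virtual pixel is drawn only where the object is closer than the real
# surface; everywhere else the real scene occludes it.
object_visible = object_depth < scene_depth
print(object_visible)
```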

Video presenting new functions:
https://youtu.be/VOVhCTb-1io

Articles:
https://developers.googleblog.com/2019/12/blending-realities-with-arcore-depth-api.html
https://techxplore.com/news/2019-12-google-flag-ar-depth-builders.html

If you have an idea for a project and would like to use this API in early access mode, please apply here:
https://developers.google.com/ar/develop/call-for-collaborators#depth-api

Questions:
1. Have you ever used AR in your projects? If so, what was it and what tools did you use?
2. What do you think the new feature could be used for (apart from the uses mentioned in the article)?
3. Which AR or VR technology do you think is more future-proof? Or is it their combination, MR (mixed reality), that we should develop?

Tuesday 3 December 2019

WEEK 4 [02.12 – 08.12.2019] Disabled or Cyborg? How Bionics Affect Stereotypes Toward People With Physical Disabilities.



     This week I would like us to think about and discuss stereotypes in a field that not so long ago was mentioned only in sci-fi movies. Modern technology is making great progress in bionic prostheses and other artificial devices that treat our disabilities. New developments at the intersection of computer science, engineering, robotics, and medicine include exoskeletons for people with paraplegia, powered and computer-controlled leg prostheses, fully articulated bionic hands, and cochlear implants for people who are deaf. Beyond the technological aspects of this progress, we are also facing a psychological change: German researchers performed a study testing the hypothesis that the increasing use of bionic technologies (e.g., bionic arm and leg prostheses, exoskeletons, retina implants, etc.) has the potential to change stereotypes toward people with physical disabilities.

I recommend this great TED Talk on bionic prostheses:


After that, please share your thoughts on the following:
1. What do you think of such modifications of the human body? Are you in favor of bionic prostheses?
2. Are you in favor of the trend of replacing human organs with artificial ones?
3. What do you think about the change in our perception of disabilities, namely that we judge people with plastic prostheses as less competent than those with bionic ones?

Monday 2 December 2019

WEEK 4 [02.12-08.12.2019] PyParadigm - A Python Library to Build Screens in a Declarative Way


I would like to present an article about a new Python library, PyParadigm. The library enables the creation of experimental paradigms in experimental psychology. A paradigm consists of different states in which stimuli are displayed; the participant has to react to the stimuli, and the responses are recorded. The aim of a paradigm is to capture the participant's behavior in the form of reactions or decisions. Psychologists are increasingly confronted with computer-based paradigms, and creating such paradigms can require IT knowledge.
PyParadigm is a new library based on a declarative approach to building user interfaces. The authors write that the proposed approach requires less code and training than alternative libraries. The library works with 2D objects and uses the numpy Python library. The authors' aim was to make it possible to write paradigms with a minimum of code and training; they chose the declarative approach to reduce the amount of code and increase readability. They have prepared a tutorial and several examples of well-known paradigms, which a user can modify or use as a starting point for their own.
The library is divided into four modules: surface_composition, eventlistener, misc, and extras. The surface_composition module is used to create and display images on the screen. The misc module enables creating the window and drawing images within it. The extras module contains functions for working with the numpy and matplotlib libraries.
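For comparison, here is roughly what a single reaction-time trial looks like when written directly in pygame (my own sketch, not PyParadigm code); it shows the kind of boilerplate that a declarative library tries to hide:

```python
import time
import pygame

# A minimal reaction-time trial: show a stimulus, wait for SPACE, report the latency.
pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont(None, 72)

# Draw the stimulus (a plain text cue) and remember when it appeared.
screen.fill((0, 0, 0))
stimulus = font.render("PRESS SPACE", True, (255, 255, 255))
screen.blit(stimulus, stimulus.get_rect(center=screen.get_rect().center))
pygame.display.flip()
shown_at = time.perf_counter()

# Wait for the response and measure the reaction time.
reaction_time = None
while reaction_time is None:
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            reaction_time = time.perf_counter() - shown_at
        elif event.type == pygame.QUIT:
            reaction_time = float("nan")

print(f"Reaction time: {reaction_time:.3f} s")
pygame.quit()
```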


Questions:
1.     What is your opinion about using programming language libraries by people who are not experienced in programming, e.g. by psychologists?
2.     Do you use your own software when you conduct experiments, or do you use publicly available/commercial software? If you have created your own library, you could share a link to it.
3.     One can find a lot of libraries for various programming languages, e.g., on GitHub. When you look for something and want to use it, do you trust that a library somebody has uploaded does not contain mistakes?
4.     Do you think it is possible that Python will become the only widely used programming language?

Monday 18 November 2019

Week 3 [18-24.11.19] CO2 Storage in Minerals

While searching through articles from Frontiers, I found the article titled "An Overview of the Status and Challenges of CO2 Storage in Minerals and Geological Formations".

  
      We have heard a lot lately about the climate crisis and ideas on how to prevent it. According to the Paris Agreement, all countries should reduce CO2 emissions so that by 2100 the global temperature rises by no more than 2°C, which is the most optimistic scenario. In the above article, the authors describe methods of CO2 storage in minerals and geological formations. Unfortunately, most of the carbon dioxide captured from the atmosphere has to be stored permanently; only a part can be converted into, e.g., fuel. Carbon dioxide storage technology is still being studied at the laboratory and pilot stage. "Globally, carbon mineralization in these rock types has a sequestration potential of up to 60,000,000 GtCO2 if the resource is economically available and ultimately completely saturated with carbon dioxide."
      There are different methods of carbon mineralization: (1) ex-situ, where the source of alkalinity is transported to the CO2 capture site, ground to small particles, and combined with CO2 in a high-temperature and high-pressure reaction vessel; (2) surficial, where diluted or concentrated CO2 reacts with a source of alkalinity at the surface (e.g. mine tailings, smelter slag); and (3) in-situ, where CO2-carrying fluids circulate through subsurface porosity in geological formations.
       Such CO2 storage methods are long-term and non-toxic, and can also help mitigate health and environmental hazards in specific contexts. In addition, because CO2 is converted to a stable carbonate form, it is the safest storage mechanism in terms of minimizing leakage.
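For a concrete picture of what "converted to a stable carbonate form" means, a representative mineral carbonation reaction (olivine reacting with CO2 to form magnesite) can be written as:

Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2

The exact minerals involved depend on the rock type; this is only an illustrative example.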

  1. What do you think about the methods of CO2 storage in minerals and geological formations mentioned by the authors? Maybe you've heard about other ways to store CO2?
  2. Do you think that with the current economy and human activities we have a chance to meet the conditions of the Paris Agreement and limit the rise in global temperature to only 2°C by 2100?
  3. What technological solutions (even ones that are abstract at the moment) do you think could help prevent climate change?

Week 3 [18-24.11.19] Write-A-Video

A global team of computer scientists, from Tsinghua and Beihang Universities in China, Harvard University in the US and IDC Herzliya in Israel, has developed "Write-A-Video", a new tool that generates videos from themed text. Using words and text editing, the tool automatically determines which scenes or shots are chosen from a repository to illustrate the desired storyline. The tool enables novice users to produce quality video montages in a simple and user-friendly manner that doesn't require professional video production and editing skills.
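To give an intuition of how text can drive shot selection, here is a toy Python sketch (my own illustration, not the authors' method) that picks, for each sentence of the storyline, the clip from a small tagged repository whose keywords overlap the most:

```python
# Toy sketch: choose a clip for each storyline sentence by keyword overlap.
# The repository tags and the storyline below are invented for illustration.
repository = {
    "beach_sunset.mp4": {"beach", "sunset", "sea", "evening"},
    "city_traffic.mp4": {"city", "cars", "traffic", "street"},
    "mountain_hike.mp4": {"mountain", "hiking", "snow", "trail"},
}

storyline = [
    "We started the day hiking a snowy mountain trail",
    "and ended it watching the sunset at the beach.",
]

for sentence in storyline:
    words = set(sentence.lower().replace(",", "").replace(".", "").split())
    best_clip = max(repository, key=lambda clip: len(repository[clip] & words))
    print(f"{sentence!r} -> {best_clip}")
```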

1. What do you think about this idea, does it have a chance to be used more widely or is it just a curiosity?
2. Do you think Artificial Intelligence will be able to support or replace film editors in the future?
3. Do you have experience in video processing? Don't you think that learning the basics will be easier than creating the described video database?
4. Are you familiar with other projects that make it easier to work with video processing? If so, tell us about it.

https://www.eurekalert.org/pub_releases/2019-11/afcm-csd111419.php

A video illustrating the project can be seen here:
https://vimeo.com/357657704

Full article:
http://miaowang.me/papers/a177-wang.pdf

Sunday 3 November 2019

Week 2 [4-10.11.2019] Man versus machine.

While searching for curiosities about face recognition, I found the following study.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150036
Face recognition is used to confirm a person's identity. Research shows that artificial intelligence is better at this task than people. English researchers have identified and selected people who perform this task much better than AI.
Please read the material, especially the tests and their results. And answer the questions ...

1) Do you still think that AI is much more effective than humans?
2) Is it reasonable to invest in such people?
3) Or should we rather invest in machines and improve them further?

Week 2 [04-10.11.2019] Blended learning

Hello everybody,

today I would like to raise topics related to blended learning, because I believe that each of us has had, or still has, contact with this teaching method. Please read the article and give short answers to the following questions.

1. What is blended learning?
2. What are the advantages and disadvantages of this teaching method?
3. What examples do you know (other than in the article) of using blended learning?
4. What could be the future of teaching and learning? Will information technologies dominate teaching so that teachers become unnecessary?

Link to the article:
https://www.researchgate.net/profile/Feifei_Han/publication/327947654_Han_F_Ellis_R_A_2018_Identifying_consistent_patterns_of_quality_learning_discussions_in_blended_learning_environments_Internet_and_Higher_Education_Doi_101016jiheduc201809002/links/5d351f2fa6fdcc370a54868b/Han-F-Ellis-RA-2018-Identifying-consistent-patterns-of-quality-learning-discussions-in-blended-learning-environments-Internet-and-Higher-Education-Doi-101016-jiheduc201809002.pdf

Thursday 24 October 2019

Week 2 [04-10.11.2019] Peer review process in science

Hello,
Today I'd like to discuss responsible science. Once in a while, we come across a piece of news about a whole batch of scientific papers whose results turn out to be invalid, even though they were peer-reviewed.

However, only a few of them have a reviewer who is willing to redo all the described tests. The reason is pretty straightforward: running someone else's experiment for the second, third, or fourth time isn't nearly as exciting as running your own research for the first time, but studies like these are showing us why we can no longer avoid it.

What is your opinion about the peer review process in science?

https://science.howstuffworks.com/innovation/science-questions/database-18000-retracted-scientific-papers-now-online.htm

https://www.blog.pythonlibrary.org/2019/10/13/thousands-of-scientific-papers-may-be-invalid-due-to-misunderstanding-python/

https://www.sciencealert.com/a-bug-in-fmri-software-could-invalidate-decades-of-brain-research-scientists-discover


Monday 21 October 2019

Week 1 [21-27.10.2019] How we can store digital data in DNA

Hi everyone!
I usually share written papers... but this talk was very interesting to me (I hope it will be for you too).

https://www.ted.com/talks/dina_zielinski_how_we_can_store_digital_data_in_dna#t-755635

I have a few questions for you:
1. Which of today's data should definitely be preserved for future generations?
2. What problems do you see with storing data in DNA?
3. Do you think it's a good solution for storing data, or should we use something else?
4. Do you know of any new directions in data storage (newer than the cloud)? Do you have any idea what the next step will be?
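Before you answer, here is a toy Python sketch showing the simplest possible mapping between bits and DNA bases (my own illustration; real DNA storage schemes use far more elaborate, error-tolerant encodings):

```python
# Toy illustration: map every 2 bits to one DNA base and back.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)          # CGGACGGC
print(decode(strand))  # b'hi'
```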

Week 1 [21-27.10.2019] Worth Getting Your PhD Degree

Dear Students,
Read the article at https://finishyourthesis.com/worth-getting-your-phd-degree/
and share your comments and experiences with us.

Week 1 [21-27.10.2019] Ted Talks

Dear Students,
Choose a TED Talk. Watch it and tell us what you have learnt from it.
https://www.ted.com/#/

Friday 18 October 2019

Winter semester 2019/20

Dear Students,

1. Each week, texts/films and presentations will be posted, which I would like you to read/watch and comment on.

2. You should also present a scientific article with your comments and questions for the group to discuss: you do not write it yourself, you just find something of interest to you online, present it, and moderate the discussion of it. Put your name on the list of blog moderators next to the date when you would like to do it.

3. At the end of the semester you will deliver a 10-minute presentation of your research area.

Read How to Make an Oral Presentation of Your Research  at http://www.virginia.edu/cue/presentationtips.html
Use Steve Jobs's presentation techniques (https://www.youtube.com/watch?v=S4UEJMuo0dA).



1. 21-27.10.19
2. 04-10.11.19
3. 18-24.11.19
4. 02-08.12.19
5. 16-22.12.19
6. 07-12.01.20
7. 20-26.01.20

Friday 14 June 2019

Traditional classes


Dear Students,
Decide when you would like to deliver your presentation.
It should be from the area of your studies and it should be 10-13 minutes long. Do remember to prepare a PPT presentation.

21 June 2019  Room 120
1.00 p.m.
1.20 p.m.
1.40 p.m.
2.00 p.m.
2.20 p.m.




24 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.


25 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.


26 June 2019 Room 231
5.00 p.m.
5.20 p.m.
5.40 p.m.
6.00 p.m.
6.20 p.m.
6.40 p.m.
7.00 p.m.

Monday 3 June 2019

Week 7 [03-09.06.2019] Can you tell which face is real?

Hi!
Recently, there was hot news about a paper entitled "A Style-Based Generator Architecture for Generative Adversarial Networks". The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.

That research also introduced a new, highly varied and high-quality dataset of human face photographs (FFHQ) used for training.

The first approach to the fake-face generation task was put together by Ian Goodfellow, now director of machine learning at Apple's Special Projects Group and a leader in the field, when he proposed a new framework: Generative Adversarial Networks. The first GAN faces were low-resolution and barely recognisable, but in less than five years all of that changed. Today's AI-generated faces are full-colour, detailed images.
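For those who have never looked inside a GAN, here is a minimal, illustrative PyTorch sketch of the adversarial training loop on toy 1-D data (my own example; it has nothing to do with the StyleGAN code itself):

```python
import torch
import torch.nn as nn

# Toy GAN on 1-D data drawn from N(3, 0.5): the generator learns to turn noise
# into samples the discriminator cannot tell apart from the real ones.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator (outputs logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)   # "real" samples
    fake = G(torch.randn(64, 8))            # generated samples

    # Train the discriminator: real -> 1, fake -> 0 (fake detached so G is not updated here).
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: make the discriminator label fakes as real.
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0
```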

1. Have you ever heard about artificially generated images which contain realistic human faces? Can you propose some possible real-world applications of such algorithms?

You may have seen a website named ThisPersonDoesNotExist.com doing the rounds, which uses AI to generate startlingly realistic fake faces. There is also a website called WhichFaceIsReal.com, which lets you test your ability to distinguish AI-generated fakes from the genuine article.

2. Can you spot, or are you aware of, the kinds of imperfections these generated faces have?

3. Do you see any public security or privacy issues with these methods?

Articles and sources:

https://arxiv.org/abs/1406.2661
https://arxiv.org/pdf/1812.04948.pdf
https://www.youtube.com/watch?v=-cOYwZ2XcAc
https://www.theverge.com/2019/3/3/18244984/ai-generated-fake-which-face-is-real-test-stylegan
https://medium.com/@kcimc/how-to-recognize-fake-ai-generated-images-4d1f6f9a2842

Sunday 2 June 2019

Week 7 [03-09.06.2019] Modern medium for presenting research

Reading through a scientific paper is often arduous. As authors, we usually want to present our ideas as fast as possible to meet a conference deadline. As a consequence, we may introduce research debt and transfer all the labor to readers, who then have to struggle to understand our work.

To address this problem, scientists mainly from Google and OpenAI started a new journal called Distill in 2017. The journal is focused on clear and easy-to-understand presentation of ideas.

This is the article about research debt: https://distill.pub/2017/research-debt/
And the journal itself: https://distill.pub/

Suggested questions to discuss:
1. Do you think it is the responsibility of the authors of research papers to present their ideas in an easy-to-understand manner? Or is presenting them accurately and precisely enough?
2. Are visualizations or interactive presentations at all helpful for explaining ideas in your field of research? Or maybe other means of communication work better?
3.  Have you recently seen any work that helped you to understand some concept better?
I like this article about Attention Mechanism if you're interested: https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3

Saturday 1 June 2019

Week 7 [03-09.06.2019] Automated identification of media bias in news articles



I want to share with you an article with an overview of news media bias detection methods. News media bias has been well studied in the social sciences and has now become an interest of computer scientists. There are plenty of text mining, computer vision, and other types of algorithms that may be used to detect media bias. I want to ask you the questions below.

https://link.springer.com/article/10.1007/s00799-018-0261-y
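Before the questions, here is a toy scikit-learn sketch of the simplest text-mining approach one could imagine, treating bias detection as supervised text classification (my own illustration with an invented four-headline "dataset", not a method from the article):

```python
# Toy sketch: bias detection as supervised text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, purely illustrative training data.
headlines = [
    "Government announces new budget for healthcare",
    "Corrupt elites push yet another disastrous scheme",
    "Parliament debates proposed education reform",
    "Radical activists ruin the city once again",
]
labels = ["neutral", "biased", "neutral", "biased"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Officials present annual report"]))
print(model.predict(["Shameless politicians wreck the economy"]))
```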

1. What is media bias? What types of media bias do you know?
2. Which type of media bias has the greatest impact on the reader?
3. Do you think it is possible to create automatic tools to detect news media bias?


Thursday 23 May 2019

Week 6 [20-26.05.2019] Something about music & medicine & technology

Dear All,

I'd like to share my new interests with you:

https://positivepsychologyprogram.com/music-therapy-clinical/#dementia

The article describes how music can influence us, our health, and our diseases.

All of us know that music can help us get out of a bad mood and feel better, sometimes very quickly. But that is not the issue here.
I think that sounds (not only melodies) composed together in a specific way can treat serious diseases. Some scientists also believe in such hypotheses and conduct research in various specific fields, such as cancer, Alzheimer's, and autism.
And of course, music therapy is also used in many psychiatric cases, such as depression, schizophrenia, etc.
The most surprising for me were the effects reported by scientists working with cancer cells: there have been many experiments suggesting that cancer cells can be reduced or even destroyed by sounds of a specific frequency. I have come across many other interesting findings as well; I've read, for example, that the sound of footsteps can help stroke patients with the rehabilitation of their legs.

But coming to my questions:

1. Have you ever met someone who used this kind of healing (either a patient or a doctor)?
2. Have you ever heard of technical tools enabling such therapies? If not, can you imagine such a tool? What kind?
3. Can you imagine a computer supporting such treatments? How?
4. What was the most surprising for you in this article?
5. Do you consider these kinds of methods serious, or just "magic" for naive people? Why?


Have a nice day,
Marta