Tuesday 17 December 2019

WEEK 5 [16-22.12.19] Analysis of Decision-Making Process Using Methods of Quantitative Electroencephalography and Machine Learning Tools


Hi everyone, this is our latest article:
Long story short, it is about a pilot study on 41 healthy men and 30 men with mental disorders. The participants completed the Iowa Gambling Test, a psychological task thought to simulate the real-life process of decision making.
We found some differences in brain activity during this task between the healthy participants and the ill ones.
For example, our experiment showed that in people with psychiatric disorders the amygdala is hyperactive more frequently than in healthy participants from the control group.
The amygdala is an almond-shaped set of neurons located deep in the brain's medial temporal lobe.
Unfortunately, our sample was not large enough to build a classifier for recognizing specific mental illnesses.
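To give an idea of what such a classifier could look like, here is a purely illustrative sketch (not our study's actual pipeline): a standard scikit-learn classifier trained on pre-computed EEG features, one row per participant. The feature matrix below is random placeholder data, and the group sizes simply mirror our sample.

```python
# A hypothetical sketch, not the actual analysis from the study:
# train a simple classifier on pre-computed EEG features
# (e.g. band power per channel) with a binary label per participant.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data: 71 participants x 20 features.
X = np.random.rand(71, 20)
y = np.array([0] * 41 + [1] * 30)  # 0 = control group, 1 = patient group

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)
print("Mean cross-validated accuracy:", scores.mean())
```

With a sample this small, cross-validated accuracy estimates are very noisy, which is exactly why a larger sample would be needed for disorder-specific classifiers.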
Here are some questions for you:

1. What does the Iowa Gambling Test look like? Had you heard about this test before?

2. Do you think that with many more participants we would be able to build classifiers for recognizing specific diseases?
 
3. What do you think about using AI in the diagnosis of mental illnesses or other diseases?

4. Will it replace psychiatrists or other specialists in the future?

Monday 16 December 2019

Week 5 [16.12-22.12.2019] What can artificial intelligence tell us about unicorns?

Hello,
this week I would like to talk to you about automatic text generation by neural networks. Natural Language Processing is a rapidly growing field of science. It touches on problems like speech recognition, language understanding and machine translation, but the one I personally find the most interesting is natural language generation. In recent years, neural network techniques have seemed the most promising in this area. In 2019 the OpenAI group released a very impressive language model called GPT-2, based on the Transformer architecture. An example of what it can create is this text about unicorns.

Articles that tell more about the model:
https://openai.com/blog/better-language-models/
https://arxiv.org/abs/1706.03762

If you would like to try it yourself, check out the TalkToTransformer or TabNine tools.
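If you prefer to experiment locally rather than through those websites, the publicly released GPT-2 weights can also be loaded in a few lines of Python. This is only a rough sketch assuming the Hugging Face transformers package is installed; the prompt text is just an example.

```python
# A minimal sketch (assumes the Hugging Face "transformers" package):
# load the small released GPT-2 model and continue a prompt.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists have discovered a herd of unicorns living in a remote valley."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output varied but readable.
output = model.generate(input_ids, max_length=80, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```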

What do you think about AI generating texts in natural language?
Is it an opportunity or a threat?
What practical applications of this technology do you see?

Week 5 [16-22.12.19] Dark energy

Today I'd like to turn your attention to this interesting article:
https://www.livescience.com/34052-unsolved-mysteries-physics.html

It is conjectured that dark energy makes up about 74% of the universe (with dark matter a distant second at 22%, and only about 0.4% in solid celestial bodies). It is the energy thought to power the accelerating expansion of the universe (which gravity would otherwise slow down).

1. Do you think we can use dark energy to our benefit?
2. How could we go about tapping into this resource?
3. Can you propose any other energy that could be useful in long-haul space travel (beyond the solar system, far from the sunlight)?

Sunday 15 December 2019

WEEK 5 [16.12-22.12.2019] Blending Realities with the ARCore Depth API

Hello, most of you may have heard of ARCore from Google, an Android library for building augmented reality applications. Today, while looking for a presentation topic, I found a new feature of this library that was recently presented. The ARCore Depth API allows developers to use Google's depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone, in order to estimate the distance to every pixel. I think that a well-functioning depth recognition feature is another big step in the development of AR.
"Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today."
I tried to check whether this feature already works in my case with 3D models of animals, but the model is always simply superimposed on the image without recognizing depth. If your phone supports ARCore, you can check it yourself.
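For anyone curious how depth can be estimated from ordinary camera images at all, here is a rough illustration in Python with OpenCV. It is not the ARCore API (which is Android-only); it only shows the general idea of comparing two views of the same scene and turning per-pixel disparity into a relative depth map. The image file names are placeholders.

```python
# A rough illustration of depth from two views (not the ARCore Depth API):
# compare two images of the same scene taken from slightly different
# positions; larger disparity means the object is closer to the camera.
import cv2

left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Block matching over the two views produces a disparity (inverse depth) map.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Normalize to 0-255 so the result can be saved and inspected as an image.
depth_map = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", depth_map)
```

ARCore does something conceptually similar with consecutive frames as you move the phone, plus a lot of filtering, so that a single RGB camera is enough.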

Video presenting new functions:
https://youtu.be/VOVhCTb-1io

Articles:
https://developers.googleblog.com/2019/12/blending-realities-with-arcore-depth-api.html
https://techxplore.com/news/2019-12-google-flag-ar-depth-builders.html

If you have an idea for a project and would like to use this API in early access mode, please apply here:
https://developers.google.com/ar/develop/call-for-collaborators#depth-api

Questions:
1. Have you ever used AR in your projects? If so, what was it and what tools did you use?
2. What do you think the new function can be used for (apart from the ones mentioned in the article)?
3. Which technology do you think is more future-proof, AR or VR? Or is their combination, MR (mixed reality), what we should be developing?

Tuesday 3 December 2019

WEEK 4 [02.12 – 08.12.2019] Disabled or Cyborg? How Bionics Affect Stereotypes Toward People With Physical Disabilities.



This week I would like us to think about and discuss stereotypes in a field that not so long ago was mentioned only in sci-fi movies. Modern technology is making great progress in the field of bionic prostheses and other artificial devices that treat our disabilities. New developments at the intersection of computer science, engineering, robotics, and medicine include exoskeletons for people with paraplegia, powered and computer-controlled leg prostheses, fully articulated bionic hands, and cochlear implants for people who are deaf. Beyond the technological aspects of this progress, we are also facing a psychological change: German researchers performed a study with the hypothesis that the increasing use of bionic technologies (e.g., bionic arm and leg prostheses, exoskeletons, retina implants, etc.) has the potential to change stereotypes toward people with physical disabilities.

I recommend this great TED Talk on bionic prostheses:


After that, please share your thoughts on the following:
1. What do you think of such modifications of the human body? Are you in favor of bionic prostheses?
2. Are you in favor of the trend of replacing human organs with artificial ones?
3. What do you think about the change in our perception of disabilities, namely that we judge people with plastic prostheses as less competent than those with bionic ones?

Monday 2 December 2019

WEEK 4 [02.12-08.12.2019] PyParadigm - A Python Library to Build Screens in a Declarative Way


I would like to present to you an article about a new Python library – PyParadigm. The library enables the creation of experimental paradigms in experimental psychology. A paradigm consists of different states in which stimuli are displayed, and the user has to react to the stimuli while the responses are recorded. The aim of a paradigm is to capture the user's behavior in the form of reactions or decisions. Psychologists are increasingly confronted with computer-based paradigms, and creating such paradigms can require IT knowledge.
PyParadigm is a new library based on a declarative approach to building user interfaces. The authors write that the proposed approach requires less code and training than alternative libraries. The library works with 2D objects and uses the numpy Python library. The authors' aim was to make it possible to write paradigms with a minimum of code and training; they applied the declarative approach to reduce the amount of code and increase readability. They have prepared a tutorial and several examples of well-known paradigms, which a user can modify or use as a starting point for his/her own.
The library is divided into four modules: surface_composition, eventlistener, misc, extras. The surface_composition module is used to create and display images on the screen. The misc module enables creating the window and drawing images within it. The extras module contains functions for working with the numpy and matplotlib Python libraries.
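To give a rough idea of what such a paradigm looks like in code, below is a generic sketch of a minimal reaction-time paradigm written in plain pygame. It is not PyParadigm's own API, just an illustration of the stimulus/response loop described above; PyParadigm's declarative style aims to express the same thing with less boilerplate.

```python
# A generic reaction-time paradigm sketch in plain pygame
# (illustrative only, not PyParadigm's API): show a stimulus,
# wait for a key press, and record the reaction time.
import time
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont(None, 72)

# One "state": display the stimulus.
screen.fill((0, 0, 0))
screen.blit(font.render("PRESS SPACE", True, (255, 255, 255)), (140, 200))
pygame.display.flip()

# Wait for the participant's response and measure the reaction time.
start = time.time()
waiting = True
while waiting:
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            print(f"Reaction time: {time.time() - start:.3f} s")
            waiting = False
pygame.quit()
```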


Questions:
1. What is your opinion on the use of programming libraries by people who are not experienced in programming, e.g. psychologists?
2. Do you use your own software when you conduct an experiment, or do you use publicly available/commercial software? If you have created your own library, feel free to share a link to it.
3. One can find a lot of libraries for various programming languages, e.g. on GitHub. When you look for something and want to use it, do you trust that a library somebody has uploaded does not contain mistakes?
4. Do you think it is possible that Python will become the only useful programming language?