While doing science is hard in itself, publishing the results is yet another problem. Some of these points might be valid for other fields as well, but I'd like to focus on AI, which is already sometimes compared to alchemy due to its strong focus on practical results rather than theory.
In the following paper, the authors identify troubling trends appearing in research publications:
https://arxiv.org/pdf/1807.03341.pdf
The authors describe four mistakes that are often made, especially by young researchers:
1. Failure to distinguish between explanation and speculation.
2. Failure to identify the sources of empirical gains.
3. Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies.
4. Misuse of language.
Questions:
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
*Misuse of language, for example saying that some AI solution solves a reading comprehension task while it naturally doesn't comprehend anything. Refer to section 3.4.1 of the paper for details.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
3. Have you noticed any other problems with scientific publications you read so far?
Note: You can relate these questions to other fields if you're not interested in or familiar with AI.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think there should be two options: one with professional terminology for advanced experts, and an easier version for people who are not familiar with the subject.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that we should have a source of trustworthy materials to base our work and research on. Rigorous standards are not that rigorous, in my personal opinion. They are mostly a formality that gives papers a similar structure, so that everyone, regardless of country, can find their way around them.
3. Have you noticed any other problems with scientific publications you read so far?
Many of them seem to be ‘sponsored’, meaning that they do not portray the truth but are a kind of promotion of a particular solution or company. Paper will accept whatever is written on it, which unfortunately we can't completely believe.
Thanks for the comment. Indeed, sponsored articles may be a huge problem, especially if they deal with a very specific subject that is described in only a few other sources. It may be really hard to validate information from such papers.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
Terminology is like learning a language or educating children. At the beginning there's a simple language: simple sentences, simple names and simple letters. Later, we find out that there are "rz" and "ż". The same is true of technical terminology. We should always start with simple things and learn more and more difficult ones. Did you have any idea in high school (technical secondary school) what a triple integral over a volume is? :-D
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
A perverse question. I will use an aphorism of prof. Jan Miodek:
"Theory is when we know everything and nothing works. Practice is when everything works and no one knows why. In this room, we combine theory with practice: nothing works and no one knows why."
Thus, presenting something that works in simple words is a genius idea in itself.
3. Have you noticed any other problems with scientific publications you read so far?
There are many different problems with publications. The biggest one is the question of on whose commission a paper was written; then many conclusions and arguments can be interpreted differently.
Thanks for the comment! Oh, that's really bad if conclusions leave room for interpretations other than those intended by the authors :P
Referring to your funny quote, I have a bad feeling that many papers combine theory and practice in an untruthful way, where "everything works on paper and no one knows why". I'm often disappointed when trying to actually use some solutions described in papers.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I wholeheartedly agree. Wishful thinking doesn't need much to inflate a hype bubble. Very often there is talk about this or that "exceeding human performance" and so on (computers are better at Chess and Go, blah blah). This causes resource misallocation (greedy business people fund "data scientists", dreaming of firing all their employees, screwing over the nerds who made it possible, and having all the profit for themselves). And then another AI winter comes (when the product of the nerds requires even more labour than the original system, and now the fat cats hate anything connected with "AI").
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
Yes, they can slow the system down, but they will never be bullet-proof. People will find new ways to game the system and there will be a need for more "modest proposals" (like the one cited in the article).
3. Have you noticed any other problems with scientific publications you read so far?
I have a very small portfolio of papers myself, so I don't want to criticize too much. But some articles are better than others, and even the difficult ones teach something. For AI or any CS-related field, I think a working implementation should always accompany the publication; this makes reproducing the result possible for everybody. Other scientific disciplines should publish the raw empirical data used in their research, not only selected statistics supporting their claims. I am not asking for more large tables in the papers, which are useless (especially in printed form). I would like the data to be published in a computer-readable format, downloadable from the Internet.
Thanks for the comment! I'm glad we're on the same side :D It bothers me too that AI solutions are named as if they were self-aware entities. This indeed can be harmful for the industry in the long run.
Yep, more and more companies want to know why the models they bought work, not only what their accuracy is on some test set. It'll be bad if they find out that their super advanced technology bases its predictions on something totally unrelated to the problem.
You said that you don't want to criticize other papers. Recently I was reviewing one for the first time. I think I understood the paper better because I had to write a whole section about what is wrong with it.
You're right, code attached to a paper would be perfect. Papers that have project pages where you can download the datasets and see the whole code are much more trustworthy than papers with perfectly performed and described statistical tests.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's OK to bend the truth to make a solution more appealing and reach a wider audience?
Yes, I agree it's wrong. And also irresponsible. Such publications can be read by journalists, who in turn can transform them into some monstrosities. It is not worth this wider audience.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think two meanings of "publication" are mixed here: "publication" as in a peer-reviewed journal that "counts toward the score" of a scientist, and "publication" as "presenting an idea or result to be discussed". The paper linked above cites LeCun's "Proposal for a new publishing model in computer science" [44], which seems like a step in the right direction, allowing for fast circulation of ideas and evaluation of quality.
3. Have you noticed any other problems with scientific publications you read so far?
I concur with Tomasz that missing working code is problematic. Another problem I notice is paying too much attention to few-percentage-point improvements over the current "state of the art" on popular datasets, while ignoring performance and effectiveness.
Yes, focusing on tiny improvements in accuracy is a plague. It's probably because it's the easiest way to publish something. But unfortunately, works that focus on insights and new ideas rather than squeezing out precious percents of accuracy are a minority.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
Yes, I agree with this statement. Scientists especially should specify in detail what problem they are exactly solving and what input they use. In another section they can write how it can be extended, and that is the right place to imagine how it could be.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that every scientific paper should be carefully read and the author should explain everything in a clear manner. I think that slowing down the process of publication is not so bad if the quality of these papers is better. I know that many scientists compete with each other to publish their results faster than others, but they should be more patient and carefully check their papers.
3. Have you noticed any other problems with scientific publications you read so far?
The one thing that is problematic for me is that authors write about some methods and describe them in general, but forget about the details, which makes the work unreproducible. I think that in the case of algorithms and computer science, publicly available code makes a publication look more reliable, as we know that all the details which may not be explained in the paper for some reason are in the published code.
Reproducibility is a big problem. It's unbelievable that it's so common in Computer Science. I understand that some people can't publish their code, but that should be a marginal case, usually when a paper deals with some commercial solution.
Leaving out important details annoys me as well.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that it depends on the point of view. From the experts' point of view, the terminology should be professional and accurate, but if the article is read by someone who is a novice or not very familiar with the topic, he or she expects understandable statements. Technical terminology is very complicated and convoluted for non-professionals. It is obvious that if we want to present some papers to a wide audience, we have to use terminology which will be understandable to most people.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that a suitable overview of the literature should be the support for any research. A proper review confirms that research in this field is relevant. If we show that the research makes sense, the rigorous standards will not even seem terrible.
3. Have you noticed any other problems with scientific publications you read so far?
Yes, I have noticed problems with scientific publications. I am interested in Artificial Intelligence and electroencephalography, and I have noticed that the authors of scientific papers sometimes do not present all the obtained results: e.g., instead of presenting the results accompanied by sound proof of statistical significance, they present only selected best results, trying to show that the method they proposed is superior to all other approaches. When they present some problem based on advanced formulas, they sometimes show the equations without explanation! It makes the article less readable; it usually happens when the authors do not understand the key ideas of the methods used but try to make the article look more learned.
Monika Kaczorowska, 13 January 2019 at 00:38
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
The field of Statistics is constantly challenged by the problems that science and industry brings to its door. In the early days, these problems often came from agricultural and industrial experiments and were relatively small in scope. With the advent of computers and the information age, statistical problems have exploded both in size and complexity. Challenges in the areas of data storage, organization and searching have led to the new field of “data mining”; statistical and computational problems in biology and medicine have created “bioinformatics.” Vast amounts of data are being generated in many fields, and the statistician’s job is to make sense of it all: to extract important patterns and trends, and understand “what the data says.” We call this learning from data.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
Research and development are closely interlinked. They contribute to increasing the human knowledge base in every area of life. The combination of research and development creates new products, technologies and other innovations that foster the development of the economy and thus improve the quality of life. Research and development is counted as the first stage of the product life cycle. Research and development work allows for the development of innovative products and services.
3. Have you noticed any other problems with scientific publications you read so far?
Many scientific studies can give false results. Experience has shown that many factors can increase the risk of false-positive results: errors in publications, longer time to publish results that do not reach the level of statistical significance, a tendency for the magnitude of the effect to shrink with the year of publication, weak predictive value of initial reports, post hoc analyses of subgroups distinguished by gender or environmental factors, and the way in which the study is financed.
Thanks for the comment. I'm glad you've mentioned statistical significance. A friend of mine was working on some new classifiers for his PhD, and to test them he performed all the statistical tests that should be performed; he said that it was really hard to obtain statistically significant results proving that a new method is indeed better, even though it seemingly was. Many solutions definitely lack proper testing.
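To make the point about proper testing concrete, here's a minimal sketch of one simple such check: a paired permutation test on the per-fold accuracy difference between two classifiers. The numbers are made up for illustration; real comparisons would use your own cross-validation scores (and possibly a different test entirely).

```python
import random

def paired_permutation_test(scores_a, scores_b, n_permutations=10000, seed=0):
    """Two-sided paired permutation test on the mean difference.

    Randomly flips the sign of each per-fold difference; the p-value is
    the fraction of permutations whose absolute mean difference is at
    least as large as the observed one.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_permutations):
        permuted = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(permuted) / len(permuted)) >= observed:
            hits += 1
    return hits / n_permutations

# Hypothetical per-fold accuracies for two classifiers on 10 CV folds.
old = [0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.80, 0.79, 0.81, 0.80]
new = [0.82, 0.80, 0.83, 0.81, 0.79, 0.82, 0.81, 0.80, 0.82, 0.81]
p = paired_permutation_test(new, old)
print(f"p-value: {p:.3f}")
```

With only a handful of folds and tiny per-fold differences, such a test can easily fail to reject the null hypothesis, which matches the experience described above: a method that "seemingly" wins may not be provably better.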
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that this type of definition helps bring the ideas of artificial intelligence closer to people who are not IT experts. Unfortunately, such definitions often suggest much greater possibilities of artificial intelligence algorithms than exist in reality.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I believe that stricter rules in this area could really slow down development. You simply have to be very careful about what you read and look for confirmation in other publications.
3. Have you noticed any other problems with scientific publications you read so far?
The main accusation I have against many publications is the language they are written in. It is often more complicated than it needs to be.
Yeah, people who use very difficult language when writing papers may be trying to hide something. But in fact it's very difficult to write in an easy, consistent and precise way. No doubt it's worth learning though.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that it depends on the journal where we want to publish. In our publications we should use words that are understandable to people who are familiar with the topic, and we should explain professional definitions in the relevant section. If we want our article to be read by people not familiar with the topic, then we should use simple words that help these people understand our work.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I don't think so. Without regulations, the articles available on the internet and in paper journals would be of poor quality. We need to check our solutions many times before we publish them. Articles should help other people improve their work and gain knowledge about the research area in which they want to work. Finally, they should help improve our lives.
3. Have you noticed any other problems with scientific publications you read so far?
Authors describe a problem in the article, but without the details that could help repeat the results in one's own laboratory. Another problem is the language used in publications.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
Sometimes I have a feeling that publication authors create the content of an article just like The Sun does, in the form of clickbait. On the other hand, I think that reviewers sometimes like that, because it could potentially generate better income from sales of the magazine.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that everybody has their own mind and should be able to assess the validity and usefulness of theses and conclusions; no regulations are needed there.
3. Have you noticed any other problems with scientific publications you read so far?
A clickbait title and a large amount of well-known theory to make an impression about the author's knowledge. Furthermore, very few papers are in a form which allows you to code/verify anything mentioned.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
Yes, I do agree with the authors. We should not use buzzwords, and a scientific paper should not misuse language on purpose.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
In my opinion we should stick to rigorous standards, and I do not feel this would slow down publications. A scientific paper should be what it is supposed to be: a fact-based analytical document.
3. Have you noticed any other problems with scientific publications you read so far?
Some papers could be called vague and do not try to solve any real problem.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
Personally I believe that there is always a possibility to clarify something with simple words. However, I would recommend using the most professional and reliable vocabulary possible. Yes, I agree with the authors, but such situations take place only sometimes, not often.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I would like to have good, proven publications. It is sad that when reproducing certain experiments, you get completely different results than those in the published materials. I have also met with opinions of other researchers that some articles deviate from reality. Rigorous standards may only help in this situation…
3. Have you noticed any other problems with scientific publications you read so far?
Apart from the four mentioned in the presented article, I have also encountered significant grammatical errors and references to non-existent bibliographic items.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
AI solutions are currently on top, so a lot of people are interested in this subject, and because of that I think there should be a split between popular science articles and scientific ones. This could be done in two ways: we can create separate articles, or split a paper into chapters, one in 'popular' language and another with a deeper explanation for experts. I would prefer the first option, so as not to waste my precious time reading some nice words for a broader audience :)
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that 'time to market' is a really important factor, but not at the expense of quality, so I think it is a missed argument. I would prefer to have a mature publication with a good explanation of the algorithm and a verified hypothesis rather than only a new idea without any proof that it really works. There are other places in the AI community where such ideas can be shared.
3. Have you noticed any other problems with scientific publications you read so far?
For me, one of the biggest problems is that usually there is no working code attached to an article, so I need to believe that someone really did it correctly; moreover, if I want to reuse some part of the solution for another purpose, I need to start everything from scratch. Also, without the code I cannot be sure that all the results are correct and nothing was hidden.
1. It depends on who we direct our articles and publications to. In many scientific journals, a specific technical language is required, which is hard to write. On the other hand, scientific articles can be written for a wider public who does not understand technical language and needs the issue described in an accessible way.
2. I agree that certain norms are too rigorous, and sometimes articles are rejected mainly for their language despite their high scientific value. On the other hand, each article should be checked so that the publishing house does not publish something that it should not. It's hard to find a golden mean in this matter. Sometimes, publishing an article with a new idea that is not fully tested means that some other research team will become interested in the topic, check it in their own way, and can then confront their results with such an article.
3. Often, researchers publish ideas for research rather than the results of specific work. Also, I have come across a whole range of slightly falsified results, because a research group did not want to lose funding for their research and did not want to reveal that the material they were testing was not suitable for any applications; from what I remember, it was about some uses of hydrogen. In addition, the publication process itself often takes a very long time.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that there should be a term for people who understand this topic (experts), and also a meaning for people who do not understand it: ordinary users.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
I think that there should be a standard that facilitates the work of many, as well as a source that you can trust, so that you can draw on new information and use its materials.
3. Have you noticed any other problems with scientific publications you read so far?
I have faced problems with distortion of information.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
On the one hand, a really catchy introduction and an appropriate choice of words will disseminate the article; if, however, the described solution does not really work properly, or works only on a strictly defined set of data, what is the point of its widespread dissemination?
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down the publication of new ideas. Do you think it's a justified argument? Or maybe sharing new ideas without properly checking why they work has little value anyway?
If the idea is based on false assumptions, then it should not be made public... and yet we have an excessive growth in the number of articles in the field of computer science ;)
3. Have you noticed any other problems with scientific publications you read so far?
Many of them describe the offered solutions in such a general manner that it is difficult to find out how well (and whether) the presented algorithm works.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that texts should generally be adjusted to their potential readers and audience. Scientific papers are written for researchers, so the language and terminology should be appropriately technical. Texts written for newspapers, for, let's say, a somewhat "less educated" audience, should definitely differ from the former. The language should reflect the potential audience's ability to grasp and comprehend particular notions.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down publications of new ideas. Do you think it's the justified argument? Or maybe sharing new ideas without properly checking why they work has a little value anyway?
I don't think that the current standards are rigorous. In my opinion, they are too lenient.
3. Have you noticed any other problems with scientific publications you read so far?
Just like the previous commenters, I often have doubts about the reliability and objectivity of so-called "sponsored" publications.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience? *Misuse of language, for example saying that some AI solution solves reading comprehension task while it naturally doesn't comprehend anything. Refer to section 3.4.1 of the paper for details.
Yes, I fully agree with the authors of the mentioned paper that this is wrong. Scientific articles are not for everyone. Of course, there are many science blogs for ordinary people that report science news taken from scientific articles, and they sometimes exaggerate it. There is no need to bring in even more confusion.
Nowadays fake news is a real problem and an awful plague. Let's just not add to it.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down publications of new ideas. Do you think it's justified argument? Or maybe sharing new ideas without properly checking why they work has a little value anyway?
I think this is a very poor counterargument. You can always publish your ideas with a pilot study and describe the possible next steps in the discussion section. This is justified, as it can ensure the validity and correctness of the presented data. A publication should be made only if you are sure of the findings presented in it. Only then can the publication fulfil its main purpose: to present an idea that is worth discussing and studying further.
3. Have you noticed any other problems with scientific publications you read so far?
It seems to me that articles are sometimes too short: they leave out important aspects of the research, so I cannot always identify the main problem or find answers to the questions that arise while reading.
And on the contrary, sometimes they are too long, with unnecessary repetition of text and information, which makes the reader unwilling to read them from cover to cover.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
*Misuse of language, for example saying that some AI solution solves reading comprehension task while it naturally doesn't comprehend anything. Refer to section 3.4.1 of the paper for details.
I think it depends on who the recipient of the text is. It will be written differently depending on whether it is prepared for readers from outside the field or dedicated to fellow scientists.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down publications of new ideas. Do you think it's the justified argument? Or maybe sharing new ideas without properly checking why they work has a little value anyway?
In my opinion, the most important thing is for the reader to be able to understand what the author wants to share. Sometimes scientists use grammatical constructions so sophisticated and convoluted that they are really hard to understand, not to mention the scientific meaning of the sentences.
3. Have you noticed any other problems with scientific publications you read so far?
Many publications are written in a more complicated way than necessary. Sometimes I get the impression that the author wants to intimidate or trick the reader with sophisticated wording.
1. AI solutions are often portrayed with suggestive definitions as something taken directly from science fiction*. Do you agree with the authors of the mentioned paper that it's wrong? Or maybe it's ok to bend the truth to make a solution more appealing and reach a wider audience?
I think that AI is not science fiction, because we deal with it every day; there are a lot of applications that use AI. Of course, it should be adapted to each problem individually. The way of presenting AI must also be properly chosen, so that it clearly explains its operation and application.
2. One of the counterarguments mentioned in the paper is that rigorous standards can substantially slow down publications of new ideas. Do you think it's the justified argument? Or maybe sharing new ideas without properly checking why they work has a little value anyway?
Tightening the rules will reduce the number of articles that meet the standards. I believe that this is a bad step, because it will limit the rapid development of this branch of computer science. Some ideas cannot be verified right away because of their specificity. In addition, the scientific community itself will eliminate wrong or unrealistic ideas.
3. Have you noticed any other problems with scientific publications you read so far?
I have read publications from similar fields that differ greatly in terms of content: some were full of substance, while others contained only scraps of information. Unfortunately, I have also come across journals whose issues are reserved by universities for two years ahead, leaving no possibility of publishing articles from outside.
Dear Adam,
Many thanks for the interesting issue!
1. For me it's obvious that the language should be adjusted to the kind of issue being discussed. I mean, the language must fit the subject; otherwise we would give higher priority to form instead of content. In my opinion, language should always be in a subordinate position, because its function is to describe the given issues as precisely as possible. To achieve maximal accuracy we need to use adequate wording; if the subject is technical, then such specialized language will help achieve the best descriptive effect.
2. I think rigorous standards are not always the best idea, but scientific analysis really needs an exact framework, because proper understanding is the key to continuing research in a logical order. I mean, further scientific work can bring better and faster results when it proceeds according to strict procedures. Flexibility does not help here.
3. For me the worst problem in scientific publications is too many words versus real valuable content. I don't like reading the same text written, in fact, in a few different ways, just to show how eloquent the author is :-)
BR,
Marta