Fewer beans, please.
Beans are all I can afford, figuratively and literally.
What many don't realize, though, is that if someone has 3 strong, 6 mediocre, and 1 weak paper, there may be a 70% chance that the first paper of theirs someone reads will be a mediocre or weak one. If so, that reader is less likely to look at the rest of their work. The more bad stuff you put out, the greater the chance your good stuff will be overlooked.
That assumes that each paper has an equal likelihood of being read. If 3 papers are great, they are more likely to be cited and more likely to be seen and read. Your point about bad stuff negatively impacting good stuff may be true, but this also ignores any halo effect that the good papers might have.
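To put numbers on this disagreement, here is a minimal back-of-the-envelope sketch (in Python) of the arithmetic both posts are gesturing at. With 3 strong, 6 mediocre, and 1 weak paper, the 70% figure is just 7/10 under the equal-likelihood assumption; if reading probability is instead weighted toward stronger, more-cited papers, the number drops. The weights below are made-up illustrations, not data.

# Portfolio from the thread: 3 strong, 6 mediocre, 1 weak paper.
counts = {"strong": 3, "mediocre": 6, "weak": 1}

def p_first_read_is_bad(weights):
    """Chance the first paper a reader encounters is mediocre or weak,
    given a relative read-probability weight for each quality tier."""
    total = sum(counts[q] * weights[q] for q in counts)
    bad = counts["mediocre"] * weights["mediocre"] + counts["weak"] * weights["weak"]
    return bad / total

# Equal likelihood of being read (the 70% claim's implicit assumption):
print(p_first_read_is_bad({"strong": 1, "mediocre": 1, "weak": 1}))    # 0.7

# Hypothetical halo effect: strong papers read 5x as often, weak half as often:
print(p_first_read_is_bad({"strong": 5, "mediocre": 1, "weak": 0.5}))  # ~0.30

Either way, the qualitative point survives in both directions: more weak papers raise the chance of a bad first impression, while a halo effect on the strong ones pulls that chance back down.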
Also, what exactly counts as a great paper? I've seen junk at top journals, and gems at interdisciplinary or marginal journals. Some stuff was ignored until later discovered to be great, and some celebrated stuff quickly lost its appeal. I'm not saying that people should rush papers or use deceitful practices like salami slicing, but I find a lot of this hemming and hawing about quality papers to be ex post facto.
I accounted for everything you said when I said "there may be." Obviously, it depends on where people look for work and other complex factors. And, like you, I don't think the journal a paper appears in is a good sign of its quality. It's better than chance, but not by much. Lots of bad papers get published in top journals, and some real gems get published in low-impact-factor journals. But it is also true that the more bad stuff you put out, the greater the chance that a person's first exposure to you will be one of your poor papers. If so, the chance they'll choose to read one of your other papers goes down. At least, that's how I operate, and that's how a lot of colleagues I know operate. Think of it as the halo effect in reverse.
Think of it like music, film, or literature. If the first novel you read by novelist X is bad, how likely are you to take a look at their next novel?
If I publish 10 articles, and 3 are strong, 6 are mediocre, and 1 is downright weak, is that better than someone who published 1 strong article?
Three being strong helps you. But why publish 6 mediocre papers and 1 paper you admit is weak? Why not work to turn those 7 papers into 2 or 3 strong ones instead?
This calculation--should I just throw stuff out there, or should I work to make what I publish very good--is the most damaging result of the push to count instead of read.
Sometimes you thought the paper was a real contribution, but the field didn't see it the same way, and it winds up in a B journal. Sometimes you collect data with others, the results weren't exciting, but you still need to publish for their sake. Sometimes you collect data you hoped would be exciting, but it wasn't, and you don't want to suppress the findings and contribute to the file-drawer problem. Sometimes there is a theoretical or empirical point you think matters even if others don't, and you might be right. Sometimes a paper really is good, and people generally think so, but it doesn't fit the top-journal template, so it gets published a level lower even though people like it.
Personally, I'm more concerned about people who have mediocre findings but enough paper-framing skills to write them into a top journal, even though the work is not really a significant contribution and takes up space that could be filled with work that is.
I do not evaluate work the way you do. People are not responsible for what their findings are, because none of us can make the world work the way we want it to work. But people are responsible for the skill of the analysis they conduct--and it is the analysis that is supposed to produce the findings. If an analysis is produced with great skill, then the findings should be accepted until a better analysis comes along--even if I wish the world worked differently than the findings say. By great skill I mean, for example, accounting for or otherwise addressing obvious alternative explanations, measuring or observing well, thinking carefully about the theories that might apply, using appropriate models if the work is statistical, actually doing an analysis that addresses the question you are asking, and so on.
People who think "good work = good findings"--whatever "good findings" means--are another major problem with the field.
Because you can contribute to the conversation in many different ways.
You sound like someone from Berkeley.
I disagree. Honestly, the productivism of the discipline is so front and center in the way I engage with research that my immediate reaction to a mediocre paper is: this was probably something they had to schlep together for the bean counters.
What I have seen from some senior people is not so much salami slicing, but maybe what we can call "fore-warning": they write shorter, less complex papers in mid-level journals as a sort of precursor to the main paper. I think it can be a combination of getting the article in front of different audiences, getting something out quickly for grant purposes, spinning off some smaller ideas or testing ideas early on, and so on.
I don't know what a 'weak' paper is - usually the abstract tells you whether the paper has the sort of complexity you are looking for. I think there are people - maybe outside your sub-field, maybe practitioners, maybe students - who might be interested specifically in your 'weak' articles.
Voted for Bernie.
There is nothing wrong with publishing a lot. The problem is the competition this triggers. Productive people set standards that others have to strive for. This is what creates salami slicing, recycling of text and ideas, and whatnot, not to mention the lack of well-thought-out and developed research in the discipline.
Many people in sociology are capable of fantastic research. Instead they are forced to squeeze as many papers as they can into the time they have, rather than doing something really great.
An additional problem is that no one has time to actually read all these papers...and so then we just count them instead...which only reinforces the salami-slicing/bean-counting phenomenon. Self-fulfilling prophecy and all that...someone should write 37 papers about it.
Or, don't count them. I've salvaged a couple of job candidates by pointing to the high quality of their work, even though they had no ASR/AJS and only 1 published paper. (And as those people have succeeded here, it makes my task easier every year). And I've eliminated more than a few job candidates by showing the low quality of their ASR paper (or multiple papers) in comparison.
So, even if everyone seems to be counting, you still have the option to read. It's in your power to do that, and to publish as if you want people to read, not count.
Well, sure. I agree with you 100% that a paper should be judged
If we could spend time writing one paper every 7 years, we probably would. But modern-day academia means you need to produce more and work at a faster pace. Sometimes I submit and publish work I wish I could have spent another year on. But there are time constraints, tenure clocks, and other factors pushing us to work fast. So sometimes quality suffers a bit. I am not saying this is right, just saying how it is.