Fu Manchu wrote: ↑Fri Feb 05, 2021 3:35 pm
How old is your supervisor? Just curious to know what "young" means.

Giadina_1988 wrote: ↑Fri Feb 05, 2021 2:18 pm
It was the case for me last year. I had a relatively young supervisor, but excellent in his qualifications: an excellent track record of successful supervisions, a great publication record, a host that already had an MSCA fellow... Nothing bad on paper! In the reviewers' comments we could clearly see they had something against the supervisor (who, in fairness, has a bit of a reputation as a 'challenger' in his field). They questioned his qualifications, even stating he was too young to supervise an MSCA. Result: 30 points LESS than the previous year (when the supervisor was exactly the same and not questioned at all!).
PetetheCat wrote: ↑Fri Feb 05, 2021 2:10 pm
Yes, but remember that this also means you can get a reviewer who dislikes your well-known supervisor. Mostly it is good, but it can cut both ways.
Bluestar wrote: ↑Fri Feb 05, 2021 5:14 pm
I agree with Little_Venice. I would add that with 11,000 applications (or 9,000; it doesn't really make a difference) there is a need to sort applications heavily. Therefore, if an evaluator wants to find shortcomings or limitations in your application, she/he will find them, and then your application may already be compromised in any case. It's not the first time I've heard that from colleagues who evaluate.

Little_Venice wrote: ↑Fri Feb 05, 2021 4:35 pm
This sounds perfect on paper, but I personally, over the years, have seen dozens of examples of random, inconsistent, and self-contradictory reviewer comments. They will literally say that the project is innovative in the first comment and that the project lacks innovative aspects in the last one, with all kinds of inconsistencies in between. Also, dozens of examples of people scoring significantly lower on a resubmission, having allegedly improved the proposal. Sadly, the reviews I have seen strongly suggest that, at least in those particular cases, little of what you describe below is actually followed in practice.
GuyFromSpace wrote: ↑Fri Feb 05, 2021 4:20 pm
People are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports, identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, the panel Vice-Chair decides.

Furthermore, if the proposal is a resubmission but the score is lower, the rapporteur must check for divergences in opinion. For example, if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but by 2020 everyone was doing it).

Lastly, the score you get is somewhat relative to everyone else who submitted in the same year. It isn't meant to be; the scoring scheme is supposed to be absolute and unchanging. But in practice it is relative, because evaluators are advised to read all the proposals they will score before scoring any of them, to get an overview of that year's level. A drop in score when you resubmit typically means the competition got better.

Look, I won't deny luck matters. It absolutely does, especially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between getting the grant or not. But it's not a lottery. If your proposal scored low, it's far more likely that it just wasn't that good than that an evaluator randomly hated you and convinced three other people of the same bias.
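As a purely illustrative sketch of the divergence check GuyFromSpace describes: the 10-point tolerance, the scores, and the function name below are all invented, and the real consensus step is a discussion between people rather than a formula, but the flow is roughly this.

```python
# Toy sketch of the consensus step described above. The tolerance value,
# scores, and names are invented for illustration; nothing here comes
# from an official MSCA document.

def consensus_score(expert_scores, tolerance=10.0):
    """Return the mean score if the experts broadly agree, otherwise None,
    meaning the rapporteur must chair a discussion to resolve the divergence
    (and, failing consensus, escalate to the panel Vice-Chair)."""
    divergence = max(expert_scores) - min(expert_scores)
    if divergence > tolerance:
        return None  # significant divergence: resolved by discussion, not averaging
    return sum(expert_scores) / len(expert_scores)

print(consensus_score([88.0, 90.5, 86.0]))  # ~88.17: broad agreement, so average
print(consensus_score([88.0, 90.5, 62.0]))  # None: one outlier forces a discussion
```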
It's not random, so perhaps the word "lottery" is unfair. But it is about the evaluator looking at your proposal and seeing value in it, and the nature of debate in academic fields means there are different views on that. Last year, the comments that lowered my "excellence" score by over one point (even with improvements in clarity over the year before, when this section scored highly) were clearly about a political position on the work I was carrying out. They amounted to a critique of the project that I can understand someone holding from that perspective, and which I have tried to preempt this year, but they weren't actually about seeing no merit in the proposal. And then there were other comments that were clearly there to find justifications for lowering the score.