[D] Does seeing the identity of authors influence your scoring?
Let's be honest: at some stage of the review process, a lot of us have gotten bored and tried to Google the papers we are reviewing. Sometimes those papers have already been uploaded to arXiv with the authors' identities attached, and we end up looking the authors up. As a first-time reviewer, I noticed that the top two papers in my batch happened to be the only ones on arXiv. I am trying to work out whether seeing the authors' identities influenced my decision, or whether it is just a coincidence. submitted by /u/d_edge_sword
Read on Reddit r/MachineLearning: https://www.reddit.com/r/MachineLearning/comments/1s9iacl/d_does_seeing_the_identify_of_authors_influence/

Show HN: Wazear – A visual AI orchestrator where agents review each other
Hey folks, for the past month I've been working on a visual AI orchestrator tool that lets users create a pipeline similar to an SDLC. Basically, you fire up Wazear, create a project, and add your brief. You select the agents (each agent serves a role such as planner, architect, etc.), set which agent reviews which other agent's work, and let it do the work. At any point you can pause the pipeline to review the output yourself. You can check it out here: https://wazear.space . Any feedback is welcome. Thank you very much. Best regards. Comments URL: https://news.ycombinator.com/item?id=47624203 Points: 2 # Comments: 0
More in Research Papers
Omni123: Exploring 3D Native Foundation Models with Limited 3D Data by Unifying Text to 2D and 3D Generation
Omni123 is a 3D-native foundation model that unifies text-to-2D and text-to-3D generation using a shared sequence space with cross-modal consistency as an implicit structural constraint. (1 upvotes on HuggingFace)
DynaVid: Learning to Generate Highly Dynamic Videos using Synthetic Motion Data
DynaVid addresses limitations in video diffusion models by using synthetic motion data represented as optical flow to improve realistic video synthesis with dynamic motions and fine-grained motion control. (2 upvotes on HuggingFace)
