Loren_Terveen



Joined: Mar 25, 2001
Posts: 9

  Posted: Dec 01, 2003 - 07:21 PM

It's that time of year again, when some folks get good news and some folks get bad news... and since bad news always triggers more of an urge to respond than good news, we ask (following Kristina Höök's post from last year):

Was your submission to CHI rejected this year? Did you find the rejection fair, or do you have a nagging feeling that your paper was treated unjustly? Was it because your research area is not important enough according to the current CHI perspective? Or did the reviewers not understand it because it is so novel and innovative? Or because your paper was too theoretical? Or because you are not from the States (!)? Share your stories here - both rejection stories and unexpected acceptance stories!
Scott_Jenson



Joined: Oct 09, 2003
Posts: 1

  Posted: Dec 08, 2003 - 07:12 PM

Talk about a disincentive! Who wants to advertise they were rejected? As my experience was a bit frustrating and I hope the community can grow from the discussion, I'm just going to swallow my pride and go for it...

What made this year frustrating to me is that my tutorial, which had been accepted at previous CHIs, received a perfunctory rejection, one which even gives the impression of having been done incorrectly. Let me be perfectly clear, however: my intention is not to try to reverse the decision. I hope a much broader discussion can take place that improves the entire submission process.

Of my 3 evaluations, one wasn't completed past question 3, and the negative one was a series of 'no' or 'I don't understand' comments with a "do not recommend" at the bottom.

If CHI wants to encourage proposals, it needs to carefully rethink this evaluation process. I would like to state that, in general, I agree with the 3-person review. I also feel that getting volunteers to review is a thankless and difficult task. However, I do feel the reviewers need to be given stronger guidelines, and there needs to be some validation that reviews have been properly completed. I would suggest:

1) The reviewers complete the evaluation form (!)
2) Reviewers answer questions, when appropriate, with more than one-word answers
3) There is a process to clarify reviewer concerns about the submission
4) When there is a strong accept and a strong reject, some effort is made to resolve this apparent conflict
5) Reviewers giving a strong reject make an attempt to explain the reasons for their negative reaction more fully.

Suggestions 1 and 2 are fairly trivial, and indeed my case may only be 'bad luck'. Other postings might confirm whether my experience is a trend or just bad karma...

The others are clearly more difficult and may already stretch an overworked volunteer program. The cure may actually cause more harm. However, I do hope this spurs further discussion.

Scott Jenson
Dan_Cosley



Joined: Jun 18, 2003
Posts: 7

  Posted: Dec 11, 2003 - 06:02 PM

Perhaps a separate discussion about quality of reviews is in order. We had a paper accepted, but the reviews felt pretty shallow and weren't that helpful in deciding how to improve the paper. (As much as I'd like to believe it's because our paper was flawless, history indicates that's unlikely.) Also, as a novice researcher/reviewer, I'm trying to learn how to do effective reviews. Good examples would be helpful.

-- Dan
Gitte_Lindgaard



Joined: Dec 17, 2003
Posts: 1

  Posted: Dec 17, 2003 - 08:20 PM

Hi,
So, other people are starting to ask questions about the review process at CHI - interesting! I have been a reviewer since about 1987 or '88, but as of 2003 I will no longer be part of the review process. For two years running I have been pressured into changing my judgment on some of the papers I reviewed because my judgment happened to be 'too far from those of the other reviewers'. Seeing that I give very detailed feedback on the papers in an effort to (1) assist the authors in improving future versions of their paper, and (2) show the reasoning leading to my judgment, I am both disappointed in, and offended by, this practice. First, I read and re-read the papers in detail, spend a considerable amount of time doing the reviews, and give the process my best; second, I judge a paper as fairly as I can the first time round. In none of the cases in which I have been pressured to change my final mark have I seen any reason for the request other than that it apparently was 'out of line'.

I was also amazed that other reviewers' reports were accessible online before all reviews had been submitted in preparation for the Fort Lauderdale conference. In order to avoid the pressure of being asked to revise the final mark, the easiest strategy for a reviewer is thus to wait until the last minute, look at other reviewers' marks, and place one's own close enough to avoid being bothered a second time.

I do hope this discussion will give food for thought.

Gitte Lindgaard
Chair User Centred Design
Carleton University
Ottawa
Canada
Ed_Chi



Joined: Apr 16, 2001
Posts: 4

  Posted: Jan 22, 2004 - 10:21 PM

One major problem this year, in my opinion, was the list of keywords and categories one had to choose to describe the paper. I believe this was an attempt to get the papers assigned to a set of reviewers. Unfortunately, the categories and the keywords were, IMHO, a bad set. It was very hard to describe many papers using this set of keywords/categories. Many papers appear to have gotten reviewers who were completely inappropriate (i.e., they don't know that field or area).

Jonathan_Grudin



Joined: Apr 24, 2001
Posts: 19

  Posted: Jan 23, 2004 - 01:03 PM

Although both of my submissions were rejected, I generally postpone the pleasure of reading the reviews until it's time to revise for submission elsewhere, so I can't comment on this year's reviewing. I agree that the keyword choices were poor -- the list did not include anything appropriate for either of my papers, my first indication that they were unlikely to make it. Yet I think they were relevant to CHI. Look for them elsewhere.

We need to keep in mind that the CHI papers committee is not in the business of constructively improving submitted work; it is in the business of finding ways to reject 85% of it. This is an environment in which nit-picking and creating mountains out of molehills is going to be appreciated, reinforced, and naturally selected for. We've got to find reasons to reject, reject, reject. It's not pleasant work, but someone has to do it, apparently. Look at those poor people who had to pressure Gitte to get with the program -- do you think they enjoyed making her squirm? No, we're all nice and well-intentioned; it is the system that makes us do it.

Think of the alternative: Let's say we accepted 85% of the submissions, then selected 15% for a Really Great Paper category. We could work constructively with everyone to improve their work! We could go to the conference and find out much more about what is happening!

If we did that, I'd be willing to be on the program committees for CHI and CSCW again. I like being constructive but got tired of being part of a machinery that drives people to submit work to and attend other conferences.

-- Jonathan
E_Dykstra-Erickson



Joined: May 14, 2001
Posts: 3

  Posted: Feb 17, 2004 - 07:39 PM

Well, just to show it's universal, my tutorial proposal got rejected (no favoritism here for the chairs! we can be pleased about the even-handedness). The reviews reflected the opinions of people uninterested in the topic rather than focused on the quality of the proposal, so it was quite difficult to take the comments into account and improve it for the next year. Just my opinion... and oh, it was on a practitioner topic, not a research topic.
mattkam



Joined: Mar 02, 2004
Posts: 1

  Posted: Mar 02, 2004 - 12:25 AM

The process taken in handling and evaluating our submission disappointed us. Namely, the meta-reviewers took extraordinary measures to counteract the peer review of our paper. The original peer review marked the paper as uniformly and exceptionally strong, with overall ratings of 5, 4, 4, and 5. There was no ambiguity in the reviews, and the reviewers cited only minor flaws. The meta-review did not challenge any of the positive reviewer comments. Instead, it focused only on small parts of the paper and quoted the reviewers' suggestions for improvement as though they were grounds for dismissal. The meta-reviewers rejected the paper with overall scores of 2, 3, and 2.

It is important to note that the first meta-reviewer (a paper acceptance chair) wrote that "Authors found no empirical evidence for the benefit of this system", yet had been presented with empirical data to the contrary, which all peer reviewers cited in accepting the paper. We can only conclude that the meta-reviewer holds a very extreme view of the "empirical" aspect of HCI -- something we believed the CHI community and our peer reviewers eschew in favor of a more inclusive and inquisitive understanding of interaction. It was also obvious from their comments that the first meta-reviewer influenced the other two meta-reviewers.

We know that the decision on the paper will not change, but hope the constructive outcome is that future meta-reviewers will reconsider the wisdom of capriciously overriding a clear peer verdict.
Matt_Jones



Joined: Oct 31, 2001
Posts: 3

  Posted: Mar 04, 2004 - 02:08 PM

Like others here, I am beginning to be disheartened by the CHI philosophy.

Hoorah for Jonathan's comments: I feel CHI should be in the business of representing the community's work, exposing it to debate, building links between potential collaborators and consumers, etc.; it clearly does not do this at the moment.

Of course it is not attractive to attendees to feel they are just an audience of second-class citizens. Most non-presenters I've met are just as interesting, enthused and brimming with potential as the elite presenters. We are nurturing a mute community - come on, let us sing!

I second Jonathan's proposal: go for a higher acceptance rate and a revise-and-re-review cycle to identify key papers (other conferences call them "best papers"; we could be more savvy and call them things like "high impact" or "ideas to watch").

It would be good to know whether the SIGCHI exec reads these comments (we've seen similar ones in previous years): can there be serious consideration of the proposal, please?


Matt

msalzman



Joined: Mar 06, 2004
Posts: 1

  Posted: Mar 06, 2004 - 01:47 PM

Our attention was drawn to this CHIplace discussion and to the posting by mattkam concerning his recent experience with CHI reviewing.

First, I would like to respond to mattkam. Thank you for sharing your experience. Your correspondence, along with the records we have of your reviews and the submitted paper itself, has been passed to the CHI Papers Support Team (described below), who will review this case in some detail. As you yourself point out, it is not possible to revisit the actual decision. However, we do indeed intend to review in some depth the overall process and to recommend changes that take on board experiences such as your own. It is important that CHI learns from experiences such as yours. All members of our community should understand the importance of working together towards a vibrant conference that everyone is proud to be associated with. We would welcome your input to this process and would like your permission to draw upon your experience and to use it to help us shape new processes and guidelines for the future. Feel free to continue your discussion here, or to email me directly at: .

Next, I would like to thank all of you for a set of well-considered arguments on the nature of the review process and how it has been reflected in your own experiences. We welcome your continued discussion and constructive input, and we share many of your concerns about the consequences of review experiences such as some of the ones you describe. Within SIGCHI and the CHI committees, considerable discussion is underway about the whole review process, and many of the topics you raise are central to these discussions. The issue is not whether things need to change but what changes to make.

I also want to reassure you that both SIGCHI and the CHI committees are listening. In fact, SIGCHI has recently formed two subcommittees to address the kinds of issues that are being raised here:

- The CHI Papers Support Team is currently working with CHI Papers co-chairs in implementing the papers process for CHI 2005 and beyond, including any process innovations/enhancements. The Support Team will draw upon experiences and data collected from the review processes of past years in order to help ensure continued improvements for the future. The experiences offered here are being integrated into that very analysis.

- The SIGCHI Publications Strategy Team is taking a broader look at how well our current publication venues (conference, CHI Letters, TOCHI, other journals, other serial publications) are meeting the needs of SIGCHI's researchers and the people who consume that research, identifying gaps/needs, and developing strategies for addressing those gaps/needs. They will help us set a long-range vision for what SIGCHI can do to ensure that we are meeting the publication needs of our membership.

To learn more about either of these subcommittees, feel free to contact me at: .

Sincerely,

Marilyn Salzman
SIGCHI VC for Publications
Scott_Hudson



Joined: Nov 24, 2002
Posts: 4

  Posted: Mar 08, 2004 - 05:03 PM

I’m apparently coming a bit late to this discussion, but I want to reply to Jonathan Grudin’s January post here (and similar sentiments I’ve heard expressed elsewhere). Jonathan suggests that we would all be happier accepting 85% of CHI papers instead of 15% (and giving some sort of special designation to the top 15%).

I think this is a recipe for disaster. While Jonathan suggests that “We could go to the conference and find out much more about what is happening!”, in fact this implies about 5 times the current content, moving from the current ~3 things happening at once to ~15 things happening at once (or from the current 3 days to 15 days -- take your pick). This will not increase our ability to see the good things happening in our field. The fact of the matter is, the conference is currently bursting at the seams in terms of total content. When was the last time you heard an attendee saying, “Gee, I wish there were more things happening at the same time”?

I have heard others suggest that dramatically lowering the bar for papers would increase conference attendance. I believe there is pretty clear evidence already out there that this is not true. There are several HCI conferences which accept a substantially higher percentage of papers, and have done so for many years. Yet these conferences do not have huge attendance. What they have is a much higher author to non-author attendee ratio -- a lot of the people at those conferences come because they have a paper. On the other hand, most people come to CHI because it's the place where the best HCI work is presented every year. If you accept a lot more papers, all you do is destroy CHI as the premier venue for publishing HCI work (Jonathan's "Really Great Paper" category notwithstanding).

For these same reasons, an 85% acceptance rate would have huge implications for academics -- the conference would pretty much instantly cease to be a desirable place to publish. For reasons of academic survival, most people anywhere near a tenure process would run from such a conference as fast as they could. A citation that says "Really Great Paper" might help a little, but I don't think it would come close to retaining academic respectability. For me personally, the bottom line is that there are only very limited circumstances in which I would consider publishing in any venue that accepted 85% of submissions -- I would likely consider it a stain on my vita and would probably just not list it among my publications.

Speaking as someone who has had 8 CHI papers rejected in the last 2 years, please do not even consider substantially raising the acceptance rate. It seems to me that it would be an unmitigated disaster. I firmly believe that the CHI paper review process has not been working well. However, acceptance rate is not the problem.
Shumin_Zhai



Joined: May 14, 2001
Posts: 6

  Posted: Mar 08, 2004 - 06:56 PM

As CHI 2005 papers co-chair, I thank everyone for sharing your views on improving the leading conference of our field. We gather information from forums like CHI Place regularly.

High-quality reviewing is the key to CHI's success. Every year the CHI papers committee takes steps to improve it. There are some inherent difficulties in ensuring high-quality reviews for all submissions all the time, given CHI's multidisciplinary nature and the large number of submissions. The close to 600 submissions to CHI 2004 meant that about 3,000 reviews had to be written. We are indebted to many experienced researchers who do a large number of reviews for CHI and other HCI publications, but we need more of them. Whenever possible, every researcher should volunteer to review. If person P made Y submissions to the CHI conferences in the past X years, P should review at least Z = (Y × 5 reviews) / X submissions. The more experienced researchers should do more.
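To make the quota arithmetic above concrete, here is a minimal illustrative sketch in Python; the function name and the example figures are hypothetical, and it assumes (as the division by X suggests) that the quota is amortized per year.

```python
# Minimal sketch of the reviewing quota described above: Z = (Y * 5) / X,
# where Y is a researcher's CHI submissions over the past X years and each
# submission is assumed to require five reviews. Names and figures are
# illustrative only, not part of any official CHI policy.

def review_quota(submissions: int, years: int, reviews_per_submission: int = 5) -> float:
    """Return Z, the number of reviews a researcher should do per year."""
    return (submissions * reviews_per_submission) / years

# Hypothetical example: 4 submissions over the past 2 years
# -> at least 10 reviews a year.
print(review_quota(submissions=4, years=2))  # 10.0
```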

Due to the conference's size and concerns about impartiality, CHI paper committees have relied increasingly heavily on automatic assignment of Associate Chairs (ACs) and reviewers to paper submissions. Sometimes this resulted in poor matches of expertise. As a result, ACs occasionally had to dismiss some reviews and use more of their own judgment in making a recommendation at the papers committee (PC) meeting. For CHI 2005, we will try to make better matches by relying more on manual human selection of ACs and reviewers. Automatic matching algorithms and other web and computing resources will be used as assistive tools in the selection process. This will undoubtedly increase the already quite heavy workload of the papers committee, but it may be worth the effort.

One option to make things "easier" is to significantly increase the acceptance rate. This is a frequently discussed and sometimes hotly debated option at the CHI papers committees and other CHI task forces, such as those mentioned by Marilyn. Jonathan and others have made many good points arguing for this option (but also see the larger picture Jonathan painted of the complex factors involved in the publication world in his essay in the latest TOCHI issue). The arguments against such an option include the prestige of the conference as a highly selective publication forum, a selective filter on what is presented at the conference for the audience (to avoid a "bargain basement"), and the academic credit of having a CHI publication (for tenure, funding, promotion, research performance measurement, etc.). Some have argued that if CHI were an easy place to publish, they would send their best work elsewhere. Many first-tier ACM conferences in other computer science fields also have a tradition of being highly selective (e.g., SIGGRAPH for graphics, STOC for theory, SIGMOD for databases). For various reasons these fields and ours often use selective conferences rather than journals as the definitive research publication outlet. As a comparison within the HCI field, there are a few comprehensive conferences that accept a lot more papers, although their attendance is lower than CHI's.

Although we have to cope with various practical constraints for conference planning purposes, the goal of the papers committee is to accept all submissions that make a significant contribution to HCI (and reject those that do not). The current definition is "Papers present significant contributions to research, development and practice in all areas of the field of human-computer interaction". In the recent years that I remember, the acceptances from the initial rounds of decision at the PC meetings have always fallen short of the target number of papers to be accepted. The committee usually makes an extra effort to increase acceptances. In that sense, it is not the relative percentage that limits the current acceptance level. However, many PC members do agree (although often not on the same papers) that some interesting but premature (not convincing enough, not implemented yet, not enough evidence, etc.) submissions are unfortunately rejected. Occasionally, highly questionable contributions did get accepted because they were debatable (with varying results at the conference). We are considering ways to get interesting enough but not fully established research heard at CHI.

As for the paper discussion function, I feel sorry that Gitte felt she was pressured to agree with other reviewers. My experience is that reviewers tend to elaborate more on the reasons for their rating when asked to discuss a paper. This is good for making the final paper decisions. The discussion function is also helpful for community building. Many reviewers are curious about how others viewed the same papers and may learn from others' perspectives. But no one has to change their initial rating if they think they are correct, even after considering others' views.

We have been considering various ways to improve CHI papers. There is a task force on this (the CHI Papers Support Team, as mentioned by Marilyn). Suggestions for improvements are welcome, but our apologies in advance that we cannot reply to all individual emails or postings here.

Shumin Zhai

Jonathan_Grudin



Joined: Apr 24, 2001
Posts: 19

  Posted: Mar 09, 2004 - 09:37 AM

Scott thinks an 85% acceptance rate would be a disaster, as I think the 15% rate is. Let's try a compromise experiment one year: 50%. Many large conferences with acceptance rates around 50% are spectacularly successful. Just for one year. My guess is that if we did, we would wonder what took us so long and never look back. But if I'm wrong, no lasting harm would be done. Keep in mind the first CHI conference had an acceptance rate of 34%, closer to 50 than 15. It was not a disaster.

Scott has not been listening to the same CHI attendees I have if they feel challenged by too many exciting alternatives. I hear complaints of boring paper sessions. Every paper is interesting to a small number of people, but with so few accepted the sessions are not very coherent, and much of the work that makes it through consists of incremental advances with familiar-sounding, bullet-proof rationales and literature reviews honed in previous submissions. With 3x as many papers, sessions would be far more coherent, and although each would be attended by fewer people, those people would be very interested in the topic. In the old days all CHI attendees were generalists because the field was new and little had been done, so just about everyone was interested in just about everything. But today many of us are specialized, many younger folks have always been focused (on ubicomp or CSCW or whatever) and aren't interested in the latest advance in another specialization. We have become a mature field but have not reflected our maturation in our conference.

I have participated in scores of faculty appointments, tenure cases, and full professor promotions. In my experience in the University of California system, Scott is wrong about one thing. A Best Paper award in almost any conference counted more than a CHI paper did. Maybe it should not have, but it did. It looks impressive sitting on the CV and it can be singled out in the case written for the candidate. If you look at my CV on the web page you will see a couple of colorful best paper awards -- both rejected by CHI as I recall. It was a good trade -- a CHI paper for a best paper award. (One thing that counts less than a CHI conference paper is a CHI conference paper that masquerades as a journal article under the CHI Letters imprint, so please don't try that.)

Having tried for over ten years to get CHI to experiment, e.g., allowing paper authors to provide feedback on review quality through a formal mechanism, trying a more open conference, and so on, my observation is that the people running CHI listen to what you say. It is less clear that they hear what you say. And it has not been my experience that they alter what they do based on what anyone says. It just takes a couple of people hollering like Scott to get our leaders to back off for fear of being the person who wrecked CHI. And you can find someone who will moan about any procedural innovation. My concern is that CHI is fading into insignificance just when our topic is becoming of high salience in the world. As outlined in the TOCHI essay that Shumin mentioned, it is complicated and seems to me to be due to the fact that CHI emerged at a time and place where digital technology and other factors were shifting the balance between journals and conferences. Simply opening the conference a little won't solve all the problems. Creativity will be required. But there are a lot of creative people around here.
Scott_Hudson



Joined: Nov 24, 2002
Posts: 4

  Posted: Mar 09, 2004 - 09:51 PM

I stand by the same outlook for 50% as I do for 85% (and for the same reasons). I still think it would be pretty much a disaster (although a central part of that is wrapped up in the reputation of the conference as a premier venue, and the loss thereof, and that would take more than a single year to show).

But let me go at this via what Jonathan and I agree on, rather than what we disagree on:
- A best paper award does beat a CHI paper in pretty much any tenure case.
- I think we do hear the same attendees – I also hear complaints of not very exciting work, and boring sessions.
- I am also concerned that CHI is fading into insignificance (and overall that the current papers selection process is not serving the needs of the community well).
- Some of the pressures on us as a community, and on this process, are because we are now quite diverse in interests (and perhaps we can’t seem to agree to disagree as much as we should).
- The first CHI conference (with a 34% acceptance rate) was not a disaster. (Any conference begins with a high acceptance rate due to difficulty attracting authors -- the disaster I fear is losing the hard-won respect the conference now holds as the top HCI venue.)


Unfortunately, I don’t think we agree on what’s behind these things, or what to do about them.

First, an "award" that marks a level of selectivity equal to what it currently takes just to get published cannot be, or even be called, a "best" paper award. Whatever it's called, I don't think it will have dramatically more effect on a tenure case than one would currently get from listing CHI conference pubs as CHI Letters. On the other hand, if we look at the 35% of "new" papers, bad things are happening. In the current system many of these papers are submitted to other venues, or resubmitted to CHI next year -- in either case hopefully after improving the paper first. In the proposed 50% system I don't foresee many people withdrawing their accepted papers so they can improve them. In the end we will be poorer as a community because of that. Competition is good -- it causes you to work harder than you would otherwise and it produces better results.

As to attendees (including, I must say, myself) complaining about boring sessions: I don't see this as a result of poor coherence of session topics (if only it were that simple). That might be improved some, but it's not really that bad. I see it as a quality issue -- the papers are not that interesting because they are not that good. Personally I find a lot of them, as Jonathan says, "incremental advances". However, if in fact quality is the issue, then opening the floodgates is not going to fix it, but will instead make it worse, much worse. (This is one of the reasons I use the word "disaster".)

While I see the current system as flawed as a quality filter (*that* is what needs to be fixed!), I think we will all agree that it's at least partially functional -- quality does not end up uniformly distributed over the papers that the process ranks in the top 50%; the probability of finding quality in the top-ranked papers is higher than the probability at the lower ranks (hopefully, quite a bit higher). So what happens if we triple the number of accepted papers as proposed? The average quality is going to go down (substantially) and the likelihood that we find a quality paper at any given session goes down. (Note that we don't actually have to agree on what quality is for this analysis to work, only that the current system produces a "quality gradient" for whatever our definition of quality is.)

Interestingly, I think that Jonathan and I would probably agree that raising the acceptance rate will increase the absolute number of “good” papers at the conference for several different notions of good -- there are quite a few good papers that are getting rejected, and that is very damaging to our community. I think perhaps Jonathan is proposing that we dramatically lower the bar in order to get these papers into the conference. I agree that working to get these papers into the conference is what we should be trying to do. However, I think doing it by dramatically lowering the bar is a big mistake because it would substantially lower the average quality. Instead, I think we need to concentrate on improving the review process.

Specifically, I think that the quality of reviewing in the current system has been quite poor -- we are simply not getting enough good-quality reviews done. We are not doing a good job matching papers to meta-reviewers, and in too many cases the actual reviewers don't know a huge amount about the area of the paper they are reviewing. You can see this reflected in the comments in this discussion thread and lots of other places. People are up in arms about it, and they should be.

We should not be surprised by this. In recent years, we have been picking reviewers for papers not on the basis of a careful judgment of who knows a lot about the particular subject matter of the paper, but using an algorithm which matches a nebulous set of keywords against a database of whoever happened to sign up (from which many, if not most, prominent HCI researchers are missing, because taking time out to sign up for more work is not what really busy people do).

In addition to the directly damaging effect of injecting tremendous randomness into the process (and inevitably pushing out some good papers), this has other insidious effects. One is that the meta-reviewers hardly know anything about the people doing the reviews for them, but do know that it's not necessarily a great idea to take all the reviews at face value (because a lot of them are in fact not very good reviews). This leads to a culture which allows, almost requires, (highly qualified) meta-reviewers to have a strong voice in comparison to the (not very qualified) reviewers. This leads to exactly the kind of effect that I think those who instituted the database system were trying to avoid.

We need to stop using algorithms to pick meta-reviewers (ACs) and reviewers, and instead rely on expert human judgment for this. The system cannot work well without good reviews; it cannot produce good reviews without good reviewers and good matches between paper subject and reviewer expertise. Every paper author deserves to have their paper reviewed by people who know a lot about the subject of the paper. We ought to make that the number one priority, and set up the culture so that ACs are accountable for making it happen. If you stand up at the meeting and say "yes, the reviewers love/hate this paper, but they don't know what they are talking about, and we should reject/accept it anyway", then the next words spoken should be "so why did you pick these reviewers?".

Note that hand-picking reviewers (and trusting them more) also helps us to agree to disagree across areas. Technology papers can be reviewed by highly qualified technology people, CSCW papers can be reviewed by highly qualified CSCW people, etc. This will lead to the appropriate standards being applied based on the type of paper (e.g., papers that are really about innovative technology but end up with not-so-great statistics in their user study will not be killed by that, nor will great behavioral studies which don't invent any innovative technology). This will tend to reduce the "mediocre, but offends no one" papers that we see too much of now.

While the logistics of this may seem unmanageable, I think they are not. This implies a substantial effort in hand-picking an AC for each paper. I would propose that the chairs be given help with this -- hold a 2-day meeting with the chairs and, say, four committee members selected for diversity and area coverage. I would also propose that a few slots on the committee be held back so that, if needed, new ACs can be recruited at the last minute to cover what turn out to be underrepresented areas. After that, the work of picking the actual reviewers is distributed across the committee (and the committee is well positioned to do it -- they have papers in the areas where they know who to pick). Note that many of the prominent HCI researchers who have opted out of the database will nonetheless agree to review (multiple) papers *if* they are personally asked to do so by other prominent members of the community (a.k.a. program committee members) -- personal persuasion will get us much higher participation rates.
Jonathan_Grudin



Joined: Apr 24, 2001
Posts: 19

  Posted: Mar 10, 2004 - 02:28 AM

This is my last comment for a while.

I encourage everyone to attend a couple of conferences that have ~50% acceptance rates and see how they work. Ask around for one consistent with your interests. I enjoyed AAA and HICSS, among others.

With some creativity, Scott's concerns could be addressed. There are people here more creative than I, but here are suggestions.

If we accept 50% (make it 45% if you feel we MUST reject a majority), and bestow a "Best Papers" (or Gold Star) label on the top 15% of submissions, CHOSEN BASED ON THE FINAL VERSIONS, we would provide a major incentive to improve papers. Even people who would have been accepted before must improve their work or risk being displaced by papers that would have been rejected due to fixable flaws. This could lead to much higher quality papers across the board. People will want those gold stars in the program and on their CV. Also, judging could be more careful. The best 15% will end up better than the current lot because now there is no incentive to improve an accepted paper.

Another feature of umbrella conferences in mature fields is a constructive attitude to improving work. Because some of it IS work in progress, people can help. And here is where those unfamiliar with such conferences overlook something. With 3x (or more) papers, those gathered to make a session are far, far more closely related than in CHI conferences now. The audience is smaller but much more in tune with each paper, more empathic, more willing to forgive flaws and help out. The good ones are like your close colleagues, the flawed ones like your students. Also, the role of a discussant, which was abandoned for good reason in CHI, suddenly makes sense again, because a session is more coherent and quality more variable. Good people are willing to be discussants and it adds interest.

I will conclude by noting another point of agreement. I think Scott and I agree that experimenting with a 50% acceptance rate one year would not hurt CHI.

-- Jonathan