Sunday, 27 November 2016

A radical proposal to innovate the CHI-conference

In short: increase the acceptance rate to 50% and have a mandatory discussion of at least 15 minutes per paper at the conference!

Around this time of year (starting with the rebuttal phase and then again when notifications are out) we have heated discussions about the review process and quality of our main ACM SIGCHI conference. Much of the discussion can be summarized as: ACM SIGCHI is a lottery, I got the wrong reviewers, they did not carefully read my paper, this should have been accepted, and the committee is too dumb to see it. I strongly disagree with these statements (and I am probably one of the people with the most CHI rejections over the last years), but if we as a community feel this way, we have to change the process.

Why increase the acceptance rate?

If we compare our prime venue to a lottery, it would mean that placing value on these publications, e.g. in hiring, is absolutely wrong. As people in the community are really proud of their publications in CHI (and I have not seen much talk of a lottery regarding the accepted ones), it seems we simply accept too few.

Looking through my Facebook comments and at the tweets during the rebuttal phase, many well-established colleagues were surprised that their best papers did not get good reviews. So even people with the highest esteem and peer recognition (CHI Academy, former chairs, …) cannot predict whether a paper they consider perfect will get in or not. They cannot judge the quality of their own work with regard to the CHI review process. If this is the case, we are probably too selective and reject good work; hence my proposal to increase the acceptance rate significantly, e.g. to 50%. The same people complaining about their paper not getting in rant at the conference about the low quality of papers and argue that we need to be more selective. Is this another filter-bubble effect?

By increasing the acceptance rate to 50% we would make the conference predictable. Whether you get in would no longer depend on the SC, ACs, 2nd ACs, and reviewers. Reasonable papers would be very likely to get in. It would also decrease the value of the publication as such and put the focus back on its content. Another positive effect I would expect is that people would not gamble by salami-slicing their work, as they could be pretty sure a solid paper gets in and the value of a paper would be defined by its content rather than by the fact that it is accepted.

Why enforce long discussions? 

I am pretty sure what ACs would answer to the following question: if you could make only one trip, would you go to the program committee meeting or to the actual conference? We did not ask this question in 2014, but I asked many people what value they see in the physical PC meeting. The answers were very clear: an overview of the many contributions made, expert discussion of the contributions (especially the controversial ones), and getting a feel for what the community values.

In contrast, going to CHI2016, UBICOMP2016, and UIST2016 I felt that the discussions were most often nonexistent or even embarrassing. In our prime venues we do not discuss the results that are presented. We do not engage with why people took a certain approach, we do not discuss the implications of the work, we do not discuss alternative approaches, we do not discuss limitations, and we do not discuss what value a contribution has for the community. Such discussions happen in other fields (I recently presented at a social science event and the discussion was very challenging but insightful). We have lost the discussion culture at our conferences.

By enforcing discussions (e.g. having the AC and 2AC present at the presentation) we would learn much more from the work people present, and I would expect that some people would even be more careful about what they publish. If we do not challenge the research in discussions, we keep everyone in their bubble, with no real idea how the wider community receives and sees their work. My suggestion would be talks of 15 minutes followed by 15 minutes of discussion. If no discussion happens, the paper cannot be published. If the discussion reveals that the paper reports major findings or even delivers a breakthrough, the authors should be encouraged to extend it for a journal. We could even imagine that people at the discussion could non-anonymously endorse or oppose the paper in the ACM DL.

Perception of Human Computer Interaction by others

Below are a few prototypical comments that illustrate how others view the way we publish. I have not received exactly these, but colleagues from other fields have made comments with similar meaning:
  • “Interesting, you just present the work and there is no discussion? Maybe that is useful in your field as you can present more in the same time.”
  • “It is exciting that you follow an agile publication strategy in your field. Each web questionnaire, prototype, pre-study, and study is presented in a separate publication in your top venue. How do you keep track of this?”

CHI as a benchmark is too little

Whenever there is a discussion that we need to reform the CHI paper process, people argue: this is essential in the US tenure-track process and hence we cannot change it. And the solution is to create another (minor) venue that removes some of the competition for the “serious” researchers. It seems CHI has managed to become a great benchmark for hiring and tenure, but its function as a publication venue may be at risk, especially if people talk about the “CHI lottery”. As I said, I disagree with this observation, but I think we should make an effort to:
  1. Make the conference a predictable publication venue (e.g. senior people in the field would be able to place their own submission reliably into one of the three categories: clear reject, borderline, clear accept)
  2. Enforce discussion at the conference to avoid researchers living in their bubble (where they and their friends believe the work is good, but the community at large thinks it is not).
This may not require the radical changes I have suggested in the heading, but at least we should start to discuss it!

Wednesday, 16 March 2016

Keynote at PerCom2016 by Bo Begole: The Dawn of Responsive Media

Bo Begole, VP at Huawei, is presenting a keynote at PerCom2016 on “The Dawn of Responsive Media”. He has worked in the Ubicomp domain for over 20 years. At PARC he investigated Contextual Intelligence as a logical follow-up to making context-awareness smarter and actionable. Aiming at “harvesting” the Ubicomp research at PARC, he looked at business cases and business opportunities for Ubicomp, which resulted in his book “Ubiquitous Computing for Business” [1].

At Huawei his focus is on immersion and experience. He started out by outlining how people enjoy the “lean back experience” and how well it is supported by current technology. He argues that the traditional remote control is still better liked than gesture control, and that people enjoy this form of lazy media consumption. There is also growth in the “lean forward” experience, basically high-intensity interaction (e.g. gesture control, body-activated games) that asks the user to be active. He reasons that the space in between may be an interesting and important one for future products.

In his talk Bo looked at a short history of Ubicomp and VR and then moved on to current developments and the buzz in the industry. To many it seems that VR/AR is the next huge thing, changing media consumption and media sharing as fundamentally as the move from text to images. He raised an interesting question: is a Second Life-like world coming back? Many of us still remember the excitement (and investments) around Second Life the first time round.

Bo is confident that games are the killer application for VR and that it will hence sell very well. Once the technology is out with users, further uses will emerge. At the same time, he questions whether spatial placement (as in the HoloLens concepts seen in the media) really helps people to organize their things, activities, and data.

For immersive technology in the home he sees high-resolution screens playing a major role alongside mobile technologies. MirrorSys is an example of a research project on an immersive communication system. Key aspects are life-sized presentation of people and a visualization that is close to the upper bounds of human perception. His calculation for the display is 30,000 × 24,000 pixels (= 720 megapixels) as the upper bound for perception without head movement (roughly a factor of 10 more than we have in the lab in Stuttgart [2]). At the same time, camera technology is advancing towards high spatial and temporal resolution and towards camera arrays that let you move around the scene. He showed some impressive examples of what you can do with a camera array that captures a scene simultaneously and lets you navigate through it from different angles.
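The resolution arithmetic above is easy to check. A minimal sketch (the resolution figures are from the talk; the helper function `megapixels` is my own naming):

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count of a display, in megapixels (millions of pixels)."""
    return width_px * height_px / 1e6

# Bo Begole's stated upper bound for perception without head movement:
print(megapixels(30000, 24000))  # 720.0
```

At a factor of 10 less, a roughly 72-megapixel wall (like the Stuttgart setup he compares against) would correspondingly sit an order of magnitude below this perceptual bound.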

In his view speech interaction is also gaining importance, moving towards conversational systems with deep language understanding. He makes the point that this will play an especially important role for the many devices in the Internet of Things that have no classical user interface.

Finally, he suggested that user engagement is key for responsive media. So far engagement has mainly been a key metric in presenting advertising to customers. In the future it will be central to many applications, as systems will optimize for engagement and adapt their content and presentation to ensure that the user stays engaged.

The research roadmap he presents is pretty broad, with a lot of open issues to be solved.

[1] Begole, Bo. Ubiquitous Computing for Business: Find New Markets, Create Better Businesses, and Reach Customers Around the World 24-7-365. Ft Press, 2011.
[2] Power Wall at the University of Stuttgart,

Tuesday, 15 March 2016

Keynote by Cecilia Mascolo at Percom2016: Technology and Experience in the Physical World

Cecilia Mascolo presents the keynote at Percom2016. Her opening statement is: “Technology must enhance and not substitute the physical experience”.

Cecilia makes the point that continuous sensing with mobile devices can overcome many issues that are well known with traditional studies (especially the classical problem of psychologists studying psychology students in a dark lab). One of her early papers (EmotionSense, see [1]) shows how we can move studies into the real world. This is not without difficulties, especially when you try to understand emotions.

Putting research apps into the Android market changes the game: large numbers of users come within reach. Higher numbers of participants require the applications to have a clear purpose (Niels Henze provides a nice recipe for how to do this in [2]). Her experience is that user engagement through gamification really worked. Even if individuals participate only for weeks or months, this generates very useful information. A short introduction to social sensing by Cecilia can be found in [3].

Different sensors have different energy and privacy costs and also enable different types of contributions. Correlating accelerometer data with happiness is really interesting: users who are more active (not just movement, but “being out and about”) are happier. Clustering accelerometer data and correlating it with other high-level data opens exciting questions, e.g. in health. Similarly, correlating happiness and location leads to more surprising results: people are less happy at home and at work, and happier when out and active. Looking at people’s personality and demographics shows that gender, age, employment, etc. have clear correlations with activity and usage of communication.

Physical space matters! Using active badges, they looked at how a change of physical space can impact people’s interactions [4]. The sensing approach made it possible to understand how changes in physical space change behavior at a really fine-grained level.

[1] Rachuri, K. K., Musolesi, M., Mascolo, C., Rentfrow, P. J., Longworth, C., & Aucinas, A. (2010, September). EmotionSense: a mobile phones based adaptive platform for experimental social psychology research. In Proceedings of the 12th ACM international conference on Ubiquitous computing (pp. 281-290). ACM.
[2] Henze, N., Shirazi, A. S., Schmidt, A., Pielot, M., & Michahelles, F. (2013). Empirical research through ubiquitous data collection. Computer, 46(6), 74-76.
[3] Mascolo, C. (2010). The power of mobile computing in a social era. IEEE Internet Computing, 14(6), 76.
[4] Brown, C., Efstratiou, C., Leontiadis, I., Quercia, D., Mascolo, C., Scott, J., & Key, P. (2014, September). The architecture of innovation: Tracking face-to-face interactions with ubicomp technologies. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (pp. 811-822). ACM.