Homework Assignment #1 — Paper Presentation Sign-up

HW1 is due Sunday, September 15, at 11:59PM Central Time. There is no grace period: late submissions receive zero points.

Students are required to select a research paper from the list provided by the instructor (see the Research Paper List below) and present it in an assigned lecture (listed on Lectures). See "How to Submit HW1" below for details.

How to Submit HW1

Each team is required to email the instructor and the GSI with the subject CS8395-HW1 before the deadline. In the email, list at least 3 papers in order of preference so that we can assign all the papers based on preferences and conflicts. Note that paper assignment is first come, first served (FCFS). If we have not received an email from your team before the deadline, we will randomly assign a paper to your team.

Presentation Format

There are 15 teams in this course in Fall 2024. Each team will only present one research paper (selected from the Research Paper List below). In one lecture, two papers will be presented (i.e., two teams present in one lecture).

Each paper presentation is expected to follow the rules described under "Presentation Content and Grading" below.

Research Paper List

We provide 15 papers for the 15 teams to select from:

  1. *[LeGoues2012] A Systematic Study of Automated Program Repair: Fixing 55 out of 105 Bugs for $8 Each. One of the seminal publications on GenProg, the pioneering work in automated program repair. Published in ICSE 2012.

    [automated program repair]

    Presenter: Sreynit Khatt

  2. *[Terrel2017] Gender Difference and Bias in Open Source: Pull Request Acceptance of Women versus Men. This was the first paper to investigate the acceptance rate of pull requests with respect to contributors' genders, revealing a systematic bias against female developers in open source.

    [OSS, human factors]

    Presenter: Ruiqing Lan, Tanvi Hadgaonkar, Tutku Nazli

  3. *[Luo2014] An Empirical Analysis of Flaky Tests. One of the first papers exploring flaky tests in programs. Published in FSE 2014.

    [testing]

    Presenter: Haowen Yao, Kun Chen, Xindong Zheng

  4. *[Sun2024] ACAV: A Framework for Automatic Causality Analysis in Autonomous Vehicle Accident Recordings. A framework to automatically analyze the causality of autonomous vehicle accidents. Published in ICSE 2024.

    [testing, cyber physical systems]

    Presenter: Cai Lemieux Mack, Tiancheng He

  5. *[Parnin2011] Are Automated Debugging Techniques Actually Helping Programmers? While software engineering researchers produce many tools every year, do those tools actually help developers? If not, why? And how can we learn from developers' behaviors to improve tool designs? This paper is a great example of a human-factors study in SE that examined tool usage problems.

    [human factors, fault localization]

    Presenter: Noah Dahle

  6. *[Wardat2022] DeepDiagnosis: Automatically Diagnosing Faults and Recommending Actionable Fixes in Deep Learning Programs. A fault-localization framework for deep neural networks. Published in ICSE 2022.

    [SE4AI, testing, fault localization]

    Presenter: Yuze Gao, Zhenyu Liu, Zhiting Zhou

  7. *[Fang2023] A Four-Year Study of Student Contribution to OSS vs. OSS4SG with a Lightweight Intervention. Open source software is not only for technical purposes: more and more OSS projects aim to solve societal issues, and it has been suggested that societal problems might better motivate participation in programming. This paper studied how students contributed to general OSS and social-good OSS projects over 4 years and investigated whether participation in social-good projects affects students' contributions to OSS in general. It also presented an intervention in college class settings that might improve students' contributions to social-good projects. Published in FSE 2023.

    [OSS, human factors, CS education]

    Presenter: Jiahao Zhang, Zixuan Liu

  8. *[Karas2024] A Tale of Two Comprehensions? Analyzing Student Programmer Attention during Code Summarization. Learn how eye tracking can reveal programmers' attention patterns in code comprehension - not only at the source-code level, but also at the AST level. This paper showcases best and advanced practices for eye tracking in programming. Published in TOSEM in 2024.

    [human factors, eye tracking, program comprehension]

    Presenter: Mel Krusniak

  9. *[Zhang2024] EyeTrans: Merging Human and Machine Attention for Neural Code Summarization. The first work in SE to use programmers' cognitive patterns to empower AI models for code. Published in FSE 2024.

    [AI4SE, human factors, code summarization]

    Presenter: Syed Ali, Muhammad Rahman, Arnav Chahal

  10. *[Burnett2016] GenderMag: A Method for Evaluating Software’s Gender Inclusiveness. A milestone work by Prof. Burnett that enables designing software with inclusivity in mind. There is a whole series of studies and open-source support for GenderMag; please check The GenderMag Project. Published in Interacting with Computers in 2016.

    [human computer interaction, SE]

    Presenter: Qinwen Ge, Xueqi Cheng

  11. *[Kou2024] Do Large Language Models Pay Similar Attention Like Human Programmers When Generating Code? An interesting exploration of the attention alignment between humans and LLMs in coding tasks. Published in FSE 2024.

    [AI4SE, human factors]

    Presenter: Jialin Yue, Jieyu Li, Zhengyi Lu

  12. *[Huang2024] Your Code Secret Belongs to Me: Neural Code Completion Tools Can Memorize Hard-Coded Credentials. Surprisingly (or not), LLMs can potentially leak credential information from reading your prompts. Published in FSE 2024.

    [Security & Privacy, AI4SE]

    Presenter: Shreeya Arora, Carol He, Marcus Kamen

  13. *[Ahmed2024] Causal Relationships and Programming Outcomes: A Transcranial Magnetic Stimulation Experiment. Using transcranial magnetic stimulation (TMS) to temporarily disrupt brain regions in order to understand the cognitive processes of programming. Published in ICSE 2024.

    [human factors, medical imaging, programming]

    Presenter: Ian Miller

  14. *[Rafi2022] Towards Automatic Oracle Prediction of Object Placements in Augmented Reality Testing. A paper that uses predicted human ratings of object placement as testing oracles in AR/VR applications. Published in ASE 2022.

    [testing, AR/VR]

    Presenter: Sodabe Bandali

  15. *[Kazemitabaar2024] How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment. An example focusing on LLM usage in CS education. Published at Koli Calling 2023, a conference for computing education.

    [AI, CS education]

    Presenter: Yizhou Guo, Kun Peng, Yan Zhang

Presentation Content and Grading

Nothing in this section should surprise you about what a "good presentation" looks like. However, it is always useful to list the criteria explicitly.

Here I want to gratefully acknowledge Prof. Kevin Moran at the University of Central Florida for sharing the presentation criteria from his course with me.

Presentation Content

A research paper presentation should include the following content:

  1. Overview of Motivation (and Background) : Here you should tell the story of why the problem that the paper is tackling is important. This part should also include necessary background introduction. As we discussed in our lectures, it is very important to know your audience first. Do they have enough background to understand the motivation?
  2. Key Idea of Research : Here, crystallize the key idea or novelty behind the paper for the audience in an engaging way.
  3. Approach Description or Study Design: Here, describe the details of the approach or designed study in a way that is relatable to the audience. Include any necessary background required to understand the approach or study methods.
  4. Evaluation & Results: Provide an overview of the most important takeaway results with supporting evidence. Provide any necessary background about the evaluation methods used.
  5. Discussion Questions: At the end of the presentation, you are encouraged to include a set of discussion questions to drive the class discussion. As the presenter, you are also responsible for leading the Q&A session after the presentation, so some prepared discussion questions can be very useful.

Grading

This paper presentation will be graded out of 100 points (it counts for 20% of the final grade) using the criteria below.

| Category | Professional (100%) | Adequate (75%) | Needs Work (50%) | Serious Problems (25%) | Grade |
| --- | --- | --- | --- | --- | --- |
| Content | Full grasp (more than needed) of material in the initial presentation and in answering questions later; includes interesting discussion questions | Solid presentation of material; answers all questions adequately but without elaboration; adequate discussion questions | Less than a full grasp of the information; rudimentary presentation and answers to questions; discussion questions unclear | No grasp of the information, some misinformation, and unable to answer questions accurately; no discussion questions | 40% |
| Visual Aids | Visuals explain and reinforce the rest of the presentation; text on slides only where needed | Visuals relate to the rest of the presentation but fall short in explaining key topics; too much text on slides | Visuals are too few or not sufficiently related to the rest of the presentation | Visuals not used or superfluous | 20% |
| Organization | Information presented in a logical, interesting sequence that is easy for the audience to follow and tells the story of the paper | Information presented in a logical sequence that is easy for the audience to follow but is not engaging or exciting | Presentation jumps between topics, making the story of the paper difficult to follow | Audience cannot follow the presentation because it follows no logical sequence | 10% |
| English | No misspelled words or grammatical errors | No more than two misspelled words or grammatical errors | Three to five misspelled words or grammatical errors | More than five misspelled words or grammatical errors | 10% |
| Elocution | Speaks clearly, correctly, and precisely; loud enough for the audience to hear and slowly enough for easy understanding | Speaks clearly, pronounces most words correctly, loud enough to be easily heard, and slowly enough to be understood | Speaks unclearly, mispronounces many major terms, and speaks too softly or rapidly to be easily understood | Mumbles, mispronounces most important terms, and speaks too softly or rapidly to be understood | 10% |
| Eye Contact | Constant eye contact; minimal or no reading of notes | Eye contact maintained except when consulting notes, which is too often | Some eye contact but mostly reading from notes | No eye contact; reads from notes exclusively | 10% |

Peer Reviews: Every student except the presenters is required to review each presentation ("Peer Reviews" counts for 15% of your final grade). You will receive an online form via Piazza at the beginning of the presentation. For each grading criterion, you will be asked to grade the presenters. There are also a few extra questions for you to answer in the form. Your feedback will be shared with the presenters anonymously after the presentation. For the audience, each peer review is worth 5 points of your participation grade, but the instructor reserves the right to deduct up to 2 points if your feedback is meaningless. Students must fill out the evaluation forms independently (even if you are on a team of two).

The presenters are still required to complete the evaluation for the other teams (they are exempt only from evaluating their own presentation).

Students must submit the evaluation form by 5PM on the day of the presentations. Any late submission will receive 0 points. There will also be a passcode question in the form to check attendance (the code will be shared during that day's lecture). If you get the passcode wrong, you will receive 0 points for the evaluation.

Finally, the grade for each presentation will be 50% from the audience (dropping the highest and lowest scores, then taking the average) and 50% from the instructors.