Cours / Classes
- Tuesdays 09:30 – 11:29 S1-125 Pav. Jean Coutu
- Thursdays 09:30 – 11:29 S1-139 Pav. Jean Coutu
Paper presentation: 30%
Discussion and participation: 10%
Paper reviews: 20%
- All course-specific communications should be on the course Slack.
- Announcements posted in the #annonces channel on Slack will be assumed to have been seen by all students!
- Slides will be posted on StudiUM.
- I will always try to answer questions within the next business day. I cannot guarantee availability outside 9-5, Monday to Friday, and I do not assume you are available outside those hours either.
- Contact me by email (email@example.com) for non-course-related issues and confidential matters.
Each student will present four papers over the course of the term. We will devote 50 minutes of class time to each paper. The presenter will first give a 20-minute conference-style presentation of the paper, and then will lead the class in a discussion of the paper’s contributions.
When preparing your presentation, keep in mind that the audience will have already read the paper that you are presenting. Also, some general introductory topics may have already been covered in lectures or in previous presentations. Your presentation should:
- remind your audience of what problem is addressed in the paper and why that problem is important
- identify what you feel are the most significant and important observations, ideas, and results in the paper
- put the paper in context of the course
- raise issues for discussion
The slides for your presentation are due by 9:00am on the day you are presenting. Send me a Slack message with a single PowerPoint or PDF file containing your slides.
All students (not just the presenter!) are expected to participate in the discussion of each paper. Come to class prepared with questions and comments.
Paper presentations are worth 30% of your final grade. Presentations will be evaluated according to the following criteria:
- Organization: talk includes an introduction, middle section, and conclusions; parts of the presentation follow from each other; topic(s) placed in the context of the course; problem and solution clearly identified; evaluation of the solution included; presentation is appropriately aimed at audience; key points of the presentation summarized at the end.
- Presentation ability: well rehearsed; speaks clearly, audibly, and not too fast; makes eye contact with the class; mannerisms suitable; finishes within allotted time; enthusiasm in voice; smooth speech.
- Discussion and Knowledge: the talk stimulates interesting questions; the presenter poses interesting questions, is familiar with the paper’s content, and is able to answer questions about the material during the discussion period.
For each criterion, you will receive a mark between 1 and 5:
1: Poor, 2: Needs Work, 3: Satisfactory, 4: Very Good, 5: Outstanding
Your grade is a combination of an evaluation by me and by your classmates, weighted 70%/30%. The formula for computing your paper presentation mark is: (0.7*IN+0.3*ST)/2+50, where IN and ST are the percentage marks from the instructor and the classmates, respectively. Thus, it is impossible to get below 50% for a presentation unless you don’t show up!
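As a quick sanity check, the formula can be worked through in a few lines of Python (a sketch, not official course code; the function name is mine):

```python
def presentation_mark(instructor_pct, classmates_pct):
    """Combine the instructor's percentage mark (weight 0.7) with the
    classmates' percentage mark (weight 0.3), then rescale so the
    minimum attainable mark for anyone who shows up is 50%."""
    return (0.7 * instructor_pct + 0.3 * classmates_pct) / 2 + 50

# For example, an instructor mark of 80% and a classmate average of 90%
# give (0.7*80 + 0.3*90)/2 + 50 = (56 + 27)/2 + 50 = 91.5
```

Note that perfect marks (IN = ST = 100) yield exactly 100%, and zero marks yield the 50% floor.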
During “Paper presentation” sessions, one student presents a paper and the rest must prepare a review, which will be used as the basis of further discussion. There is a plain text review form (available via the above link) which should be used for all reviews.
When completing your review, pay particular attention to the section that asks you to summarize the paper’s contributions and to justify your rating of the paper. State what you consider to be the main contributions of the paper, and summarize your rationale for your ratings. Keep the whole thing short: two short paragraphs is the upper limit. Think of this section as an “executive summary” of both the paper and your review of the paper. Any additional detail belongs in the final section of the review, which can be as long and as detailed as necessary.
Reviews are due by 8:00am on the day the corresponding paper is discussed in class. Send your completed review to me via a Slack message.
Paper reviews are worth 20% of your final grade. Reviews will be evaluated according to the following criteria:
- Summary: the summary accurately reflects the content of the paper, makes it obvious that the student read the paper
- Clarity: the review is understandable and clearly written
- Validity: the ratings for the paper follow from the list of strengths/weaknesses identified by the student
- Non-triviality: the review does more than scratch the surface of the paper
For each criterion, you will receive a mark between 1 and 5:
1: Poor, 2: Needs Work, 3: Satisfactory, 4: Very Good, 5: Outstanding
The formula for computing your paper review mark is: 0.8*R+20, where R is the percentage mark. Thus, it is impossible to get below 20% for a review, unless you don’t submit one on time.
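The same rescaling idea applies here, just with a 20% floor (again a sketch for illustration; the function name is mine):

```python
def review_mark(r_pct):
    """Rescale the raw percentage mark R so that a review submitted on
    time can never score below 20%."""
    return 0.8 * r_pct + 20

# For example, a raw mark of 75% gives 0.8*75 + 20 = 80
```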
- Work on an open research problem (individual)
- Plan an empirical study and run a pilot (may include archive research)
- Develop / implement / analyze a software system / process / context
The timeline is:
- January 30 : one page project proposals
- (At least) two milestones (“around” February 15, March 15)
- Final project presentations on the week of April 10th
- At the completion of the project, you will also be required to submit:
- A written report, if applicable
- A copy of any software developed as part of your project, with appropriate documentation and with a short (1 page) usage guide
- A copy of all empirical data collected, if applicable
Project marks will depend on the depth and novelty of the ideas embodied in your project, on the quality of your methodology, and on the quality of your written report, in-class presentation, and software (if applicable).
Students are strongly encouraged to propose and discuss their own project topics. The ideal project topic should help PhD and thesis-based MSc students progress towards their degree, and internship-based MSc students build a strong portfolio. In principle, projects should include both an empirical and a formal component.
Some “seed” ideas are given below:
- Plan an empirical study, and configure and extend a model management platform (AtomPM, MMINT, Epsilon, …), demonstrating how it can be used as the experimental platform.
- Creation of a non-trivial model of a real system in a formal modeling language (e.g., Alloy), and development/implementation of some analysis technique.
- Information extraction, reverse engineering, and generation of analytics from open source projects (e.g., on GitHub).
- Extend Calico with a reasoning engine back-end to support formal rapid prototyping.
- Create a tool for analyzing PlantUML models using OCL.
- Extract formal specifications of security policies from ISO documents.
- Extract formal specifications of security policies from publicly available documents of online software applications (such as WordPress, Moodle, etc).
- Use grounded theory to analyse design discussions (e.g., for the Rust compiler, for GitHub pull requests, etc.)
- Try to understand how developers express uncertainty about design decisions in code (e.g., by abusing notation, using stubs, or using explicit comments)
- Model and visualize software evolution using developer diaries published for video games.
- How are State instances in UML-RT models named? How should they be named such that they are good descriptions of the actual program state?
[A] Modelling and specification
- Harel, D, Rumpe, B. Meaningful Modeling: What’s the Semantics of “Semantics”?, IEEE Computer, 2004
- Daniel Jackson. 2002. Alloy: a lightweight object modelling notation. ACM Trans. Softw. Eng. Methodol. 11, 2 (April 2002), 256-290.
- Kang E, Jackson E, Schulte W. An approach for effective design space exploration. In Monterey Workshop 2010 Mar 31 (pp. 33-54). Springer Berlin Heidelberg.
- Michael Jackson and Pamela Zave. 1995. Deriving specifications from requirements: an example. In Proceedings of ICSE’95.
- Robert Allen and David Garlan. 1994. Formalizing architectural connection. In Proceedings of ICSE’94
- Crane, M.L. and Dingel, J., 2005, October. UML vs. classical vs. Rhapsody statecharts: Not all models are created equal. In Proceedings of MODELS’05.
- K. Czarnecki, Chang Hwan, P. Kim and K. T. Kalleberg, “Feature models are views on ontologies,” 10th International Software Product Line Conference (SPLC’06), Baltimore, MD, 2006, pp. 41-51.
- J. de Lara, E. Guerra, J.S. Cuadrado. When and How to Use Multilevel Modelling. ACM Transactions on Software Engineering Methodology. 24(2), 2014.
- Benjamin Behringer, Jochen Palz, Thorsten Berger. PEoPL: Projectional Editing of Product Lines. In Proceedings of the 39th International Conference on Software Engineering (ICSE), pages 563-574. ACM, May, 2017.
- Erdweg S. et al. (2013) The State of the Art in Language Workbenches. In: Erwig M., Paige R.F., Van Wyk E. (eds) Software Language Engineering. SLE 2013
- S. Easterbrook, R. Lutz, R. Covington, J. Kelly, Y. Ampo and D. Hamilton, “Experiences using lightweight formal methods for requirements modeling,” in IEEE Transactions on Software Engineering, vol. 24, no. 1, pp. 4-14, Jan 1998.
- Ernesto Posse and Juergen Dingel. 2016. An executable formal semantics for UML-RT. Softw. Syst. Model. 15, 1 (February 2016), 179-217.
- John Mylopoulos, Alex Borgida, Matthias Jarke, and Manolis Koubarakis. 1990. Telos: representing knowledge about information systems. ACM Trans. Inf. Syst.
[B] Reasoning and analysis
- H. Goldsby, B. Cheng. Automatically Discovering Properties that Specify the Latent Behavior of UML Models. In Proceedings of MoDELS’10, 2010
- S. Maoz, J.O. Ringert, B. Rumpe. CD2Alloy: Class Diagrams Analysis Using Alloy Revisited. In Proceedings of MoDELS’11, 2011
- M. Famelis, R. Salay and M. Chechik, Partial models: Towards modeling and reasoning with uncertainty, 2012 34th International Conference on Software Engineering (ICSE), Zurich, 2012, pp. 573-583.
- Classen, A., Heymans, P., Schobbens, P. Y., Legay, A., & Raskin, J. F. (2010, May). Model checking lots of systems: efficient verification of temporal properties in software product lines. In Proceedings of ICSE’10.
- Beschastnikh, I., Brun, Y., Schneider, S., Sloan, M., & Ernst, M. D. (2011, September). Leveraging existing instrumentation to automatically infer invariant-constrained models. In Proceedings of the FSE’11.
- Aleksandar Milicevic, Joseph P. Near, Eunsuk Kang, and Daniel Jackson. 2015. Alloy*: a general-purpose higher-order relational constraint solver. In Proceedings of ICSE 2015.
- Alfredo Motta, Nicolas Mangano, and André van der Hoek. 2013. Lightweight analysis of software design models at the whiteboard. In Proceedings of the 5th International Workshop on Modeling in Software Engineering (MiSE ’13). IEEE Press, Piscataway, NJ, USA, 18-23.
- M. B. Dwyer, G. S. Avrunin and J. C. Corbett, “Patterns in property specifications for finite-state verification,” Proceedings of the 1999 International Conference on Software Engineering (IEEE Cat. No.99CB37002), Los Angeles, CA, USA, 1999, pp. 411-420.
- B. Meyers, R. Deshayes, L. Lucio, E. Syriani, H. Vangheluwe, M. Wimmer. ProMoBox: A Framework for Generating Domain-Specific Property Languages. Proc. of Software Language Engineering (SLE’14), LNCS 8706, pp. 1-20, 2014.
- F. Gauthier, D. Letarte, T. Lavoie and E. Merlo, “Extraction and comprehension of moodle’s access control model: A case study,” 2011 Ninth Annual International Conference on Privacy, Security and Trust, Montreal, QC, 2011, pp. 44-51.
- Karolina Zurowska, Juergen Dingel: Language-specific model checking of UML-RT models. Software and System Modeling 16(2): 393-415 (2017)
- Moussa Amrani, Levi Lucio, Gehan Selim, Benoît Combemale, Jurgen Dingel, Hans Vangheluwe, Yves Le Traon, James R Cordy, “A Tridimensional Approach for Studying the Formal Verification of Model Transformations,” 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, Montreal, QC, 2012, pp. 921-928.
[C] Empirical methods
- Easterbrook S., Singer J., Storey MA., Damian D. (2008) Selecting Empirical Methods for Software Engineering Research. In: Shull F., Singer J., Sjøberg D.I.K. (eds) Guide to Advanced Empirical Software Engineering. Springer, London
- Thomas Zimmermann, Peter Weisgerber, Stephan Diehl, and Andreas Zeller. 2004. Mining Version Histories to Guide Software Changes. In Proceedings of ICSE’04, 2004
- Regina Hebig, Truong Ho Quang, Michel R. V. Chaudron, Gregorio Robles, and Miguel Angel Fernandez. 2016. The quest for open source projects that use UML: mining GitHub. In Proceedings of MODELS’16, 2016
- Michael Szvetits and Uwe Zdun. 2016. Controlled experiment on the comprehension of runtime phenomena using models created at design time. In Proceedings of MODELS’16, 2016
- Dubinsky, Y., Rubin, J., Berger, T., Duszynski, S., Becker, M., & Czarnecki, K. (2013, March). An exploratory study of cloning in industrial software product lines. In Software Maintenance and Reengineering (CSMR), 2013 17th European Conference on (pp. 25-34). IEEE.
- Mathieu Lavallée and Pierre N. Robillard. 2015. Why good developers write bad code: an observational case study of the impacts of organizational factors on software quality. In Proceedings of ICSE’15.
- Petre, M., 2014. “No shit” or “Oh, shit!”: responses to observations on the use of UML in professional practice. Software & Systems Modeling, 13(4), pp.1225-1235.
- S. Jeanmart, Y. G. Gueheneuc, H. Sahraoui and N. Habra, “Impact of the visitor pattern on program comprehension and maintenance,” 2009 3rd International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, FL, 2009, pp. 69-78.
- Paulo Sérgio Medeiros dos Santos, Guilherme Horta Travassos, On the Representation and Aggregation of Evidence in Software Engineering: A Theory and Belief-based Perspective, Electronic Notes in Theoretical Computer Science, Volume 292, 2013, Pages 95-118
- P. S. M. d. Santos and G. H. Travassos, “Action research use in software engineering: An initial survey,” 2009 3rd International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, FL, 2009, pp. 414-417.
- Eirini Kalliamvakou, Georgios Gousios, Kelly Blincoe, Leif Singer, Daniel M. German, and Daniela Damian. 2014. The promises and perils of mining GitHub. In Proceedings of the 11th Working Conference on Mining Software Repositories (MSR 2014)
- Georgios Gousios and Diomidis Spinellis. 2012. GHTorrent: GitHub’s data from a firehose. In Proceedings of the 9th IEEE Working Conference on Mining Software Repositories (MSR ’12).
[D] Software engineering practice
- L. Osterweil. 1987. Software processes are software too. In Proceedings of ICSE’87.
- Davor Čubranić and Gail C. Murphy. 2003. Hipikat: recommending pertinent software development artifacts. In Proceedings of ICSE’03, 2003.
- Audris Mockus, Roy T. Fielding, and James Herbsleb. 2000. A case study of open source software development: the Apache server. In Proceedings of ICSE’00, 2000.
- Hamed Esfahani, Jonas Fietz, Qi Ke, Alexei Kolomiets, Erica Lan, Erik Mavrinac, Wolfram Schulte, Newton Sanches, and Srikanth Kandula. 2016. CloudBuild: Microsoft’s distributed and caching build service. In Proceedings of ICSE’16
- Xiao He, Paris Avgeriou, Peng Liang, and Zengyang Li. 2016. Technical debt in MDE: a case study on GMF/EMF-based projects. In Proceedings of MODELS’16, 2016
- Fahad R. Golra, Antoine Beugnard, Fabien Dagnat, Sylvain Guerin, and Christophe Guychard. 2016. Using free modeling as an agile method for developing domain specific modeling languages. In Proceedings of MODELS’16, 2016
- Viktoria Stray, Dag I.K. Sjøberg, and Tore Dybå. 2016. The daily stand-up meeting: A grounded theory study. J. Syst. Softw.114, C (April 2016), 101-124.
- B. Morin, O. Barais, J-M. Jezequel, F. Fleurey, A. Solberg. Models@Run.time to Support Dynamic Adaptation, IEEE Computer, 2009.
- Kokaly, S., Salay, R., Cassano, V., Maibaum, T., Chechik, M. A model management approach for assurance case reuse due to system evolution. In Proceedings of International Conference on Model-Driven Languages and Systems (MoDELS), pp. 196-206, 2016.
- Hartmann, T., Moawad, A., Fouquet, F., Le Traon, Y. The next evolution of MDE: a seamless integration of machine learning into domain modeling. J. of Software and Systems Modeling, pp, 1-20, 2017.
- Jason Tsay, Laura Dabbish, and James Herbsleb. 2014. Let’s talk about it: evaluating contributions through discussion in GitHub. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014).
- Wassyng A., Lawford M. (2003) Lessons Learned from a Successful Implementation of Formal Methods in an Industrial Project. In: Araki K., Gnesi S., Mandrioli D. (eds) FME 2003: Formal Methods. FME 2003. Lecture Notes in Computer Science, vol 2805. Springer, Berlin, Heidelberg
- S. Rastkar and G. C. Murphy, “Why did this code change?,” 2013 35th International Conference on Software Engineering (ICSE), San Francisco, CA, 2013, pp. 1193-1196.