Satellite workshop of

ICSE 2014

5th International Workshop on Emerging Trends in Software Metrics (WETSoM 2014)

June 3, 2014 – Hyderabad, India


The International Workshop on Emerging Trends in Software Metrics aims at bringing together researchers and practitioners to discuss the progress of software metrics. The motivation for this workshop is the low impact that software metrics have on current software development. The goals of this workshop include critically examining the evidence for the effectiveness of existing metrics and identifying new directions for metrics. Evidence for existing metrics includes how the metrics have been used in practice and studies showing their effectiveness. Identifying new directions includes the use of new theories, such as complex network theory, on which to base metrics.

Important Dates

Paper submission (EXTENDED): January 31, 2014 (originally January 24, 2014)
Paper acceptance notification: February 24, 2014
Camera-ready version: March 14, 2014
Workshop date: June 3, 2014

Call for Papers

Topics of interest include:

  • New software metrics to assess quality and effort
  • Complex network theory and its application to software structure and growth
  • Micro-patterns and other code-style related metrics
  • Metrics for agile products, processes and teams
  • Dynamic tracking of software projects using metrics
  • Metrics-based quality prediction
  • Metrics related to Cloud Computing
  • Software architecture metrics, including service-oriented architectures, and their application to software maintenance and evolution
  • Quality of Service and Service Level Agreement metrics
  • Metrics related to software for mobile terminals
  • Metrics for Open Source software products and processes
  • Metrics for measuring the security level of software applications
  • Metrics for increased understanding of low level design with quality mapping of metrics
  • Effective usage of metrics to aid peer code review
  • Metrics that facilitate maintenance and modification of code in legacy systems
  • Aspect-oriented metrics

Download the Call for Papers
Papers must be submitted electronically by 23:59:59 UTC-11 (Pago Pago, American Samoa time).


Please prepare your paper according to the formatting instructions given below.
If you are using LaTeX, make sure that the "compsocconf" option is set and that you are using the right IEEEtran class (see "Important note for LaTeX users" below).
When your paper is finished, submit it via the online submission system.

WETSoM 2014 Submission and Formatting Instructions

Papers submitted to WETSoM 2014 must not have been published elsewhere and must not be under review or submitted for review elsewhere while under consideration for WETSoM 2014. ACM plagiarism policies and procedures shall be followed for cases of double submission.
All submissions must be in English.

Paper Preparation and Formatting

All papers must conform, at time of submission, to the IEEE Computer Society Formatting Guidelines. Please use either the Word template or the LaTeX package provided by IEEE CS. (In some browsers, an empty page is displayed when these links are clicked; nevertheless, the documents will download, so check your download folder.) Make sure that you use US letter page format (not A4). Submissions must be in PDF format. Author names and affiliations shall not be suppressed on the title page of the paper. Properly formatted papers must not exceed the size limits stated for the paper categories as follows:
  • Short position papers: 4 pages
  • Full papers: 7 pages

Non-Conforming Submissions

Submissions that do not comply with the foregoing instructions and size limits will be desk rejected without review.


Workshop Organizers

Steve Counsell, Brunel University, UK
Michele Marchesi, University of Cagliari, Italy
Radhika Venkatasubramanyam, Siemens, India
Aaron Visaggio, University of Sannio, Italy
Hongyu Zhang, Tsinghua University, China

Accepted Papers

Full Papers

What can Changes Tell about Software Processes?

Barbara Russo, Maximilian Steff

A New Metric for Predicting Software Change using Gene Expression Programming

Ruchika Malhotra, Megha Khanna

Why are industrial agile teams using metrics and how do they use them?

Eetu Kupiainen, Mika Mäntylä, Juha Itkonen

A Replicated Study on Correlating Agile Team Velocity Measured in Function and Story Points

Hennie Huijgens and Rini van Solingen

"May the Fork Be with You" : Novel Metrics to Analyze Collaboration on GitHub

Marco Biazzini and Benoit Baudry

Design Test Process in Component-based Software Engineering: an Analysis of Requirements Scalability

Mariem Haoues, Asma Sellami, Hanêne Ben-Abdallah

Clustering of Defects in Java Software Systems

Giulio Concas, Cristina Monni, Matteo Orrù and Roberto Tonelli

A Security Metric Based on Security Arguments

Benjamin Rodes, John Knight, Kimberly Wasson

Short Papers

Using Fine-grained Code Change Metrics To Simulate Software Evolution

Zhongpeng Lin, Jim Whitehead

Structural Evolution of Software: A Social Network Perspective

Naveen Kulkarni, Satya Prateek Bommaraju, Madhuri Dasa

Towards a Catalog Format for Software Metrics

Eric Bouwers, Arie van Deursen, Joost Visser

In-depth Measurement and Analysis on Densification Power Law of Software Execution

Yu Qu, Qinghua Zheng, Ting Liu, Jian Li and Xiaohong Guan

Program Committee

Alain Abran, ETS, Université du Québec, Canada
Francesca Arcelli Fontana, University of Milano Bicocca, Italy
Stefan Blom, University of Twente, Netherlands
Luigi Buglione, Engineering.IT / ETS Montreal, Italy
Giovanni Cantone, DISP - University of Rome "Tor Vergata", Italy
Damien Challet, University of Fribourg, Switzerland
Peter J. Clarke, Florida International University, USA
Juan J. Cuadrado Gallego, University of Alcalá, Spain
Massimiliano Di Penta, RCOST - University of Sannio, Italy
Kecia Ferreira, CEFET-MG, Brazil
Yossi Gil, Technion, Israel
Tracy Hall, Brunel University, UK
Israel Herraiz, Technical University of Madrid, Spain
Ségla Kpodjedo, SOCCER Lab, DGIGL, École Polytechnique de Montréal, Canada
Sandro Morasca, University of Insubria, Italy
James Power, National University of Ireland, Ireland
Steve Riddle, Newcastle University, UK
Jean-Guy Schneider, Swinburne University of Technology, Australia
Alexander Serebrenik, Eindhoven University of Technology, Netherlands
Marco Tulio Valente, Universidade Federal de Minas Gerais, Brazil



Keynote

Tim Menzies

North Carolina State University

What Metrics Matter? (And the answer may surprise you)


In 2003, the speaker presented a paper, "Metrics that Matter", at an IEEE workshop at NASA Goddard.
A decade later, it is useful to review that work to see what, if anything, of that paper is still believed today.
It turns out that much has changed in the field of data science for SE; so much so that that paper now requires extensive revision. For example:
(1) Sampling results show that even if some metric matters in general, it may not be relevant or useful to consider for any specific project.
(2) Locality results show that no one set of metrics "matters", since models built from metrics are rarely stable or useful across different projects.
(3) Data ablation results show that the core signal of these metrics is not to be found in their precise values.
(4) Privatization results show that we can significantly "mess with" those values and still maintain the efficacy of models learned from them.
(5) Active learning results show that the core signal in a table of SE data can be held within a tiny region of that table (which makes most of the other values irrelevant).
(6) Spectral learning results show that the signal within N metrics can be better modeled by a much lower-dimensional space (often fewer than two synthesized dimensions).
(7) Transfer learning results show that if we manually divide data into two regions (according to some context variable), that boundary is routinely ignored by the learner (i.e., some of the ways we describe different projects are, in fact, irrelevant to generating useful models from those projects).
(8) Optimization results show that there exists an entire other spectrum of metrics (relating to user goals), and if we bias the learning according to those goals, different metrics "matter" most under different biases.
But this talk is not a counsel of despair. Rather than asking "in all contexts, what metrics matter?", the engineering question should now be "how can we find, in a cost- and time-effective manner, the metrics that matter most for particular projects and for particular users?". And if particular metrics and particular models are not useful in multiple contexts, perhaps there exist model generation methods that are useful in many situations. These questions are a fertile area for future research in SE.


Tim Menzies (Ph.D., UNSW) is a Professor in CS at NC State University; the author of over 230 refereed publications; and one of the 100 most cited authors in software engineering (out of 50,000+ researchers). He has served as lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as on joint research work with private companies. He teaches data mining, artificial intelligence, and programming languages.
Prof. Menzies is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering (see PromiseData). He is an associate editor of IEEE Transactions on Software Engineering, Empirical Software Engineering, and the Automated Software Engineering journal. In 2012, he served as co-chair of the program committee for the IEEE Automated Software Engineering conference. In 2015, he will serve as co-chair for the ICSE'15 NIER track. For more information, see his web site, his vita, or his list of publications.

Previous Editions