Call for Papers

Over the last decade, research on Cyber-Physical Systems (CPS) and the Internet of Things (IoT) has led to smart systems at different scales and in different environments, from smart homes to smart cities and smart factories. Despite many successes, it is difficult to measure and compare the utility of these results due to a lack of standard evaluation criteria and methodologies. This problem inhibits evaluation against the state of the art, the comparability of different integrated designs (e.g., control and networking), and the applicability of tested scenarios to present and future real-world cyber-physical applications and deployments. This state of affairs is alarming: it significantly hinders further progress in CPS and IoT systems research.

The Workshop on Benchmarking Cyber-Physical Systems and Internet of Things (CPS-IoTBench) brings together researchers from the different sub-communities to engage in a lively debate on all facets of rigorously evaluating and comparing cyber-physical networks and systems. Uniquely, this workshop also seeks reports describing the reproduction of experimental results from published papers. Examples include researchers looking to validate another piece of work before comparing their own against it, or industrial organizations evaluating a new technology or demonstrating its robustness before customer delivery.

We invite researchers and practitioners from academia and industry to submit papers that:

  • describe an attempt to reproduce experimental results from published work;
  • identify fundamental challenges and open questions in rigorous benchmarking and evaluation of cyber-physical networks and systems;
  • report on success stories or failures with using standard evaluation criteria;
  • present example benchmark systems and approaches from any of the relevant communities (embedded systems, networking, control, robotics, machine learning, etc.);
  • propose new research directions, methodologies, or tools to increase the level of reproducibility and comparability of evaluation results;
  • report on examples of best practices in different CPS sub-communities for achieving repeatability of results;
  • present models to capture and compare the properties of algorithms and systems.

Well-reasoned arguments or preliminary evaluations are sufficient to support a paper’s claims. 

Open Access and Open Review

The organizers of CPS-IoTBench are advocates of Open Science, whose principles we want to promote with this workshop. Therefore, all submitted papers will be reviewed and published openly on OpenReview.net. Concretely:

  • All submitted papers will be publicly visible after the submission deadline;
  • All reviews will be published together with the corresponding paper;
  • Reviewing is double-blind; after publication, the authors are revealed, but the reviewers remain anonymous;
  • Authors can reply to the reviewers and publicly discuss the paper;
  • Anyone can comment on the papers (non-anonymously);
  • Accepted papers will be permanently accessible on OpenReview.net;
  • Authors of rejected papers may decide whether to keep their submission online.

Do not hesitate to contact the organizers if you have any questions regarding the submission process.

[ submission website ]

Submission Instructions (short)

  • up to 6 pages;
  • 9 pt font, double-column format;
  • double-blind.

Refer to the Submission Instructions page for details.