MLST 2019

Welcome to MLST 2019

The International Workshop on Machine Learning and Software Testing (MLST 2019) seeks to bring together researchers and practitioners to exchange and discuss the most recent synergistic machine learning (ML) and software testing (ST) techniques and practices. With its tremendous success in many cutting-edge applications, machine learning has become a key driving force of next-generation technology. However, the quality assurance of ML software is still at a very early stage. This year's MLST centers on two key questions, bringing together researchers with diverse backgrounds (e.g., ST, AI, security) for in-depth discussion of solutions for both ML and ST: (1) how to better leverage ML to further improve ST, and (2) how to better test and analyze ML towards creating ML software of higher quality. Machine learning has already contributed significantly to the SE community, with some initial work on ST; in contrast, ST for ML is still at a very early stage.

MLST 2019 therefore seeks to develop a cross-domain community that systematically looks into both areas from a new perspective. The workshop will explore not only how to apply machine learning to ST, but also the emerging ST techniques and tools for assessing, predicting, and improving the safety, security, quality, and reliability of machine-learning-based software, which in turn supports the development of artificial intelligence. We hope MLST will help create intelligent software of high quality, as well as accelerate the process of software development and quality assurance through intelligence.

Workshop theme, goals, and relevance

Theme: The theme of the workshop is to leverage traditional software testing (ST) to better understand machine learning based software and draw strong connections between the two. We aim to apply ST to guide test case generation for machine learning software, as well as use machine learning techniques to scale up ST tasks with higher intelligence.

Goals: Our main goal is to shed light on the direction of applying the principles of software testing to machine learning and thereby evaluate the robustness of machine learning software, especially deep learning software. The workshop also aims to leverage machine learning techniques to advance the efficiency, accuracy, effectiveness, and usefulness of current ST techniques.

Relevance: The ICST audience focuses on general software test generation, which is highly related and would benefit from leveraging machine learning techniques to accelerate the process. In addition, it would be helpful for the ICST community to learn how to apply current test generation principles to guide test generation for machine learning software and make a broader impact on the community.


Keynote Speaker

Title: Testing of AI Systems - Challenges Ahead

Shin Yoo (Korea Advanced Institute of Science and Technology)


Artificial Intelligence, and especially Machine Learning, is being rapidly adopted by various software systems, including safety-critical systems such as autonomous driving and medical imaging. This creates an urgent need to test these systems, but the task can be very different from testing traditional software systems. What can we, software testing researchers, bring to these challenges? We will briefly examine various testing techniques that have been applied to AI systems so far, and survey the problem landscape to highlight areas that need further exploration.


Shin Yoo is an associate professor in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea. From 2012 to 2015, he was a lecturer of software engineering in the Centre for Research on Evolution, Search, and Testing (CREST) at University College London, UK. He received his PhD in Computer Science from King's College London, UK, in 2009. He received his MSc and BSc from King's College London and Seoul National University, Korea, respectively. His main research interest lies in Search Based Software Engineering, i.e., the use of metaheuristics and computational intelligence, such as genetic algorithms, to automatically solve various problems in software engineering, especially those related to testing and debugging.

Invited Talk

Title: Challenges in quality assurance for machine learning-based systems

Fuyuki Ishikawa (National Institute of Informatics)


There is increasing interest in machine learning (ML) techniques and their applications. With machine learning techniques, we generate the system behavior inductively from training data. This shift of software development paradigm introduces unique challenges in quality assurance. ML-based systems often come with further difficulties, such as unbounded requirements and environments in the open world. In this talk, I will overview and discuss challenges in quality assurance of ML-based systems. I take a wide-ranging perspective, not limited to the recent active testing research, referring to our recent activities with Japanese industry, including questionnaire surveys as well as the construction of guidelines. I thus invite testing researchers to extend the research area for testing ML-based systems much further.


Fuyuki Ishikawa is Associate Professor at Information Systems Architecture Science Research Division, and also Deputy Director at GRACE Center, in National Institute of Informatics, Japan. His interests are in software engineering for dependability, including formal methods, testing, and optimization, especially for smart and autonomous systems. His publications can be found in key journals and conferences, such as TSE, TPDS, TEVC, TSC, FM, ISSTA, WWW, and ICSOC. Currently, he is leading research groups on dependability of automotive systems and machine learning-based systems, respectively, in two projects (JST ERATO-MMSD and JST MIRAI-QAML). He is also leading key activities in the Japanese industry for quality assurance of machine learning-based systems.

Title: Learning to Restrict Test Range for Compiler Testing

Junhua Zhu


Junhua Zhu is a senior software test engineer at Huawei Technologies Co., Ltd., and also the team leader of quality assurance for compilers, programming languages, and runtimes. His interests are in software testing with artificial intelligence, and in seeking the optimum balance between cost, quality, and time requirements. He and his colleagues have ensured several successful commercial applications of compilers for different chip architectures. Their test objects range from open-source compilers, such as clang and gcc, to domain-specific languages developed in-house by Huawei engineers. Currently, they are focusing on the quality assurance of language virtual machines and runtime management.

April 23, 2019
Session I (Chair: Lei Ma)
8:50 - 10:00 Opening
Keynote Talks (60 mins)
Testing of AI Systems - Challenges Ahead.
Shin Yoo, Associate Professor, Korea Advanced Institute of Science and Technology, Korea
10:00 - 10:30 Coffee Break
Session II (Chair: Lei Ma)
10:30 - 11:40 Challenges in quality assurance for machine learning-based systems. (Academic Invited Talk, 45 mins)
Fuyuki Ishikawa, Associate Professor, National Institute of Informatics
Learning to Restrict Test Range for Compiler Testing. (Industrial Invited Talk, 25 mins)
Junhua Zhu, Compiler senior test engineer, Huawei
11:40 - 11:45 Short Break
Session III (Chair: Jie Zhang)
11:45 - 12:30 Coverage-guided Learning-assisted Grammar-based Fuzzing. (15 mins)
Yuma Jitsunari and Yoshitaka Arahori.
Learning Performance Optimization from Code Changes for Android Apps. (15 mins)
Ruitao Feng, Guozhu Meng, Xiaofei Xie, Ting Su, Yang Liu and Shang-Wei Lin.
Variable Strength Combinatorial Testing for Deep Neural Networks. (10 mins)
Yanshan Chen, Ziyuan Wang, Dong Wang, Chunrong Fang and Zhenyu Chen.
12:30 - 13:30 Lunch


The MLST workshop aims to cover interdisciplinary topics relating to both ML and ST. Prospective participants are expected to focus on recent progress and breakthroughs in ML and ST, research visions or position statements, industrial relevance, empirical studies, or experience reports on either or both of the following perspectives:

  • Applying ML to ST – including but not limited to empirical studies, experience reports, test requirements, test design, test automation, debugging, etc., with ML techniques involved.
  • Applying ST to ML – including but not limited to formal verification, test design, test criteria, measurement, performance, reliability, test automation, debugging, theory of software testing, empirical studies, experience reports, and visions of ML software.
Scope and Topics

    Specific topics of interest include, but are not limited to, the following subject categories:

  • Testing and verification of machine learning systems
  • Machine learning robustness, adversarial attack, defense
  • Defects, errors, failures, and bugs of both ML models and frameworks
  • Reliability, availability and safety of machine learning
  • Machine learning quality and productivity
  • Machine learning security
  • Systems (software and hardware) engineering of machine learning
  • Metrics, measurements and prediction of machine learning software quality
  • Machine learning software interpretation and understanding
  • Empirical studies of machine learning using qualitative and quantitative methods
  • Supporting tools and automation
  • Industry best practices
  • Machine learning for software defect prediction
  • Machine learning for test case management
  • Machine learning for debugging
  • Applications of machine learning to software verification and validation
    This workshop accepts regular research papers within 6 pages, and short papers (new ideas and positions) within 4 pages.

    Submitted papers must conform to the two-column IEEE conference publication format. Templates for LaTeX and Microsoft Word are available from the IEEE website; please use the letter format template and the conference option.

    Papers should be submitted in PDF format and must not exceed the page limit. Submissions will be handled via EasyChair. Papers must neither have been previously accepted for publication nor be under submission at another conference or journal. For a paper to be published in the proceedings, at least one of its authors must register for the conference and confirm that he/she will present the paper in person. All accepted papers will be part of the ICST joint workshop proceedings published in the IEEE Digital Library.


    All papers will be evaluated in terms of the following criteria:

  • Originality or potential for impact: The submission presents a particularly novel collation of historical work, insight, or approach towards new/future work, and/or is potentially disruptive of current practice or common knowledge.
  • Soundness: The submission makes a coherent argument, substantiated by historical analysis, cogent analytical argument, or appropriately-scoped initial empirical results.
  • Relevance: The submission appropriately considers and puts itself in context with respect to the relevant literature.
Important Dates

  • Paper submission: January 22, 2019 (extended from January 15, 2019)
  • Notification: February 5, 2019
  • Camera-ready: February 15, 2019
Submission Site

    Submissions will be handled via EasyChair: